Making Sense of the Minimum Wage: A Roadmap for Navigating Recent Research

Jeffrey Clemens

The new conventional wisdom holds that a large increase in the minimum wage would be desirable policy. Advocates for this policy dismiss the traditional concern that such an increase would lower employment for many of the low-skilled workers that the increase is intended to help. Recent economic research, they claim, demonstrates that the disemployment effects of increasing minimum wages are small or nonexistent, while there are large social benefits to raising the wage floor.

This policy analysis discusses four ways in which the case for large minimum wage increases is either mistaken or overstated.

First, the new conventional wisdom misreads the totality of recent evidence for the negative effects of minimum wages. Several strands of research regularly arrive at the conclusion that high minimum wages reduce opportunities for disadvantaged individuals.

Second, the theoretical basis for minimum wage advocates’ claims is far more limited than they seem to realize. Advocates offer rationales for why current wage rates might be suppressed relative to their competitive market values. These arguments are reasonable to a point, but they are a weak basis for making claims about the effects of large minimum wage increases.

Third, economists’ empirical methods have blind spots. Notably, firms’ responses to minimum wage changes can occur in nuanced ways. I discuss why economists’ methods will predictably fail to capture firms’ responses in their totality.

Finally, the details of employees’ schedules, perks, fringe benefits, and the organization of the workplace are central to firms’ management of both their costs and productivity. Yet data on many aspects of workers’ relationships with their employers are incomplete, if not entirely lacking. Consequently, empirical evidence will tend to understate the minimum wage’s negative effects and overstate its benefits.

Introduction

For decades, debates over the minimum wage have been tense among advocates, policymakers, and professional researchers alike. While professional economists were once broadly skeptical of the benefits of a minimum wage, that consensus has eroded.

Shifts in the views of media, advocates, policymakers, and researchers each have their own story. A striking example comes from the New York Times. In 1987, the Times editorialized that "The Right Minimum Wage" is $0.00.1 But in 2015, it opined that "fifteen dollars, phased in gradually … would be adequate and feasible."2 Even more recently, it claimed that

a living wage is an antidepressant. It is a sleep aid. A diet. A stress reliever. It is a contraceptive, preventing teenage pregnancy. It prevents premature death. It shields children from neglect.3

In the eyes of the Times, the minimum wage has taken a 30-year journey from zero to hero. There is no ill, it seems, that a higher minimum wage cannot alleviate, if not outright cure.

Following decades of moderate minimum wage changes, select cities and states have recently passed substantial increases. In Seattle, San Francisco, and New York City, the minimum wage has already reached the milestone of $15. Recent laws passed by California, Illinois, Maryland, Massachusetts, New Jersey, and New York call for statewide increases to $15 in the coming years. In early February 2019, the U.S. House Committee on Education and Labor held a hearing to advance the agenda of taking a $15 wage floor nationwide.4

An erosion of the consensus among academic economists predates this lurch in public policy. Cracks in this consensus emerged in earnest when David Card and Alan Krueger wrote their book Myth and Measurement: The New Economics of the Minimum Wage in the 1990s.5 Even so, a 2005 survey found that only 17 percent of economists favored increasing the federal minimum wage from the then-prevailing floor of $5.15 per hour to $6.15.6 A more recent wave of research has coincided with a broader shift among academic economists. In 2013, nearly half the respondents to a survey by the University of Chicago agreed that a $9 federal minimum wage would be "desirable policy."7 In 2015, only 26 percent of economists in a subsequent University of Chicago survey worried that a $15 minimum wage would significantly reduce employment for low-wage workers.8

Proponents of high minimum wages argue that their position is supported by the best evidence, giving them the scientific high ground. But does the research really justify this confidence and the accompanying shift in the conventional wisdom? Though proponents of a higher wage can cite many papers to support their view, their reading of recent research is incomplete. The research these proponents ignore has many strengths, including trans­parent research methods, analyses of high-quality data, and a truly randomized experiment. In contrast to the research emphasized by advocates, the broader body of work regularly finds that increases in minimum wages cause job losses for individuals with low skill levels.

Another problem with advocates’ calls for a much higher minimum wage is that the theoretical basis for their claims is far more limited than they seem to realize. Advocates offer rationales for why wage rates might be suppressed relative to competitive market values. These arguments are reasonable to a point, but they are a weak basis for making claims about the effects of large minimum wage increases.

Third, economists’ empirical methods have blind spots. Notably, firms’ responses to minimum wage changes can occur with nuanced dynamics. I discuss why economists’ methods will predictably fail to capture such dynamics in their totality.

Finally, the details of employees’ schedules, perks, fringe benefits, and the organization of the workplace are central to firms’ management of both their costs and productivity. Yet data on many dimensions of workers’ relationships with their employers are incomplete, if not entirely lacking. Consequently, empirical evidence tends to understate the minimum wage’s negative effects and overstate its benefits.

What Can We Conclude from Recent Research?

Media coverage of minimum wage changes provides a window into the minimum wage research landscape. Changes in states’ minimum wage rates bring news stories on the wage gains workers will receive and the number of workers who are ostensibly poised to receive them. As reported on December 27, 2018, in a headline from USA Today, “From California to New York, States Are Raising Minimum Wages in 2019 for 17 Million Workers.”9 The article does not consider that some of those workers may lose employment under the higher wage. It does not mention how employers might offset the minimum wage’s effects on their costs or how such changes might affect workers’ lives.

Where do the authors of such articles turn for their facts? The USA Today article draws on calculations by the National Employment Law Project (NELP). Similar articles from CBS, NPR, and other news outlets draw on calculations from the Economic Policy Institute (EPI).10 In turn, these organizations cite academic research to support their views.

Minimum wage analyses from NELP and EPI draw on research papers that have challenged the traditional view that minimum wage increases reduce employment. Key research in this vein includes a 2010 paper by Arindrajit Dube, T. William Lester, and Michael Reich;11 a 2011 paper by Sylvia Allegretto, Dube, and Reich;12 a 2017 paper by Allegretto, Dube, Reich, and Ben Zipperer;13 and a 2019 paper by Doruk Cengiz, Dube, Attila Lindner, and Zipperer.14 Each of those papers analyzes a large set of minimum wage changes enacted by U.S. states or the federal government that spans several decades. In every case, the authors conclude that there is no evidence to support the view that minimum wage increases cause job losses. In a recent piece of congressional testimony, Reich used this research to argue that minimum wage increases up to $15 have “no negative employment effects.”15

In addition to influencing policy discussions, the papers previously referenced have been influential within the professional research community. Importantly, these studies are not extreme outliers. A 2016 analysis by Paul Wolfson and Dale Belman found that the estimated effects of minimum wage increases on employment have been, on average, quite small in recent studies.16

At the same time, a great deal of recent research finds that minimum wage increases cause job losses among low-skilled population groups. In the remainder of this section, I discuss four strands of research that fit this description. In the first, a number of papers use the same data to study the same minimum wage changes as the papers referenced previously, but arrive at different conclusions. The second strand of research analyzes more compactly defined episodes of minimum wage increases within the recent experience of U.S. cities and states. A third strand analyzes minimum wage changes using high-quality administrative data from Europe. Finally, I discuss a paper that analyzes a truly randomized experiment involving the imposition of minimum wages in an online labor market.

Research on the Long History of U.S. Minimum Wage Changes

The research most often discussed by U.S. media analyzes over three decades of U.S. state and federal minimum wage changes. In what follows, I focus on the substantive issues at stake in the debate within this strand of research. Readers interested in references to key entries in this debate can find a roadmap in the endnotes.17

Researchers estimate the effects of minimum wage changes by making comparisons between states that increased their minimum wages and states that did not. The goal is to infer whether an increase in minimum wages led to the number of jobs changing differently than it otherwise would have. The key question for evaluating the quality of these analyses is whether the states being compared are “good counterfactuals.” That is, do the states being compared reliably allow us to infer how employment would have changed if states had not increased their minimum wages? Debates between researchers are in large part debates over which approaches to selecting comparisons generate “good counterfactuals” and hence “unbiased estimates.”
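
In stylized form, these comparisons amount to a difference-in-differences calculation. As a sketch (the notation is mine, not taken from any particular study), let E denote employment and write

\[
\hat{\beta} \;=\; \bigl(E_{\text{after}}^{\text{treated}} - E_{\text{before}}^{\text{treated}}\bigr) \;-\; \bigl(E_{\text{after}}^{\text{comparison}} - E_{\text{before}}^{\text{comparison}}\bigr).
\]

The estimate \(\hat{\beta}\) recovers the minimum wage's true effect only if the comparison states' employment change stands in for what would have happened in the treated states absent the increase; that is precisely the "good counterfactual" condition.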

In their 2008 book Minimum Wages, David Neumark and William Wascher summarized existing research as being broadly supportive of the view that minimum wages adversely affect low-skilled workers.18 Card and Krueger’s work notwithstanding, Neumark and Wascher argued that the weight of the evidence implied that minimum wage increases reduce employment. In their own empirical research, Neumark and Wascher have relied on the broadest possible set of comparisons between states that increased minimum wages and states that did not. In contrast, papers finding that minimum wage changes have no effect on employment typically rely on subsets of the available comparisons. Because their comparisons are less selected, Neumark and Wascher’s analyses are less prone to charges of data mining. This makes their approach the natural default unless there is a compelling case that their method would result in systematically biased estimates. Critics of their research argue that such biases do exist and are so severe that Neumark and Wascher’s estimates are not “credible.”

The claim that a scholarly work lacks credibility is a strong one, but does the strength of the evidence match the strength of the claim? The answer is no, because there is remarkably little fire behind the smoke. To date, direct evidence for the strengths and weaknesses of alternative research methods is in surprisingly short supply. In their own terminology, the biases alleged by Dube, Lester, and Reich are “unobserved.” That is, their argument is not built on evidence of specific economic forces that, in their telling, give rise to systematic biases. If anything, states appear to enact minimum wage increases when their labor markets are expanding more rapidly than the labor markets in other states. This will tend to bias analyses toward finding that minimum wage increases have a positive effect on employment, which is the opposite of what Neumark and Wascher’s critics allege.

As Neumark observes in a 2018 review of recent research, papers using a variety of best-practice methodologies have concluded that minimum wage increases reduce employment.19 Indeed, several recent papers use methods that are designed to account for precisely the kind of unobserved forces that Dube, Lester, and Reich claim bias traditional minimum wage research. Two examples that analyze roughly the same history of U.S. minimum wage changes include a 2017 paper by David Powell and a 2012 paper by Yusuf Baskaya and Yona Rubinstein.20 Both papers estimate substantial negative effects of minimum wage increases on teen employment, echoing the traditional research finding.

In summary, the segment of the minimum wage literature that simultaneously analyzes three decades of minimum wage changes remains contentious. Relative to Neumark and Wascher’s early estimation frameworks, some methodologies for accounting for nuanced biases yield smaller estimates, while others yield larger estimates. Because direct evidence in favor of one approach and against others is in short supply, strong conclusions based on this strand of research alone are unwarranted.

Research on Recent U.S. Minimum Wage Changes

The debate described above is difficult to evaluate because key differences between competing studies are opaque. The studies in question attempt to analyze hundreds of distinct events simultaneously. An advantage of this approach is that it may provide evidence for the average effect of minimum wage increases across a broad range of settings. But when estimates are in dispute, a drawback of such an analysis is that it becomes difficult to determine why competing studies of the same events arrive at different conclusions.

A number of recent studies take an alternative approach: they analyze compact historical episodes in isolation. The key benefit of this approach is that differences between studies can be transparently debated with reference to the events surrounding a single historical episode. Transparency of this sort is crucial for evaluating competing studies. For this reason, the approach of focusing on compact historical episodes is standard practice in other areas of economic research, including analyses of major health and tax policy reforms.

My own work on the minimum wage has separately considered two distinct historical episodes. In a recently published work, Michael Wither and I estimate the effects of the federal minimum wage changes enacted during the Great Recession.21 The 2007-2009 federal increases had greater effects in some states than others, depending on the initial level of a state’s minimum wage. We use data that follow individuals over time, which allows us to separate minimum wage workers from workers with moderately higher skills. We find that employment among minimum wage workers declined far more in states that were “fully bound” by the federal minimum wage changes than in states that were not. Notably, employment among moderately higher-skilled individuals does not exhibit this pattern; changes in the employment of these workers were comparable between the two groups of states. This bolsters the case that our analysis is not biased by differences in the severity of states’ underlying recessions. Indeed, housing market indicators reveal that our estimates are more likely to be biased toward finding positive effects of minimum wage increases than negative effects. We estimate that the federal minimum wage increases enacted during the Great Recession reduced employment among low-skilled individuals by hundreds of thousands of jobs.

Like other minimum wage research that has drawn public attention, our work has its detractors. Zipperer replicated the findings Wither and I reported in an earlier version of our paper, but he contested our interpretation and conclusions.22 Wither and I responded to Zipperer’s critiques with a series of additional analyses.23 We leave interested readers to digest the details of this debate by reading the studies themselves.

A number of papers have analyzed state and local minimum wage changes enacted in recent years. In a widely discussed study by researchers at the University of Washington, administrative records from Washington State's unemployment insurance system were used to analyze the effects of a recent series of increases in Seattle's minimum wage.24 The research team found evidence that hours worked by low-wage employees declined substantially in the wake of those increases. Indeed, the decline in hours was large enough to outweigh the wage gains, so these workers' overall earnings fell slightly. Subsequent work by the Seattle team found evidence that employment fell only a little, if at all, for workers with prior experience in low-wage jobs.25 This suggests that employment declined primarily because of reductions in hiring rather than increases in firing.

At this point, readers may be unsurprised to learn that the conclusions of the Seattle minimum wage study are in dispute. Most notably, the study’s initial findings were contested in a memo from Reich to the office of Seattle mayor Ed Murray.26 This memo was complemented by critical analyses by Zipperer and John Schmitt, which were disseminated through the EPI.27 In revisions to their analyses, the Seattle team has responded to several of the initial criticisms leveled against their work. Although they have only modestly revised their original conclusions, it is unclear what economists’ final verdict on this episode will be.

Many U.S. states have enacted substantial minimum wage changes in recent years. The early phases of these changes have been analyzed in a 2017 paper by Radha Gopalan, Barton Hamilton, Ankit Kalda, and David Sovich.28 These authors analyze administrative employment records from Equifax, which allow them to track roughly one million hourly wage workers. Using data from 2011 through 2015, they find that establishments that employ low-wage workers reduced employment following minimum wage increases. This occurred through reductions in hiring rather than layoffs of existing low-wage workers, which is consistent with the findings of the Seattle minimum wage study.

In additional research, Michael Strain and I are analyzing recent minimum wage changes using precommitted research designs.29 That is, to avoid the pitfalls of data mining, we are reporting the results of analyses to which we committed after analyzing data that extended through 2015. Thus far, our estimates suggest that the effects of recent minimum wage changes have been highly varied. The largest of states’ minimum wage increases are negatively associated with employment among those in low-skilled groups. Further, the employment declines associated with large minimum wage changes have grown in magnitude as we have incorporated data from 2016, 2017, and 2018 into our analyses. In contrast, small changes have had modest and possibly positive relationships with employment.

Recent evidence points to important roles for subtle yet conventional labor market forces. That is, the evidence suggests that the dynamics of labor demand are crucial for understanding the minimum wage’s effects. During the Great Recession, for example, a combination of low demand and substantial churn may have set the stage for the relatively sharp effects of the 2007-2009 federal minimum wage increases on employment. In contrast, it may be the case that only large minimum wage changes have large enough effects on firms’ costs to alter their hiring during an economic expansion. When labor markets are tight, firms may effectively ignore small minimum wage increases, enabling such increases to have their intended effects on wages.

Research from European Contexts

A number of recent papers have analyzed minimum wage changes using high-quality administrative data from European countries. Recent country-specific analyses examine Denmark, Greece, Hungary, the Netherlands, Sweden, and Germany. While estimates vary substantially among these analyses, each case provides evidence that firms respond in traditional ways to changes in labor costs.

Claus Kreiner, Daniel Reck, and Peer Skov use Danish administrative data from 2012 to 2015 to analyze the employment effects of an age-specific increase in the minimum wage.30 They find that the higher wage floor applicable to 18-year-olds substantially reduces their employment compared to 17-year-olds, for whom the wage floor is much lower. The employment drop is large enough to ensure that the total earnings of 18-year-olds are no greater than the total earnings of 17-year-olds, despite their higher wage floor.

Constantine Yannelis uses administrative employment records to analyze reductions in Greece’s minimum wage rates.31 The minimum wage changes he analyzes were implemented in 2012 in accordance with International Monetary Fund bailout terms. These wage reductions were disproportionately large for young workers relative to older workers. Yannelis finds that these changes led firms to significantly increase their employment of young workers relative to older workers.

Peter Harasztosi and Attila Lindner analyze a large national minimum wage increase enacted by Hungary.32 They use firms’ administrative tax filings to classify the extent to which each firm was affected and to track changes in firms’ employment over time. Harasztosi and Lindner conclude that roughly 1 in 10 workers affected by Hungary’s dramatic minimum wage increase lost employment. Because the wage increase was quite large, the wage bills of strongly affected firms increased substantially. In this setting, the authors find that the bulk of the minimum wage increase’s costs were borne by consumers through increases in prices.

Jan Kabátek analyzes the Netherlands.33 Like Denmark, the Netherlands sets minimum wage rates that rise significantly with age. Using data that track individuals over time, Kabátek concludes that workers become substantially more likely to lose their jobs in the two months prior to birthdays on which their minimum wage rises. He finds that these individuals gradually return to employment over subsequent months.

Emmanuel Saez, Benjamin Schoefer, and David Seim analyze Swedish payroll tax reductions implemented between 2007 and 2009.34 These tax changes were meant to reduce the cost of young workers to firms. From the perspective of firms, the tax changes were economically similar to a reduction in negotiated wage rates. Using Swedish administrative records, which are renowned for their high quality, the authors found that these tax changes led to substantial increases in the employment of younger workers relative to older workers.

Finally, Marco Caliendo, Carsten Schröder, and Linda Wittbrodt summarize research, including their own work with Alexandra Fedorets and Malte Preuss, on the 2015 introduction of Germany’s statutory minimum wage.35 The German experience was novel because it involved a shift from collectively bargained wages to a statutory minimum wage floor, as opposed to an increase in an existing minimum wage. These authors conclude that the introduction of the minimum wage caused a small reduction in the number of low-wage jobs. Consistent with work on recent U.S. minimum wage changes, employment declines have come primarily through reductions in hiring rather than increases in firing. Among those individuals with jobs, reductions in hours were large enough to ensure that the monthly incomes of low-wage workers changed little.

An Actual Experiment

A final piece of research that deserves emphasis is a 2018 paper by John Horton.36 He analyzes an online labor market in which firms contract with workers for tasks including programming, data entry, and graphic design. In contrast with the papers discussed thus far, Horton identified an opportunity to deploy a randomized controlled trial to study the effects of minimum wage increases. As the designer of the study, he could impose differences in firms' minimum wage requirements through random assignment. He finds that firms make significant shifts in the workers they employ when they are required to pay higher wages. Specifically, they shift away from the least-skilled workers and toward workers who demonstrated higher productivity on past jobs. High minimum wage rates thus reduce the employment opportunities of workers who are less productive.

Does the Evidence Justify the Shift in the Traditional Consensus?

Why has the consensus on minimum wages shifted? This is a difficult question, and any answer is necessarily speculative. In this section I discuss several issues that arguably are underappreciated by the new conventional wisdom.

Mistake 1: An Incomplete Reading of the Recent Research

The new conventional wisdom has focused, to an unwarranted degree, on the debate over the long history of minimum wage changes in the United States — that is, on the research discussed at the beginning of the previous section of this paper. It has paid less attention to other lines of research, in particular recent work from European contexts, including Denmark, Germany, Greece, Hungary, the Netherlands, and Sweden, as well as research that transparently analyzes compact historical episodes in the U.S. experience.

The emphasis of the new conventional wisdom is unfortunate because other lines of research have desirable features. In research on the effects of taxes, unemployment benefits, and other public policy initiatives, three attributes of studies have, with good reason, emerged as standards toward which researchers strive. The first is a preference for data from individual-level administrative records over both aggregate data and survey data. The second is a preference for running experiments whenever possible. The third is an emphasis on implementing transparent research methods.

The research that forms the basis of the new conventional wisdom tends to lack all three of these attributes. Even when these studies’ methods appear transparent and intuitive, opaque choices tend to determine both the sets of events that are studied and the comparisons underlying the estimates. In contrast, the research with which many audiences are less familiar includes truly randomized experiments and makes regular use of transparent methods and individual-level administrative records.

Mistake 2: Shortcomings in the Application of Economic Thinking

In addition to taking a narrow view of the recent literature, the shifting consensus on the minimum wage has roots in several shortcomings in the application of basic economic ideas to real-world markets. The first involves discussions of labor market imperfections. The second involves the fact that there is more to a job than its wage. The third involves the time horizons over which firms can respond to changes in policy.

Conceptions of Perfect Competition vs. Imperfect Competition. In economic theory, the minimum wage’s effects depend on how wages are set within labor markets. If a market is perfectly competitive, then pay aligns perfectly with a worker’s productivity. Under perfect competition, a binding minimum wage is by definition a wage that exceeds some workers’ productivity. In this framework, a binding minimum wage will inevitably cause some workers to be laid off by firms.

Contrast that with models of markets with imperfect competition. The key feature of these models is that market wages are suppressed relative to their perfectly competitive levels — that is, workers are paid less than the value of what they produce. Consequently, in these models it is possible for a minimum wage increase to improve workers’ earnings without excluding them from employment. Firms are willing to pay a minimum wage that exceeds what they would otherwise have paid as long as that wage does not exceed a given worker’s productivity. In discussions of such models, “monopsony” and “frictions” are the jargon with which readers may be increasingly familiar.
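
The logic of wage suppression can be stated precisely using a standard textbook formulation (the notation is mine, offered only as an illustrative sketch). A monopsonistic firm facing a labor supply curve with elasticity \(\varepsilon\) maximizes profit by setting

\[
w \;=\; \frac{\varepsilon}{1+\varepsilon}\,MRP,
\]

where \(MRP\) is the worker's marginal revenue product. The less responsive workers are to wage differences (the smaller is \(\varepsilon\)), the larger the gap between pay and productivity, and hence the more room a wage floor has to raise pay without destroying jobs. As \(\varepsilon\) grows large, the wage approaches \(MRP\) and the model converges to perfect competition.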

The first chapter of Alan Manning's influential 2003 book Monopsony in Motion begins with the following thought experiment: What happens if an employer cuts the wage it pays its workers by one cent?37 Because a penny is very small, the answer is nothing. From this thought experiment, Manning concludes that "it is monopsony, not perfect competition, that is the best simple model to describe the decision problem facing an individual employer."38 This shift in framing is of great consequence. The textbook monopsony model is one in which a modest minimum wage can actually increase employment among low-skilled workers. It is a model in which the minimum wage can be used to combat inefficiencies linked to employer market power.

But the transition from the one-penny thought experiment to a monopsony-centric view of the labor market merits scrutiny. A model’s importance stems from the power of its broad predictive and explanatory content, not from an illusory to-the-penny precision. Whether a competitive or monopsony-centric model is more useful depends on key details of both the labor market and the policy changes one is attempting to understand.

The practical implications of Manning’s thought experiment hinge on the size of the frictions that give firms market power. Workers do not leave their employers over pennies; it costs more than pennies to find a new job. It is the cost of finding a new job that determines the power held by a worker’s employer to set wages.

Both data and intuition suggest that employers wield only modest market power over low-skilled workers. One need only enter a mall, with its food court and retail outlets, to appreciate the large number of employers to which most low-wage workers can potentially apply. Real-world data concur; the value of the time it would take most minimum wage workers to find a competitive job offer is unlikely to exceed $1,000-$2,000.39 For full-time workers, these amounts are equivalent to $0.50-$1.00 in hourly pay. A wage differential of $1 is thus far more likely to lead workers to seek new jobs than the penny from Manning’s thought experiment.
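
The conversion behind those figures is simple arithmetic, assuming roughly 2,000 hours of full-time work per year (a standard benchmark rather than a figure from the studies cited above):

\[
\frac{\$1{,}000 \text{ to } \$2{,}000}{2{,}000 \text{ hours}} \;\approx\; \$0.50 \text{ to } \$1.00 \text{ per hour.}
\]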

Real-world search costs appear to have quite modest implications for the market power employers can exert over workers in low-wage industries, such as food service and retail sales. The facts suggest that the monopsony framework may be useful for analyzing modest minimum wage increases from modest initial levels. But for large minimum wage changes, a model approaching the benchmark of perfect competition should be the more reliable guide.

Fringe Benefits and Other Attributes of Jobs. Many analyses of the minimum wage adopt a narrow view of relationships between workers and employers. Specifically, they simplify the relationship to two factors: wages and employment. In analyses of this sort, the minimum wage’s effect on a worker’s well-being is deceptively simple. If the wage rises and the worker remains employed, naïve models imply that the worker is necessarily better off.

But in practice, when we negotiate with our employers, we appreciate that jobs have many subtle but important characteristics. Work hours can be at the convenience of the worker or at the convenience of the firm. The pace of work can be fast or slow, safer or riskier, and can require more or less mental energy. Compensation can either include or exclude health insurance, retirement contributions, and other benefits. A job’s location can be more or less preferable, and opportunities for advancement (within or outside the firm) can be more or less ample.

All these factors affect both workers' well-being and firms' bottom lines. Most minimum wage commentary sweeps these factors under the rug, but nuanced models recognize that they are central for understanding the minimum wage's effects. Adjustments to nonwage factors are among the most obvious and inexpensive adjustments a firm can make. Reducing noncash compensation and requiring increases in a worker's effort are straightforward ways for employers to align costs and revenues following minimum wage increases. Crucially, actions along these margins will tend to offset any wage increase's effects on a worker's well-being. Because these factors are often unmeasured, our awareness of their importance makes it appropriate to embrace humility regarding the strength of the conclusions we can draw from available data.

Economists have long been aware that a job’s nonwage characteristics can be central to its value to workers. In a 1986 chapter from the Handbook of Labor Economics, Sherwin Rosen observes that the framework of “compensating wage differentials” has been with the economics profession since Adam Smith’s The Wealth of Nations.40 There has recently been a wave of high-quality research on this theme. Several recent papers highlight the value of worker-driven schedules.41 One paper by Nicole Maestas, Kathleen Mullen, David Powell, and others finds that workers are willing to pay substantially for improvements in workplace conditions.42 Complementary research by Isaac Sorkin finds that nonwage aspects of jobs account for a large fraction of total variation in workers’ valuations of jobs among different firms.43

Despite the obvious importance of nonwage factors, research on the extent to which these factors are affected by minimum wage increases is quite limited. Because of data limitations, the primary nonwage factor that can be incorporated into minimum wage studies is whether workers have employer-provided health insurance (EPHI). Analyses of historical minimum wage changes tend to find weak evidence of a relationship between minimum wage increases and EPHI. In contrast, analyses of more recent minimum wage changes tend to find negative effects.44 On a qualitatively different but important margin, papers by Hyejin Ku and by Decio Coviello, Erika Deserranno, and Nicola Persico find that low-productivity workers increase their work effort in the wake of minimum wage increases.45 But little if any evidence exists on a rich set of potentially important margins, including the flexibility of work schedules.

Dynamics. When estimating the effects of minimum wage increases, economists struggle to capture subtleties in the timing with which firms might respond. An example involving the payment-processing technologies in which fast-food chains can invest illustrates several points.

Fast-food chains can choose either employee-operated cash registers or auto­mated kiosks. An important aspect of this choice is that it involves upfront investments in equipment that may depreciate gradually over many years. For new firms, high minimum wages may tip the cost calculation in favor of automated kiosks. New entrants to the fast-food market may thus adopt less labor-intensive business models soon after high minimum wages go into effect. But for continuing firms, the calculation may be quite different. This will be particularly true for those that made investments in standard cash registers prior to a minimum wage increase’s passage. If the minimum wage rises modestly, such firms may continue operating with cash registers until their equipment requires replacement. Consequently, their response to a minimum wage increase might not occur until years after the change has gone into effect. This difference between new entrants and continuing firms highlights that a minimum wage change’s overall effects may unfold gradually.
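
The timing logic can be made concrete with a minimal sketch. The Python snippet below compares a hypothetical kiosk's upfront cost with the present value of the wages it would avoid; every number is an assumption chosen for illustration, not an estimate from any study discussed here.

def pv_of_savings(annual_saving, years, discount_rate):
    """Present value of the labor costs a kiosk would avoid."""
    return sum(annual_saving / (1 + discount_rate) ** t
               for t in range(1, years + 1))

KIOSK_COST = 30_000      # assumed upfront cost of an automated kiosk
HOURS_PER_YEAR = 4_000   # assumed cashier-hours the kiosk would replace
WAGE_INCREASE = 3.00     # assumed minimum wage increase, dollars per hour

savings = pv_of_savings(WAGE_INCREASE * HOURS_PER_YEAR,
                        years=7, discount_rate=0.05)

# A new entrant weighs the kiosk's full cost against these savings and may
# automate immediately. An incumbent that already owns working registers
# treats that equipment as sunk and may rationally wait until it wears out,
# so its measured response arrives years after the wage increase.
print(f"Present value of avoided wage costs: ${savings:,.0f}")
print("New entrant automates now:", savings > KIOSK_COST)

Under these illustrative assumptions, the avoided wages exceed the kiosk's cost, so the new entrant automates at once while the incumbent's response waits on its replacement cycle.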

Economists have little evidence on how firms adjust their capital investments in response to changes in minimum wages. Efforts to study firms' production technologies have to date been indirect. For example, recent studies by Dan Aaronson and Brian Phelan and by Grace Lordan and David Neumark find that minimum wage increases predict declines in employment among workers in occupations whose tasks are readily replaced with technology.46 Related analyses emphasize the productivity of the workers within each occupation. In his randomized experiment in an online labor market, John Horton finds evidence that firms shift from lower-productivity workers toward higher-productivity workers. Lisa Kahn, Jonathan Meer, and I similarly find that recent increases in states' minimum wages predict increases in the average age and education of workers in low-wage occupations.47

Minimum wage changes often come with long lags between the dates when they are legislated and the dates when they are implemented. In an analysis of recent legislative histories, Duncan Hobbs, Michael Strain, and I find that recent state-initiated minimum wage increases had lags averaging six months between the date of their passage and the date a first increase was implemented.48 Lags between the date of legislation and the final date of multistep increases are much longer.

Empirical methods in the minimum wage literature account poorly for lags between legislative activity and implementation. When an increase is signed into law, forward-thinking firms know to take cost implications into account. Some firms may thus change their technologies before a minimum wage increase goes into effect. Firms’ forward-looking responses undermine the ways many economists deploy statistical tests to estimate a minimum wage change’s effects. When estimating those effects, economists worry that their estimates will be biased if the labor markets in states that enact minimum wage increases were trending differently than the labor markets in other states. Unfortunately, these differential trends cannot easily be distinguished from forward-looking responses of firms. The standard practice in recent research has been to lump these phenomena together — that is, forward-looking responses have been conflated with “divergent pre-existing trends.” In turn, they are assumed to be evidence that estimates are likely to be biased. Standard practice thus biases researchers against detecting negative effects of minimum wage increases on employment.
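
A simple decomposition illustrates the direction of this bias (the notation is mine, offered as a sketch rather than a result from the literature). Suppose the minimum wage's true effect on employment is \(\beta\), and forward-looking firms shift a share \(a\) of their response into the window between a law's passage and its implementation. A method that treats the pre-implementation period as the untreated baseline then recovers only

\[
\hat{\beta} \;\approx\; (1-a)\,\beta,
\]

because the portion \(a\beta\) has already been absorbed into the baseline as an apparent "pre-existing trend." The larger the anticipatory share, the more the measured effect shrinks toward zero, even when the true effect is substantial.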

Although this bias remains pervasive in recent minimum wage research, its relevance has been recognized for quite some time. The implications of investments by forward-looking firms were developed in papers by Sorkin and by Aaronson, Eric French, Sorkin, and Ted To.49 A key empirical aspect of these insights was highlighted in work by Jonathan Meer and Jeremy West,50 who show that common techniques for accounting for “divergent trends” may in fact bias analyses toward incorrectly concluding that minimum wages have no effect on employment. These authors show that in some cases this bias can be resolved by analyzing employment growth rather than employment levels. Although Cengiz, Dube, Lindner, and Zipperer have recently criticized the empirical analysis of Meer and West, the theoretical thread connecting the analyses of Meer and West to those of Aaronson, French, Sorkin, and To is unchallenged. The key conceptual point is strongly intuitive and appears to be well founded.

Conclusion: Where Do We Go from Here?

The “Fight for $15” has shifted from the advocacy fringes to the political mainstream. News media increasingly report that a $15 federal minimum wage would benefit low-skilled workers at little cost. This essay pushes against that shift on several grounds: the new conventional wisdom’s reading of recent evidence is incomplete, its grounding in theory is far more limited than its supporters let on, and it ignores significant blind spots in economists’ empirical methods.

Because $15 wage floors have been narrowly and only recently applied, there is no evidence to support the sweeping claim that a $15 federal minimum wage would benefit disadvantaged households at little cost. This is particularly true when we consider regions where low housing and labor costs support the social and labor market integration of both immigrants and low-skilled native-born workers. More than doubling the minimum wage, from $7.25 to $15.00, risks radically altering the entry-level opportunities on which these individuals rely.

Recent minimum wage changes have been substantial, with scheduled increases approaching 70 percent of the initial minimum wage in several states. Large differences in states’ minimum wage policies have now been sustained for several years. Recent experience may thus provide the best opportunity in decades to learn about the medium-run effects of substantial minimum wage changes. As data on recent labor market developments pour in, the next several years will be an exciting time for both minimum wage research and minimum wage researchers.

Notes:

1 Editorial Board, “The Right Minimum Wage: $0.00,” New York Times, January 14, 1987.

2 Editorial Board, “The Minimum Wage: Getting to $15,” New York Times, September 4, 2015.

3 M. Desmond, “Dollars on the Margins,” New York Times Magazine, March 8, 2019.

4 Committee on Education and Labor, “Full Committee Hearing: ‘Gradually Raising the Minimum Wage to $15: Good for Workers, Good for Businesses, and Good for the Economy,’” U.S. House of Representatives, February 7, 2019.

5 D. Card and A. B. Krueger, Myth and Measurement: The New Economics of the Minimum Wage (Princeton, NJ: Princeton University Press, 1995).

6 R. Whaples, “Do Economists Agree on Anything? Yes!,” Economists’ Voice, November 2006.

7 Initiative on Global Markets Economic Experts Panel, “Minimum Wage,” Chicago Booth School of Business, February 26, 2013.

8 Initiative on Global Markets Economic Experts Panel, “$15 Minimum Wage,” Chicago Booth School of Business, September 22, 2015.

9 J. Herron, “From California to New York, States Are Raising Minimum Wages in 2019 for 17 Million Workers,” USA Today, December 27, 2018.

10 I. Ivanova, “Five Million U.S. Workers Will Get Raises in 2019,” CBS News, December 31, 2018; and S. Raphelson, “Minimum Wages Rising in 20 States and Several Cities,” NPR, December 30, 2018.

11 A. Dube, T. W. Lester, and M. Reich, “Minimum Wage Effects across State Borders: Estimates Using Contiguous Counties,” Review of Economics and Statistics 92, no. 4 (2010): 945-64.

12 S. Allegretto, A. Dube, and M. Reich, “Do Minimum Wages Really Reduce Teen Employment? Accounting for Heterogeneity and Selectivity in State Panel Data,” Industrial Relations: A Journal of Economy and Society 50, no. 2 (2011): 205-40.

13 S. Allegretto et al., “Credible Research Designs for Minimum Wage Studies: A Response to Neumark, Salas, and Wascher,” Industrial & Labor Relations Review 70, no. 3 (2017): 559-92.

14 D. Cengiz et al., “The Effect of Minimum Wages on Low-Wage Jobs,” Quarterly Journal of Economics, forthcoming.

15 M. Reich, “Likely Effects of a $15 Federal Minimum Wage by 2024,” Policy Report, Center on Wage and Employment Dynamics, Institute for Research on Labor and Employment (Berkeley: University of California, February 7, 2019).

16 P. Wolfson and D. Belman, “Fifteen Years of Research on U.S. Employment and the Minimum Wage,” Tuck School of Business Working Paper no. 2705499, December 20, 2015 (revised December 14, 2016).

17 Dube, Lester, and Reich challenged research by David Neumark and William Wascher. In their 2008 book Minimum Wages (Cambridge, MA: MIT Press, 2008), Neumark and Wascher concluded that, Card and Krueger’s work notwithstanding, the weight of the evidence continued to support the traditional view that minimum wages reduce employment. Dube, Lester, and Reich advanced a case that Neumark and Wascher’s estimates were prone to econometric biases. Allegretto, Dube, and Reich (2011) further advanced this case. Debate over econometric methods for analyzing three decades of U.S. minimum wage changes subsequently intensified. In two papers from 2014, Neumark, Salas, and Wascher responded to the critiques of Neumark and Wascher’s earlier work; see D. Neumark, J. M. Salas, and W. Wascher, “Revisiting the Minimum Wage — Employment Debate: Throwing Out the Baby with the Bathwater?,” Industrial & Labor Relations Review 67, no. 3 (2014): 608-48; D. Neumark, J. M. Salas, and W. Wascher, “More on Recent Evidence on the Effects of Minimum Wages in the United States,” IZA Journal of Labor Policy 3, no. 1 (2014): 24. This work prompted a 2017 rejoinder from Allegretto, Dube, Reich, and Zipperer, which was published alongside a response from Neumark and Wascher; see S. Allegretto et al., “Credible Research Designs for Minimum Wage Studies: A Response to Neumark, Salas, and Wascher,” Industrial & Labor Relations Review 70, no. 3 (2017): 559-92; D. Neumark and W. Wascher, “Reply to ‘Credible Research Designs for Minimum Wage Studies,’” Industrial & Labor Relations Review 70, no. 3 (March 7, 2017): 593-609. Cengiz, Dube, Lindner, and Zipperer further advance the claim that there is no evidence that three decades of U.S. minimum wage increases have at any point reduced employment; see Cengiz et al., “The Effect of Minimum Wages on Low-Wage Jobs,” forthcoming.

18 Neumark and Wascher, Minimum Wages.

19 D. Neumark, “The Econometrics and Economics of the Employment Effects of Minimum Wages: Getting from Known Unknowns to Known Knowns,” National Bureau of Economic Research Working Paper no. 25043, September 17, 2018.

20 D. Powell, “Synthetic Control Estimation beyond Case Studies: Does the Minimum Wage Reduce Employment?,” working paper, RAND Corporation, Labor & Population, Santa Monica, CA, July 2017; and Y. S. Baskaya and Y. Rubinstein, “Using Federal Minimum Wages to Identify the Impact of Minimum Wages on Employment and Earnings across the U.S. States,” working paper, Department of Economics Workshop, University of Chicago, 2012, unpublished, PDF file.

21 J. Clemens and M. Wither, “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers,” Journal of Public Economics 170 (February 2019): 53-67.

22 B. Zipperer, “Did the Minimum Wage or the Great Recession Reduce Low-Wage Employment? Comments on Clemens and Wither (2016),” working paper, Washington Center for Equitable Growth, December 2016.

23 See J. Clemens, “The Minimum Wage and the Great Recession: A Response to Zipperer and Recapitulation of the Evidence,” ESSPRI Working Papers Series no. 20171, June 14, 2017; J. Clemens, “Pitfalls in the Development of Falsification Tests: An Illustration from the Recent Minimum Wage Literature,” ESSPRI Working Papers Series no. 20172, June 14, 2017; and J. Clemens and M. Wither, “Additional Evidence and Replication Code for Analyzing the Effects of Minimum Wage Increases Enacted during the Great Recession,” ESSPRI Working Papers Series no. 20173, June 14, 2017.

24 E. Jardim et al., “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle,” National Bureau of Economic Research Working Paper no. 23532, June 2017.

25 E. Jardim et al., “Minimum Wage Increases and Individual Employment Trajectories,” National Bureau of Economic Research Working Paper no. 25182, October 2018.

26 Michael Reich letter to Seattle Mayor’s Office, Institute for Research on Labor and Employment, June 26, 2017.

27 B. Zipperer and J. Schmidt, “The ‘High Road’ Seattle Labor Market and the Effects of the Minimum Wage Increase,” Economic Policy Institute, June 26, 2017.

28 R. Gopalan et al., “State Minimum Wage Changes and Employment: Evidence from Two Million Hourly Wage Workers,” Social Science Research Network Electronic Journal, January 2017.

29 An early paper in this research is J. Clemens and M. R. Strain, “The Short-Run Employment Effects of Recent Minimum Wage Changes: Evidence from the American Community Survey,” Contemporary Economic Policy 36, no. 4 (October 2018): 711-22. The initial precommitment concept can be found in J. Clemens and M. R. Strain, “Estimating the Employment Effects of Recent Minimum Wage Changes: Early Evidence, an Interpretative Framework, and a Precommitment to Future Analysis,” National Bureau of Economic Research Working Paper no. 23084, January 2017.

30 C. T. Kreiner, D. Reck, and P. E. Skov, “Do Lower Minimum Wages for Young Workers Raise Their Employment? Evidence from a Danish Discontinuity,” Review of Economics and Statistics, forthcoming, https://doi.org/10.1162/rest_a_00825.

31 C. Yannelis, “The Minimum Wage and Employment Dynamics: Evidence from an Age-Based Reform in Greece,” working paper, Royal Economic Society Annual Conference, April 2014.

32 P. Harasztosi and A. Lindner, “Who Pays for the Minimum Wage?,” American Economic Review, forthcoming.

33 J. Kabátek, “Happy Birthday, You’re Fired: The Effects of Age-Dependent Minimum Wage on Youth Employment Flows in the Netherlands,” IZA Discussion Paper no. 9528, November 2015.

34 E. Saez, B. Schoefer, and D. Seim, “Payroll Taxes, Firm Behavior, and Rent Sharing: Evidence from a Young Workers’ Tax Cut in Sweden,” American Economic Review, forthcoming.

35 M. Caliendo, C. Schröder, and L. Wittbrodt, “The Causal Effects of the Minimum Wage Introduction in Germany: An Overview,” IZA Discussion Paper no. 12043, 2018; M. Caliendo et al., “The Short-Run Employment Effects of the German Minimum Wage Reform,” Labour Economics 53 (August 2018): 46-62; and M. Caliendo et al., “The Short-Term Distributional Effects of the German Minimum Wage Reform,” IZA Discussion Paper no. 11246, 2017.

36 J. Horton, “Price Floors and Employer Preferences: Evidence from a Minimum Wage Experiment,” working paper, Leonard N. Stern School of Business, New York University, July 17, 2018, unpublished, PDF file.

37 A. Manning, Monopsony in Motion: Imperfect Competition in Labor Markets (Princeton, NJ: Princeton University Press, March 3, 2003).

38 Manning, Monopsony, p. 3.

39 Unemployment insurance data reveal that the typical unemployment spell lasts roughly 10 weeks. See Federal Reserve Economic Data (website), “Median Duration of Unemployment (UEMPMED),” Federal Reserve Bank of St. Louis. Data in the American Time Use Survey (ATUS) suggest that job-seekers spend just over two hours actively searching for work on days during which they search; see C. Adams, J. Meer, and C. Sloan, “The Minimum Wage and Search Effort,” National Bureau of Economic Research Working Paper no. 25128, October 2018. Surprisingly, the unemployed report spending two hours on searching roughly one day per week. Multiplied by 10 weeks, this suggests that the typical job search entails roughly 20 hours of active search. A more generous estimate might assume two hours of search on five days each week. This suggests 100 hours of search over the course of a 10-week unemployment spell, or 200 hours over a 20-week spell. Because the data imply far fewer days of search per week, this is a strong upper bound on the search time consistent with the ATUS.

40 S. Rosen, “The Theory of Equalizing Differences,” in Handbook of Labor Economics, eds. O. Ashenfelter, P. R. G. Layard (Amsterdam: Elsevier North Holland Publishing Co., 1986) 1: 641-92.

41 One recent paper finds, for example, that real-time flexibility of hours has high value to Uber drivers. See M. K. Chen et al., “The Value of Flexible Work: Evidence from Uber Drivers,” National Bureau of Economic Research Working Paper no. 23296, March 2017; additional research similarly finds that a subset of workers place quite high valuations on flexible work arrangements. See A. Mas and A. Pallais, “Valuing Alternative Work Arrangements,” American Economic Review 107, no. 12 (2017): 3722-59; additional research uses a field experiment to pin down evidence that, conditional on two jobs having the same pay, individuals are more likely to apply for the jobs with the more flexible schedule. See H. He, D. Neumark, and Q. Weng, “Do Workers Value Flexible Jobs? A Field Experiment on Compensating Differentials,” National Bureau of Economic Research Working Paper no. 25423, January 2019.

42 N. Maestas et al., “The Value of Working Conditions in the United States and Implications for the Structure of Wages,” National Bureau of Economic Research Working Paper no. 25204, October 2018.

43 I. Sorkin, “Ranking Firms Using Revealed Preference,” Quarterly Journal of Economics 133, no. 3 (2018): 1331-93.

44 For an example of earlier work, see R. Kaestner and K. Simon, “Do Minimum Wages Affect Nonwage Job Attributes? Evidence on Fringe Benefits,” Industrial & Labor Relations Review 58, no. 1 (2004): 52-70; for examples of more recent work, see J. Clemens, L. Kahn, and J. Meer, “The Minimum Wage, Fringe Benefits, and Worker Welfare,” National Bureau of Economic Research Working Paper no. 24635, May 2018. Two recent conference presentations suggest that other researchers are seeing similar negative correlations between minimum wages and EPHI in recent data from both the American Community Survey and the Current Population Survey. See A. Gooptu and K. Simon, “The Effect of Minimum Wage Laws on Employer Health Insurance: Do Outside Options Matter?,” 39th Annual Fall Research Conference, Association for Public Policy Analysis & Management, November 4, 2017 (conference paper); see also C. Eibner et al., “Do Minimum Wage Changes Affect Employer-Sponsored Insurance Coverage?,” 7th Conference of the American Society of Health Economists, June 11, 2018 (conference paper).

45 H. Ku, “Does Minimum Wage Increase Labor Productivity? Evidence from Piece Rate Workers,” working paper, Department of Economics and CReAM, University College London, April 2018, unpublished, PDF file; and D. Coviello, E. Deserranno, and N. Persico, “Minimum Wage and Individual Worker Productivity: Evidence from a Large U.S. Retailer,” working paper, Workforce Science Project of the Searle Center for Law, Regulation, and Economic Growth, Northwestern University, February 1, 2018, unpublished, PDF file.

46 D. Aaronson and B. J. Phelan, “Wage Shocks and the Technological Substitution of Low-Wage Jobs,” Economic Journal 129, no. 617 (January 2019): 1-34; and G. Lordan and D. Neumark, “People Versus Machines: The Impact of Minimum Wages on Automatable Jobs,” Labour Economics 52 (June 2018): 40-53.

47 J. Clemens, L. Kahn, and J. Meer, “Dropouts Need Not Apply: The Minimum Wage and Skill Upgrading,” working paper, September 3, 2018, unpublished, PDF file.

48 J. Clemens, D. Hobbs, and M. Strain, “A Database on the Passage and Enactment of Recent State Minimum Wage Increases,” IZA Institute of Labor Economics Discussion Papers no. 11748, August 2018.

49 I. Sorkin, “Are There Long-Run Effects of the Minimum Wage?,” Review of Economic Dynamics 18, no. 2 (April 2015): 306-33; and D. Aaronson et al., “Industry Dynamics and the Minimum Wage: A Putty-Clay Approach,” International Economic Review 59, no. 1 (February 2018): 51-84.

50 J. Meer and J. West, “Effects of the Minimum Wage on Employment Dynamics,” Journal of Human Resources 51, no. 2 (2016): 500-22.

Jeffrey Clemens is an associate professor of economics at the University of California, San Diego. His research focuses on health economics, public finance, and the economics of the minimum wage.

Restoring Responsible Government by Cutting Federal Aid to the States

Chris Edwards

The federal government has a large presence in state and local policy activities such as education, housing, and transportation. That presence is facilitated by “grants-in-aid” programs, which are subsidies to state and local governments accompanied by top-down regulations.

Federal aid spending was $697 billion in 2018, which was distributed through an estimated 1,386 separate programs. The number of programs has tripled since the 1980s, indicating that the scope of federal activities has expanded as spending has grown.

Rather than being a positive feature of American federalism, the aid system produces irresponsible policymaking. It encourages excessive and misallocated spending. It reduces accountability for failures while generating costly bureaucracy and regulations. And it stifles policy diversity and undermines democratic control.

Cutting federal aid would reduce federal budget deficits, but more importantly it would improve the performance of federal, state, and local governments. The idea that federal experts can efficiently solve local problems with rule-laden subsidy programs is misguided. Decades of experience in many policy areas show that federal aid often produces harmful results and displaces state, local, and private policy solutions.

This study describes the advantages of cutting federal aid. It discusses 18 reasons why it is better to fund state activities with state revenues rather than with aid from Washington. Shrinking the aid system would improve American governance along many dimensions.

Growth in Federal Aid

Under the Constitution, the federal government was assigned specific limited powers and most government functions were left to the states. To ensure that people understood the limits on federal power, the Framers added the Constitution’s Tenth Amendment: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” The amendment embodies federalism, the idea that federal and state governments have separate policy areas and that proper federal activities are “few and defined,” as James Madison noted in Federalist 45.

The federal government generally kept out of state and local affairs for the first century and a half of the nation. But in recent decades, Congress has increasingly intervened in state and local activities with federal aid or grant programs. The expansion of the aid system has created advantages for elected officials, but it has created costs and few benefits for the public.

Figure 1 shows the number of federal aid programs for state and local governments over the past century.1 In the 19th century, aid to the states was rare other than grants of federal land. In the early 20th century, the number of cash aid programs began growing steadily.2 The biggest change came in the 1960s, when the aid system greatly expanded under President Lyndon Johnson. His administration added hundreds of programs for housing, urban renewal, education, and other local activities. Johnson and other policymakers at the time were optimistic that federal experts could solve virtually any local problem. At the same time, moves to decentralize decisionmaking within Congress empowered members to seek benefits for local activities in their states.3

The optimism of the 1960s was short-lived. President Richard Nixon in 1971 lambasted “the idea that a bureaucratic elite in Washington knows best what is best for people everywhere.”4 Nixon and subsequently President Gerald Ford pursued modest reforms to the aid system by turning narrow grants into broader block grants. After Ford, President Jimmy Carter promised a “concentrated attack on red tape and confusion in the federal grant-in-aid system.”5

In academia, “the mainstream of economic research into fiscal federalism became increasingly critical of federal grants-in-aid in the late 1970s and early 1980s.”6 Also at that time, the Advisory Commission on Intergovernmental Relations (ACIR) was regularly publishing studies about the aid system’s complexity and ineffectiveness.7 The ACIR was a bipartisan body consisting of federal, state, and local officials that produced expert studies on federalism issues. It was abolished in 1996.

President Ronald Reagan came into office criticizing the “confused mess” of federal grants, and he pushed to cut the system under the theme of “New Federalism.”8 He had more success with reforms than his White House predecessors and was able to cut the number of grant programs in his first term by about one-quarter.9

Unfortunately, Reagan’s efforts to trim the federal aid system were later reversed. The number of aid programs rose from 463 in 1990 to 653 in 2000. That increase happened despite promises by Republicans in 1995 to “return power to our states and our people,” as Senate Majority Leader Bob Dole promised, and to “return money, power and responsibility to the states,” as House Budget Committee chair John Kasich remarked.10

The number of federal aid programs jumped to 967 by 2010 and then to 1,386 by 2018. The 2018 figure is based on a new count of aid programs for state and local governments in the Catalog of Federal Domestic Assistance (CFDA).11 The CFDA lists all federal benefit or subsidy programs, but the program count here includes only programs for state, local, and tribal government recipients that were funded in 2018.

Table 1 shows the number of aid programs and spending by federal department. Federal aid spending was $697 billion in 2018 and is expected to jump to $750 billion in 2019.12 Aid programs allocate funds to the states either by mathematical formulas or by a competitive process as project grants.13 Some aid is distributed as a lump sum, while other aid requires recipient states to partly match the federal funding amount.

The largest federal aid program is Medicaid, which accounts for 56 percent of overall aid. Other large aid programs are for highway funding, school breakfasts and lunches, rental housing, and K-12 education. Aside from these, there are many smaller aid programs for a vast range of activities, including rural housing, local police and fire services, nursing workforce diversity, boating safety, indoor radon, arts in education, sport fishing, brownfields redevelopment, healthy marriage promotion, and farmers markets.

All of this federal spending on state and local activities is misguided. Experience has shown that federal aid and related regulations are not effective at solving state and local problems. State and local funding and control of government programs is preferable, as this study discusses.

Cutting aid to the states should be a bipartisan goal. Cuts should appeal to conservative lawmakers because aid programs tend to be bureaucratic, inefficient, and beset by waste. Cuts should also appeal to liberal lawmakers because the aid system undermines democracy, diversity, choice, and local control in government.

The Trump administration has proposed trimming some aid programs, including programs for health, housing, and community development.14 But the administration has also proposed new aid programs for infrastructure, even though infrastructure aid has the same shortcomings as other aid, as discussed below.

Table 2 contrasts two ways of funding state programs: federal aid and state funding. The table essentially summarizes 18 disadvantages of federal aid compared to state funding, and these are discussed in order in the balance of the report. Federal aid distorts government spending levels and spending allocations, and it undermines program efficiency, program quality, and good governance.

In Table 2 and the balance of the report, the term “states” generally refers to both state and local governments.

Effects of Federal Aid

1. Deficit Effect

Supporters of federal aid often talk as if state governments lack resources to pursue spending programs, while the federal government has endlessly deep pockets. But every dollar of federal aid that supports state and local governments ultimately comes from taxpayers who live in the 50 states. There is no special, costless source of money that funds the federal budget.15

It is true that the federal government has a much greater ability to run deficits than state governments, which gives the illusion of deep pockets.16 But the fact that the federal government can run large deficits is an argument against the aid system, not for it. By pushing funding for state activities up to the federal level, the aid system biases American government in favor of imprudent deficit financing.

It is better to fund state spending activities at the state level because state governments must generally balance their budgets and limit their debt issuance.17

2. Politics Effect

The aid system inflates the political benefits of spending and reinforces pro-spending advocacy. With a state-funded program, state policymakers must balance the benefit of the spending with the cost of raising taxes to pay for it. But if a program is partly funded with federal aid, both federal and state policymakers can claim credit for the spending but may only be responsible for part of the tax cost. In this way, aid programs increase the ratio of the political benefits of spending to the tax costs, thus inducing excess spending.

One can notice this political effect when federal aid goes toward a project such as a local transit line or highway improvement. Federal, state, and local politicians all show up for photos at the groundbreaking and issue press releases claiming credit, yet each level of government may only pay part of the cost. Economist Gordon Tullock called this a kind of “double counting” benefit that aid programs provide to politicians.18

Support for aid programs is buttressed by the promotional efforts of multiple levels of government, and aid programs allow for multiple entry points into the legislative process for lobby groups. Even when the federal government pays all a program’s costs, federal policymakers gain from the support of state policymakers and interest groups. In this case, aid programs still provide a “mutual profit for political purposes,” as Tullock noted.19

When federal agencies hand out grants to state and local governments, they coordinate with the related members of Congress so that the members can claim credit. The purpose of more than one-third of press releases from U.S. senators is to claim credit for federal spending in their states.20 Members of Congress dedicate staff to helping local governments get aid, and they hold “grants workshops” in their districts.21 At the same time, employees of federal agencies “make some grant awards strategically in order to maintain or expand political support for their program.”22 Aid programs are a team effort and federal agencies are the quarterbacks.

3. Flypaper Effect

The federal government creates state aid programs because it wants the states to increase spending on activities that federal policymakers think are important. Put bluntly, the purpose of aid is to “drag states into programs they would otherwise not pursue,” notes federalism expert Michael Greve.23 The sections below discuss why that top-down approach to policy is misguided. But we should first ask whether aid programs actually do raise state spending on the targeted activities.

Basic economic theory suggests that states will mainly use federal aid to reduce state taxes or increase other nontargeted spending in their budgets. Money is fungible, and aid is simply like a state receiving a boost in overall income. States will mainly use aid directed at, say, education to reduce state taxes and increase spending on other programs. That is the basic theoretical result for lump-sum or nonmatching aid programs.

However, decades of empirical studies find that this is not what actually happens. Federal aid aimed at a particular activity, such as education, mainly sticks on that target and is only partly reallocated to tax cuts or other spending. This is called the “flypaper effect.” Empirical studies generally find that each aid dollar increases state spending on the targeted activity by about 50 cents or more.24
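To see what the flypaper estimates imply, consider a stylized example (the dollar figures here are illustrative only, and the 50 percent pass-through is the rough lower bound of the estimates cited above). A $10 million federal education grant would raise state education spending by about

\[
0.5 \times \$10 \text{ million} = \$5 \text{ million},
\]

with the remaining $5 million leaking into tax relief and other programs. The pure fungibility benchmark would instead treat the grant like any other addition to state income, so education spending would rise only by the state’s small marginal propensity to spend on education.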

Economists have proposed numerous explanations for the flypaper effect. It may be simply that state policymakers decide that they get more political benefit from boosting spending on targeted activities than from using aid for other purposes.25 Federal aid seems free to state policymakers, so there is no downside to spending all the aid they get. This includes spending it on activities chosen by the federal government that the states themselves view as lower value. Also, federal policymakers add features to programs to induce states to increase spending on the targeted activities.

Many programs include maintenance of effort (MOE) rules, which bar states from reducing state funding of a program when they take federal aid for it. A problem with MOE rules is that they discourage states from finding efficiencies in programs and saving taxpayer money. For example, state-level reforms in Wisconsin allowed local governments to save hundreds of millions of dollars on teacher health insurance plans.26 But federal MOE rules prevented school districts from using the savings to trim their budgets, so schools spent the extra cash on lower-value items.

Another spending dynamic to note is that states put large efforts into finding state costs that can be shifted to the federal government. It is common, for example, for states to hire consulting firms to mine their program databases for people currently receiving state-funded welfare who could be moved onto federally funded welfare. By shifting costs to the deficit-fueled federal budget, these efforts contribute to the overspending problem.

4. Matching Effect

Many federal aid programs include a matching feature to stimulate added state funding of an activity. Since the beginning of the aid system a century ago, a common match has been 50-50, meaning that for every dollar the federal government spends on a program, recipient states must chip in a dollar of their own. When the federal match is open ended, states can endlessly expand programs and draw additional federal cash. Matching aid programs stimulate more state spending than nonmatching programs.27

Medicaid is an open-ended matching aid program. Currently, the federal government pays 60 percent of the overall program costs and states pay 40 percent.28 So, on average, the states can proactively increase spending on Medicaid and send a bill to Washington for about 60 percent of the added costs. Because of this feature, state policymakers have a strong incentive to expand Medicaid eligibility and covered services, and a reduced incentive to cut waste and fraud because only part of such cost savings would go to state taxpayers.
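A back-of-the-envelope calculation makes the incentive concrete (it uses the 60 percent average match noted above; actual match rates vary by state). With a federal match rate of $m = 0.60$, a state bears only $1 - m$ of each Medicaid dollar, so

\[
\text{state share of } \$1 \text{ of spending} = 1 - 0.60 = \$0.40,
\qquad
\text{waste cut needed to save the state } \$1 = \frac{\$1}{1 - 0.60} = \$2.50.
\]

As discussed under waste below, this is why state administrators must eliminate more than two dollars of Medicaid spending to return one dollar to state taxpayers.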

Most matching aid programs use a closed-ended match, meaning that there is a cap on the federal contribution. The spending incentive is not as strong as on open-ended matching programs, but the purpose is the same—to induce states to increase their own funding of the targeted activities.29

Federal policymakers may require a high state match on a program to try to induce more state spending, but if the state match rate is too high it may prompt some states to reject the aid altogether. Grants may also vary in the stringency of MOE rules, and some education programs not only have MOE rules but also “supplement not supplant” rules to buttress state spending levels.

Whether spending is boosted or not, aid programs increase bureaucracy, reduce accountability, create a vehicle to impose costly federal regulations, and produce other harms as discussed below. Instead of federal funding, it makes more sense for state policymakers to directly balance the benefits of a spending program with the state tax costs. Thus, regardless of how much state spending is stimulated by federal aid, this study argues that aid programs are misguided.30

5. Spending Allocations across the States

Supporters of aid hope that federal experts can efficiently allocate funds to high-value activities across the nation. But there is little reason to think that federal officials are better able than state officials to target resources for education, housing, transportation, and other activities.

For one thing, the allocation formulas used in aid programs are blunt tools that do not measure need very well. One study found, for example, that highway aid formulas are biased against states that have larger highway systems and more highway use, and thus biased against states that have greater needs.31 Some states with growing populations consistently get shortchanged. Texas, for example, has accounted for an average of 10 percent of gas taxes paid into the federal highway account over the past decade but has received only 8 percent of the spending from it.32 One study found that the deadweight or inefficiency losses from federal highway aid misallocation amounted to 40 percent of the value of the spending.33

Numerous studies find that politics explains aid allocations better than public-interest theories.34 In theory, aid should be targeted to the neediest states or targeted to fix interstate externalities, such as when one state’s transportation policies affect neighboring states. But according to an Advisory Commission on Intergovernmental Relations (ACIR) study, “the record indicates that federal aid programs have never consistently transferred income to the poorest jurisdictions or individuals. Neither do most existing grants accord with the prescriptions of ‘externality’ theory.”35 And the ACIR noted, “The logrolling style … through which most grant programs are adopted frequently precludes any careful ‘targeting’ of fiscal resources.”36

Summarizing the academic literature, economists Rainald Borck and Stephanie Owings noted that the public-interest view of aid “does not fare well in empirical studies. Most papers find more evidence for politically motivated transfers.”37 Borck and Owings, for example, point to evidence that a disproportionate amount of aid goes to rural and less-populated states.38

One can see this bias with federal aid for airports, which is tilted toward smaller rural airports and away from the largest airports where it would generate the most benefit.39 There has been a similar bias in homeland security aid, whereby rural areas with low terrorism risks have received an unduly large share of the grants, which in the years after 9/11 resulted in much low-value spending.40 This bias is caused by the power of smaller-population states in the U.S. Senate.41 This small-state spending distortion has apparently grown in recent decades because of differences in population growth across the states.42

A large share of federal aid goes toward anti-poverty programs, including Medicaid, Section 8 housing, and Temporary Assistance for Needy Families (TANF). Program supporters want to target resources to the lowest-income parts of the nation. But every member of Congress wants a share of the aid, so anti-poverty programs usually expand into broad-based handouts that subsidize rich and poor congressional districts alike.43 What economist Richard Nathan calls the “spreading effect” of sloshing aid money around for political reasons has always predominated over the desire to help the poorest areas.44

A 1946 study of the aid system by a Senate committee found that the 10 highest-income states received $70 per capita in federal aid, while the lowest-income states received $49 per capita.45 A 1975 study found that “federal expenditures per capita were $1,059 in the nation’s poorest counties … while the counties with above-average incomes received an above average allocation of $1,665.”46

In a major 1981 study, the ACIR concluded that the “Robin Hood principle of fiscal redistribution—‘take from the rich, give to the poor’—has always received much more lip service than actual use in aid distribution… . Federal grant-in-aid dollars are commonly dispersed broadly among states and localities, including the relatively rich and poor alike.”47 And the ACIR reiterated, “The record indicates that federal aid programs have never consistently transferred income to the poorest jurisdictions or individuals.”48

ACIR’s conclusions still hold today. For 2019, the federal budget estimates state-by-state data for $666 billion of federal aid spending.49 By my calculations, the 10 highest-income states received $2,354 per capita while the 10 lowest-income states received $2,068. That pattern holds for many individual aid programs, including Medicaid, Section 8 rental housing, public housing, TANF, and Community Development Block Grants (CDBG).

The website for the CDBG program states that the purpose is to “provide services to the most vulnerable in our communities.”50 But an Urban Institute study found that the program’s allocation of funding to the neediest governments has diminished over time, and it is “uncertain” whether governments “adequately direct funding to low and moderate-income people.”51 The 2020 federal budget said of the CDBG program, “Studies have shown that the allocation formula poorly targets funds to the areas of greatest need.”52

As for Medicaid, its allocation formula is based on state per capita income, so poorer states receive a higher federal match rate. However, the match has encouraged wealthier states to expand Medicaid more than poorer states, so wealthier states end up getting relatively more dollars.53 This sort of adverse result for matching programs has been observed for decades. The 1946 Senate committee found, “as the matching principle came into use, the poorer states often found it impossible to match federal grants to the same extent as the wealthier states.”54

The main federal aid program for disadvantaged K-12 schools (Title I) does provide more aid per capita to the poorest states, but nonetheless much of the funding goes to well-off school districts. A U.S. News and World Report investigation found that “billions of dollars end up in districts that are richer on average, while many of the nation’s poorest districts receive little Title I funding.”55 For example, schools in Shelby County, Tennessee, received $926 per poor child in 2016 in federal aid, but schools in Philadelphia received $2,000 per poor child.

Even when aid programs appear to target need or demand, the outcome is not necessarily efficient. Consider federal disaster aid. Some states—such as Florida and Texas—are hit by many hurricanes and receive more federal disaster aid than other states.56 Disaster aid seems to follow need.

The problem is that federal disaster aid encourages people to live in dangerous places, such as on hurricane-prone seacoasts. Federal subsidies for the seacoasts include funds for disaster rebuilding, beach replenishment, flood control structures, and flood insurance—all of which have encouraged development in risky areas. Partly as a result, the number of Americans living in official flood hazard areas has increased 60 percent since 1970.57 So federal subsidies can have the negative effect of undermining prudent state and local decisionmaking.

In sum, federal aid tends not to be allocated the way that public interest theories suggest it should be. Aid is often allocated bluntly and has never followed the Robin Hood principle consistently, even if that were a good idea.58 Finally, even in cases where aid distribution does seem to match state needs, it may undermine prudent decisionmaking by state policymakers.

6. Spending Allocations within the States

Federal aid warps state and local spending decisions. It induces states to spend more on federally subsidized activities, and less on other activities that state residents may value more. For example, the rapid growth in state Medicaid spending—induced by generous federal matching payments—has likely squeezed out other activities in state budgets.

Urban transit provides another example of how aid warps state budgets. Since the 1970s, federal aid for transit has been mainly for capital costs, not for operations and maintenance. That has induced dozens of cities to purchase systems with big up-front costs, which usually means expensive rail systems rather than cheaper bus systems, even though the latter are usually more efficient, flexible, and safer.59 The number of U.S. cities with rail transit has grown from eight in 1975 to 42 today, and the construction costs of nearly all these new systems were subsidized with federal aid.60

One consequence of the bias toward rail is that many cities are now getting stung by huge rail maintenance costs years after federal aid induced them to build the systems. U.S. transit systems have deferred maintenance costs of more than $90 billion, and systems across the nation are suffering from breakdowns, delays, and safety hazards.61 The New York City and Washington, DC, subway systems, for example, are in poor shape. Yet those cities have been prompted by federal aid to keep expanding their systems rather than ensuring the good performance of the lines they already have.

A 2017 New York Times investigation of the Metropolitan Transportation Authority found lavish spending on new projects—subsidized by federal aid—and at the same time a shocking neglect of subway maintenance. The result has been declining service quality, fires, derailments, and other disasters. The Times noted:

The estimated cost of the Long Island Rail Road project, known as East Side Access, has ballooned to $12 billion, or nearly $3.5 billion for each new mile of track—seven times the average elsewhere in the world. The recently completed Second Avenue subway on Manhattan’s Upper East Side and the 2015 extension of the No. 7 line to Hudson Yards also cost far above average, at $2.5 billion and $1.5 billion per mile, respectively. The spending has taken place even as the M.T.A. has cut back on core subway maintenance.62

Meanwhile, the Washington, DC, metro system is building a $5.8 billion subway line to Dulles airport, with $2.9 billion coming from federal grants and loans.63 That dubious expansion is going ahead even though the system has suffered from appalling maintenance and safety failures in recent years and ridership is declining. Delays plague the system, and there have been crashes and dozens of incidents of smoke in tunnels in recent years.64 It is a similar story with the Massachusetts Bay Transportation Authority, which faces $7 billion in maintenance backlogs, but continues to build new lines.65

A recent boondoggle in Albuquerque, New Mexico, illustrates how federal aid can also encourage cities to spend on ill-suited bus systems. City leaders sprang for an expensive $133 million electric bus system because federal subsidies covered more than half of the costs. But the Los Angeles Times reports that the “project resulted in parts of what’s now Central Avenue being ripped up to host dedicated lanes for the electric buses, which are currently out of commission and have so many problems that [Mayor] Keller freely calls them ‘a bit of a lemon.’ ”66 Residents did not want the buses, local businesses hated them, and dozens of businesses along the dedicated bus route have closed.

Another recent boondoggle is a 20-mile rail project in Honolulu, which has soared in cost from $5 billion to more than $9 billion. The Wall Street Journal reported on some of these problems in 2019:

Honolulu pushed ahead before fully planning the project… . Officials misled the public about the train line’s shaky finances … [and] an audit by the city found HART’s [Honolulu Authority for Rapid Transportation] financial plan in disarray, with hundreds of millions of dollars unaccounted for.67

This wasteful project was likely only approved because of the lure of federal aid secured by Hawaii’s late senator Daniel Inouye.

Federal aid induces state and local governments to make decisions that are divorced from the actual needs of their own citizens. A classic example was the urban renewal or “slum clearing” wave of the mid-20th century, which used billions of federal aid dollars beginning in 1949 to bulldoze poor neighborhoods in favor of grand development schemes.68 A 1963 analysis of these federally driven projects found that “wholesale clearance of slum areas and pillar-to-post relocation of the families who lived there have generated wide discontent. Members of racial and ethnic minorities who have seen the slum buildings they occupied replaced by luxury apartment houses have grown resentful of city planning that rarely seems to make adequate provision for their needs.”69 At the time, urbanist Jane Jacobs said of these projects: “This is not the rebuilding of cities. This is the sacking of cities.”70

One infamous federal-aid project in the early 1980s was the demolition of the Poletown neighborhood of Detroit. The City of Detroit condemned more than 1,300 homes over 465 acres and removed 4,200 people through eminent domain so that General Motors could build a new plant. The city demolished 143 businesses and 16 churches.71 Economist William Fischel argues that the Poletown expropriation would not have happened without hundreds of millions of dollars of federal grants and loans as well as state subsidies.72 Many residents protested, but Ralph Nader noted that citizen activists were “muzzled by the grants machine that Washington provided city governments.”73 Local politicians would be much more cautious before proceeding with grandiose and harmful projects if they had to balance the expected benefits with local tax costs.

The dangling of federal and state money causes cities to make decisions that their own citizens do not want. Fischel, for example, says that grants to cities encourage the excessive use of eminent domain, and he points to the 2005 Kelo v. City of New London case in Connecticut as another example of top-down subsidies inducing a local government to expropriate private property for the sake of developers. Federal and state subsidies prompt city politicians to disenfranchise their own residents and spend on dubious projects that the cities would not pursue if they had to raise their own local funds.

7. Bureaucracy

Experts have been criticizing the large bureaucracy of the aid system for decades. As the system has grown, new programs are overlaid haphazardly on old programs, and few are ever repealed. A 1946 report by a Senate committee found:

The present situation on federal grants to state and local governments is extremely chaotic… . One federal-aid program has been piled on top of another—without sufficient effort to appraise the general effect of federal aid upon state and local activities or to achieve coordination among the innumerable federal-aid programs… . The net effect of our present federal-aid program, which has simply grown like Topsy, is a wild morass of red tape and administrative confusion.74

In 1980, an ACIR report on federalism concluded that the aid system is a “bewildering maze” in which the federal government’s role has become “more pervasive, more intrusive, more unmanageable, more ineffective, more costly, and above all, more unaccountable.”75 At the time, there were 434 aid programs; today there are 1,386.

More recently, the Government Accountability Office (GAO) said, “The federal grant system continues to be highly fragmented, potentially resulting in a high degree of duplication and overlap among federal programs.”76 The auditing agency, for example, identified 80 federal aid programs that provide funding for local economic development.77

Aid programs need legions of federal and state administrators, accountants, consultants, and lawyers to prepare and review applications, draft program plans and procedures, file reports, submit waivers, audit recipients, litigate disagreements, and comply with regulations. The federal rules for each aid program can run to thousands of pages. The Individuals with Disabilities Education Act (IDEA) is a good example. The statute is 94 pages long, while the regulations are more than 1,700 pages long.78 A recent annual report to Congress from IDEA’s administrators is 328 pages of dense text.79 Federal aid programs are not just simple, costless transfers of money to the states.

The federal administrative costs of aid programs range from a few percent of the value of the aid to more than 10 percent. That includes the costs of federal salaries, benefits, travel, office rent, and supplies. For example, federal administrative costs were about

  • 5 percent of the value of the Department of Housing and Urban Development’s aid of $38 billion in 2018;80
  • 7 percent of the value of school lunch and breakfast programs aid of $24 billion in 2018;81
  • 13 percent of the value of the Economic Development Administration’s aid of $299 million in 2018;82 and
  • 18 percent of the value of the federal disaster aid to the states in a typical year.83

On top of federal costs, there are state and local administrative costs. Bureaucracy expert Paul Light estimated that federal grants directly support 1.6 million state and local employees such as schoolteachers.84 In addition, he figured that roughly 4.6 million state and local government jobs exist to carry out federal mandates—both the rules tied to federal aid programs and other regulations for environmental, labor, and other social policies.85

Light’s estimate of 4.6 million may be too high, but there do appear to be millions of state and local government employees tethered to the federal government. Consider that between 1960 and 1980 the aid system and the number of federal social mandates were growing rapidly, and state-local government employment correspondingly doubled from 5.6 million to 11.2 million.86 Then, during the 1980s, aid spending and mandate production slowed and state-local employment in turn was flat.

Consider the large bureaucracy for Community Development Block Grants (CDBGs). The GAO found that local governments spent an average of 17 percent of CDBG funds on administration.87 You can appreciate where the money goes by looking at the State of California’s CDBG webpage.88 It has more than 170 links to forms, documents, and spreadsheets that local governments within the state must deal with for the program—applications, procedure guides, compliance instructions, reporting templates, certifications, demographic analyses, verifications, checklists, training videos, and much more. Note that, as a block grant, the CDBG program is supposed to be a simpler type of grant with fewer rules than normal categorical grants.

Now consider federal aid for K-12 schools, which flows from the federal government to state bureaucracies to local school agencies and then to schools. In a study for Wisconsin, the Badger Institute found that state-level administration consumed about 7 percent of the federal aid flowing to local school agencies.89 In a poll, two-thirds of K-12 school administrators and board members found that the reporting requirements for federal aid programs were “very” or “extremely” “time-consuming.”90

The Badger Institute investigated the funding sources of employee salaries. In Wisconsin’s Department of Public Instruction, for example, 49 percent of the employees are paid with federal funds, while in the Department of Workforce Development, 73 percent are paid with federal funds. Across a number of departments, Badger found that just under one-third of these employees worked solely on handling federal paperwork.91

Competitive grants generate a particularly large amount of bureaucratic waste. That is because state and local agencies must prepare lengthy proposals to request grants, but then many of the requests are denied. For example, in three rounds of TIGER grants the Department of Transportation (DOT) awarded $2.6 billion for 172 projects, but more than 3,000 state and local agencies sent in applications.92 Thus, the efforts of 2,800 or so agencies were wasted.

In 2018, the DOT handed out $1.5 billion in BUILD grants to 91 out of 851 applicants. The DOT said that BUILD “applications were evaluated by a team of 222 career staff in the department.”93 One of the winning projects was a $14 million grant to widen Highway 157 near Cullman, Alabama. A local newspaper noted, “Mayor Woody Jacobs said a lot of time and expertise was used to prepare the grant application.”94 Another city official said, “It is a critical need that’s been important to us a long time.”95 But if that is true, then Alabama should have funded the project itself.

The Obama administration handed out $4.3 billion in Race to the Top school grants. In the first round, just 2 of the 40 states that applied received aid, and in the second round just 10 of 30 states received aid.96 The state applications for Race to the Top were generally more than 600 pages long, which would have required large teams of state employees to complete.97

Finally, consider the federal Assistance for Arts Education Development and Dissemination program. In 2018, it awarded $12 million to school boards in 22 grants out of 96 applications received.98 Each application was more than 50 pages in length.99 That is a large paperwork effort for a small amount of federal money.

In sum, funding state and local government programs from Washington adds a substantial bureaucratic cost that would be avoided if state and local governments funded their own programs.

8. Waste

Many federal aid programs suffer from high levels of waste, fraud, and abuse. State administrators have little incentive to reduce such costs because the funds come “free” from Washington. At the same time, members of Congress have little incentive to reduce waste in aid programs because all federal spending in their districts is generally seen as a political positive.

The largest aid program, Medicaid, has huge amounts of fraudulent and erroneous spending, referred to as “improper payments.” The GAO estimates that $37 billion in Medicaid spending in 2017 was improper, which was 10 percent of the program’s total cost.100 As a matching program, the incentive for state administrators to reduce Medicaid waste is low because they would need to find more than two dollars of waste to save state taxpayers one dollar. Indeed, the states themselves abuse Medicaid with dubious schemes to inflate the matching dollars they receive from Washington.101

The school lunch and breakfast programs are subject to widespread abuse, with families taking benefits they are not eligible for. The improper payment rate for school lunches is 16 percent and for breakfasts is 25 percent.102 Local governments do little verification of recipient eligibility because they have no incentive to.103 Indeed, school administrators have been caught illegally inflating the number of children receiving benefits.104 When federal auditors have examined applications in detail, they have found that about half of them claim excessive benefits.105

Government infrastructure funded by federal aid is plagued by cost overruns. Boston’s Big Dig highway project more than quadrupled in cost from $2.6 billion to $14.6 billion, of which $8.5 billion came from the federal government.106 Cost overruns are common on small projects as well. In Arlington, Virginia, the local government built a single bus shelter that cost $1 million, whereas a “typical bus shelter costs between $10,000 and $20,000,” noted the Washington Post.107 Arlington chose to build a Taj Mahal bus shelter—with heated floors—because the federal and state governments were paying 80 percent of the costs.108

Urban transit has suffered from bloated costs since the 1960s when federal aid began and private systems were taken over by city governments. Construction cost overruns have averaged 43 percent on 64 major rail projects tracked by the federal government since 1990.109 With respect to operating costs, excessive union pay in transit systems has been sustained by large subsidies, while productivity has plunged. Transit trips per operating employee across U.S. cities fell from about 60,000 in the 1960s to fewer than 30,000 today.110

The unneeded imposition of federal bureaucracy on local infrastructure projects causes delays that push up costs. The GAO points to the “fragmented approach as five DOT agencies with 6,000 employees administer over 100 separate programs with separate funding streams for highways, transit, rail, and safety functions. This fragmented approach impedes effective decision making.”111 New York’s World Trade Center rail station, completed in 2015, doubled in cost from $2 billion to $4 billion. A Wall Street Journal investigation pointed to bureaucratic delays and complexities: “In public and private clashes,” federal, state, and local government agencies “each pushed to include their own ideas, making the site’s design ever more complex, former project officials said. These disputes added significant delays and costs to the transit station.”112

In their 600-page book on fiscal federalism, Robin Boadway and Anwar Shah describe the general perception across countries of the wastefulness of aid from national to subnational governments:

Perceptions of intergovernmental finance are generally negative. Many federal officials believe that giving money and power to subnational governments is like giving whiskey and car keys to teenagers. They believe that grant moneys enable these governments to go on a spending binge and the national government then is faced with the consequences of its reckless spending behaviors.113

The authors are not necessarily saying they agree with these perceptions, just that these are the sorts of views on federal aid they have come across in their studies of numerous countries.

For the United States, such views are well founded. Government programs funded through federal aid tend to be executed inefficiently. State administrators do not treat federal money in a frugal manner, and the involvement of multiple levels of governments in programs adds costs, complexity, and delays.

9. Regulations

The regulations that come part and parcel with federal aid create a great deal of inefficiency. Since the first aid program in 1862 for land-grant colleges, the federal government has imposed on states detailed rules for operating programs and for reporting to Washington. The aid system includes rules that are tied to particular programs, as well as rules that apply to a broad range of programs, which are called cross-cutting regulations. The latter type greatly increased in the 1960s and 1970s as the federal government imposed dozens of labor, environmental, safety, and other social requirements on aid recipients.114

Federalism expert John Kincaid says that during the 1960s and 1970s, the “conditions of aid, mandates, preemptions, and federal court orders experienced unprecedented increases. Consequently, state and local governments took on the mantle of administrative arms of the federal government.”115

The rules tied to federal aid raise state and local costs. For example, Davis-Bacon labor rules require that workers on federally funded construction projects be paid “prevailing wages,” generally meaning higher union wages. These rules increase wage costs on highway projects by an average of 22 percent, while also slowing projects and piling paperwork on contractors.116

Federal environmental rules tied to aid push up construction costs and cause delays. A report for the Obama administration found that the average time to complete federal environmental studies for infrastructure projects increased from 2.2 years in the 1970s to 6.6 years in recent years.117 The number of federal environmental laws and executive orders that transportation projects must comply with increased from 26 in 1970 to about 70 today.118

In education, the Bush administration’s No Child Left Behind (NCLB) law of 2002 imposed many costly rules. To receive NCLB grants, for example, the states had to implement extensive testing structures, create complex measurement systems, and adopt new rules for teacher qualifications. The National Conference of State Legislatures found that the Act’s requirements cost the states about $10 billion more per year than the federal government covered with aid funding.119

Perhaps some NCLB rules made sense for some schools in some states, but the law bluntly imposed a large array of costly rules on schools nationwide. Many education experts argued that NCLB did not just generate bureaucracy, but also caused active harm.120 Teachers and state policymakers revolted against NCLB, and dozens of states passed resolutions and statutes to counter the federal law.

The Obama administration pursued its own micromanagement of the nation’s schools. The 2009 economic stimulus bill provided the administration funding for its Race to the Top grants, which required recipient states to impose all kinds of changes, including—essentially—the adoption of the Common Core national standards.

The administration also used “waivers” on aid programs in a uniquely aggressive manner to micromanage the schools. The states were clamoring for waivers from the costly NCLB rules, so the administration created 18 “sets of policy commitments” that states had to agree to before waivers were granted.121 One of the commitments was, essentially, to adopt Common Core.

Waivers have long been used as a pressure valve to release the states from costly federal rules, but the Obama administration used them for the opposite purpose—to impose new rules on America’s schools. Education scholar Rick Hess said that the Obama administration’s “aggressive approach politicized nearly all that it touched, leaving in its wake unnecessarily divisive national debates over issues like Common Core.”122

A final example of the cost-increasing effect of federal aid concerns the Federal Emergency Management Agency (FEMA) grants for local firefighting agencies, which total more than $600 million a year. The grants fund the employee compensation and capital costs of local fire departments. A few years ago, San Diego was ready to break ground on two new fire stations funded by local revenues. Then the city heard that it could apply for a federal grant to pay for the buildings. The city eventually received the federal aid, but its new stations were far behind schedule and cost $2.2 million more than they would have without the aid because of aid-related regulations.123

10. Management

Federal aid programs tend to be poorly managed by both federal and state governments. Federal policymakers are too distracted to investigate failures and pursue improvements, while state policymakers cannot manage programs effectively because they are tied in federal regulatory knots. The GAO has noted with respect to aid programs that the “sheer number of actors creates immense coordination problems” and that “high costs appear inevitable” in the aid system.124

At the federal level, the huge size and scope of the government overwhelms the ability of lawmakers to oversee programs. At more than $4 trillion, the federal budget is 100 times larger than the average state government budget of about $40 billion. Economist Milton Friedman observed, “Because government is doing so many things it ought not to be doing, it performs the functions it ought to be performing badly.”125 Federal bureaucracy expert Paul Light has found that the number of major federal failures has increased over the past three decades.126

Congress is supposed to oversee the 1,386 aid programs it has enacted, but members do not have the time or the expertise to do so effectively. Committees hold occasional oversight hearings, but most members attend only briefly and make a few perfunctory comments aimed at the home-state media. Members often miss their committee hearings altogether.127

Economist Alice Rivlin observed that with the proliferation of programs, the federal government resembles “a giant conglomerate that has acquired too many different kinds of businesses and cannot coordinate its own activities or manage them all effectively from central headquarters.”128 In markets, business conglomerates are forced to shed low-value activities, but in government there is no similar mechanism.

When the aid system was initially expanding in the early 20th century, lawmakers naïvely thought that federal programs would be superior to state programs. President Woodrow Wilson and other Progressives favored centralization so that experts could plan activities for the nation. Wilson thought that power was too “dispersed” in America and ought to be concentrated.129 Economist and later U.S. senator Paul Douglas was also optimistic about the expansion of aid. In a 1920 essay about federal aid, he said that it “insures relatively economical expenditure of federal funds and prevents their misuse” while being “purely voluntary” for the states.130

In a 1928 book about the growing federal aid system, political scientist Austin Macdonald captured the spirit of the times: “The old line of division between state and national powers is manifestly unsuited to present-day conditions” and the “bewildering patchwork” of state policies is unsatisfactory.131 Diversity is old-fashioned—the modern approach to government management is national standards imposed with “infinite tact and skill” by federal officials, claimed Macdonald.132

Not everyone was convinced. Gov. Albert Ritchie of Maryland pushed back hard against aid, saying in 1925, “the system ought to be abolished, root and branch.”133 The same year, President Calvin Coolidge warned in his State of the Union address that federal encroachment on local governments created the danger of “encumbering the national government beyond its wisdom to comprehend, or its ability to administer” sound policies.134 And in 1926, Coolidge opposed spending $109 million that was budgeted for state aid, saying:

I am convinced that the broadening of this field of activity is detrimental both to the federal and state governments. Efficiency of federal operations is impaired as their scope is unduly enlarged. Efficiency of state governments is impaired as they relinquish and turn over to the federal government responsibilities which are rightfully theirs. I am opposed to any expansion of these subsidies.135

Coolidge turned out to be right. Federal lawmakers have far too much on their plates these days. In his 2014 book on federalism, former U.S. senator James Buckley noted, “Congress’s current dysfunction is rooted in its assumption, over the years, of more responsibilities than it can handle.”136 Rather than focusing on national issues such as defense, members are focused on securing grants to fill hometown potholes. Buckley writes that grants “absorb major portions of congressional time, thereby diverting Congress from its core national responsibilities.”137

Members are focused on the amount of spending in their districts, not on sound program management. In a 2012 report on FEMA grants, then senator Tom Coburn of Oklahoma said that his colleagues are preoccupied with the amount of spending in their states, not on “how the money is spent, or whether it is needed in the first place.”138 State officials are similarly distracted from sound management. Referring to federal aid, political scientist Steven Teles noted that “the multiplicity of overlapping and bewildering federal programs for K-12 education creates a compliance mentality among school leaders … pushing them to focus on staying on the right side of the rules rather than on improving their schools.”139

State policymakers are distracted by the need to lobby the federal government. State governments have long had lobbying offices in Washington, and hundreds of local governments hire Washington lobbying firms.140 The number of local governments hiring federal lobbyists “has been on an upward trend for more than 30 years.”141 State and local leaders do regular “fly-ins” to Washington to twist arms on Capitol Hill.

There are nationwide lobbying groups, such as the National League of Cities; there are regional groups, such as the Northeast-Midwest Institute; and there are state-specific groups, such as the California Institute for Federal Policy Research. All these groups track federal aid and try to increase their share of funding. Some state governments have special state offices that track federal aid, and there is an industry of consulting firms that train people on how to secure federal grants.142

There are also many lobbying organizations representing state and local government employees who rely on federal aid. The National WIC Association, for example, lobbies the federal government on behalf of the 2,000 state and local government agencies that administer the $6 billion Women, Infants, and Children program. And a slew of government-related groups lobbies the federal government to spend more on “economic development” programs, including the National Association of Development Organizations, the National Association for County, Community, and Economic Development, and a dozen others. The federal Economic Development Administration helpfully lists these lobbying groups on its website.143 Federal bureaucracies and these state groups have the same interest in higher federal aid spending. But for state officials, such lobbying distracts from what they should be focused on, which is efficiently managing state and local services.

Federal aid has also undermined efficient state management by creating new layers of government. Thousands of water authorities, public housing authorities, conservation districts, air quality regions, and other government entities have been created as a requirement of receiving federal aid.144 The number of such “special district” governments in the nation increased from 12,000 in 1952 to 35,000 by 2002.145 Transportation aid provides an example of such “capacity building” in government:

[Federal transportation law] requires that a Metropolitan Planning Organization (MPO) be designated for each urbanized area with a population of more than 50,000 people in order to carry out the metropolitan transportation planning process, as a condition of federal aid. As a result of the 2010 decennial Census, 36 new urbanized areas were identified. These areas will either have to establish and staff a new MPO, or merge with an existing MPO.146

The proliferation of such structures has tied the hands of elected state and local policymakers. They are blocked from reallocating funds and restructuring programs because of the rules tied to aid. Federal aid has balkanized state and local governments. The GAO found, for example, that an array of 16 separate federal aid programs for first responders has created fragmented disaster response planning.147 The rise in federal aid has produced disjointed and uncoordinated state and local management.

11. Diversity

Residents of each state may have different preferences for policies on education, highways, transit, and other items. They may have different views on taxes and spending. In America’s federal system, state and local governments can maximize value by tailoring policies to the preferences of their residents.148 At the same time, individuals can improve their own lives by moving to jurisdictions that suit them best. Economist Gordon Tullock noted, “The fact that people can ‘vote with their feet’ and thus sort themselves out into different areas with different collections of public goods is one of the great advantages of federalism.”149

Federal aid and related regulations undermine such beneficial state policy diversity. A good example was the 55-mile-per-hour national speed limit, which was enforced between 1974 and 1995 by federal threats of withdrawing highway aid. Such one-size-fits-all rules destroy value because they ignore state variations in geography, traditions, and resident preferences.

President Reagan’s 1987 executive order on federalism noted, “The nature of our constitutional system encourages a healthy diversity in the public policies adopted by the people of the several states according to their own conditions, needs, and desires. In the search for enlightened public policy, individual states and communities are free to experiment with a variety of approaches to public issues.”150 But the states cannot be free to experiment if Washington is calling the shots.

Reagan was a conservative, but diversity is also a social ideal championed by liberals. It was liberal Supreme Court justice Louis Brandeis who said that with federalism each state can “serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”151 Unfortunately, most policymakers on the left have been strong supporters of the federal aid system even though it undermines diversity and local choice.

Brandeis put his finger on something important—it is less risky to pursue policy experiments at the state level than at the federal level. Federalism expert Adam Freedman notes, “When states are in charge, policy mistakes are localized,” but “when the federal government is in charge, all mistakes are Big Mistakes.”152 By contrast, he writes, with decentralization, “the failures stay local while the successes go national,” as states freely copy good ideas from other states.153

A good example of a Big Mistake was federal aid for high-rise public housing projects in the mid-20th century. Those projects are now widely regarded as a policy disaster.154 The projects bred crime and social dysfunction, and government housing authorities allowed buildings to deteriorate rapidly. Why did many major American cities bulldoze neighborhoods in slum-clearing operations and erect unsightly concrete fortresses for the poor? Because the federal government was paying for it and promoting it.

A more recent example of a Big Mistake generated by federal aid is light-rail transit. Since the 1970s, federal aid has induced dozens of cities to install these expensive systems even though they are less efficient and flexible than buses. In city after city, aid-backed rail systems have had large construction-cost overruns, a fraction of the riders originally promised, and severe maintenance problems.155 Many cities would not have made the mistake without subsidies from Washington. Instead, they would have likely explored other transportation options better tailored to local circumstances.

12. Timeliness

Dependence on federal aid causes delays in state and local projects, such as infrastructure upgrades. Governments may stall needed projects for years as they wait for federal grants to be approved. Then, after aid is received, aid-related regulations can raise costs and delay completion.

Charleston, South Carolina, has long needed to dredge its seaport to accommodate larger ships. Completion of the project is crucial to the state’s economy, but it moved slowly while the state waited for federal funding.156 The federal government finally kicked in money for the dredging in 2017. A local news source reported:

The Charleston Harbor deepening project has been allocated $17.5 million in federal funding, enabling construction to begin… . The project will deepen Charleston Harbor to 52 feet. It is estimated to cost $509 million; the state already set aside $300 million for it. The federal dollars bring the full amount of allocated funds to $317.5 million—roughly $192 million short of the total cost. The federal dollars are crucial, though; construction could not begin this year without them. “The significance of this funding for the timeline of our deepening project cannot be overstated—it is tremendous news for Charleston,” S.C. State Ports Authority President and CEO Jim Newsome said in a news release.157

If the federal government withdrew from seaport dredging entirely, state and local governments would proceed with needed projects using their own funding. Other nations, such as the United Kingdom, have shown that seaports can be funded, operated, and dredged privately without subsidies.158 But because much of U.S. infrastructure is dependent on federal subsidies, upgrades and modernization can lag behind the privatized infrastructure elsewhere. As another example, the government-run U.S. air traffic control system lags behind the privatized Canadian system on technology upgrades because of federal funding shortfalls and bureaucratic mismanagement.159

Federal aid and related regulations can impede the response to and recovery from natural disasters.160 FEMA’s main role is to hand out money, but the rules it imposes can slow and even block state, local, and private disaster response efforts. During Hurricane Katrina in 2005, federal supply efforts failed, communications broke down, and federal political appointees were plagued by indecision and confusion about complex federal rules and procedures. FEMA obstructed the relief efforts of charitable groups, businesses, doctors, and others who rushed to New Orleans to help.

A New York Times article during Katrina said there was “uncertainty over who was in charge” and “incomprehensible red tape.”161 Today’s disaster-response system “fractionates responsibilities” across multiple governments, one expert noted.162 Another noted that “during the past 50 years, Congress has created a legal edifice of byzantine complexity to cope with natural disasters.”163 FEMA is an unneeded extra layer of bureaucracy that impedes first responders, who mainly work for state and local governments.

Rebuilding after disasters can also be slowed as communities wait for federal funding. It takes FEMA time to review the thousands of projects submitted to it for approval after storms. Disaster expert James Fossett noted that FEMA “requires local governments to obtain advance approval for each project and pay for each project up front before getting federal reimbursement for their costs, which must be exhaustively documented. These lengthy, complex processes inevitably delay recovery and make it difficult to spend money in a timely fashion.”164

In 2019, $4 billion of federal aid for Texas to rebuild after a 2017 hurricane was delayed by the usual bureaucratic slowness in Washington, which caused Texas politicians to be “up in arms,” according to the Wall Street Journal. The Texas leaders were “increasingly worried that the delay is leaving Gulf Coast communities still recovering from Hurricane Harvey vulnerable to more destruction just as another hurricane season is set to begin.”165 But Texas has a massive $1.7 trillion economy, so the state could easily have afforded to fund the $4 billion of improvements itself, rather than waiting for Washington to act.

13. Freedom

The structure of American government is based on subsidiarity, meaning that “responsibility rests first with the lowest authority, the individual; then, if necessary, with local, state, and finally national officials.”166 At the nation’s founding, that structure “maximized liberty by keeping authority as close to the individual as possible.”167

In discussing how federalism would restrain government power, James Madison said, “A double security arises to the rights of the people. The different governments will control each other; at the same time that each will be controlled by itself.”168

More recently, the idea that federalism undergirds our freedoms was articulated in a 1987 executive order by President Ronald Reagan. The order was aimed at restraining federal overreach and stated: “Federalism is rooted in the knowledge that our political liberties are best assured by limiting the size and scope of the national government… . The people of the States are free, subject only to restrictions in the Constitution itself or in constitutionally authorized Acts of Congress, to define the moral, political, and legal character of their lives.”169

Alas, we have strayed far from the Founders’ vision of a decentralized federation, and even from Reagan’s. The federal government has used aid programs to expand into many areas that should be left to states, businesses, charities, and individuals. That expansion is creating a top-down bureaucratic society that is alien to American traditions. Cutting federal aid and related regulations would reverse the tide. It would expand freedom by limiting government power and moving its exercise closer to the people.

14. Competition

In his book The Upside-Down Constitution, legal scholar Michael Greve says that the Founders did not have a fully articulated view of how federalism would restrain government.170 Nonetheless, he argues, the Constitution they produced enshrined competitive federalism, which was a powerful restraint mechanism. Most government functions were left to the states, and then the states were put in competition with each other.

The Constitution assigned the federal government specific limited powers, while the states had broader powers and could pursue different policies to fit their needs. At the same time, the Constitution ensured open flows of trade, investment, and migration between the states. It also allowed the states to choose their own tax bases and rates, thus setting up interstate tax competition.

Each state can choose a unique package of taxes and public services. States that do not tailor their policies to match resident needs will lose people and investment to other states. The experiences of different states over time will indicate what works and what does not. Such competitive federalism enhances freedom by creating choice and encouraging the states to be responsive to their residents.

Greve’s book discusses how competitive federalism held sway until the early 20th century but has since been undermined by growing federal aid and regulations that impose conformity. Supporters of federal intervention call it “cooperative federalism,” but Greve calls it “cartel federalism” because it undermines diversity and competition. Like business cartels, cartel federalism has inflated costs and reduced performance.

Cartel federalism has turned the states’ role as “laboratories of democracy” from a positive into a negative for limited government. These days, Congress takes state-level experiments in government expansion and imposes them on the whole nation.171 A good example was the 2010 Affordable Care Act, which was partly modeled on a 2006 Massachusetts healthcare law.

Economist Richard Nathan echoes Greve in observing that a “ratcheting-up theory of U.S. federalism” captures an important pattern in federal-state interactions.172 Government expansion through aid programs is akin to “venue shopping” in the judicial world: advocates find the most favorable jurisdiction to enact a program first, then move to other states, and ultimately create momentum for a federal takeover through an aid-to-state program.173

The aid system replaces healthy interstate policy competition with an unhealthy competition for federal aid dollars. Aid programs often favor some states over others, which creates an uneven playing field. States that receive more aid for highways, airports, and seaports, for example, gain an economic edge over other states. While state competition over policies generally encourages efficiency, state competition over federal handouts generates little more than unproductive lobbying.

15. Democracy

One of the casualties of the growth in federal aid has been democracy. With aid programs, policy decisions are often made by unelected officials in Washington rather than by elected officials locally. Aid programs move decisions away from the nation’s more than 500,000 elected state and local officials to thousands of unknown and inaccessible federal agency employees.

In theory, the 535 elected members of Congress oversee aid programs, but they have delegated much of their power to the federal bureaucracies. If you do not like a policy in your child’s public school, you can voice your concern to local officials. But if the policy was imposed by Washington, you will have a hard time making your concerns known.

Furthermore, the sheer size of the federal government works against democratic involvement. There is empirical evidence that “both citizen influence and effort increase as the size of the government declines.”174 The federal budget is 100 times larger than the average state budget, so federal policymakers can give citizen concerns about any particular program only a fraction of the attention that state policymakers could.

The federal government controls a substantial share of state policy. Federal aid accounts for one-quarter of state and local government revenues.175 Another measure of control comes from a study that looked at the share of all state agencies across the nation that receive at least some federal aid. That share increased from one-third in the mid-1960s to four-fifths today.176

Yet another measure of federal control comes from a large project that analyzed 22 policy areas across the 50 states and the federal government every decade between 1790 and 2010.177 Using these data, John Kincaid found that nearly all policy areas remained exclusively, or almost exclusively, state controlled from 1790 to 1900. But by 2010, none of the 22 areas were exclusively state controlled, and nearly all were a heavy mix of federal and state control. The largest expansion in federal control occurred during the 1960s and 1970s.

Interestingly, a separate study using a similar method looked at Canada and found that since that nation’s founding in 1867, its governmental structure has become slightly more decentralized.178 Today, Canada is a substantially more decentralized federation than the United States, with a larger share of overall taxing and spending at the subnational level.179 Canada has only a handful of federal grants to subnational governments, and they are structured as block grants. The upshot is that centralization is not inevitable in a high-income federal democracy.

In the United States, state leaders do not control a substantial part of their own governments anymore. “Citizens are effectively disenfranchised” because of the aid system, noted former U.S. senator James Buckley.180 A similar view about aid comes from Richard Epstein and Mario Loyola: “When Americans vote in state and local elections, they think they are voting on state and local policies. But often they are just deciding which local officials get to implement the dictates of distant and insulated federal bureaucrats, whom even Congress can’t control.”181

Many state employees really “work for” the federal government because it funds their salaries in full or in part. State agencies know that “even if only a small percent of an employee’s salary or program resources comes from federal aid, loss of that portion can result in a job loss or program cutback.”182 Federal aid is the tail that wags the dog in terms of program control.

Organizations representing state employees funded by federal aid routinely lobby for federal policies counter to the positions of the elected officials of their own states.183 State employee organizations have long been a pro-centralization lobby—state highway officials, for example, were a key lobbying group behind passage of the first federal highway aid bill in 1916.184 The main teachers’ union has pushed for federal subsidies for more than a century.185

Former Nebraska governor Ben Nelson expressed his dismay at the limitations of his office: “I honestly wondered if I was actually elected governor or just a branch manager of the state of Nebraska for the federal government.”186 The U.S. Constitution guarantees to each state a “Republican form of government,” meaning a representative democracy, but that promise is undermined when the states are just “branch managers.”187 In his book on federalism, Adam Freedman says that the rise of federal aid and related regulations is an “assault on democracy because the point of such measures is to coerce states into doing things that their voters do not want, or at least would not be willing to pay for themselves.”188

16. Accountability

Federal aid requirements have spawned the creation and expansion of state and local government agencies. As noted, agencies that rely on aid often have substantial autonomy from state elected officials, and so aid has fragmented state government horizontally.189

At the same time, federal aid has jumbled American government vertically. Originally, the three levels of government were like a tidy layer cake with each layer handling separate functions. Citizens knew whom to praise or blame for policy actions. But with the rise of aid, American government has become like a marble cake with responsibilities mixed across layers.190 Federal, state, and local governments play intermixed roles in such areas as education, housing, and transportation.

In his 1983 budget message, Reagan argued, “During the past 20 years, what had been a classic division of functions between the federal government and the states and localities has become a confused mess.”191 The mess has made it harder for citizens to hold government officials accountable. In the 1780s, one of the concerns of the Anti-Federalists about the U.S. Constitution was the complexity it would add to government. Complex governments “seem to bid defiance to all responsibility … as it can never be discovered where the fault lies,” noted one leading Anti-Federalist.192

The Anti-Federalists were right. Today’s marble cake structure of government allows politicians to point fingers of blame at other levels of government when failures occur. That was clear in the aftermath of Hurricane Katrina in 2005, and it was evident during the water crisis in Flint, Michigan, a few years ago. When every government has a hand in an activity, no government takes responsibility for failures.

Budget expert James Capretta noted that “Medicaid’s current federal-state design also undermines political accountability. Neither the federal government nor the states are fully in charge. As a result, each side has tended to blame the other for the program’s shortcomings, and neither believes it has sufficient power to unilaterally impose effective reforms.”193 That shared design, he concludes, is “the fundamental problem in Medicaid.”194

The ACIR noted that the aid system “has become too big, too broad, and too deep for effective operation or control. Where all responsibilities are shared, no one is truly responsible. And, if everyone is responsible for everything, none can fulfill their obligations.”195

Political scientist Steven Teles coined the word “kludgeocracy” to describe a system in which the “complexity and incoherence of our government often make it difficult for us to understand just what the government is doing.”196 Kludgeocracy, he says, creates a “hidden, indirect, and frequently corrupt distribution of” costs, while aiding “those seeking to extract rents from government because it makes it hard to see just who is benefitting and how.”197 The aid system, Teles says, is a key part of the problem. “The complexity of our grant-in-aid system makes the actual business of governing difficult and wasteful,” he concludes.198

17. Crowding Out

In many policy areas, the federal government’s role appears to be crucial because state and local governments and the private sector are not currently addressing public needs. But that is often the case only because the federal government has partly or fully displaced (crowded out) state, local, and private efforts.

For better or worse, the states have usually led the way on expansions in government services over the past century.199 Modern limited-access highways, for example, were pioneered by the states before the federal government passed the Interstate Highway Act of 1956. The Pennsylvania Turnpike opened in 1940, and its success prompted more than a dozen states to launch their own superhighway programs.200 The idea of weaving together state highways into a larger national system also predated the 1956 federal highway law. State efforts to build interstate highways included the Dixie Highway from the Midwest to Florida, the Lincoln Highway from New York to San Francisco, and the Bankhead Highway from Washington, DC, to San Diego.201

Section 3 discussed the extent to which federal spending either displaces or adds to the amounts that states spend on targeted activities. Federal spending on interstate highways likely did increase overall highway spending initially and only partly crowded out state efforts. But, either way, federal aid for highways has come with negative effects, such as raising construction costs, misallocating investments, and creating bureaucracy.

As a separate matter, a less examined phenomenon is how federal aid induces state and local governments to crowd out or displace the private provision of services. This negative effect of federal aid is clear in the provision of transportation infrastructure.

Federal aid has crowded out private highway bridges. A 1932 survey found that nearly two-thirds of 322 toll bridges in the United States were privately owned.202 But then federal and state governments began handing out subsidies to government-owned bridges during the 1930s, and that put private bridges at a competitive disadvantage, as Robert Poole discusses in Rethinking America’s Highways. Because private bridge owners did not receive subsidies and were already suffering from revenue declines during the Great Depression, many succumbed to government takeovers.

Urban transit systems in most American cities were privately owned and operated until the 1960s, but then the private share started falling rapidly. Of the systems in the 100 largest U.S. cities, the private share fell from 90 percent in 1960 to just 20 percent by the late 1970s.203 The rise of automobiles undermined transit; transit firms had difficulty cutting costs because they were unionized; and local governments resisted allowing transit firms to end unprofitable routes. The nail in the coffin for private transit was the Urban Mass Transportation Act of 1964, which provided federal aid to government-owned bus and rail systems. That encouraged state and local governments to take over private systems, and a century of private transit investment came to an end.204

A similar thing happened in aviation. About half of U.S. airports were privately owned in the early years of commercial aviation in the 1920s and 1930s. The main airports in Los Angeles, Miami, Philadelphia, Washington, DC, and other cities were for-profit business ventures. These airports were successful and innovative, but they lost ground because of unfair government competition: city governments could issue bonds exempt from federal tax to finance their own airports, and government airports paid no taxes while private airports did. The federal government began handing out aid to government-owned airports during the New Deal, and then the Airport Act of 1946 made federal aid to government-owned airports a regular practice. Today, virtually all U.S. commercial airports are in government hands.

Sadly, then, during the 20th century, state and local governments—supported by federal aid—displaced entrepreneurs from major parts of America’s transportation industries. Federal aid for government infrastructure, combined with the tax-free status of government bonds, has created a strong bias in favor of government ownership. The effect of that bias is clear when you consider that the global airport privatization trend of recent decades has mainly bypassed the United States.205

Federal aid has supported the states in crowding out private provision in other areas. The expansion of Medicaid has crowded out private healthcare. Estimates vary, but for roughly every two persons added to the program, private health coverage has fallen by one person.206 Medicaid long-term care aid has induced many families who would otherwise have paid privately to take advantage of government benefits.207

Government-supported schools have long crowded out private schools, and federal aid has exacerbated the problem. School-choice programs are on the rise in many states, but generally parents wanting to escape a poor-quality public school have had to pay private tuition on top of paying taxes to fund the public system. One of the earliest federal aid programs, passed in 1917, subsidized vocational schools—but only schools owned by governments.208 Federal aid’s role in crowding out private education thus goes back more than a century.

As a last example, increasing federal aid for natural disasters may be crowding out state, local, and private efforts. After the 1994 Northridge, California, earthquake, U.S. House and Senate reports concluded that the availability of federal aid had encouraged state and local governments to neglect disaster preparation and mitigation.209 Around the same time, a report from Vice President Al Gore’s “reinventing government” initiative warned that “the ready availability of federal funds may actually contribute to disaster losses by reducing incentives for hazard mitigation and preparedness.”210

In the wake of Hurricane Katrina in 2005, Florida governor Jeb Bush warned against increasing federal intervention. He said, “As the governor of a state that has been hit by seven hurricanes and two tropical storms in the past 13 months, I can say with certainty that federalizing emergency response to catastrophic events would be a disaster as bad as Hurricane Katrina.”211 And, he said, “if you federalize, all the innovation, creativity and knowledge at the local level would subside.”212

When states need help during natural disasters, a better alternative than federal aid is aid from other states. Indeed, the states do help each other with manpower and resources under the Emergency Management Assistance Compact (EMAC), which expedites the legal process of mutual aid. Local governments also share police and fire assets during emergencies, and electric utilities across the nation routinely aid one another with crews and equipment after storms. The EMAC is one of more than 200 interstate compacts in place today.213

When tackling problems that affect multiple states, policymakers should consider state cooperation first before they call for a top-down imposition from Washington. As Governor Bush noted, when the federal government gets involved, it displaces the innovation, creativity, and knowledge that come with nonfederal efforts.

18. Trust

The rise of federal aid and the centralization of power in Washington have coincided with falling trust in the federal government. Public polls show that the share of people who trust the federal government has plunged from about 70 percent in the 1960s to about 20 percent today.214 It is an irony that Americans have grown less fond of the federal government as the number of federal programs ostensibly created to serve them has increased.

Polls find that general anger toward federal policies has increased. A 2015 poll by Pew Research found that 22 percent of Americans feel “angry” about the federal government, and an additional 57 percent or so feel “frustrated” by it, leaving just 18 percent “contented.”215 The anger and the fall in trust may reflect the increasing dysfunction of the federal government as it has expanded.216

The rise in federal aid and top-down regulations has likely contributed to today’s anger and partisan divisions by trying to force policy conformity on a diverse country. The aid system imposes one-size-fits-all policies on the nation when there is no national consensus. The grassroots anger over the attempted imposition of Common Core school standards is a good example of the backlash against enforced conformity.

As John Kincaid noted about the rise of federal intervention into state affairs,

[It] is the root cause of polarization because it has nationalized so many issues, especially sensitive social and cultural issues such as abortion and education that were previously diffused across the fifty state political arenas. The cooperative federalism advanced by the nationalist school of federalism requires a national consensus on such issues, but there is no consensus. Requiring state electorates to implement sometimes hotly contested national policies appears to have considerably exacerbated national conflict in ways that threaten the institutional fiber of the republic.217

Reviving competitive federalism by reducing federal intervention would help heal political divisions. Large majorities of Americans prefer state rather than federal control over education, housing, transportation, welfare, healthcare, and other activities.218 Americans think that state and local governments provide more competent service than the federal government.219 And when asked which level of government gives them the best value for their tax dollars, two-thirds of people say state and local governments and just one-third say the federal government.

For these reasons, public opinion has shifted in recent decades in favor of decentralizing government power.220 Americans support reviving federalism; the hard part is convincing federal policymakers to start returning power to the states and the private sector.

Conclusions

The $750 billion aid system is a roundabout way to fund state and local activities, and one that the deficit-ridden federal government cannot afford. The aid system does not deliver efficient public services; rather, it delivers bureaucracy, overspending, and federal micromanagement. It undermines policy diversity and political accountability.

The states are entirely capable of funding and operating their own programs. President Reagan’s 1987 executive order on federalism noted, “In most areas of governmental concern, the states uniquely possess the constitutional authority, the resources, and the competence to discern the sentiments of the people and to govern accordingly.”221

President Trump’s most recent budget proposed small cuts to federal aid. But that proposed reform provoked a prominent liberal think tank to issue a study defending aid. The study’s first sentence was, “Federal funds that go to state and local governments as grants help finance critical programs and services on which residents of every state rely.”222 But if aid funds “critical” programs, then federal cuts would prompt the states to fill the void with their own programs, and those programs would likely be superior for the reasons discussed.

It is understandable that federal policymakers are eager to try to fix the nation’s many ills. But they should appreciate that the states can handle domestic policies by themselves and that federal intervention is often counterproductive. The optimism of previous decades about the ability of federal aid programs to efficiently solve state and local problems was misguided.

Congress should work with the Trump administration to identify and eliminate low-value federal aid programs. Over the longer run, the aid system should be fully phased out. Americans want more responsive and effective government, and they can get it by devolving power to the states and reviving competitive federalism.

Notes

1. Counts of the number of aid-to-state programs by various sources are somewhat rough. Figure 1 uses counts from the Advisory Commission on Intergovernmental Relations for 1905-1975, the Office of Management and Budget for 1980-2005, the Congressional Research Service for 2010-2015, and my own count for 2018 based on the OMB method. See endnote 11. Emma Wei assisted with the 2018 count.

2. For the early history of aid, see Chris Edwards, “Federal Aid to the States: Historical Cause of Government Growth and Bureaucracy,” Cato Institute Policy Analysis no. 593, May 22, 2007. And see Paul H. Douglas, “The Development of a System of Federal Grants-in-Aid I,” Political Science Quarterly 35, no. 2 (June 1920): 255-71; Austin F. Macdonald, Federal Aid: A Study of the American Subsidy System (New York: Thomas Y. Crowell Company, 1928); and Sam J. Ervin, Jr., “Federalism and Federal Grants-In-Aid,” North Carolina Law Review 43, no. 3 (1965): 487-501.

3. Robert P. Inman, “Federal Assistance and Local Services in the United States: The Evolution of a New Federalist Fiscal Order,” National Bureau of Economic Research Working Paper no. 2283, June 1987. In explaining the growth in aid to states, Inman says, “Congress as an institution for fiscal policy underwent a major transformation in structure from 1969 to 1972, evolving from a legislative body dominated by a few major decision-makers with firm control over fiscal affairs to a largely decentralized forum of individual deal-makers each required to maximize his or her own net gain from legislative decisions.”

4. President Richard Nixon, State of the Union Address, 1971.

5. Memorandum from President Jimmy Carter, September 9, 1977. Quoted in David B. Walker, The Rebirth of Federalism: Slouching toward Washington (New Jersey: Chatham House Publishers, 1995), p. 143.

6. Daniel P. Schwallie, The Impact of Intergovernmental Grants on the Aggregate Public Sector (New York: Quorum Books, 1989), p. 132. There was a shift from the previous view that grants could efficiently solve externalities to the new view that rent-seeking, fiscal illusion, and bureaucratic behaviors better explained the structure of intergovernmental grants.

7. ACIR publications are available at https://digital.library.unt.edu/explore/collections/ACIR.

8. For a discussion of Reagan’s New Federalism, see Editorial Research Reports (CQ Researcher), “Reagan’s New Federalism,” April 3, 1981.

9. The Office of Management and Budget and the Advisory Commission on Intergovernmental Relations have somewhat different historical counts of the number of grants, but in both cases the drop was about one-quarter, mainly resulting from the Omnibus Budget Reconciliation Act of 1981.

10. Quoted in Kenneth Jost, “The States and Federalism: Should More Power Be Shifted to the States?” CQ Researcher, September 13, 1996.

11. The figure for 2018 is based on my analysis of the Catalog of Federal Domestic Assistance (CFDA), available at https://beta.sam.gov (formerly www.cfda.gov). I included programs of type A, B, and C for state, local, and tribal governments, while excluding programs for private-sector recipients. Programs with zero obligations were excluded. Emma Wei assisted with the count. Federal aid program counts should be considered rough, and past counts by the ACIR and OMB differed. The Congressional Research Service provided a count for 2017 of 1,319. See Robert Jay Dilger, “Federal Grants to State and Local Governments: A Historical Perspective on Contemporary Issues,” Congressional Research Service, R40638, May 7, 2018.

12. This is a fiscal year estimate from the Budget of the U.S. Government, FY2020, Analytical Perspectives (Washington: Government Publishing Office, 2019), p. 232.

13. Aid programs can also be categorized as either categorical grants or block grants. Most are categorical grants, which target a narrow range of activities and include detailed rules for states to follow. By contrast, block grants fund a broader range of activities and give states more flexibility.

14. Budget of the U.S. Government, FY2020, Analytical Perspectives (Washington: Government Publishing Office, 2019), chapter 17.

15. A recent study by a prominent liberal think tank arguing against President Trump’s proposed aid cuts said, “State and local governments do not have the funds to replace the magnitude of funds that could be lost through cuts.” Yet the federal government is running a $900 billion deficit and does not “have the funds” either. See Iris J. Lav and Michael Leachman, “At Risk: Federal Grants to State and Local Governments,” Center on Budget and Policy Priorities, March 13, 2017.

16. Federalism expert John Kincaid notes, “Just as grants create the illusion of free money for state and local taxpayers, federal deficit spending encourages state and local officials to try to shift costs to the federal government because it appears to be costless and because state and local officials face comparatively hard budget constraints in the forms of constitutional or statutory tax, expenditure, and borrowing limits.” John Kincaid, “The Eclipse of Dual Federalism by One-Way Cooperation Federalism,” Arizona State Law Journal 49, no. 3 (Fall 2017): 1075. The first book about the new and growing federal aid system, published in 1928, captured the political appeal of federal funding: “The voters have clamored loudly for better standards of service—more and better schools, more and better teachers, more and better roads. At the same time they have voiced no less insistently their demand for lower taxes. State legislators … have cast about for new sources of revenue. One of the richest finds has been the federal treasury.” Austin F. Macdonald, Federal Aid: A Study of the American Subsidy System (New York: Thomas Y. Crowell Company, 1928), p. 5.

17. In addition to legal limits on debt issuance, state budgeting is disciplined by credit ratings on state bond debt. Some states have large unfunded obligations in their worker retirement plans, and so they are not fiscal saints. However, state and local debt and unfunded obligations are a smaller problem than federal government debt and unfunded obligations. Also, some states are quite prudent and have very low debt and unfunded obligations.

18. Gordon Tullock, The New Federalist (Vancouver, Canada: The Fraser Institute, 1994), pp. 74, 128.

19. Tullock, The New Federalist, pp. 74, 128.

20. Dino P. Christenson, Douglas L. Kriner, and Andrew Reeves, “All the President’s Senators,” Legislative Studies Quarterly 42, no. 2 (May 2017): 3.

21. For example, Rep. G. K. Butterfield (D-NC) holds annual grants workshops. See https://butterfield.house.gov/services/grants.

22. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 50.

23. Michael S. Greve, “Big Government Federalism,” Federalism Outlook no. 5, American Enterprise Institute, March 2001.

24. James R. Hines, Jr. and Richard H. Thaler, “Anomalies: The Flypaper Effect,” Journal of Economic Perspectives 9, no. 4 (Fall 1995): 217-26. And see Robert P. Inman, “The Flypaper Effect,” National Bureau of Economic Research Working Paper no. 14579, December 2008. And see Jason Sorens, “Vertical Fiscal Gaps and Economic Performance: A Theoretical Review and an Empirical Meta-analysis,” Mercatus Center, February 2016. There may be a time dimension to the flypaper effect. That is, grants may initially raise state spending, but over the longer term the stimulus may subside. See Nora Gordon, “Do Federal Grants Boost School Spending? Evidence from Title I,” Journal of Public Economics 88 (2004): 1771-92.

25. Robert P. Inman, “The Flypaper Effect,” National Bureau of Economic Research Working Paper no. 14579, December 2008.

26. Mike Nichols, Federal Grant$tanding: How Federal Grants Are Depriving Us of Our Money, Liberty, and Trust in Government—and What We Can Do about It (Wisconsin: Badger Institute, 2018), p. 49.

27. Shama Gamkhar and Wallace Oates, “Asymmetries in the Response to Increases and Decreases in Intergovernmental Grants: Some Empirical Findings,” National Tax Journal 49, no. 4 (December 1996): 501-12.

28. Robin Rudowitz, “Medicaid Financing: The Basics,” Kaiser Family Foundation, December 2016.

29. With a closed-ended grant, the state spending incentive depends on whether spending is below or above the cap amount. If spending is above the cap amount, further increases do not trigger additional funds from Washington.

30. Economists who support federal aid point to two main advantages. They argue that aid may address externalities or spillovers that states may impose on one another and that redistribution is better carried out by the central government. See Wallace E. Oates, “An Essay on Fiscal Federalism,” Journal of Economic Literature 37, no. 3 (September 1999): 1120-49. But to properly address spillover effects, federal planners would need detailed local information that they usually do not have, and they would need to be guided by the public interest, not political pressures. Experience over the past century shows that aid programs are generally not created and designed to address spillovers. Also note that many actual spillovers, such as those relating to interstate water resources, can be handled by interstate compacts rather than federal programs. Regarding redistribution, aid programs do not redistribute resources to low-income states in many cases, even if that were a good idea. On these two theoretical advantages of aid, the ACIR concluded in a major 1981 study on federalism, “The record indicates that federal aid programs have never consistently transferred income to the poorest jurisdictions or individuals. Neither do most existing grants accord with the prescriptions of ‘externality theory.’ ” Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 94. And see pp. 53, 54. Finally, note that any possible advantages of aid need to be balanced by the disadvantages, as discussed in this study. A thorough, cross-country examination of the pros and cons of aid is in Robin Boadway and Anwar Shah, Fiscal Federalism: Principles and Practice of Multiorder Governance (New York: Cambridge University Press, 2009).

31. Pengyu Zhu and Jeffrey R. Brown, “Donor States and Donee States: Investigating Geographic Redistribution of the U.S. Federal-Aid Highway Program 1974-2008,” Transportation 40, no. 1 (January 2013): 203-27.

32. Author’s calculation for 2006 to 2015 of the HTF’s highway account. See Federal Highway Administration, “Highway Statistics 2015,” August 2016, Table FE-221B.

33. This is Inman’s interpretation of Knight’s statistical results. Robert P. Inman, “The Flypaper Effect,” National Bureau of Economic Research Working Paper no. 14579, December 2008; and Brian Knight, “Parochial Interests and the Centralized Provision of Local Public Goods,” National Bureau of Economic Research Working Paper no. 9748, June 2003.

34. Some studies that have found political biases in aid allocations include: Dino P. Christenson, Douglas L. Kriner, and Andrew Reeves, “All the President’s Senators: Presidential Copartisans and the Allocation of Federal Grants,” Legislative Studies Quarterly 42, no. 2 (May 2017); Thomas A. Garrett and Russell S. Sobel, “The Political Economy of FEMA Disaster Payments,” Economic Inquiry 41, no. 3 (July 2003): 496-509; David Albouy, “Partisan Representation in Congress and the Geographic Distribution of Federal Funds,” National Bureau of Economic Research Working Paper no. 15224, August 2009; Pengyu Zhu and Jeffrey R. Brown, “Donor States and Donee States: Investigating Geographic Redistribution of the U.S. Federal-Aid Highway Program 1974-2008,” Transportation 40, no. 1 (January 2013): 203; and Massimiliano Ferraresi, Gianluca Gucciardi, and Leonzio Rizzo, “The 1974 Budget Act and Federal Grants: Exploring Unintended Consequences of the Status Quo,” May 29, 2018, available at SSRN.com.

35. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 94.

36. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System,” p. 106.

37. Rainald Borck and Stephanie Owings, “The Political Economy of Intergovernmental Grants,” Regional Science and Urban Economics 33, no. 2 (2003): 140. Similarly, Robert Inman concluded: “Two alternative hypotheses are examined. The first—that aid is allocated to correct market or political failures in the local public economy or to equalize the provision of meritorious local public goods—generally fails to account for the distribution of federal aid over the past thirty years. The second hypothesis—that aid is allocated to ease the fiscal pressure in the state-local sector when, and only when, it is in the political interests of congressional representatives to do so—is supported by the recent data.” Robert P. Inman, “Federal Assistance and Local Services in the United States,” National Bureau of Economic Research Working Paper no. 2283, June 1987.

38. Rainald Borck and Stephanie Owings, “The Political Economy of Intergovernmental Grants,” Regional Science and Urban Economics 33, no. 2 (2003): 140.

39. Clifford Winston, “On the Performance of the U.S. Transportation System: Caution Ahead,” Journal of Economic Literature 51, no. 3 (September 2013): 790.

40. Dean E. Murphy, “Security Grants Still Streaming to Rural States,” New York Times, October 12, 2004. And see Chris Edwards, “Terminating the Department of Homeland Security,” DownsizingGovernment.org, Cato Institute, November 1, 2014.

41. Richard Johnson, “Weighing the Costs: The Unequal Impact of Equal State Apportionment in the United States Senate,” Nuffield College, Oxford, October 20, 2012.

42. Adam Liptak, “Smaller States Find Outsize Clout Growing in Senate,” New York Times, March 10, 2013.

43. For example, the number of governments receiving Community Development Block Grants has increased over the years. See Tracy Gordon, “Harnessing the U.S. Intergovernmental Grant System for Place-Based Assistance in Recession and Recovery,” Hamilton Project, Brookings Institution, September 2018, p. 8.

44. Quoted in Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 50.

45. Cited in K. Lee, “Apportionment of Federal Grants,” CQ Researcher (formerly Editorial Research Reports), October 16, 1946.

46. A 1975 Congressional Budget Office study cited in Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 49.

47. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 48.

48. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System,” p. 94.

49. Budget of the U.S. Government, FY2020, Analytical Perspectives (Washington: Government Publishing Office, 2019), Table 17-4.

50. “Community Development Block Grant Program—CDBG,” HUD.gov.

51. Brett Theodos, Christina Plerhoples Stacy, and Helen Ho, “Taking Stock of the Community Development Block Grant,” Urban Institute, April 2017, pp. 6, 8.

52. Budget of the U.S. Government, Fiscal Year 2020, Major Savings and Reforms (Washington: Government Publishing Office, 2019), p. 50.

53. Joseph Antos, “The Structure of Medicaid,” in The Economics of Medicaid, ed. Jason J. Fichtner (Arlington, VA: Mercatus Center, 2014), p. 9.

54. Quoted in K. Lee, “Apportionment of Federal Grants,” CQ Researcher (formerly Editorial Research Reports), October 16, 1946.

55. Lauren Camera and Lindsey Cook, “Title 1: Rich School Districts Get Millions Meant for Poor Kids,” U.S. News and World Report, June 1, 2016.

56. Victoria L. Elliott, “Stafford Act Declarations 1953-2016: Trends, Analyses, and Implications for Congress,” Congressional Research Service, R42702, August 28, 2017.

57. Since 1970, the estimated number of Americans living in coastal areas designated as Special Flood Hazard Areas by FEMA increased from 10 million to more than 16 million. See Chris Edwards, “The Federal Emergency Management Agency: Floods, Failures, and Federalism,” DownsizingGovernment.org, Cato Institute, December 1, 2014.

58. It is true, however, that the aid system is funded by the graduated or progressive federal income tax.

59. Randal O’Toole, Romance of the Rails: Why the Passenger Trains We Love Are Not the Transportation We Need (Washington: Cato Institute, 2018), Chapter 13.

60. O’Toole, Romance of the Rails, pp. 165, 168, 215.

61. O’Toole, Romance of the Rails, p. 213. And see the transit section of www.infrastructurereportcard.org.

62. Brian M. Rosenthal, “The Most Expensive Mile of Subway Track on Earth,” New York Times, December 28, 2017. And see Brian M. Rosenthal, Emma G. Fitzsimmons, and Michael LaForgia, “How Politics and Bad Decisions Starved New York’s Subways,” New York Times, November 18, 2017.

63. Lori Aratani and Katherine Shaver, “Trump Budget Plan Would Deal Blow to Washington Region’s Transit; Purple Line at Risk,” Washington Post, March 16, 2017.

64. O’Toole, Romance of the Rails, p. 209.

65. O’Toole, Romance of the Rails, p. 211.

66. Gustavo Arellano, “Albuquerque’s $133 Million Electric Bus System Is Going Nowhere Fast,” Los Angeles Times, February 17, 2019.

67. Dan Frosch and Paul Overberg, “How a Train through Paradise Turned Into a $9 Billion Debacle,” Wall Street Journal, March 22, 2019.

68. The Housing Act of 1949 launched a large federal effort of urban renewal, slum clearing, and public housing projects.

69. W. B. Dickinson Jr., “Urban Renewal under Fire,” CQ Researcher (formerly Editorial Research Reports), August 21, 1963.

70. Jane Jacobs, The Death and Life of Great American Cities (New York: Random House, 1961), p. 4.

71. James T. Bennett, Corporate Welfare: Crony Capitalism That Enriches the Rich (New Brunswick, NJ: Transaction Publishers, 2015), chapter 5.

72. William A. Fischel, “Before Kelo,” Regulation 28, no. 4 (Winter 2005): 32-35.

73. Quoted in James T. Bennett, Corporate Welfare: Crony Capitalism That Enriches the Rich, p. 134.

74. Quoted in K. Lee, “Apportionment of Federal Grants,” CQ Researcher (formerly Editorial Research Reports), October 16, 1946.

75. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” December 1980, Introduction. This is the “In Brief” summary volume.

76. Government Accountability Office, “Federal Assistance: Grant System Continues to Be Highly Fragmented,” GAO-03-718T, April 29, 2003.

77. Government Accountability Office, “Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue,” GAO-11-318SP, March 2011, p. 42.

78. The statute is codified at 20 U.S.C. ch. 33. The regulation page count is mentioned in National Education Association, “NEA Assesses Final IDEA Regulations,” http://www.nea.org/home/18903.htm.

79. U.S. Department of Education, “40th Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act, 2018,” December 2018.

80. This is HUD’s total cost of compensation and purchases, prorated by the share of HUD outlays that went to state aid in 2018. Budget of the U.S. Government, Fiscal Year 2019, Appendix (Washington: Government Publishing Office, 2018).

81. Calculated based on data in Budget of the U.S. Government, Fiscal Year 2019, Appendix (Washington: Government Publishing Office, 2018), p. 159.

82. Calculated based on data in Budget of the U.S. Government, Fiscal Year 2019, Appendix, p. 182.

83. Government Accountability Office, “Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction’s Capability to Respond and Recover on Its Own,” GAO-12-838, September 2012, p. 41.

84. Paul C. Light, “The True Size of Government,” The Volcker Alliance (website), October 2017.

85. Paul C. Light, “Fact Sheet on the New True Size of Government,” Brookings Institution, 2003. And see Paul C. Light, The True Size of Government (Washington: Brookings Institution, 1999), pp. 26-36.

86. From 1960 to 1980, state-local spending from federal aid grew rapidly while state-local spending from state-local own-source revenues grew slowly. See Budget of the U.S. Government, Fiscal Year 2019, Historical Tables (Washington: Government Publishing Office, 2018), Table 14.3. Between 1960 and 1980, state-local spending from own-source revenues increased from 8.4 percent to 9.5 percent of gross domestic product, but state-local spending from federal aid jumped from 0.7 percent to 2.4 percent. For employment data, see U.S. Bureau of Economic Analysis, National Income and Product Accounts, Table 6.5B.

87. Government Accountability Office, “Community Development Block Grants: Program Offers Recipients Flexibility but Oversight Can Be Improved,” GAO-06-732, July 2006, p. 14.

88. “Grants and Funding Program Forms,” California Department of Housing and Community Development, hcd.ca.gov.

89. Mike Nichols, Federal Grant$tanding: How Federal Grants Are Depriving Us of Our Money, Liberty, and Trust in Government—and What We Can Do about It (Wisconsin: Badger Institute, 2018), p. 69.

90. Nichols, Federal Grant$tanding, p. 32.

91. Nichols, Federal Grant$tanding, p. 17.

92. Congressional Budget Office, “Federal Grants to State and Local Governments,” March 2013, p. 31.

93. U.S. Department of Transportation, “U.S. Department of Transportation Secretary Elaine L. Chao Announces $1.5 Billion in BUILD Transportation Grants to Revitalize Infrastructure Nationwide,” December 11, 2018.

94. David Palmer, “Cullman Seeks $14 Million Grant for Alabama 157 Widening,” Cullman Times, July 17, 2018.

95. Palmer, “Cullman Seeks $14 Million Grant for Alabama 157 Widening.”

96. Patrick McGuinn, “From No Child Left Behind to the Every Student Succeeds Act: Federalism and the Education Legacy of the Obama Administration,” Publius: The Journal of Federalism 46, no. 3 (2016): 396.

97. For example, California’s application was 606 pages in length and Colorado’s was 762 pages.

98. This program is CFDA 84.351D. Information on it is at https://innovation.ed.gov.

99. This figure is from a sampling of applications for 2014. Recent award applications are not posted.

100. Government Accountability Office, “Medicaid: Further Action Needed to Expedite Use of National Data for Program Oversight,” GAO-18-70, December 2017.

101. Healthcare provider taxes are a widely criticized example, but there are also other dubious schemes. Regarding provider taxes, see Brian C. Blase, “Medicaid Provider Taxes: The Gimmick That Exposes Flaws with Medicaid’s Financing,” Mercatus Center, February 2016.

102. Government Accountability Office, “School-Meals Programs: USDA Has Enhanced Controls, but Additional Verification Could Help Ensure Legitimate Program Access,” GAO-14-262, May 2014, p. 15.

103. Government Accountability Office, “School-Meals Programs,” p. 9. A scandal in Chicago public schools made this clear. See Monica Eng and Joel Hood, “School Free-Lunch Program Dogged by Abuses at CPS,” Chicago Tribune, January 13, 2012.

104. Eng and Hood, “School Free-Lunch Program Dogged by Abuses at CPS.”

105. U.S. Department of Agriculture, Office of Inspector General, “FNS-National School Lunch and School Breakfast Programs,” April 2015, p. 4.

106. Chris Edwards and Nicole Kaeding, “Federal Government Cost Overruns,” DownsizingGovernment.org, Cato Institute, September 1, 2015.

107. Patricia Sullivan, “Arlington County to Hire Independent Contractor to Review $1 Million Bus Stop,” Washington Post, June 24, 2013.

108. Chris Edwards, “Update on Arlington’s $1 Million Bus Stop,” DownsizingGovernment.org, Cato Institute, June 26, 2013.

109. Randal O’Toole, Romance of the Rails: Why the Passenger Trains We Love Are Not the Transportation We Need (Washington: Cato Institute, 2018), p. 217.

110. O’Toole, Romance of the Rails, p. 141. See also Randal O’Toole, “Charting Public Transit’s Decline,” Cato Institute Policy Analysis no. 853, November 8, 2018, p. 10.

111. Government Accountability Office, “Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue,” GAO-11-318SP, March 2011, p. 48.

112. Eliot Brown, “Complex Design, Political Disputes Send World Trade Center Rail Hub’s Cost Soaring,” Wall Street Journal, September 3, 2014.

113. Robin Boadway and Anwar Shah, Fiscal Federalism: Principles and Practice of Multiorder Governance (New York: Cambridge University Press, 2009), p. 354.

114. David B. Walker, The Rebirth of Federalism: Slouching toward Washington (New Jersey: Chatham House Publishers, 1995), p. 238.

115. John Kincaid, “The Eclipse of Dual Federalism by One-Way Cooperation Federalism,” Arizona State Law Journal 49, no. 3 (Fall 2017): 1070.

116. James Sherk, “Repealing the Davis-Bacon Act Would Save Taxpayers $10.9 Billion,” Heritage Foundation, February 14, 2011.

117. AECOM and Build America Investment Initiative for the Department of the Treasury, “40 Proposed U.S. Transportation and Water Infrastructure Projects of Major Economic Significance,” December 2016, p. 7.

118. Associated General Contractors of Alaska, The Alaska Contractor, Fall 2012, p. 10.

119. National Conference of State Legislatures, “Mandate Monitor,” vol. 6, no. 1, April 1, 2008.

120. Valerie Strauss, “Are States Really Trying to Overcome the Harmful Legacy of No Child Left Behind?,” Washington Post, February 12, 2018.

121. Patrick McGuinn, “From No Child Left Behind to the Every Student Succeeds Act: Federalism and the Education Legacy of the Obama Administration,” Publius: The Journal of Federalism 46, no. 3 (2016): 399.

122. McGuinn, “From No Child Left Behind to the Every Student Succeeds Act,” p. 408.

123. Chris Edwards, “The Federal Emergency Management Agency: Floods, Failures, and Federalism,” DownsizingGovernment.org, Cato Institute, December 1, 2014.

124. Government Accountability Office, “Perspectives on Intergovernmental Policy and Fiscal Relations,” GGD-79-62, June 28, 1979, pp. 2, 9.

125. Milton Friedman, “Why Government Is the Problem,” Hoover Institution Essays in Public Policy no. 39, 1993.

126. Paul C. Light, “A Cascade of Failures: Why Government Fails, and How to Stop It,” Brookings Institution, July 14, 2014.

127. Luke Rosiak, “Many House Members Miss More Than Two-Thirds of Their Committee Meetings,” Washington Examiner, September 29, 2014.

128. Quoted in Kenneth Jost, “The States and Federalism: Should More Power Be Shifted to the States?,” CQ Researcher, September 13, 1996.

129. Quoted in Adam Freedman, A Less Perfect Union: The Case for States’ Rights (New York: Broadside Books, 2015), p. 243.

130. Paul H. Douglas, “The Development of a System of Federal Grants-in-Aid II,” Political Science Quarterly 35, no. 4 (December 1920): 540, 542.

131. Austin F. Macdonald, Federal Aid: A Study of the American Subsidy System (New York: Thomas Y. Crowell Company, 1928), pp. 4, 12.

132. Macdonald, Federal Aid, p. 267.

133. Macdonald, Federal Aid, p. 238.

134. President Calvin Coolidge, State of the Union Address, December 8, 1925.

135. Quoted in George B. Galloway, “Federal Subsidies to the States,” CQ Researcher (formerly Editorial Research Reports), December 13, 1924.

136. James L. Buckley, Saving Congress from Itself: Emancipating the States and Empowering Their People (New York: Encounter Books, 2014), p. xv.

137. Buckley, Saving Congress from Itself, p. xi.

138. Chris Edwards, “The Federal Emergency Management Agency: Floods, Failures, and Federalism,” DownsizingGovernment.org, Cato Institute, December 1, 2014.

139. Steven M. Teles, “Kludgeocracy in America,” National Affairs, Fall 2013.

140. Rebecca Goldstein and Hye Young You, “Cities as Lobbyists,” American Journal of Political Science 61, no. 4 (2017): 864-76. And see Ana Radelat, “State, Local Governments Hire Lobbyists for Influence in DC,” CT Mirror, January 15, 2015. Also see Rick Brundrett, “Millions Spent by S.C. Municipalities on Federal Lobbyists,” The Nerve (website), November 14, 2012.

141. Matt W. Loftis and Jaclyn J. Kettler, “Lobbying from Inside the System: Why Local Governments Pay for Representation in the U.S. Congress,” Political Research Quarterly 68, no. 1 (2014): 194.

142. Management Concepts, for example, offers a couple dozen different courses on aspects of the federal grants process. See www.managementconcepts.com.

143. U.S. Economic Development Administration (website), National Economic Development Organizations.

144. Advisory Commission on Intergovernmental Relations, “Fiscal Balance in the American Federal System,” vol. 1, October 1967, pp. 164, 165, 258.

145. Bureau of the Census, Statistical Abstract of the United States (Washington: Government Publishing Office, 2006), Table 415.

146. U.S. Department of Transportation (website), “Metropolitan Planning Organization (MPO) Database.”

147. Government Accountability Office, “Federal Assistance: Grant System Continues to Be Highly Fragmented,” GAO-03-718T, April 29, 2003, pp. 13-14.

148. Economist Wallace Oates notes that even aside from the possibility of interjurisdictional migration, the optimal level of public services will vary from place to place because preferences vary from place to place. But Oates and other economists favoring aid believe that that factor is balanced by other factors favoring centralized provision. Wallace E. Oates, “An Essay on Fiscal Federalism,” Journal of Economic Literature 37, no. 3 (September 1999): 1124.

149. Gordon Tullock, The New Federalist (Vancouver, Canada: Fraser Institute, 1994), p. 119.

150. Exec. Order No. 12612, 52 Fed. Reg. 41685 (October 26, 1987).

151.New State Ice Co. v. Liebmann, 285 U.S. 262 (1932).

152. Adam Freedman, A Less Perfect Union: The Case for States’ Rights (New York: Broadside Books, 2015).

153. Freedman, A Less Perfect Union, p 235.

154. For example, see D. Bradford Hunt, Blueprint for Disaster: The Unraveling of Chicago Public Housing (Chicago: University of Chicago Press, 2009).

155. Randal O’Toole, Romance of the Rails: Why the Passenger Trains We Love Are Not the Transportation We Need (Washington: Cato Institute, 2018).

156. Ari Ashe, “South Carolina to Fill Federal Gap in Charleston Port Deepening Dollars,” Journal of Commerce, June 7, 2018.

157. Liz Segrist, “Charleston Harbor Deepening Project Allocated $17.5 Million in Federal Funding,” Charleston Regional Business Journal, May 25, 2017.

158. For example, see “Delivering Jobs and Driving Growth,” Associated British Ports (website).

159. Chris Edwards, “Privatizing Air Traffic Control,” DownsizingGovernment.org, Cato Institute, April 8, 2016.

160. Chris Edwards, “The Federal Emergency Management Agency: Floods, Failures, and Federalism,” DownsizingGovernment.org, Cato Institute, December 1, 2014.

161. Scott Shane, “After Failures, Government Officials Play Blame Game,” New York Times, September 5, 2005.

162. James F. Miskel, Disaster Response and Homeland Security: What Works, What Doesn’t (Redwood City: Stanford University Press, 2008), p. 6.

163. Rutherford H. Platt, Disasters and Democracy: The Politics of Extreme Natural Events (Washington: Island Press, 1999), p. 277.

164. James W. Fossett, “A Tale of Two Hurricanes: What Does Katrina Tell Us about Sandy?” Nelson A. Rockefeller Institute of Government, January 15, 2013.

165. Dan Frosch and Rebecca Elliott, “Texas Relief Money Caught in Trump Administration Dispute with Puerto Rico,” Wall Street Journal, April 6, 2019.

166. Roger Pilon, “Federalism, Then and Now,” inFocus Quarterly (Washington: Jewish Policy Center, 2015), p. 4.

167. Pilon, “Federalism, Then and Now,” p. 4.

168. James Madison, Federalist no. 51.

169. Exec. Order No. 12612, 52 Fed. Reg. 41685 (October 26, 1987).

170. Greve discusses errors that James Madison made on the issue. He notes that the Constitutional Convention rejected Madison’s proposal of a federal veto on state laws on three occasions, indicating that the Founders wanted to minimize federal government involvement in state and local affairs. Michael S. Greve, Real Federalism: Why It Matters, How It Could Happen (Washington: American Enterprise Institute Press, 1999), pp. 51-57.

171. Greve, Real Federalism, pp. 195-96.

172. Richard P. Nathan, “Updating Theories of American Federalism,” in Intergovernmental Management for the Twenty-First Century (Washington: Brookings Institution Press, 2008), pp. 13-25.

173. In an interesting article, Chris Pope quotes a 1961 essay by leftist Canadian academic and future prime minister Pierre Trudeau stating, “Socialists must consider federalism as a positive asset… . The drive towards power must begin with the establishment of bridgeheads … allowing dynamic parties to plant socialist governments in certain provinces, from which the seed of radicalism can slowly spread.” Chris Pope, “Degenerate Federalism,” National Review, May 10, 2018.

174. Wallace Oates quoting Robert Inman and Daniel Rubinfeld in Wallace E. Oates, “An Essay on Fiscal Federalism,” Journal of Economic Literature 37, no. 3 (September 1999): 1138.

175. U.S. Bureau of Economic Analysis, National Income and Product Accounts, Table 3.3, https:apps.bea.govitableindex.cfm.

176. Chung-Lae Cho and Deil S. Wright, “Perceptions of Federal Aid Impacts on State Agencies: Patterns, Trends, and Variations across the 20th Century,” Publius: The Journal of Federalism 37, no. 1 (Winter 2007): 111.

177. John Kincaid, “Dynamic De/Centralization in the United States, 1790-2010,” Publius: The Journal of Federalism 49, no. 1 (Winter 2019): 166-93

178. André Lecours, “Dynamic De/Centralization in Canada, 1867-2010,” Publius: The Journal of Federalism 49, no. 1 (Winter 2019): 57-83.

179. Chris Edwards, “Did Canada Steal Our Tenth Amendment?,” Cato at Liberty (blog), Cato Institute, October 18, 2011.

180. James L. Buckley, Saving Congress from Itself: Emancipating the States and Empowering Their People (New York: Encounter Books, 2014), xii.

181. Richard A. Epstein and Mario Loyola, “The United State of America,” The Atlantic, July 31, 2014.

182. John Kincaid, “The Eclipse of Dual Federalism by One-Way Cooperation Federalism,” Arizona State Law Journal 49, no. 3 (Fall 2017): 1074.

183. Kincaid, “The Eclipse of Dual Federalism by One-Way Cooperation Federalism,” p. 1081.

184. The National Association of Highway Engineers wrote the 1916 Federal Aid Road Act, while the American Association of State Highway Officials helped lobby for its passage. See Paul H. Douglas, “The Development of a System of Federal Grants-in-Aid I,” Political Science Quarterly35, no. 2 (June 1920): 255-71.

185. Neal McCluskey, “Cutting Federal Aid for K-12 Education,” DownsizingGovernment.org, Cato Institute, April 21, 2016.

186. Quoted in Stanley Kurtz, “The Politics of the Administrative State,” National Review Online, January 8, 2018.

187. The Founders thought that a republican form of government had popular rule and the rule of law, and was not a monarchy. Edwin Meese, Matthew Spalding, and David F. Forte, The Heritage Guide to the Constitution (Washington: Regnery Publishing, 2005), p. 282.

188. Adam Freedman, A Less Perfect Union: The Case for States’ Rights (New York: Broadside Books, 2015), p. 219.

189. Chung-Lae Cho and Deil S. Wright, “Perceptions of Federal Aid Impacts on State Agencies: Patterns, Trends, and Variations across the 20th Century,” Publius: The Journal of Federalism 37, no. 1 (Winter 2007).

190. The marble cake metaphor was coined by political scientist Morton Grodzins.

191. Ronald Reagan, “Budget Message of the President,” Budget of the U.S. Government, Fiscal Year 1983 (Washington: Government Publishing Office, February 1982), p. M22.

192. Quoting [Maryland] Farmer. Herbert J. Storing, What the Anti-Federalists Were For: The Political Thought of the Opponents of the Constitution (Chicago: University of Chicago Press, 1981), p. 56.

193. James C. Capretta, “A New Safety Net: Medicaid,” American Enterprise Institute, February 2017.

194. James C. Capretta, “Reforming Medicaid,” in The Economics of Medicaid: Assessing the Costs and Consequences, ed. Jason J. Fichtner (Arlington, VA: Mercatus Center, 2014), p. 143.

195. Advisory Commission on Intergovernmental Relations, “The Federal Role in the Federal System: The Dynamics of Growth,” no. A-86, June 1981, p. 95.

196. Steven M. Teles, “Kludgeocracy in America,” National Affairs, no. 17 (Fall 2019): 97-114.

197. Teles, “Kludgeocracy in America.”

198. Teles, “Kludgeocracy in America.”

199. Richard Nathan provides many examples in Richard P. Nathan, “Updating Theories of American Federalism,” in Intergovernmental Management for the Twenty-First Century (Washington: Brookings Institution, 2008). Adam Freedman also provides numerous examples in Adam Freedman, A Less Perfect Union: The Case for States’ Rights (New York: Broadside Books, 2015).

200. Robert W. Poole, Jr., Rethinking America’s Highways: A 21st-Century Vision for Better Infrastructure (Chicago: University of Chicago Press, 2018), p. 7.

201. Poole, Rethinking America’s Highways, p. 35.

202. Poole, Rethinking America’s Highways, p. 40.

203. Randal O’Toole, Romance of the Rails: Why the Passenger Trains We Love Are Not the Transportation We Need (Washington: Cato Institute, 2018), p. 136.

204. Transportation Research Board-National Research Council, Special Report no. 258, Contracting for Bus and Demand-Responsive Transit Services: A Survey of U.S. Practice and Experience (Washington: National Academy Press, 2001), p. 35.

205. Chris Edwards and Robert W. Poole, Jr., “Privatizing U.S. Airports,” DownsizingGovernment.org, Cato Institute, November 28, 2016.

206. Chris Edwards, “Medicaid Reforms,” DownsizingGovernment.org, Cato Institute, May 1, 2018.

207. Mark Warshawsky, “Mark Warshawsky: Millionaires on Medicaid,” Wall Street Journal, January 6, 2014.

208. Paul H. Douglas, “The Development of a System of Federal Grants-in-Aid II,” Political Science Quarterly 35, no. 4 (December 1920): p. 523.

209. Rutherford H. Platt, Disasters and Democracy: The Politics of Extreme Natural Events (Washington: Island Press, 1999), p. 91.

210. Quoted in Platt, Disasters and Democracy, p. 58.

211. Jeb Bush, “Think Locally on Relief,” Washington Post, September 30, 2005.

212. Quoted in House of Representatives, Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina, “A Failure of Initiative,” February 15, 2006, p. 322.

213. See National Center for Interstate Compacts (website).

214. Pew Research Center, “Public Trust in Government: 1958-2017,” December 14, 2017.

215. Pew Research Center, “Beyond Distrust: How Americans View Their Government,” November 23, 2015.

216. An empirical study found “strong evidence that two aspects of government size—transfer payments and regulatory activity—exhibit persistent long-run association with trust in government.” Steven Gordon, John Garen, and J. R. Clark, “The Growth of Government, Trust in Government, and Evidence on Their Coevolution,” John H. Schnatter Institute for the Study of Free Enterprise, June 2017, p. 30.

217. Kincaid, “The Eclipse of Dual Federalism by One-Way Cooperation Federalism,” p. 1089.

218. John Samples and Emily Ekins, “Public Attitudes toward Federalism: The Public’s Preference for Renewed Federalism,” Cato Institute Policy Analysis no. 759, September 23, 2014, pp. 3-4.

219. Samples and Ekins, “Public Attitudes toward Federalism,” pp. 21, 23. And see Peter H. Schuck, Why Government Fails So Often: And How It Can Do Better (Princeton: Princeton University Press, 2014), pp. 95-98.

220. Samples and Ekins, “Public Attitudes toward Federalism,” pp. 3-4.

221. Exec. Order No. 12612, 52 Fed. Reg. 41685 (October 26, 1987).

222. Iris J. Lav and Michael Leachman, “At Risk: Federal Grants to State and Local Governments,” Center on Budget and Policy Priorities, March 13, 2017.

Chris Edwards is director of tax policy studies at the Cato Institute and editor of www.DownsizingGovernment.org.

A Reform Agenda for the Next Indian Government


Swaminathan S. Anklesaria Aiyar

India’s economic reforms since 1991 have largely been a tale of private-sector success, government failure, and institutional erosion. Prime Minister Narendra Modi won the 2014 election with the slogan “Minimum Government, Maximum Governance.” Many mistakenly thought that he would be a radical reformer who would reduce the reach of government and improve the quality of governance. In fact, he has been only an incremental reformer. He has not done much to reduce the heavy hand of the government over the economy. He has done even less to improve the quality of governance or the supply of high-quality public goods. He has eroded the strength and independence of institutions.

India will elect a new Parliament in May 2019, and regardless of which party wins, India needs less government interference and better governance. The next government should shift India’s policy from its current plethora of wasteful, corrupt subsidies toward targeted cash transfers to the deserving; curb the fiscal deficit; privatize several public-sector corporations and banks; liberalize the markets for labor, land, and capital; and roll back Modi’s rising protectionism.

The next government should also improve governance by reforming the moribund legal system; improving educational quality and teacher attendance in schools; improving the provision of basic health and public health services; ensuring security for religious minorities; and ending efforts to subvert independent institutions.

Introduction

When Narendra Modi came to power in the 2014 election, one of his slogans was “Minimum Government, Maximum Governance.”1 This led some analysts to mistakenly view him as a radical free-marketeer in the mold of Margaret Thatcher or Ronald Reagan. Modi turned out to be, at best, an incremental reformer, liberalizing limited parts of the economy in small steps, even while increasing controls on other sectors of the economy. Indeed, the overall thrust of Modi’s five years was characterized by rising welfarism instead of rising economic freedom.2 The quality of governance improved in some areas, such as reducing corruption at the top levels of government. But it worsened in several other areas, from the independence of institutions to mob lynching of Muslims suspected of transporting cows for slaughter.3

Modi failed to minimize government interventions or maximize the quality of governance. Yet those remain eminently worthy goals for the next government, which will come to power after the general election in May 2019.

The Problem of Too Much Government

India has experienced gradual, erratic economic liberalization since 1991. The path of liberalization has often been two steps forward and one step backward, with several sidesteps. The reforms have enabled Indian GDP growth to average more than 7 percent since 2003, qualifying India for the mantle of “miracle economy.” However, the unfinished economic agenda is almost as large as what has been achieved so far.

In its annual Index of Economic Freedom, the Heritage Foundation divides countries into five categories — free, mostly free, moderately free, mostly unfree, and repressed. India comes in as “mostly unfree.” In its 2019 report, which uses data from the second half of 2017 to the first half of 2018, India ranks as low as 129th out of 186 countries.4

In its 2014 report, Heritage ranked India 120th, nine places higher than it is today. India’s freedom index score was 55.7 in 2014 and is down marginally to 55.2 in 2019. That is a sad commentary on the patchy progress (or lack thereof) in the Modi era.5

A more upbeat assessment comes from The Human Freedom Index (published by the Cato Institute, the Fraser Institute, and the Friedrich Naumann Foundation for Freedom), which looks at economic, civil, and personal freedoms. It shows India moving up from the 121st rank in 2014, when Modi came to power, to the 110th rank today.6 The Fraser Institute also measures economic freedom independently. In this respect, India has improved from the 122nd position to the 96th position. However, its index score has gone up only modestly, from 6.23 to 6.63. Any improvement is welcome, but India remains at a low position in freedom rankings.7

India’s performance looks much better in the World Bank’s Doing Business index. On this measure, India’s ranking is up from 142nd place in 2014 to 77th place in 2019.8 However, this has been achieved by the government focusing on reforms in Mumbai and New Delhi, the only two cities from which the World Bank draws data for this index. This cloaks the lack of progress in the rest of the country, and in parameters not measured by the index.

A Cornucopia of Subsidies and Fiscal Deficits

In most democracies, political parties compete for votes mainly on the basis of rival policies. In India they compete mainly by offering subsidies, freebies, and caste-based quotas rather than on broad policy issues. This is best illustrated by the state of Tamil Nadu, the champion of freebies. In the last state election in 2016, the winning party, the All India Anna Dravida Munnetra Kazhagam, offered free cellphones for ration-card holders; free laptops with internet connections for 10th- and 12th-grade students; maternity assistance of $257 (18,000 rupees [Rs]); an increase in maternity leave from six to nine months; one hundred free electricity units every two months; a waiver of all farm loans, at a cost of $5.7 billion (Rs 399 billion); an increase in assistance to fishermen to $71 (Rs 5,000); a 50 percent subsidy for women to buy mopeds or scooters; an eight-gram gold coin for women getting married; free women’s hygiene kits, including sanitary napkins; and much more. Earlier competition between parties had already yielded 20 kilograms of free rice per month, free color TVs, a free mixer-grinder, and a free fan per family.9

Almost half the population still depends on agriculture. In most states, canal water and the electricity used to pump groundwater for irrigation are free or highly subsidized. Urea, a nitrogenous fertilizer, is sold to farmers at just $4 (Rs 280) per 50-kilogram bag, one-fifth the commercial price, thus encouraging its diversion to chemical industries and its smuggling into neighboring countries, such as Bangladesh.10

Central government subsidies include seven kilograms of rice or wheat for each member of supposedly poor families, at a few cents per kilogram. In practice, this covers more than half the population. Subsidies on petrol and diesel have been abolished but continue for cooking gas and kerosene.

Subsidies are warranted for merit goods, such as basic education and health services, and for safety nets for the poor and disadvantaged. But a recent research paper showed that subsidies, broadly defined, exceeded 12 percent of GDP in 2015-2016. Half of these were merit subsidies (on items such as basic education, basic health services, and sanitation), but the other 6 percent of GDP represented nonmerit subsidies. To put these figures in perspective, the entire tax revenue of the central and state governments is only 17 percent of GDP. So nonmerit subsidies claim a big share of tax revenue that could better be spent on improving the woeful state of public goods, such as security, justice, and basic infrastructure.11
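
To see how large that claim is, consider a back-of-the-envelope calculation using only the shares of GDP quoted above (a minimal sketch; the variable names are mine):

```python
# Back-of-the-envelope: how much tax revenue do nonmerit subsidies absorb?
# All figures are shares of GDP, taken from the paragraph above.
nonmerit_subsidies = 0.06  # nonmerit subsidies, 2015-2016
tax_revenue = 0.17         # combined central and state tax revenue

share = nonmerit_subsidies / tax_revenue
print(f"Nonmerit subsidies absorb {share:.0%} of all tax revenue")
# -> roughly 35 percent of the combined tax take
```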

Competition between political parties ensures that subsidies are not channeled only to the needy but are typically spread over the bulk of the population in order to win more votes. What economists decry as the leakage of subsidies to the nondeserving, politicians view as a desirable electoral strategy. One consequence of the freebie culture is that, despite efforts to reduce the fiscal deficits of central and state governments, the combined total deficit remains at roughly 6.5 percent of GDP — one of the highest levels in the world. Moreover, creative accounting hides the true extent of the deficits. A more comprehensive measure, the total Public Sector Borrowing Requirement, which takes into account borrowing by government corporations and trusts, is estimated at more than 8.2 percent of GDP. This, again, is among the highest rates in the world and is roughly equal to the entire net tax revenue of the central government.12

A high fiscal deficit erodes the capacity of the government to provide essential public goods and infrastructure. It is also one reason for high real interest rates in India. Consumer price inflation in the last year has been no more than 2-3.5 percent and has averaged around 4 percent in the Modi era. Yet the central bank’s repo rate — its collateralized lending rate to commercial banks — is 6.5 percent; companies borrow from banks at up to 18 percent, and shadow banks lend at up to 24 percent. High real interest rates increase the distress of poor indebted families, exacerbate already high corporate defaults to banks, and discourage investment.13
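
The squeeze from high real interest rates can be made concrete with the simple approximation that the real rate is the nominal rate minus inflation. A minimal sketch, using the rates quoted above and the 4 percent Modi-era average inflation:

```python
# Rough real interest rates implied by the figures in the text, using the
# simple approximation: real rate ~= nominal rate - inflation.
inflation = 0.04  # Modi-era average consumer price inflation
nominal_rates = {
    "RBI repo rate": 0.065,
    "bank lending (upper end)": 0.18,
    "shadow-bank lending (upper end)": 0.24,
}
for name, nominal in nominal_rates.items():
    print(f"{name}: {nominal - inflation:.1%} real")
# Real borrowing costs of roughly 14-20 percent for many firms and households
# help explain the distress and defaults described above.
```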

An aggressive drive to open bank accounts for every family has, in practice, covered 80-90 percent of families. This has made it feasible to give direct cash transfers to the needy. This, in turn, has sparked competition between parties to give bigger and bigger cash handouts. The ruling Telangana Rashtra Samithi in the state of Telangana gave an outright grant of $114 (Rs 8,000) per acre to farmers before the 2018 state election, and swept the poll. This was not because of the freebie alone — the party also swept urban constituencies. But it established a trend that others are fast following.14

In Odisha state, the government has offered outright cash grants of $143 to $179 (Rs 10,000 to Rs 12,500) for all rural families owning fewer than five acres; the amount depends on whether these families are farm owners, tenants, or landless workers. Modi has decided to follow a similar populist path at the all-India level. In preparation for the coming election in May 2019, his budget in February provided for a cash grant of $86 (Rs 6,000) per year to all farmers holding five acres or less.15

The Congress Party has promised the mother of all freebies in its election manifesto: if elected, it will give $1,029 (Rs 72,000) to each of the poorest 50 million families, at a cost of nearly 2 percent of GDP. This is in addition to farm loan waivers throughout India and a near doubling of health and education spending. The party does not say how all this will be financed.16
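
The “nearly 2 percent of GDP” figure is easy to verify. The sketch below uses the grant and family count from the manifesto; the GDP total is my assumption (India’s nominal GDP was roughly Rs 190 trillion in 2018-2019):

```python
# Checking the cost of the Congress Party's proposed cash transfer.
grant_per_family_rs = 72_000   # Rs 72,000 per family, from the text
families = 50_000_000          # the poorest 50 million families
gdp_rs = 190e12                # assumed nominal GDP, about Rs 190 trillion

total_cost_rs = grant_per_family_rs * families
print(f"Total cost: Rs {total_cost_rs / 1e12:.1f} trillion")
print(f"Share of GDP: {total_cost_rs / gdp_rs:.1%}")  # about 1.9 percent
```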

There are sound economic arguments for replacing the vast array of subsidies with a cash grant. Idealists on the left have long argued for a universal basic income, paid to all citizens simply as a dividend of citizenship. On the market-oriented side of the spectrum, Milton Friedman argued that instead of giving a multitude of badly designed and distortionary subsidies, governments should replace them with cash grants.17 Thus, the idea of some sort of universal basic income has backing across the ideological spectrum. Unfortunately, political competition between parties in India makes it electorally risky to abolish any existing subsidy, so the cash being offered will be in addition to, not in place of, the existing plethora of freebies.

This is the wrong approach. The next government should slash subsidies on many goods and services and convert these into cash grants. That will mean bribe-free and rapid distribution of benefits, fewer opportunities for middlemen to siphon off benefits meant for the poor, and an end to distortions such as the smuggling of subsidized urea to neighboring countries. Such an approach also requires a concerted effort to open bank accounts for all those who are left out today.

Privatization: Grasp This Thorny Nettle

“Minimum government” should have meant the privatization of some of the hundreds of public-sector corporations in India. Modi set up a think tank, Niti Aayog (Hindi for Policy Commission), that prepared a list of more than 40 corporations for privatization, including Air India, which has lost enormous sums over the years. Modi attempted only one privatization, that of Air India, but he attached so many conditions to the sale that there were no bidders.18

Modi has sold minority stakes in several government corporations to try to reduce the fiscal deficit, but he has maintained government control. Often, government corporations with high reserves have been asked to buy out the government’s stock in other corporations. This balance-sheet jugglery changes nothing in reality, but it cuts the fiscal deficit on paper.

The right way forward would be the outright sale of government corporations to the private sector. Once a few sales take place, the entire process will gain credibility, and future sales will become much easier. That will release funds to be invested in public goods that are badly needed.

The Factor Markets: Labor, Land, and Capital

Since 1991, the product markets in India have been liberalized to a substantial extent. But the factor markets — for labor, land, and capital — remain highly constrained.

India has more than 200 labor laws, 52 of which are central government laws. The two most restrictive ones are the Industrial Disputes Act and the Industrial Employment (Standing Orders) Act. The Industrial Disputes Act requires companies with 100 or more workers to get government permission to downsize or lay off any worker, and this is rarely granted. As a result, firms are reluctant to hire workers for fear of being stuck with excess labor if business conditions change. Many industries (such as the garment industry) are seasonal, but companies dare not hire the number of workers needed for peak demand. The Standing Orders Act requires employers in firms with 100 or more workers (50 or more in some states) to seek permission for changing the job description of any employee (i.e., reassignment to a different task).19

Modi has encouraged states to liberalize labor laws but has avoided doing so at the national level. The Industrial Disputes Act threshold for sacking workers has been raised from 100 to 300 workers in several states. But such modest steps are not remotely enough to persuade companies to set up giant factories employing tens of thousands of people (in garments, footwear, and electronics) as in China or Bangladesh.20

Most Indian garment factories are tiny: nine-tenths deliberately remain in the “unregistered” sector to avoid compliance with sundry labor and industrial laws. An estimated 78 percent of firms employ fewer than 50 workers, and only 10 percent employ more than 500 workers.21

The next government should overhaul India’s labor and industrial laws to allow entrepreneurs to build giant factories that can compete with those of India’s neighbors. The sharp rise in Chinese wages is a golden opportunity to attract investment moving out of China, but so far very little of it seems to be migrating to India.

Land markets in India are rigid, opaque, and distorted. High taxes on property sales, exceeding 10 percent in many states, discourage transactions and induce massive underdeclaration of true sale values, with much of the sale money paid under the table in cash. This has made real estate and farmland favorite outlets for crooked people with unaccounted money, bloating real estate prices. In some states, farmland can be sold only to other farmers, or to people from the same state. Land reform laws in the 1950s gave tenants so much security that they became virtual landowners, impossible to evict. Owners are therefore reluctant to lease out land, fearing they will be dispossessed. The rental market operates largely through informal, unwritten deals. If taxes on property transactions were slashed and the leasing of land encouraged, millions of tiny plots could be pooled to create large farms that could compete with the best in the world.22

Many government projects, especially roads, railways, canals, and mines, are delayed for years by land-acquisition disputes. The Congress-led government enacted a law in 2013 that increased compensation to farmers almost fourfold and added time-consuming procedures. Modi asked the state governments to use their local laws to expedite land acquisitions, but many projects remain stuck because of legal disputes and other problems. The rules should be overhauled to be fair to farmers whose land is acquired through eminent domain, but they also need to be simple and fast.23

Modernization requires the rezoning of agricultural land into nonagricultural land. Today, such rezoning is a massive racket: politicians extract huge bribes to rezone agricultural land as industrial or commercial land. New laws are required: rezoning should be simple and automatic if it meets certain objective criteria.

India’s capital markets have been partially liberalized since 1991, but public-sector banks still account for almost 70 percent of loans, even though they have a high proportion of bad debts and have required massive recapitalization to stay alive. No political party is willing to privatize these banks, since government ownership enables politicians to direct bank credit to favored lobbies (farmers, small-scale industries). Modi has introduced laws for faster loan resolution and bankruptcy procedures. They go in the right direction, yet they need to be overhauled to ensure that dud loans are detected early and resolved before the assets of the borrowing companies are eroded to almost nothing. The government obliges all banks to direct 40 percent of their lending to “priority sectors” (which include agriculture, small and medium industry, exports, education, and housing), a form of intervention that crimps efficiency and profitability. The priority-sector requirements should be whittled down and gradually dismantled.24

Reverse Rising Protectionism

From independence in 1947 to 1991, India aimed at creating a self-sufficient economy, which was seen as a form of economic independence to buttress political independence. After 1991, a reformist government reduced import tariffs from a peak rate of more than 300 percent to 10 percent by 2008. This period witnessed a surge in Indian exports and competitiveness. However, India remained the biggest single user of World Trade Organization antidumping suits to help domestic producers. Modi’s Bharatiya Janata Party is strongly influenced by the Rashtriya Swayamsevak Sangh, a cultural nongovernmental organization that has traditionally been suspicious of foreign investment and trade and that seeks to create and protect national corporate champions. The Rashtriya Swayamsevak Sangh has always favored the slashing of red tape and internal liberalization to assist Indian industry, but not external liberalization. Modi has gone in this direction.25

The Modi government wants India to get into the global value chains that enabled China’s economy to take off, and so has offered protection through import tariffs and government subsidies to the electronics and solar industries. It has offered a capital subsidy of up to 40 percent for setting up silicon wafer fabrication plants, although this has not yet translated into any major investment. In 2017 India raised import duties on several electronic items, including phone components, TVs, and microwave ovens. This was in pursuance of a so-called Phased Manufacturing Program aiming to check massive imports from China and ensure that cellphone assembly and manufacture are done mostly in India. Protective rates range up to 25 percent for different components, and the government also levies duties of 25 percent on solar panels. In theory, the high duties will be slashed after scale economies are attained and production costs fall, but no sunset clause has been prescribed. The 2018-2019 budget imposed import duties of up to 50 percent on more than 40 items regarded as “simple manufactures” that India did not need to import, ranging from candles and kites to sunglasses and fruit juices. The problem is that the definition of “simple” can be stretched ever further, and the risk is that India will increasingly become an inefficient, high-cost producer, wasting national resources. India has always been a major textile exporter, yet Modi has imposed unnecessary import duties on more than 400 textile items to thwart competition from China and Bangladesh.26

The United States has long protested against India’s unwarranted protectionism. India has banned imports of dairy products unless they can be certified to be from cattle not fed with animal products. India has imposed stringent price controls on pharmaceuticals and medical implants, especially heart stents. It is now insisting that all data collected by international companies must be stored in India for security reasons. Bilateral Indo-U.S. talks on trade have broken down, and so the United States has recently given notice of its intent to withdraw duty-free treatment of Indian exports under the U.S. trade program’s Generalized System of Preferences for developing countries. This will affect no less than $5.6 billion (Rs 392 billion) of Indian exports. Hopes still linger that the matter can be resolved amicably.27

The Problem of Too Little Governance

Modi came to power after the preceding Congress-led government was tainted by many corruption scandals. He promised “maximum governance,” and industrialists say that large-scale corruption in New Delhi has largely dissipated since Modi came to power but continues in state governments and the bureaucracy.28 The Modi government has enacted a law obliging central and state governments to auction all mineral deposits, ending the earlier corrupt allocation of mineral blocks to cronies in return for bribes. It has reduced the rigors of the “inspector raj,” the old practice of government inspectors who can close down businesses with few checks and who demand bribes in return for allowing them to operate. Transparency International’s Corruption Perceptions Index shows India improving from 85th position in its 2014 report to 76th in its 2018 report. This is positive but far from revolutionary.29 Indeed, the Congress Party has alleged that a few cronies have benefited greatly in the Modi era, and that high corruption accompanied an Indian contract to buy Rafale aircraft from France.30

Good governance is about far more than reducing corruption. It means comprehensive provision of high-quality public goods such as policing, physical security, justice, redress of grievances, basic education and health, safety nets, and environmental protection. It is also about creating strong, independent institutions that can withstand political pressures and private bribes. Indeed, distinguished academics including Douglass North,31 Daron Acemoglu, and James Robinson have claimed that good institutions are crucial for economic development, especially for moving from middle-income to high-income status (which is what India needs to do).32

On these counts, Modi has not performed well. He has done little to improve the abysmal quality of essential government services. For years, the central and state governments have sought to curb their fiscal deficits by simply not filling vacant posts. Today an estimated 2.4 million posts in the central and state governments are unfilled.33 Some departments, such as the railways and government telecom, are hugely overstaffed. But the unfilled vacancies have arisen in services badly needed by the public — education, health, law enforcement, and the judiciary. Modi has eroded the independence of several national institutions, such as the Reserve Bank of India and the police, making them more subject to political whims. He has instilled fear in Muslims and Christians by failing to protect them from violence by Hindu thugs, who often get implicit protection from state governments ruled by the Bharatiya Janata Party (BJP).34

The Legal System Needs Drastic Overhaul

India holds the world record for pending litigation: 33 million cases are pending in its courts. These could take 320 years to clear, according to Andhra Pradesh High Court judge V. V. Rao. The Law Commission of India, which periodically reviews the functioning of laws and their enforcement, has recommended the appointment of 50 judges per million population (in the United States, the ratio is much higher, at 107 per million). Judicial posts created so far amount to just 17 per million, and unfilled vacancies are as high as 23 percent in the lower courts, 44 percent in the High Courts, and 19 percent in the Supreme Court. No wonder the staggering backlog of cases does not diminish, and most people are reluctant to litigate to redress their grievances.35
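
Combining the sanctioned strength with the vacancy rates gives a rough sense of how thin the bench actually is. A minimal sketch, applying the lower-court vacancy rate to the 17-per-million sanctioned posts (an illustrative simplification):

```python
# How far India's bench falls short of the Law Commission's recommendation.
recommended_per_million = 50  # Law Commission recommendation
sanctioned_per_million = 17   # judicial posts actually created
lower_court_vacancy = 0.23    # unfilled posts in the lower courts

serving = sanctioned_per_million * (1 - lower_court_vacancy)
print(f"Judges serving: ~{serving:.0f} per million")
print(f"Shortfall vs. recommendation: {1 - serving / recommended_per_million:.0%}")
# -> roughly 13 serving judges per million, about a quarter of the
#    recommended 50
```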

Lengthy procedures and constant adjournments mean that cases can linger for decades or even more than a century. In the case of the 1975 murder of L. N. Mishra, a prominent politician, 20 different judges took 38 years to reach a verdict. Of the 39 witnesses called by the defense, 31 died before the case ended. When the accused sought to have the case dismissed on the grounds that the long delay had made justice impossible, the court denied the motion.36

India has 123 policemen per 100,000 population — little more than half the UN’s recommended level of 220 and far below the levels in the United States (352) and Germany (296). Massive unfilled vacancies are common in all states. The police are notoriously inefficient and corrupt. In many states, they will not even register complaints without a bribe.37

Those convicted after lengthy cases in the lower courts can appeal to the relevant state’s High Court and then to the Supreme Court, all of which have long clogged pipelines of pending cases. The result is that influential people using the best legal advice to prolong proceedings are likely to die of old age before the conclusion of their appeals. The system rewards lawbreakers and penalizes law abiders, eroding fairness and quality in everything from business and politics to education and health. A market economy depends on the rule of law and effective enforcement of contracts. If these conditions do not exist, quasi-mafia and crony capitalists will thrive.

Modi has done nothing to overhaul the police or judiciary. That remains an urgent task.

Education and Health Need Overhaul, Too

India has the fastest rate of GDP growth among the six South Asian nations (India, Pakistan, Bangladesh, Nepal, Sri Lanka, and Bhutan). Yet between 1991 and 2011, it slipped behind all of them, save trouble-torn Pakistan, in social indicators for health and education.38

School enrollment in India is high, but students learn very little. The Annual Status of Education Report released in 2019 found that one in four eighth-grade students in rural India cannot read even a second-grade text. More than half of eighth-grade students cannot solve a problem involving basic division.39 India has world-class elite educational institutions (such as the Indian Institutes of Technology), but most government schools and colleges are pathetic, producing functionally illiterate high-school graduates and unemployable college graduates.

Many teachers are connected with political parties and aspire to become legislators. Powerful teachers’ trade unions are considered untouchable by state governments. Many teachers do not teach at all: only 48 percent were found teaching at the time of one survey. In a 2009 international assessment involving 74 countries (the Program for International Student Assessment), India came in next to last, even though it was represented by its two best-educated states.40

Two reforms could greatly improve the accountability of teachers to the communities they serve, and hence improve teaching quality and attendance. One is educational vouchers. The other is a shift from teachers appointed by state governments to teachers who can be hired, fired, and disciplined by local governments.

India’s public health spending is just 0.93 percent of GDP, far less than that of sub-Saharan Africa (1.82 percent) or China (2.89 percent).41 Many Indian primary health centers barely function, with staff and medicines often missing. India has elite hospitals that attract global customers, but basic health in the villages is appalling. The masses are at the mercy of quacks and practitioners of indigenous medicine. Modi has drawn up a plan for affordable hospital treatment of major problems, but basic healthcare remains a huge but neglected issue.

Indian Minorities Need to Feel Secure

Hindus constitute almost 80 percent of India’s population. The two biggest religious minorities are Muslims and Christians, and both complain of growing violence against them by Hindu thugs, with BJP-ruled state governments doing little to discourage the perpetrators. Modi himself was chief minister of Gujarat when one of the biggest mass killings of Muslims took place in that state in 2002, although a special court later exonerated him of directly encouraging the killings.

The BJP is a Hindu nationalist party and has carried the Hindu notion of the sacred cow to extremes that encourage mob lynching of Muslims suspected of eating beef or transporting cattle for slaughter.42 After such incidents, Modi has often, after a long silence, condemned the lynchings. But this has not discouraged the lynch mobs, and BJP-ruled states seem keen on blaming Muslims rather than Hindus when there is communal violence. An analysis by IndiaSpend, a nongovernmental organization specializing in fact checking and policy analysis, showed that 97 percent of the cow-related violence that has taken place in India from 2010 to 2017 was reported after Modi’s government came to power in 2014 — and half of these cases were from states governed by the BJP.43

The BJP also accuses Muslims of “love jihad” — trapping Hindu girls into marriage with the ulterior motive of converting them. When a Muslim boy marries a Hindu girl, the girl’s parents often complain that their daughter has been kidnapped or brainwashed, and the police in many states are quick to arrest the couple. In one celebrated case, the couple fought all the way to the Supreme Court to get their marriage validated and the charge of forcible marriage thrown out.44

Hindu violence against Christians is largely based on accusations that they are forcibly converting Hindus, whereas Christians say all conversions are voluntary. Religious conversion is legal in India, but not if it is done through inducements or threats. In practice, says John Dayal, secretary general of the All India Christian Council and a well-known TV commentator, charges of forced conversion can mean “anything from praying for Jesus to heal you to offering to put you in a Christian hospital or school, or making a payment in American dollars or British pounds.” The violence is worst in BJP-ruled states. Christianity came to India 2,000 years ago with St. Thomas, yet Christians constitute only 2.3 percent of India’s population — an indication of the failure, rather than the success, of Christian attempts at conversion.45

India’s constitution bars discrimination by the state or individuals on the grounds of religion, and declares that India is a secular country. Under Modi, religious minorities feel insecure and persecuted, a deplorable situation that the next government should redress.

Strengthening Indian Institutions

Under Modi, the independence of many institutions has seriously eroded. These include the Reserve Bank of India (the central bank), the police-prosecutor system, educational institutions, and cultural organizations. After two professional Reserve Bank governors refused to toe the government line on a variety of issues (including the demonetization of high-value currency notes, expanding bank credit, and handing over bank reserves to the government to spend), Modi avoided appointing another professional and instead appointed a trusted bureaucrat to the post.46 The police have become notorious for being selective in whom they arrest, such as Muslims accused of transporting cows to slaughter. Many Muslim transporters have been lynched, and one was killed by a mob on the mere suspicion that he had eaten beef (a charge that turned out to be false).47 A judge has complained that public prosecutors in one case let Hindu militants off the hook by deliberately presenting a weak case.48 Christians say that any criticism of Hindus leaves them open to arrest on the false ground of attempting forcible conversion (which is illegal). Old colonial laws on sedition have been used freely to lock up inconvenient activists and journalists.49 An 80-year-old writer, Hiren Gohain, activist Akhil Gogoi, and journalist Manjit Mahanta were arrested in Assam for sedition in an attempt to stifle their protests against the national government’s proposed amendments to India’s citizenship law. Delhi police have charged ex-Jawaharlal Nehru University Students’ Union president Kanhaiya Kumar and nine others with sedition merely because of some slogans that were shouted at a student rally.50 Many media corporations are reluctant to criticize Modi by name for fear of retribution.51

In a recent interview, Raghuram Rajan, former governor of the Reserve Bank of India, said that the Modi government had not delivered on its promise of minimum government and maximum governance. He felt that the government had assumed too many new powers without checks and balances. One result was a “dependent and pliant” private sector that felt safety lay in applauding every government decision.52

In the run-up to the 2019 election, a new TV channel called NaMo TV (“Na Mo” being the initials of Narendra Modi) suddenly appeared throughout India. It had no license to operate. Yet all major cable and satellite TV groups felt obliged to carry the channel since it was obviously backed by the prime minister. It spouted pro-BJP rhetoric and carried endless replays of Modi’s speeches. Neither the police nor the administration intervened. The BJP claimed it does not own the channel and that it merely provides content. Tata Sky, a satellite TV operator, says it is airing NaMo TV not as a separate channel (which would need a license) but as a “special service.” Opposition parties approached the Election Commission and the courts to stop the channel. But the very fact that NaMo TV could start operating with impunity, and that all major carriers felt obliged to carry it, speaks volumes about the erosion of independent institutions and the rule of law.53

The Congress Party, which has ruled most years since India’s independence in 1947, was also guilty of trying to subvert independent institutions in its time. But the problem has worsened under Modi in the last five years. The next government should take steps to restore institutional independence.

Conclusion

In sum, Modi’s 2014 election slogan of “Minimum Government, Maximum Governance” exactly epitomized the agenda that India needs. Alas, he neither minimized government nor maximized governance. Whichever party comes to power after the May 2019 election will need to do both.

Notes

1 Martin Wolf, “India’s Election Remakes Our World,” CNBC.com, May 21, 2014.

2 Menaka Doshi, “Ruchir Sharma on Elections 2019, Narendra Modi and the Indian Economy,” Bloombergquint.com, February 18, 2019.

3 Rana Ayyub, “Mobs Are Killing Muslims in India. Why Is Nobody Stopping Them?” The Guardian (London), July 20, 2018.

4 Terry Miller, Anthony B. Kim, and James M. Roberts, 2019 Index of Economic Freedom (Washington: Heritage Foundation, 2019).

5 Terry Miller, Anthony B. Kim, and Kim R. Holmes, 2014 Index of Economic Freedom (Washington: Heritage Foundation, 2014).

6 Ian Vasquez and Tanja Porcnik, The Human Freedom Index 2018 (Washington: Cato Institute, Fraser Institute, and Friedrich Naumann Foundation, 2018).

7 James Gwartney, Robert Lawson, Joshua Hall, and Ryan Murphy, Economic Freedom of the World: 2018 Annual Report (Vancouver: Fraser Institute, 2018).

8 World Bank Group, Doing Business 2019: Training for Reform (Washington: World Bank, 2018).

9 Swaminathan S. Anklesaria Aiyar, “The Alcoholic Mammaries of the Welfare State,” Times of India (Mumbai), May 8, 2016.

10 Yatish Yadav, “Government Says Urea Smuggled to Nepal and Bangladesh,” New Indian Express (Chennai), February 24, 2015.

11 Sudipto Mundle, “An Unfashionable View on Freebies and Subsidies,” The Mint, December 24, 2018.

12 Sajjid Chinoy, “There Is No Space for an Inadvertent Confluence of Fiscal, Regulatory, Monetary Easing,” Indian Express (Noida), January 11, 2019.

13 Surjit S. Bhalla, “Madness in Monetary Policy? Surjit Bhalla Explains Why that Is So,” Indian Express (Noida), August 19, 2017.

14 Swaminathan S. Anklesaria Aiyar, “Rahul’s Minimum Income Plan Is Fatally Flawed,” Times of India (Mumbai), March 31, 2019.

15 Aiyar, “Rahul’s Minimum Income Plan Is Fatally Flawed.”

16 Aiyar, “Rahul’s Minimum Income Plan Is Fatally Flawed.”

17 Noah Gordon, “The Conservative Case for a Guaranteed Basic Income,” The Atlantic, August 6, 2014.

18 “Niti Aayog Readying a New List of PSUs for Privatisation,” Hindu Business Line (Chennai), February 21, 2018.

19 Arvind Panagariya and Jagdish Bhagwati, Why Growth Matters: How Economic Growth in India Reduced Poverty and the Lessons for Other Developing Countries (Washington: Council on Foreign Relations, November 2013).

20 Devashish Mitra, “Impact of Labour Regulations on Indian Manufacturing Sector,” Ideas for India, March 13, 2018.

21 R. Srinivasan, “Endgame for Textile Exports?” The Hindu (Chennai), April 22, 2018.

22 Swaminathan S. Anklesaria Aiyar, “How Not to Displace People,” Times of India (Mumbai), September 6, 2006.

23 Aiyar, “How Not to Displace People.”

24 Richa Roy, Krishnamurthy Subramanian, and Shamika Ravi, “How to Solve Issues of Rising Non-performing Assets in Indian Public Sector Banks,” Brookings Institution, March 1, 2018.

25 Swaminathan S. Anklesaria Aiyar, “India’s New Protectionism Threatens Gains from Economic Reform,” Cato Institute Policy Analysis no. 851, October 18, 2018.

26 Aiyar, “India’s New Protectionism Threatens Gains from Economic Reform.”

27 Amiti Sen, “India Shrugs Off US Move to End Preferential Trade Treatment,” The Hindu (Chennai), March 5, 2019.

28 Swaminathan S. Anklesaria Aiyar, “Modi’s Biggest Feat: Booting Out Big Corruption,” Times of India (Mumbai), May 17, 2015.

29 “Corruption Perceptions Index 2018,” Transparency International.

30 Vikas Pandey, “Rafale Deal: Why French Jets Are the Centre of an Indian Political Storm,” BBC.com, September 26, 2018.

31 Lance E. Davis and Douglass C. North, Institutional Change and American Economic Growth (Cambridge: Cambridge University Press, 1971).

32 Daron Acemoglu and James A. Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty (New York: Crown, 2012).

33 “Congress Releases Manifesto for 2019 Lok Sabha Elections, Promises Wealth and Welfare,” Economic Times (Mumbai), April 3, 2019.

34 Maaz Husain, “India Minorities Face Increased Sectarian Attacks,” Voice of America, April 28, 2017.

35 Swaminathan S. Anklesaria Aiyar, “Twenty-Five Years of Indian Economic Reform,” Cato Policy Analysis no. 803, October 26, 2016.

36 Aiyar, “Twenty-Five Years of Indian Economic Reform.”

37 Aiyar, “Twenty-Five Years of Indian Economic Reform.”

38 Jean Drèze and Amartya Sen, An Uncertain Glory: India and Its Contradictions (New Delhi: Penguin, 2013).

39 Amandeep Shukla, “ASER 2018: One Out of Every 8 Students in Rural India Can’t Read Simple Texts,” Hindustan Times (New Delhi), January 16, 2019.

40 Aiyar, “Twenty-Five Years of Indian Economic Reform.”

41 World Bank, “Domestic General Government Health Expenditure (% of GDP),” 2019, https://data.worldbank.org/indicator/SH.XPD.GHED.GD.ZS.

42 Barkha Dutt, “Will Modi Stop India’s Cow Terrorists from Killing Muslims?” Washington Post, July 24, 2018.

43 Dutt, “Will Modi Stop India’s Cow Terrorists from Killing Muslims?”

44 Rahul Bhatia, “The Year of Love Jihad in India,” New Yorker, December 31, 2017.

45 Michael Safi, “Christmas Violence and Arrests Shock Indian Christians,” The Guardian (London), December 24, 2017.

46 Mahesh Langa, “Shaktikanta Das Appointed RBI Governor,” The Hindu (Chennai), December 11, 2018.

47 Ayyub, “Mobs Are Killing Muslims in India.”

48 “The Crisis in India’s Justice System,” Economic Times (Mumbai), May 29, 2019.

49 “Anti-Sedition Law Needs the Bin,” Economic Times (Mumbai), January 15, 2019.

50 “Anti-Sedition Law Needs the Bin.”

51 Ravish Kumar, The Free Voice: On Democracy, Culture and the Nation, trans. Chitra Padmanabham, Anurag Basnet, and Ravi Singh (New Delhi: Speaking Tiger, 2018).

52 Press Trust of India, “Raghuram Rajan Questions PM Modi’s Minimum Government Maximum Governance Promise,” Economic Times (Mumbai), March 28, 2019.

53 Amy Kazmin, “Election Activists Want India to Tune Out of Modi’s TV Channel,” Financial Times (London), April 5, 2019.

Swaminathan S. Anklesaria Aiyar is a research fellow at the Cato Institute’s Center for Global Liberty and Prosperity and has been the editor of India’s two largest financial dailies, the Economic Times and Financial Express.

Principles for the 2020 Surface Transportation Reauthorization


Randal O'Toole

America’s surface transportation infrastructure needs significant improvements and rehabilitation, yet Congress is uncertain about how to do this. Some want to significantly increase federal spending on infrastructure. Others want to end deficit financing of transportation and end federal restrictions that reduce the efficiency and effectiveness of the funds that are spent.

To resolve this conundrum, this paper presents three principles that Congress should apply to a new surface transportation funding bill. These principles are pay-as-you-go, user fees, and subsidiarity.

Pay-as-you-go. The Congressional Budget Office estimates that limiting transportation expenditures to actual transportation revenues, rather than relying heavily on borrowing, will reduce deficit spending by at least $116 billion over the next decade. Putting transportation on a pay-as-you-go basis will also make transportation agencies more responsive to the needs of transportation users.

User fees. Congress should rely on and encourage state and local governments to rely more on user fees for transportation. This can be done by eliminating restrictions on road tolling and incorporating user fees into the formulas for distributing funds to the states.

Subsidiarity. Congress should give state and local transportation agencies greater latitude in deciding how to spend their shares of federal funds. This should promote the efficient use of those funds by moving decisionmaking closer to voters and taxpayers. Subsidiarity means distributing money using formulas that divide it among jurisdictions, rather than competitive grants that often reward inefficient proposals, and maintaining as few separate funds as possible — preferably two, one for highways and one for transit — rather than the two dozen funds used today.

Together, these principles will increase the efficiency and effectiveness of federal transportation spending.

Introduction

Since Congress created the Interstate Highway System in 1956, it has passed laws authorizing or renewing highway excise fees and federal funding for surface transportation — that is, highways and transit — out of those fees about every six years. The current authorization expires in 2020. Congress is now wrestling with how to fund necessary infrastructure rehabilitation while avoiding unnecessary costs to federal taxpayers. This paper proposes three key principles for a 2020 reauthorization bill aimed at improving the efficiency and effectiveness of federal transportation spending.

The 2020 reauthorization will be written by a divided Congress, with fiscally liberal Democrats leading the House, fiscally moderate Republicans leading the Senate, and an ostensibly fiscally conservative Republican in the White House. The conventional wisdom of recent years holds that American infrastructure is in decline and that Congress must therefore pass a huge infrastructure bill.

The crumbling-infrastructure claim is exaggerated. The number of highway bridges considered “structurally deficient” has steadily declined by more than 60 percent: from 137,865 in 1990 to 54,560 in 2017. The average roughness of all categories of roads has also declined. Still, the nation does have infrastructure needs and Congress is likely to address some of those needs in transportation reauthorization. The goal of the three principles outlined here is to make sure those funds are spent as effectively as possible.

The reauthorization bill will include money for both highways and transit. In my previous books and papers, I have argued that virtually all transit and most highway needs should be funded locally. Yet Congress is not likely to give up federal funding of transit in this reauthorization. The principles outlined in this analysis will promote more efficient use of transit funds, benefiting both transit systems and riders.

Principle 1: Pay as You Go

As coauthor of the Federal Aid Highway Act of 1956, Sen. Albert Gore, Sr. (D-TN), insisted that the interstate highways be built on a pay-as-you-go basis: the roads would be built only as fast as the gas taxes and other highway user fees specified in the bill were collected.1 This meant two things. First, the federal government could not spend more than the collected revenues. Second, the states could not sell bonds to finance roadwork that would be repaid out of the states’ future allocations of federal highway funds.

Gore had excellent reasons for this demand. First, the interest on bonds would increase the total cost of the system, either slowing its rate of construction or requiring higher fees from highway users. Second, and perhaps more important, a pay-as-you-go system would provide useful feedback to state highway agencies. In 1956 there was no guarantee that the interstate highways would be used enough to justify their cost. If states sold bonds to build them and then failed to collect enough revenues to repay the bonds, the federal government could be held liable for any state defaults.

The pay-as-you-go system survived for more than 40 years. Congress would authorize a funding bill every six years based on projections of what gas tax and other collections would be. This authority, however, was only the ceiling on how much could be spent. Congress would then appropriate funds every year, tempering those appropriations based on actual fee revenues. If revenues fell short of expectations, Congress would appropriate less than was authorized.

In 1998, however, Congress added a new wrinkle to the reauthorization bill: it made the authorized spending both a ceiling and a floor. If revenues failed to meet expectations, appropriators were required to find funds elsewhere in order to fund the full amount authorized. This provision was repeated in the 2005 reauthorization bill.

This first became an issue in 2008, when the financial crisis led to a reduction in total driving and gas taxes therefore fell short of anticipated revenues. Since then, Congress has transferred $140 billion in general funds, including $70 billion in the 2015 reauthorization, to keep the highway trust fund solvent.2 In 2016, for example, $36.3 billion in fuel taxes and other user fees were collected for the highway portion of the trust fund.3 Yet Congress required that $39.7 billion be spent from that fund.4 The resulting $3.4 billion gap was filled with borrowed money.

The Congressional Budget Office estimates that limiting expenditures to expected revenues would reduce the federal deficit by at least $116 billion over the next decade.5 The agency noted that this system would arguably be fairer because — at least with respect to highways — “those who benefit pay the costs.”6 This leads to the next principle: expanded use of user fees.

Principle 2: Promote User Fees

Ever since Oregon first created a gasoline tax to pay for roads in 1919, user fees have been a major source of funding for surface transportation. As noted in a 2010 Reason Foundation report on restoring trust to the highway trust fund, user fees have several advantages: fairness (those who get the benefits pay the costs); proportionality (those who use transport services most pay the most); self-limitation (fees are set just high enough to cover costs, not to raise general revenue); and predictability (revenues depend on users, not on political whims). Perhaps most important, user fees provide signals to both users and producers, telling users the relative cost of the resources they use and telling producers where more investments are needed.7

These signals impose a discipline on both users and producers. Users who aren’t willing to pay for transportation can’t complain that the transportation system isn’t serving their needs. Transportation providers whose revenues are limited to user fees have incentives to find the most cost-effective means of providing transportation. The departure from the user-fee principle in recent years has reduced that discipline and led to bridges to nowhere and streetcar lines that almost no one rides even when the fares are zero.

Arguably, some forms of infrastructure are what economists call public goods, meaning that if the goods were provided privately, people would receive benefits from the goods even if they avoided contributing to the goods’ cost. That, in turn, would mean not enough of the goods would be supplied — and perhaps none at all. Storm sewers, for example, benefit everyone in a floodplain whether they pay for them or not, so few people will have incentive to pay. As a result, such forms of infrastructure may have to be funded through taxes. Transportation, however, is not a public good. It is relatively easy to exclude people from highways and transit lines if they refuse to pay a user fee.

Some argue that transportation can provide benefits to people who aren’t necessarily users, so some subsidies are justified. Such benefits are called externalities, and virtually everything in the economy has externalities. If Congress accepts the principle that externalities justify subsidies, then the advocates of every infrastructure project — indeed, every project of any kind — will attempt to show that their projects produce the greatest externalities. Since such claims cannot be rigorously demonstrated, this will result in transportation funds being allocated on purely political grounds. That, in turn, likely means an outsized portion of transportation’s benefits will go to the wealthy and powerful rather than to the users who are willing to pay for them. In truth, the vast majority of transportation benefits go to transport users, not to some mythical side beneficiaries. Thus, the user-fee principle is perfectly applicable to transportation infrastructure.

One quantifiable benefit of user fees is that infrastructure funded by them is better maintained than infrastructure funded with tax dollars. Nationwide, 8.9 percent of bridges are considered structurally deficient. Only 2.6 percent of toll bridges are in this category, along with 5.5 percent of bridges owned by the states, which rely mainly on user fees to pay for roads and bridges. However, local governments rely more on general funds to maintain roads, and 12.2 percent of locally owned bridges are structurally deficient.8 State roads are also smoother than locally owned roads.9 In contrast to roads, transit systems rely exclusively on non-user fees to fund maintenance, and they have a maintenance backlog of nearly $100 billion.10

To improve maintenance, then, what is needed is not a huge infusion of federal dollars but an increased reliance on user fees to pay for infrastructure. One way that Congress can apply this principle is to limit federal transportation expenditures to the fees collected from transport users by the federal government, as described above in Principle 1. Beyond this, Congress can incorporate user fees into the formulas for distributing funds to state and local governments, promote mileage-based user fees, and eliminate all restrictions on the use of highway tolling.

Principle 2a: Incorporate User Fees into Funding Formulas

Early formulas for distributing highway funds to the states relied on such factors as population, land area, and road miles. The 2015 reauthorization, known as the FAST Act, based 2016-2020 distributions on the amount each state received in 2015 with a variety of modifications. One modification, for example, required that states receive no less than 95 percent of the gas taxes their residents pay into the Highway Trust Fund. Transit funds were distributed using a variety of formulas that used such factors as vehicle revenue miles and passenger miles.

To simplify the formulas, both highway and transit funds should be distributed primarily based on the recent distributions of funds. Because grants to transit agencies can vary widely from year to year, a 10-year average should be used as the funding benchmark rather than just a single year, as was done in the FAST Act. Beyond this, Congress should encourage state and local transportation agencies to rely more on user fees by incorporating those fees into the formulas.

User fees include funds collected from highway users and spent on highways, as well as funds collected from transit users and spent on transit. General funds spent on roads or transit, along with highway user fees diverted to transit or other purposes, should not count toward the federal formula. This would give state and local governments a powerful incentive to emphasize user fees in their own funding of transportation facilities, maintenance, and operations.

Basing the distribution of funds solely on user fees would result in a wildly different distribution of funds from historic levels. Because of that, the incorporation of user fees into the formula should be phased in over the six-year reauthorization period. In the first year, the distribution could be 90 percent based on historic funding and 10 percent based on user fees. With each successive year, the user-fee weight would rise by 5 percentage points until, in the sixth year, user fees would account for 35 percent of the funding; the sketch below illustrates the schedule. This would give state and local transportation agencies time and incentives to substitute user fees for other sources of funding.
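
To make the schedule concrete, here is a minimal sketch of the phase-in, assuming the blended allocation is a simple weighted average of a state’s historic funding share and its share of national user-fee revenue. The weights come from the text; the function name and the example shares are illustrative.

```python
# Minimal sketch of the proposed six-year phase-in. The weights (10 percent
# user fees in year one, rising 5 percentage points per year to 35 percent)
# come from the text; the weighted-average form and example shares are
# illustrative assumptions.
def formula_share(historic_share, user_fee_share, year):
    """Blend a state's historic funding share with its share of
    national user-fee revenue for reauthorization years 1 through 6."""
    w = 0.10 + 0.05 * (year - 1)  # user-fee weight: 0.10 in year 1, 0.35 in year 6
    return (1 - w) * historic_share + w * user_fee_share

# Example: a state with 2.0% of historic funding but 3.0% of national
# user-fee revenue sees its share drift upward as the user-fee weight grows.
for year in range(1, 7):
    print(f"Year {year}: {formula_share(0.020, 0.030, year):.4f}")
```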

The federal transit fund could be distributed to transit agencies based on the population and land area served by each transit system, as well as on the total fares collected by each transit agency. To simplify distribution in urban areas that are served by several transit agencies, Congress could give the Department of Transportation the option of distributing funds to states or metropolitan planning organizations, which would then be passed through to the transit agencies.

Principle 2b: Eliminate Tolling Restrictions

While gas taxes are a user fee, they are a poor sort of user fee, roughly similar to charging for groceries based on how far people push their shopping carts through the supermarket rather than what they put into those carts. Specifically, gas taxes suffer from four faults:

  • Unlike income taxes, sales taxes, and property taxes, gas taxes don’t automatically adjust for inflation. The value of the 18.4 cent gas tax that Congress set in 1993 has declined to about 11 cents, in 1993 dollars, today.
  • Gas taxes do not automatically adjust for more fuel-efficient cars. Although a 3,000-pound plug-in hybrid Prius puts about the same wear and tear on a road as a 7,000-pound Chevrolet Suburban, the former pays a lot less to use the road, and electric cars pay nothing at all. This also creates an equity problem because low-income families tend to own older, less fuel-efficient cars.
  • Gas taxes don’t go to the owners of the roads. Although close to half of all driving takes place on minor roads and streets that are mostly owned by local governments, nearly all gas taxes go to the states. While the states share some of the taxes with local governments, it isn’t enough, and so local governments have to supplement them with general funds. That supplement was $43 billion in 2016 alone.11 Not coincidentally, as noted above, local roads and bridges tend to have the biggest maintenance backlogs.
  • Gas taxes don’t fix congestion. Although it costs far more to provide a road network that can support peak-period traffic than off-peak traffic, auto drivers pay about the same whether they drive during rush hour or well outside of rush hour.

Increasing gas taxes can temporarily solve the first problem but would do nothing to solve the other three. Especially because the nation’s auto fleet is becoming increasingly electrified, a new system of user fees must be found. Two promising candidates are tolling and mileage-based user fees.

When Oregon started collecting gas taxes to pay for roads in 1919, gas taxes made more sense than tolls because collecting fuel taxes was much cheaper and more convenient than collecting tolls. That was still true in 1956, when Congress first created the Interstate Highway System and the Bureau of Public Roads opposed tolling because of its high collection costs. As a result, Congress forbade states receiving federal highway funds from tolling the roads built with those funds, with a few existing toll roads grandfathered in.

Today, however, tolls can be collected electronically, greatly reducing the cost and inconvenience. In recent reauthorizations, Congress has allowed a few areas to toll roads on a demonstration basis. The Oregon Transportation Commission, for example, has applied for federal approval of a large-scale variable-priced tolling program on major freeways in the Portland area; the varying toll rates are intended to shift users to less congested times of the day.12 For the 2020 reauthorization, Congress should lift all restrictions on tolling and leave it to the states, which are technically the owners of the roads, to decide whether tolling is a good way of funding infrastructure.

Principle 2c: Promote Mileage-Based User Fees

In addition to pioneering gas taxes, Oregon has also become the first state to experiment with mileage-based user fees on a large scale. The author is a volunteer in Oregon’s program and is satisfied that the state’s system protects the privacy of auto users while making it possible to collect different fees based on road owner and the time of use or the amount of traffic.13

Variable pricing can be applied using either tolls or mileage-based user fees in order to eliminate congestion. Economists often note that congestion results from poorly priced roads; just as airfares are higher at Thanksgiving than in February and Florida hotels cost more in the winter than the summer, roads should be priced higher when demand for them is highest. This leads many people to charge that, if such policies were enacted, roads would be used only by the wealthy.

To the contrary, roads have a unique characteristic that guarantees this won’t happen. Unlike airplanes and hotels, the ability of roads to accommodate demand declines when demand is the highest. Numerous studies show that the throughput of roads falls when traffic slows: at 50 miles per hour, a freeway lane can move about 2,000 vehicles per hour, but at 25 miles per hour it can only move about 1,000 vehicles per hour. By keeping traffic moving at high speeds, road pricing can double the number of vehicles using the roads during peak periods. Instead of pricing people off the roads, variable charges actually price people onto the roads.14
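
A back-of-envelope illustration of that throughput claim, using only the per-lane flows cited above; the lane count and peak duration below are hypothetical.

```python
# Illustration of the throughput claim, using only the flows cited in the
# text: roughly 2,000 vehicles per lane-hour at 50 mph versus roughly
# 1,000 per lane-hour at 25 mph. Lane count and peak length are assumed.
flows = {"priced, free-flowing (about 50 mph)": 2000,
         "unpriced, congested (about 25 mph)": 1000}
lanes, peak_hours = 3, 2
for scenario, per_lane_hour in flows.items():
    total = per_lane_hour * lanes * peak_hours
    print(f"{scenario}: {total:,} vehicles over a {peak_hours}-hour peak")
```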

The federal government, as well as the states, has long collected gas taxes, and some have suggested that the federal government begin a mileage-based user-fee program. But the main justification for a federal fuel tax is its low cost of collection: the federal government collects its fees directly from importers and refineries, something the states couldn’t do because not all fuel imported at one port or refined in one refinery is used in that state.

No such cost advantage exists for a federal mileage-based user fee, so the subsidiarity principle (see below) suggests those fees should be collected by the states. The only possible federal role might be to help ensure that state systems are interoperable, but that is likely to happen even without federal intervention. Oregon and Washington, for example, have both experimented with mileage-based user fees and ensured that their systems are interoperable.

Because of the advantages of mileage-based user fees over gas taxes, Congress may want to promote them by offering a small bonus in the state highway formula. For example, for every 10 percent of a state’s highway users who pay mileage-based user fees instead of gas taxes, the state could receive a 1 percent increase in federal funds; a sketch of this computation follows. This would encourage states to convert to mileage-based user fees in order to maintain their share of federal funds.
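
Here is a minimal sketch of that bonus computation, assuming the bonus accrues in whole 10-percentage-point steps; the function name and dollar figure are illustrative.

```python
# Sketch of the suggested bonus: a 1 percent increase in federal funds for
# every full 10 percent of a state's highway users paying mileage-based
# fees. The whole-step reading and the names are illustrative assumptions.
def mbuf_bonus(base_funds, mbuf_user_share):
    """Scale a state's formula funds by 1% per full 10 points of adoption."""
    steps = int(mbuf_user_share * 100) // 10
    return base_funds * (1 + 0.01 * steps)

print(f"${mbuf_bonus(100_000_000, 0.50):,.0f}")  # 50% adoption -> $105,000,000
```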

Principle 3: Subsidiarity

Subsidiarity is “the principle that decisions should always be taken at the lowest possible level or closest to where they will have their effect, for example in a local area rather than for a whole country.”15 In other words, state and local governments are better equipped than Congress to know state and local transportation priorities, so Congress should not hamstring them by dictating how to spend transportation funds. This principle requires:

  • no earmarking
  • abolishing competitive grant funds
  • reducing the number of formula funds to an absolute minimum, preferably just one for highways and one for transit
  • ending the requirement for long-range transportation planning, and
  • removing all restrictions on highway tolling.

The highway tolling issue is discussed under Principle 2. The others are discussed in more detail below.

Principle 3a: No Earmarking

In 1956, Congress created a formula for distributing highway funds based on each state’s population, land area, and road miles. While the formula changed over time, each state had some discretion in how to use the federal funds it received so long as they were spent on highways. In 1982, Congress supplemented the formula by adding 10 earmarks — requirements that some of the funds be spent on specific projects.

In the 1987 reauthorization bill, the number of earmarks grew to 187, which contributed to President Reagan’s veto of the bill — a veto that was overridden by Congress. There were 538 earmarks in 1991, and 1,850 in 1998.16

Most of these earmarks didn’t increase the funding received by a state. Instead, they came out of the funds the states were to receive under the highway formulas. In some cases, the states would have carried out the earmarked projects anyway. But often, those earmarks had little or nothing to do with transportation, including earmarks for museums, national park visitor centers, and other non-transportation-related projects. Thus, while the earmarks clearly benefited some constituencies, they reduced the efficiency and effectiveness of the state transportation systems.

By 2005, the number of earmarks had increased to more than 8,000, or an average of 15 for each congressional district.17 From Congress’s point of view, earmarks seemed cost-free: members appeared to be working hard to get projects for their constituents when, in fact, those funds would have gone to the states anyway.

One problem with this system was that earmarks tended to divert funds away from needed infrastructure maintenance toward new construction. New construction is more visible than maintenance, so politicians prefer to bring home funding for new projects rather than maintaining existing ones. As Sen. Tom Coburn (R-OK) noted after the 2005 reauthorization, the money earmarked by the bill “could have repaired more than 30,000 structurally deficient bridges.”18

Earmarks clearly violate the principle of subsidiarity. In 2010, Congress recognized this and decided to ban earmarks. That ban should remain in place for the 2020 reauthorization.

Principle 3b: Abolish Competitive Grant Funds

At first glance, competitive grant funds such as the New Starts and TIGER/BUILD programs sound like a good idea. Congress identifies a potential need but recognizes that some states or regions have that need more than others. Then it creates a fund and authorizes the Department of Transportation to distribute money from the fund to the projects according to specific criteria.

Yet those criteria are necessarily subjective. The result is that the distribution of funds turns out to be highly politicized. A Cato study of New Starts funds found that they disproportionately go to states that have members on the House Transportation and Infrastructure Committee.19 A Reason Foundation study made similar findings regarding TIGER grants.20

Moreover, once a fund is created, interest groups lobby for it to continue operating long after it has fulfilled its original purpose. The TIGER (Transportation Investment Generating Economic Recovery) program was created to help the economy recover from the 2008 recession. The economy has recovered, yet the program lives on, albeit under the new name of BUILD (Better Utilizing Investment to Leverage Development).

In addition, Congress isn’t always correct in identifying needs. Light rail and streetcars were rendered obsolete in 1927, when advances in bus technology made buses less expensive to buy and operate than streetcars. Between that year and 1975, hundreds of American cities converted their streetcar lines to buses, leaving just six cities with streetcars. Those cities retained them either because the cars ran through tunnels that couldn’t handle the exhaust fumes from buses or because the transit agency or company owned a private right of way for the streetcars.21

With everyone in the industry in agreement that buses were superior to streetcars (a belief that also applied to light rail), Congress nonetheless created a fund in 1991 to help cities build new light rail and streetcar lines. This decision grew out of then Massachusetts governor Francis Sargent’s (R) successful effort in 1973 to convince Congress to allow cities to cancel urban interstate freeways and use the federal funds to make transit capital improvements. Sargent wanted to cancel a freeway in Boston, and since Boston already had lots of rail transit, it had plenty of places to reallocate that federal transit money, such as new railcars, signaling systems, and other capital improvements.

Other cities, including Buffalo, Portland, Sacramento, and San Jose, also wanted to cancel freeways, but their transit systems centered on buses. Unlike rails, buses are not capital-intensive, so spending the cancelled freeway money on buses didn’t make sense. These cities decided to build light rail, not because it was an efficient or effective way of moving people, but because it was expensive and could absorb the federal funds while at the same time creating work for the contractors who otherwise would have built the freeways.

By 1991, all of the cities that wanted to cancel freeways had done so, but in the meantime a lobby had grown for more rail construction, regardless of its cost-effectiveness. So, Congress repealed the 1973 freeway law and created a new fund called New Starts for transit capital grants. Most of the money in this fund went for the construction of new rail transit lines.

To make matters worse, in order to be eligible for the largest possible share of the New Starts fund, cities began planning increasingly expensive rail projects. In the 1980s, after adjusting for inflation to today’s dollars, the average light-rail project cost about $30 million per mile. In the 1990s, costs grew to more than $50 million per mile. In the 2000s, costs reached well over $100 million per mile, and in the 2010s, average costs reached $200 million per mile.

Seattle’s Sound Transit 3 program, approved by voters in 2016, calls for spending $32 billion to build 62 miles of light-rail lines, for an average cost of more than $500 million per mile.22 Sound Transit is counting on federal matching funds for these lines. Without the New Starts fund, cities and transit agencies would be much more cautious with how they spend their resources.

Principle 3c: Reduce the Number of Formula Funds to a Minimum

Federal surface transportation dollars are currently distributed through at least two dozen different funds, including funds for such things as freight highways, transit-oriented developments, and transportation planning.23 The multiplicity of these funds has the same effect as earmarking: incentivizing states to spend transportation money in certain ways, which often results in less-efficient spending than if the states were free to prioritize transportation spending. Yet each fund creates a constituency of interest groups that benefit from the fund even if the overall benefits to the nation are negligible.

The division of funds into so many different categories also increases the overhead costs to state and local governments because the Department of Transportation requires recipients to carefully document that the money they received was spent only on projects allowed under each fund. For example, Jay Schlosser, the city engineer in Tehachapi, California, reports that the administrative costs associated with federal funds are at least five times greater than those associated with the city’s own funds.24

To minimize these problems, Congress should reduce the number of funds. Ideally, there should be just two: one for highways that is distributed to the states, and one for transit that is distributed to metropolitan planning organizations or, for those transit agencies outside of metropolitan areas, the transit agencies themselves.

Congress should also minimize the requirements limiting the use of these funds, thus allowing state and local governments to set their own priorities. Historically, for example, most federal transit funds have been dedicated to capital improvements, and many transit agencies have also had to dedicate a large share of their funds to capital improvements to match federal funds. The result of this emphasis on capital is the nearly $100 billion maintenance deficit faced by the nation’s transit industry. Liberalizing these restrictions would allow individual agencies to make their own determinations of the appropriate ratios of capital, maintenance, and operating costs.

Principle 3d: End the Requirement for Long-Range Transportation Planning

Congress currently requires states and metropolitan planning organizations to prepare short-term (3-year) transportation plans, also known as transportation improvement plans, as well as long-range (20-year) transportation plans. Under the above simplified formulas, neither of these is necessary, but it is especially important to abolish the requirement for long-range planning, as its results have been pernicious.

Given rapidly changing technologies, no one can say for certain what our transportation system will look like in 10 years, much less in 20 years. Just a decade ago, no one would have predicted the huge effect that ride-hailing services such as Uber and Lyft would have on cities and transit systems. Ten years from now, driverless ride hailing may have an even greater effect. Since these new technologies and their effects are unpredictable, no one can write an effective long-range transportation plan.

Congress requires that the long-range transportation plans be revised every five years to take such changes into account. However, once set in motion, government plans are difficult to change, even when they fail. Interest groups that benefit from a plan will lobby to keep it in place even if the plan is otherwise a failure.

For example, the Sacramento Area Council of Governments’ 2006 long-range transportation plan admitted that the plans written for the region “during the past 25 years have not worked out.” Despite transit improvements and a deliberate decision not to build more roads, transit’s share of travel had declined, and driving had doubled since 1980. Despite attempts to promote infill and discourage sprawl, low-density development “continues to out-pace infill.”25 Yet the council learned nothing from these failures. Instead, the 2006 plan “continues the direction of” previous plans by giving “first priority to expanding the transit system” and attempting to “reduce the number and length of auto trips.”26

Rather than force state and metropolitan governments to devote funds to pointless and often counterproductive plans, Congress should simply let the states and regions decide for themselves how much planning they need to do. This is another case of affirming the principle of subsidiarity.

Conclusion

A surface transportation reauthorization bill based on the principles of pay-as-you-go, user fees, and subsidiarity would greatly increase the efficiency and effectiveness of federal transportation spending. While these principles may reduce the total amount of federal dollars being spent on transportation, the increased efficiency would more than offset that decline, thus improving public welfare. Congress should seriously consider incorporating these principles into the 2020 surface transportation reauthorization.

Notes

1 Richard F. Weingroff, “Kill the Bill: Why the U.S. House of Representatives Rejected the Interstate System in 1955,” Federal Highway Administration, June 27, 2017.

2 Tax Policy Center, What Is the Highway Trust Fund and How Is It Financed? (Washington: Brookings Institution, 2017).

3 Federal Highway Administration, “Highway Statistics 2016,” 2018, Table FE-210.

4 Federal Highway Administration, “Apportionment,” February 8, 2017.

5 Congressional Budget Office, Options for Reducing the Deficit: 2019 to 2028 (Washington: CBO, 2018), p. 7.

6 Congressional Budget Office, Options for Reducing the Deficit: 2019 to 2028, p. 169.

7 Robert W. Poole, Jr. and Adrian T. Moore, Restoring Trust in the Highway Trust Fund (Los Angeles: Reason Foundation, 2010), p. 1.

8 Federal Highway Administration, “Bridge Condition by Owner 2017,” https://www.fhwa.dot.gov/bridge/nbi/no10/owner17e.cfm#total.

9 Federal Highway Administration, “Highway Statistics 2016,” Table HM-63 and Table HM-64. These tables do not distinguish between highway owners, but they do distinguish between interstates, arterials, and collectors. Interstates are state-owned and are the smoothest roads, collectors are mostly locally owned and are the roughest roads, and arterials are mostly state-owned and are intermediate in roughness.

10 Department of Transportation, Status of the Nation’s Highways, Bridges, and Transit: Conditions and Performance (Washington: Department of Transportation, 2016), p. l (Roman numeral L). The report estimates a backlog of $89 billion, but in 2019 dollars that is $100 billion.

11 Federal Highway Administration, “Highway Statistics 2016,” Table HF-10.

12 Andrew Theen, “Tolls on I-5, 205, Step towards Federal Approval,” The Oregonian, November 29, 2018.

13 Oregon Department of Transportation, “Getting to OReGo,” 2016.

14 Randal O’Toole, “Ending Congestion by Refinancing Highways,” Cato Institute Policy Analysis no. 695, May 15, 2012, pp. 3-6.

15 Cambridge Dictionary, “Subsidiarity,” 2019, https://dictionary.cambridge.org/us/dictionary/english/subsidiarity.

16 Ronald Utt, A Primer on Lobbyists, Earmarks, and Congressional Reform (Washington: Heritage Foundation, 2006), Table 1.

17 “Report Documents Impact of Earmarks on Transportation Funding,” The Newspaper, September 12, 2007.

18 “Report Documents Impact of Earmarks on Transportation Funding.”

19 Randal O’Toole and Michelangelo Landgrave, “Rails and Reauthorization: The Inequity of Federal Transit Funding,” Cato Institute Policy Analysis no. 772, April 21, 2015, p. 1.

20 Baruch Feigenbaum, Evaluating and Improving TIGER Grants (Los Angeles: Reason, 2012), p. 10.

21 George Hilton, testimony before the Senate Subcommittee on Antitrust and Monopoly, the Industrial Reorganization Act: Hearings before the Subcommittee on Antitrust and Monopoly on S. 1167, Part 4A, 93rd Cong., 2d Sess. (1974), p. 2205.

22 John Niles, “Cost Exceeds Benefits in Sound Transit’s ST3 Light-Rail Expansion,” Washington Policy Center, 2016.

23 Six highway funds are listed in Federal Highway Administration, “Apportionment,” 2017, while 18 transit funds are listed in Federal Transit Administration, “FTA Allocations for Formula and Discretionary Programs by State, FY 1998-2019” (Excel file), 2018, https://www.transit.dot.gov/sites/fta.dot.gov/files/docs/funding/grants/38096/fta-apportionments-formula-and-discretionary-programs-state-fy-1998-2019-full-year.xls.

24 Jay Schlosser, personal communication to author, 2016.

25 Sacramento Area Council of Governments, 2006 Metropolitan Transportation Plan (Sacramento: Sacramento Area Council of Governments, 2006), p. 3.

26 Sacramento Area Council of Governments, 2006 Metropolitan Transportation Plan, pp. 4, 23.

Randal O’Toole is a senior fellow with the Cato Institute and author of the recent book, Romance of the Rails: Why the Passenger Trains We Love Are Not the Transportation We Need.

Unplugging the Third Rail: Choices for Affordable Medicare


John F. Early

Medicare expenditures as a share of gross domestic product (GDP) are now six times larger than they were in 1967. Forecasts for the next 75 years show that almost $1 of every $5 of GDP could be spent on Medicare. That is unaffordable. Without intervention, Medicare’s share of GDP will force some combination of substantial cuts in other government spending, significantly higher taxes, and unhealthy levels of public debt.

There are many policy issues concerning maintaining or redesigning Medicare. This paper looks only at the question of affordability. It identifies the minimum changes required to prevent further expansion of Medicare’s share of GDP, while retaining the existing structure of the program.

Three modifications can be phased in to meet that objective. About 41 percent of the required savings can be achieved by slowly raising the program’s eligibility age and by restoring the original criteria for disability benefits. The eligibility age could first be harmonized with the rising age for full retirement benefits from Social Security and then continue to increase consistent with rising life expectancy.

The remaining savings would require more cost sharing by beneficiaries. The first steps would be to increase deductibles and coinsurance to values that are typical for commercial insurance among the working population. Further increases would be required after another 30 to 50 years. These changes may seem large, but they are necessary to undo the substantial problem that history has given us. The good news is that if we begin soon, the changes can be made gradually, and current beneficiaries would face no benefit reductions.

Introduction

In 2016, Medicare spending constituted 3.64 percent of GDP, a sixfold increase since 1967, the first year after the Medicare program began operation. However, Medicare growth will not stop at this level; long-range projections estimate that it could grow to between 9.00 percent and 19.79 percent of GDP by 2091 (see Appendix A for sources and analysis of alternative forecasts). Many factors contributing to the past and projected increases have no inherent stopping points, so Medicare’s rising burden could continue indefinitely.

This situation is not sustainable. As Medicare grows relative to GDP, it will necessarily create some combination of crowding out of other government expenditures, rising taxes, and increasing debt. At some point higher taxes will slow the economy and more debt will lead to higher interest rates, resulting in a vicious cycle of slower economic growth, exploding government debt, and perhaps even government default.

The magnitudes of these Medicare spending effects are substantial. Funding the increase exclusively by cutting other federal spending would require across-the-board reductions by the end of the 75-year forecast horizon of between 30.45 percent and 91.76 percent in other entitlement programs, such as Social Security, as well as in discretionary spending (see Appendix B for details on required spending reductions). Funding the increase exclusively through higher taxes would require hikes of between 17.35 percent and 36.33 percent across all taxes: personal income, payroll, corporate income, and others (see Appendix C for details on required tax increases). Funding the increase exclusively through debt would raise federal public debt from the current 77.53 percent of GDP to the level of current Greek debt (181.9 percent) within 13 to 18 years—assuming the rest of federal spending and revenues continue their same relationships to GDP (see Appendix D for details on debt growth). And if policymakers tried to cover Medicare’s cost growth by spreading the financing equally across these three sources, they would still need to cut other federal programs by between 10.15 and 30.59 percent and raise taxes by between 5.79 and 12.11 percent, while delaying Greek debt levels by a mere 4–10 years.

These effects apply regardless of one’s view on whether governments should subsidize health insurance, whether healthcare is a right, or whether Medicare has good or bad effects on health, redistribution, or efficiency. Unaffordability must be addressed irrespective of how one sees these other issues.

A crucial question is, therefore, how the United States can slow the growth of Medicare. This is not an issue of how to finance Medicare; if expenditure increases faster than GDP grows, no financing system can pay for it.

I therefore consider what changes in Medicare’s parameters might reduce its growth rate relative to the overall economy. These changes take the program’s current structure as given but adjust key features that affect the level and growth rate of expenditure. The factors assessed include the age of eligibility, the criteria for disability under Social Security that guarantees Medicare coverage, and the sizes of deductibles and coinsurance.

My analysis shows that there are combinations of these three adjustments that can reduce Medicare expenditure growth to a rate consistent with the long-term historical growth rate of GDP. If future GDP growth continues to approximate recent history, Medicare would then remain stable as a percentage of the economy. These changes would not resolve the controversies over Medicare; it would still generate numerous distortions in healthcare markets and require substantial distorting taxation. But a Medicare program that does not bankrupt the economy is far better than one that does.

Quantifying the Causes of Historical Medicare Trends

The expansion in Medicare’s share of GDP from 1967 to 2016 resulted from three general trends: growth of the senior population, healthcare prices rising faster than overall inflation, and policy changes that enlarged coverage. Figure 1 shows the contribution of each trend and some more granular causes within them. Among those causes:

  • Population growth reflects the rate at which new beneficiaries reached the eligibility age of 65. The senior population grew at an average annual rate of 1.2 percent from the inception of Medicare and added 0.06 percent of GDP to the cost of Medicare.1
  • Increased longevity added more years of Medicare coverage for each beneficiary. In 1967, the original Medicare beneficiaries, at eligibility age 65, had an average life expectancy of 14.8 years. By 2017, life expectancy had risen an additional 4.5 years, boosting the number of people receiving benefits by 30.4 percent and, thereby, consuming an additional 0.61 percent of GDP (a quick check of this arithmetic appears below, after the figure source note).2
  • Aging captures the effects of the average beneficiary being older and requiring more care. The net effect of aging was to raise Medicare expenditures by only 0.11 percent of GDP because improvements in health and care modalities have significantly reduced the cost of care for older beneficiaries.3
  • Relative price inflation reflects medical inflation being higher, on average, than general inflation, which increased Medicare’s share of GDP by 0.42 percentage points.
  • Disabled and end-stage renal disease (ESRD) patients were added to Medicare coverage in 1973 and account for an additional 0.25 percent of GDP.
  • Weaker disability criteria have been adopted by the Social Security Administration since disability coverage began in 1972. As a result, despite better medical care mitigating many disabilities, declining on-the-job injuries, and the new Americans with Disabilities Act requiring employer accommodations for disabled workers, the granting of Medicare disability benefits more than tripled from 1.41 percent of the working-age population to 5.17 percent (see Appendix E for fuller details on the changes and their consequences).4 The additional beneficiaries from these weakened regulations constitute the largest single source of Medicare’s increased expenditures—an additional 0.66 percent of GDP.
  • Drug benefits were added in 2006, increasing Medicare’s share of GDP by 0.38 percentage points.
  • Increased consumption of medical services per beneficiary beyond the added drug benefit accounted for nearly one-fifth of Medicare’s greater resource utilization, rising more than sixfold and consuming an additional 0.61 percent of GDP. Some of these increases were legislated, such as the addition of 60 days of lifetime in-patient care (1967); liver transplants (1985); unlimited home health services and detoxification services (1980); and expanded coverage of podiatrists and orthotic shoe manufacturers (2014).5 Most of the explicit expansions, however, came from hundreds of administrative additions such as electric wheelchairs, expanded joint replacements, and orthopedic braces.

Source: Medicare and Medicaid Board of Trustees, “Table II.B1,” 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Washington: July 13, 2017); and “Expanded and Supplementary Tables and Figures,” https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/ReportsTrustFunds/index.html. Shares of GDP computed by the author.
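
A quick check of the longevity arithmetic from the list above; the two life-expectancy figures come from the text, and the ratio is the only calculation involved.

```python
# Quick check of the longevity item above: life expectancy at age 65 was
# 14.8 years in 1967 and rose by a further 4.5 years by 2017, so expected
# years of benefits per beneficiary (and hence, roughly, the beneficiary
# rolls at any moment) grew by about 30 percent.
base_years, added_years = 14.8, 4.5
print(f"{added_years / base_years:.1%}")  # -> 30.4%
```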

At least as important in generating greater consumption were the steep drops in the share of the cost paid by beneficiaries. In Medicare’s first years, beneficiaries had to meet deductibles equal to 44 percent of their average benefit, but by 2016 deductibles were a mere 14 percent of the benefit. The deductible for medical benefits (Part B) in real dollars fell by more than half.6 On top of those general cost-sharing reductions, beginning in 1989 four new programs started to pay all premiums, deductibles, and coinsurance for about 20 percent of beneficiaries with lower incomes, cutting their marginal cost of healthcare to zero.7

Potential Policy Adjustments

Three types of policy adjustments could slow Medicare’s unsustainable expenditure increase without substantial modification to the program’s general structure. The following analysis evaluates the ability of each of these adjustments to limit the increase in Medicare’s share of GDP at several different levels of implementation. I consider the effect of each selected change by itself on the share of GDP at the end of the forecast period, then look at the contributions of selected combinations of all three, and finally identify some phase-in strategies, since immediate implementation of the full adjustments would not be necessary from the expenditure perspective and might be unnecessarily disruptive.

Raise the eligibility age

Despite substantially greater longevity, the eligibility age for Medicare has remained at 65 since inception. Social Security eligibility for full benefits, by contrast, is slowly being raised to age 67 by 2027. Harmonizing Medicare’s eligibility age with Social Security’s would be a minimal appropriate adjustment.8

A second alternative would begin increasing the eligibility age on a continuing basis to keep life expectancy at the eligibility age the same as it was in 2016 (19.6 years). This approach would initially raise the age more slowly than the move to age 67 by 2027, but the increase would continue for the entire forecast period, reaching 69 years, 3 months, by 2091 and ultimately saving more money.

A third possibility would be to adopt a higher fixed eligibility age of 70. With a phase-in rate similar to the one used in the first option to raise the eligibility age to 67, Medicare’s eligibility age would reach age 70 in 2039 and remain at that age thereafter.

Finally, the eligibility age could be set to give the same expected number of years of coverage as the original Medicare plan: 14.78 years for the typical beneficiary. This higher eligibility age could phase in at the same rate as the age-70 design. After the eligibility age reached age 70, if it continued to rise at the same rate it would reach the target of 14.78 expected years of benefits with an eligibility age of 73 in 2072 and then rise more slowly, maintaining the average expectancy of 14.78 years. Current demographic forecasts point to an eligibility age of approximately 74 by 2091.

As the eligibility age increases, some individuals between age 65 and the new eligibility age would continue on their previous disability status until they reached the new age, and others would likely be granted new Social Security disability benefits and, thus, be added to Medicare as disabled individuals. The results presented here incorporate estimates of those offsetting effects using reported levels of disability among older cohorts in the Current Population Survey to extend the observed Medicare under-age-65 disability rates to individuals above age 65.

Figure 2 illustrates the reduction in Medicare’s share of GDP created by each of the four alternative phased-in modifications to eligibility age. The most powerful age adjustment, which restores the 1967 life-expectancy criterion, would slow the future growth of Medicare’s GDP share by 25.55 percent under the upper scenario by the end of the forecast period and by 18.65 percent under the lower scenario. By itself, the age intervention has only moderate effects, but it could still make a significant contribution as part of a multifactor approach.


Source: Author’s calculation from Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Washington: July 13, 2017); “Expanded and Supplementary Tables and Figures,” https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/ReportsTrustFunds/index.html; and Center for Medicare and Medicaid Services, “CMS Statistics Reference Booklet,” 2016.

Restore disability criteria

Procedures for granting Social Security disability benefits have been systematically weakened since 1973.9 For example, despite the statutory requirement that beneficiaries must be “unable to work any job in the national economy,” beneficiaries are now allowed to earn $13,000 per year without any reduction in their benefits.10 That is roughly the earnings from a year-round, 35-hour-per-week job at minimum wage. So, some beneficiaries can and do work, but choose not to work enough to lose any of their benefits.

Administrative procedures have also been changed to bias benefit determinations in favor of the claimant. First, the administrative law judges in the Social Security Administration are required to give greater weight to the opinions of the practitioners hired by the applicant than to the judgment of the government’s own experts.11 Second, a person can combine two or more nondisabling conditions to claim a benefit even though the combined effect of those conditions still does not actually prevent work. Third, applicants are not required to complete a course of remedial therapy that would allow them to regain some or all of their lost function. Other countries have introduced reforms that require applicants to develop a rehabilitation plan for returning to work with an employer and to demonstrate that they have followed through on it before being approved for disability payments. In the Netherlands, this approach, combined with employer accommodation reforms, reduced newly approved cases by 60 percent.12

Once people are on taxpayer-financed disability benefits, they almost never admit to recovery and take up productive work. Less than 1 percent of beneficiaries ever leave the program to return to work, yet research by Till von Wachter of the University of California, Los Angeles, suggests that at least half of those in the 30–44 age group could, in fact, return to work if they were required to do so.13

As a result of degraded procedures and criteria, the percentage of the working-age population receiving disability benefits under Medicare has more than tripled. Reinstating statutory eligibility criteria could reduce the increase in Medicare’s share of GDP by between 15.88 percent at the lower boundary and 22.41 percent at the upper boundary.14 Making these changes in disability criteria would also create significant savings for Social Security, but those have not been incorporated here.

Increase cost sharing

True medical insurance would pay for high-cost, low-probability events such as heart surgery or a regimen of medication to cure hepatitis C. Routine care, such as wellness check-ups, vaccines, screening tests, treatments for the occasional upper respiratory infection, treatments for osteoarthritis, and even uncomplicated cataract surgery would not be insurable events because they are predictably likely and not exceptionally expensive.

Before Medicare, most medical insurance was protection against catastrophic medical events. Original Medicare reflected that type of arrangement, with deductibles equal to 44 percent of the total benefits, compared with only 14 percent today. This sharp decline in cost sharing reflects three trends. First, government began to promote and mandate so-called Health Maintenance Organizations (HMOs) in the nonsenior market that covered almost everything with minimal cost sharing. The putative tradeoff was that these HMOs could limit utilization and control costs better with a variety of managed-care techniques. The richer coverage became the widely accepted norm and was likewise added to Medicare, but without the cost control of managed care. Second, both state and federal governments mandated literally thousands of items that must be covered by insurance. While these regulations did not directly apply to Medicare, their intent was largely adopted for Medicare. Third, because Medicare was taxpayer-funded, lawmakers were not subject to market restraints and responded to political pressure that added benefits without countervailing cost controls.

A minimum starting point for reforming Medicare cost sharing would be to require at least as much financial responsibility from its beneficiaries as from the working population. In 2016, the average deductible for private-employer single plans was $1,505, which was 17 percent lower than the $1,814 sum of the deductibles for all three Medicare parts.15 This comparison, however, suggests a deceptive similarity. Medicare has three separate deductibles: $1,288 for hospitalization, $166 for medical, and $360 maximum for drugs (zero in many plans).16 The private-sector average deductible applies to the sum of all three expenditure types: hospitalization, medical, and drugs.

Only 12 percent of Medicare beneficiaries have an episode of hospital care each year, so for 88 percent of the beneficiaries their effective limit on unsubsidized care is only $166. Compare that to the average for single beneficiaries in private employer plans of $1,505—9 times more. High-deductible plans are the fastest-growing type of employer plan and are offered by two-thirds of large employers. Their deductibles average $2,304, or 14 times Medicare’s $166.17 The trend is for more of these plans to stipulate $3,000 deductibles, which is 18 times Medicare’s rate.
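
The deductible multiples cited above can be verified directly from the dollar figures in the text.

```python
# Verifying the multiples above, all relative to Medicare's $166 Part B
# deductible, which is the binding limit for the roughly 88 percent of
# beneficiaries with no hospital episode in a year.
medicare_part_b = 166
for label, deductible in [("average private single plan", 1_505),
                          ("average high-deductible plan", 2_304),
                          ("leading-edge plan", 3_000)]:
    print(f"{label}: about {deductible / medicare_part_b:.0f} times Medicare's")
```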

Medicare’s 20 percent coinsurance appears to be in line with typical private insurance provisions for in-network providers that have demonstrated high quality at lower total cost per episode of care. Depending on the type of plan, private coverage outside the limited network is either nonexistent or at coinsurance rates of 40–50 percent. Although there is a suggestive similarity between in-network coverage and the set of doctors accepting Medicare assignment, the analogy is a poor one.

First, many physicians outside of highly affluent practices continue treating their patients once they reach age 65 out of a social, professional, or moral commitment to them, despite receiving lower fees. Second, Medicare’s monopsony within the senior market is so strong that few physicians—and virtually no hospitals—can resist its power. Third, beyond the sheer market power, government has prohibited physicians and hospitals from treating both patients under Medicare and eligible senior patients outside of Medicare. If a private insurer required its in-network physicians to treat only its beneficiaries, state insurance regulators would pursue them under the network sufficiency and any-willing-provider regulations. Antitrust litigators would also likely follow.18 As a result, seniors have virtually unlimited provider access at a nominal in-network coinsurance.

Finally, Medicare’s coinsurance percentage is applied to Medicare’s government-enforced provider fee for the service. Private coinsurance percentages are applied to the contracted physician fee for in-network care and to the usual and customary charges for out-of-network care.19 In-network fees average 79 percent higher than the corresponding Medicare fees, so Medicare’s 20 percent coinsurance is applied to a lower fee, resulting in a substantially lower out-of-pocket cost for the same service.20 To make Medicare’s coinsurance equivalent to the in-network coinsurance dollar amount for the working insured, the Medicare coinsurance rate would need to be approximately 36 percent.
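
The 36 percent figure follows from the fee differential alone. A worked check, using an arbitrary $100 Medicare fee as the scale:

```python
# Worked arithmetic behind the 36 percent equivalence above. Private
# in-network fees average 79 percent higher than Medicare's, so matching
# the private plan's 20 percent coinsurance in dollar terms requires a
# higher rate applied to Medicare's lower fee. The $100 fee is arbitrary.
medicare_fee = 100.0
private_fee = 1.79 * medicare_fee          # in-network fee, 79% higher
private_out_of_pocket = 0.20 * private_fee
equivalent_medicare_rate = private_out_of_pocket / medicare_fee
print(f"{equivalent_medicare_rate:.0%}")   # -> 36%
```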

Beginning in 2011, Medicare eliminated all deductible and coinsurance requirements for a set of preventive health examinations and tests. Making these services free was justified as saving money by preventing costly diseases, although the evidence of actual savings is, at best, debatable for such across-the-board interventions.21 This change reduced further the financial incentives for prudent consumption.

The substantial divergence of Medicare cost sharing from industry-standard practices is both a cause of its out-of-control expenditures and a significant opportunity to regain control. Making Medicare cost sharing more like industry-standard practices has two benefits. First, the beneficiaries pay a greater portion of the total expenditure and taxpayers pay less. Second, because they must spend some of their own cash, beneficiaries will be more efficient in their use of care, consuming somewhat less overall. This analysis estimates the sensitivity of beneficiary spending to the cost sharing they must pay by using the conservative lower end of such estimates from the economics literature.22
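
As a sketch of how such an elasticity enters the calculation: the –0.20 value is the one cited in the Table 1 sources, but the constant-elasticity functional form below is an illustrative assumption, not the author’s model.

```python
# Sketch of how a demand elasticity converts higher cost sharing into
# lower consumption. The -0.20 elasticity is the value cited in the
# Table 1 sources; the constant-elasticity form is an illustrative
# assumption, not the author's actual model.
def consumption_change(price_ratio, elasticity=-0.20):
    """Fractional change in care consumed when the out-of-pocket
    price rises by price_ratio (e.g., 2.0 = doubling)."""
    return price_ratio ** elasticity - 1

print(f"{consumption_change(2.0):+.1%}")  # doubling cost sharing -> about -13%
```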

Table 1 shows a range of plausible alternative plan designs that bring Medicare cost sharing closer to that for the working insured. Each of these alternatives replaces the generously low Part B deductible with one of three typical private-plan deductibles: the average of all plans, the average of consumer-directed plans, or the leading-edge $3,000 deductible. None has any first-dollar coverage. The “all parts” plans apply a unified deductible to all expenses and do not exempt the first 60 days of a hospital stay from coinsurance. The first two plan designs are the same except that the first continues the practice of zero cost sharing for dual-eligible beneficiaries, while the second adds a modest deductible and coinsurance, as do the other examples.

Table 1
Alternative cost-sharing plan designs for Medicare, consistent with commercial insurance


Source: Author’s computation using elasticity estimate of –0.20 for most of the calculations. See Amanda E. Kowalski, “Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care,” National Bureau of Economic Research Working Paper no. 15085, June 2009; Joseph P. Newhouse and the Insurance Experiment Group, Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge: Harvard University Press, 1993); and Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Washington: July 13, 2017), p. 163. Medicare Trustee forecasts use the same –0.20 elasticity. The elasticity applied when moving from zero out-of-pocket cost to any coinsurance or deductible is –0.35, derived from the results of Katherine Baicker, Sarah L. Taubman, Heidi L. Allen, et al., “The Oregon Experiment—Effects of Medicaid on Clinical Outcomes,” New England Journal of Medicine 368 (May 2, 2013): 1713–22, doi:10.1056/NEJMsa1212321. Baseline data are computed from Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, the downloaded file “2017 Expanded and Supplementary Tables and Figures.zip,” and Center for Medicare and Medicaid Services, “CMS Statistics Reference Booklet,” 2016.

The addition of dual-eligible cost sharing strengthens the effect of the first policy option by almost two-thirds, increasing the savings from 9.24 percent to 15.29 percent at the lower boundary and from 6.74 percent to 11.16 percent at the upper boundary. Advocates for the current policy justify zero deductibles and coinsurance for dual-eligible beneficiaries on the basis of their poverty. But with no financial stake in the transaction, these beneficiaries have no incentive to restrain consumption, and they consume 2.27 times more per capita than Medicare-only beneficiaries.23 Advocates correctly note that the dual-eligible population is also sicker than the rest, but after an adjustment for health status it still spends 42.42 percent more per capita than its difference in health status would predict.24
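
The “almost two-thirds” figure is simply the ratio of the savings with and without dual-eligible cost sharing; a quick check at both boundaries:

# Check of the "almost two-thirds" strengthening: savings with
# dual-eligible cost sharing relative to savings without it.
for base, with_dual in [(9.24, 15.29), (6.74, 11.16)]:
    print(f"{with_dual / base - 1:.1%} stronger")  # ~65.5% at both boundaries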

Giving lower-income beneficiaries totally free care incentivizes consumption of healthcare for which costs exceed benefits. Even modest copays and deductibles will help limit their consumption to necessary services because they must make at least some tradeoffs in how they spend their money. Reduced premiums and lower maximum out-of-pocket limits can give poor seniors similar economic relief without eliminating all incentives to conserve.

The leading-edge commercial-equivalent plan, with the highest cost sharing, would mitigate the increase in the GDP share for 2091 by between 35.89 percent and 49.18 percent. Medicare expenditures per person would be cut by 29.29 percent, but individual consumption would be reduced by only 7.10 percent, with beneficiaries paying the difference.
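
The gap between the 29.29 percent cut in Medicare outlays and the 7.10 percent drop in consumption reflects a shift of costs to beneficiaries rather than an equivalent loss of care. A minimal accounting sketch, using a hypothetical per-beneficiary figure and assuming for simplicity that Medicare initially pays the full amount:

# Illustrative decomposition. The $10,000 figure and the assumption that
# Medicare initially pays everything are hypothetical; only the two
# percentages come from the analysis above.
consumption_before = 10_000.0
medicare_paid_before = consumption_before                  # simplifying assumption
consumption_after = consumption_before * (1 - 0.0710)      # -7.10%
medicare_paid_after = medicare_paid_before * (1 - 0.2929)  # -29.29%
beneficiary_paid_after = consumption_after - medicare_paid_after
print(round(beneficiary_paid_after))  # ~2,219 shifted to the beneficiary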

Reduced consumption does not necessarily diminish the effectiveness of care. Several studies have concluded that one-quarter or more of the expenditures on medical care in the United States make little or no significant contribution to improved health status.25 Of course, when consumers cut back, they do not necessarily eliminate the items that expert panels would consider excessive; indeed, studies have found that spending reductions from higher copays are divided roughly proportionately between the items that expert analysts believe are beneficial and those they don’t.26 Proportionate noncompliance with expert panels is not necessarily bad. Even if panel consensus is sustained over time, individual patients may benefit from contrary decisions that they reach with their personal physicians. Patients, physicians, and insurers will continue to benefit from research on medical outcomes of different treatments, but the costs of these individual decisions are most appropriately borne, at least in part, by the individual who putatively gets the benefit, not entirely by the taxpayer, who does not generally benefit.

Cumulative effects of multiple interventions

Each of the three interventions examined so far offers significant improvements in the Medicare expenditure burden on GDP, but no single one is sufficient to overcome the entire problem. Since both the higher eligibility age and the reinstated disability criteria ease the flow of individuals into the program, calculating their combined effect is relatively straightforward. Implementing both the eligibility age based on the 1967 years of benefits and the restored disability criteria would remove about 41 percent of the forecast increase above today’s 3.64 percent GDP share across the range of the forecast.

Combining the market-based $3,000 deductible plan design with the other two interventions eliminates between 62.21 and 70.24 percent of the total problem and lowers the forecast range to between 5.24 and 9.74 percent of GDP.27
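
The “share of the problem eliminated” figures follow from comparing the projected rise above today’s 3.64 percent GDP share before and after the interventions. A quick check at both boundaries:

# Verifying the stated elimination percentages. "The problem" is the
# projected rise above Medicare's current 3.64 percent share of GDP.
current = 3.64
for before, after in [(9.00, 5.24), (19.79, 9.74)]:
    removed = (before - after) / (before - current)
    print(f"{before}% -> {after}%: {removed:.2%} of the rise eliminated")
# prints ~70.15% at the lower boundary and ~62.23% at the upper boundary,
# matching the 62.21-70.24 percent range up to rounding.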

Table 2 adds the cumulative effects of all three interventions for each cost-share design and provides two more plan designs that have been calculated to reduce expenditures sufficiently to avoid the entire increased GDP share when combined with the increased eligibility age and restored disability criteria. As one would expect, the plan designed to avoid the expenditure increases under the upper-boundary scenario would create a surplus if the lower boundary obtained; the 132.86 percent reduction is consistent with that expectation.

Table 2
Alternative cost-sharing Medicare designs to mitigate excess expenditures fully


Source: Author’s computation using elasticity estimate of –0.20 for most of the calculations. See Amanda E. Kowalski, “Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care,” National Bureau of Economic Research, NBER Working Paper no. 15085, June 2009; Joseph P. Newhouse and the Insurance Experiment Group, Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge: Harvard University Press, 1993); and Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Washington: July 13, 2017), p. 163. Medicare Trustee forecasts use the same –0.20 elasticity. For movements from zero out-of-pocket cost to any coinsurance/deductible, an elasticity of –0.35 is used, derived from the results of Katherine Baicker, Sarah L. Taubman, Heidi L. Allen, et al., “The Oregon Experiment—Effects of Medicaid on Clinical Outcomes,” New England Journal of Medicine 368 (May 2, 2013): 1713–22, doi:10.1056/NEJMsa1212321. Baseline data are computed from Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, the downloaded file “2017 Expanded and Supplementary Tables and Figures.zip,” and Center for Medicare and Medicaid Services, “CMS Statistics Reference Booklet,” 2016.

Phasing in

While one might configure the details differently, any plausible options to avoid the unsustainable rise in Medicare expenditures by adjusting only values of its existing design parameters would require significant changes in the benefit package similar to those for the last two designs in Table 2. The required changes may appear to be large, but we must recall that the problem is very large and has been created over decades. It now needs to be unwound. Fortunately, if we do not delay too long, the changes can be phased in on a schedule that will avoid sudden large changes for individual beneficiaries and provide time for them to adapt.

About 41 percent of the problem can be solved by gradually raising the eligibility age to provide only the original expected number of years of coverage and returning to the original statutory requirements for disability. A similar gradual age increase has already been instituted for Social Security and it seems to have been tolerated well. No beneficiary will be required to give back any benefits on account of this change. In order to restrain Medicare’s GDP share to its current level, the eligibility age would continue to rise beyond the currently planned Social Security level, but never at a faster rate. Once the 1967 expected years are achieved, the age could rise more slowly to maintain the same expected number of years of retirement. This approach gives individuals plenty of time for planning and it reduces the current burden for younger cohorts funding longer-than-promised retirements for their elders.

Most of the disability reform involves not providing benefits to people who fail to meet the statutory “unable to work at any job in the economy” criterion. Unfortunately, over time, program officials and administrative law judges have increasingly overlooked this criterion when making regulations and benefit decisions, as I explain in Appendix E. If this criterion were reasserted, the only time that existing beneficiaries would be affected is when they fail to pass recertification of their disability under the original criteria.

Irrespective of which forecast path Medicare follows, the revised eligibility ages should be implemented immediately. The disability criteria should be immediately returned to their original forms and be quickly supported by requirements for appropriate rehabilitation and periodic audits.

If we are lucky and the lower-boundary forecast obtains, then fixing the age and disability criteria will eliminate most of the rise in Medicare’s share of GDP for the next 30 years, requiring only modest increases in cost sharing to close the remaining gap. To keep the GDP share stable, standard deductibles for Part B would increase from $166 to $370 over an initial period of 20 years and then rise to the average commercial level of $1,478 for the combined three parts over the next 10 years. Coinsurance would remain at 20 percent. No cost sharing would be added for dual-eligible beneficiaries until year 30, when a $100 deductible and 10 percent coinsurance would begin.

Even under the upper-boundary conditions, the first 10 years would also need relatively modest increases in cost sharing. By year 10, the deductible would need to rise to the average commercial deductible of $1,478 across all three parts while retaining a 20 percent coinsurance, and a $100 deductible with 10 percent coinsurance would begin for dual-eligible beneficiaries.

After 30 years under the lower-bound projection or 10 years under the upper-bound projection, cost sharing would then need to rise systematically every year. Under the lower-boundary conditions, by the end of the forecast period standard deductibles and coinsurance would reach $7,500 and 36 percent, with $500 and 10 percent for dual-eligible beneficiaries. At the upper boundary, they would become, respectively, $13,700 with 36 percent and $600 with 15 percent. See Figure 3 for one configuration to phase in cost sharing that would keep Medicare’s GDP share roughly stable. See Appendix F for more details.

Source: Author’s computation using elasticity estimate of –0.20 for most of the calculations. See Amanda E. Kowalski, “Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care,” National Bureau of Economic Research, NBER Working Paper no. 15085, June 2009; Joseph P. Newhouse and the Insurance Experiment Group, Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge: Harvard University Press, 1993); and Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Washington: July 13, 2017), p. 163. Medicare Trustee forecasts use the same –0.20 elasticity. For movements from zero out-of-pocket cost to any coinsurance/deductible, an elasticity of –0.35 is used, derived from the results of Katherine Baicker, Sarah L. Taubman, Heidi L. Allen, et al., “The Oregon Experiment—Effects of Medicaid on Clinical Outcomes,” New England Journal of Medicine 368 (May 2, 2013): 1713–22, doi:10.1056/NEJMsa1212321. Baseline data are computed from Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, the downloaded file “2017 Expanded and Supplementary Tables and Figures.zip,” and Center for Medicare and Medicaid Services, “CMS Statistics Reference Booklet,” 2016.

Note: In combination with phased increases in eligibility age to achieve the same expected years of coverage as in 1967 and returning disability criteria to their original forms, the above cost-sharing arrangements will prevent increases in Medicare’s share of GDP across the 75-year forecast horizon at the lower and upper forecast boundaries, respectively.
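
The phase-in path just described can be written as a simple piecewise schedule. The sketch below encodes the lower-boundary standard-deductible path, assuming straight-line increases within each phase (the text specifies only the endpoints, so the linear interpolation is an assumption):

# Lower-boundary standard deductible path, interpolated linearly between
# the endpoints given above ($166 -> $370 over 20 years, then -> $1,478
# by year 30, then toward $7,500 by the end of the 75-year horizon).
# The linear interpolation within each phase is an assumption.
def deductible(year: int) -> float:
    """Deductible in dollars, `year` years after the reform begins."""
    if year <= 20:
        return 166 + (370 - 166) * year / 20
    if year <= 30:
        return 370 + (1478 - 370) * (year - 20) / 10
    return 1478 + (7500 - 1478) * (year - 30) / 45

for y in (0, 10, 20, 30, 75):
    print(y, round(deductible(y)))  # 166, 268, 370, 1478, 7500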

Policymakers have been reluctant to make any changes that look like benefit reductions. They have also failed to confront the unsustainable growth in spending. The initial age and disability changes proposed here would not reduce any current beneficiary’s benefits; they simply return the number of years of coverage and the criteria for disability coverage to their original intent for new beneficiaries. If policymakers wish to retain the structure of the current Medicare program, changes similar to those identified here are required to avoid the negative consequences of the unsustainable increases in Medicare expenditures. Making the necessary changes immediately will delay the onset of adverse economic effects. Policymakers can then use that delay to engage each other and the public on the reality of the current affordability threats and the need to raise cost sharing. If the increases in cost sharing identified here are not acceptable, then policymakers need to explore other, more creative changes to the overall structure—topics beyond the scope of the current paper. They might also reach a consensus on a slightly higher level of spending as being appropriate, but that new target would still need to be firm and well below the uncontrolled consequences of the current arrangements.

Conclusion

Medicare spending was accelerated by a political process that added beneficiary populations and expanded individual benefits. As a result, Medicare’s financial burden on the economy has risen more than six-fold, from 0.55 percent of GDP to 3.64 percent. Without significant interventions, the government expenditure for senior medical care is forecast to rise by more than a factor of five: from 3.64 percent of GDP to as much as 19.79 percent. This greater resource burden can only result in some combination of deep cuts in both discretionary and entitlement spending, sharp tax increases, and a hazardous debt burden. Higher taxes and debt would slow economic growth and reduce our standard of living.

The rapid increases in Medicare expenditures have also driven up demand, and hence prices, for medical services generally. Those higher prices have been paid by working consumers and also created higher government expenditures for programs such as Medicaid and Affordable Care Act coverage. Keeping Medicare expenditures at a stable proportion of GDP will help moderate those costs as well.

This excessive government consumption could be moderated by plausible combinations of strong interventions to delay the eligibility age consistent with increased longevity, restore disability criteria, and increase beneficiary cost sharing. Policymakers may feel that these adjustments are too large, but if they wish to preserve the current general program design, some combination of similar adjustments is required to prevent financial collapse and deterioration in economic growth and our standard of living.

Appendix A: Alternative forecasts of Medicare expenditures

From 1967 to 2016, total federal expenditures rose to 22.5 percent of GDP and federal public debt to 77.5 percent of GDP. Medicare accounted for more than that entire increase in expenditures.28 The Medicare growth rate in inflation-adjusted dollars has slowed to 4.30 percent over the last 20 years, but spending is still rising faster than GDP and government revenues. It is rapidly becoming unaffordable, consuming ever-larger proportions of our economic value and driving up the nation’s debt burden.

This paper uses two bounding 75-year scenarios to forecast Medicare financial effects in the absence of policy changes: one at its likely upper bound and a second at its likely lower bound. Table A-1 shows how these two bounds fit within a range of forecasts based on historical periods and official government estimates.

At the high end of the forecasts, the full-history trend is unlikely to repeat because:

  • Almost all the future covered senior population has already been born, and we know it will grow more slowly.
  • There are fewer major opportunities to expand coverage. Dental, hearing, and vision might be added, but even in aggregate they are smaller than drug coverage.
  • Inflation differentials between medical and general prices are running only about one-third as large as they were during the first 30 years of Medicare and are generally decelerating.

The lowest forecast, published by the Medicare Board of Trustees, is unlikely because it assumes:

  • New price controls on hospitals and doctors will be more effective than the historical failures of similar schemes. For instance, the infamous “sustainable growth rate” was repeatedly suspended by the “temporary doc-fix” every year for 16 years before it was repealed. The Medicare Board of Trustees itself discounts the dependability of such an effort in the future.29
  • Reduced intensity and demand for medical service—not just a slowing, but a full reversal of the historical trend of rising demand, which has in all other times and places accompanied greater prosperity and scientific advances.30
  • An implicit 32 percent decline in the covered disabled population, with no justification or policy change to effectuate the reversal.

Appendix B: Federal spending reductions required to fund Medicare expenditure growth

Medicare expenditures are forecast to grow from 3.64 percent of GDP in 2016 to between 9.00 percent and 19.79 percent by 2091, an increase to between 2.47 and 5.43 times the current level for the lower- and upper-limit scenarios, respectively. Both are unsustainable levels of resource consumption for a single government transfer program. Figure A-1 shows these forecast percentages of GDP in comparison with 1967, 2016, 2036, and the point at which expenditures double. The figure also shows the percentages of GDP that were spent for major components of consumption, investment, and government in 2016. Any rise in Medicare’s share of GDP will force a reduction in the share of some or all other expenditure categories. Figure A-1 shows what those reductions would look like if they were applied pro rata to each category. While exactly proportionate reductions are unlikely, the chart demonstrates the magnitude of the average effects that would occur. Illustrative examples of the problem include:

  • In 1967, Medicare expenditures were smaller than those for any major national income category.
  • By 2016, they were almost as large as the spending on national defense, or on residential construction, or on elementary and secondary education. That means Medicare is competing for resources with two major government activities and one of the key drivers of economic growth. Beyond that, Medicare exceeded each of the following economy-wide expenditures by significant margins:

    • investment in industrial plants and business buildings
    • consumption by the entire population of
      • home utilities
      • recreational services
      • motor vehicles
      • transportation services
      • apparel
      • recreation goods
      • home furnishings and appliances
      • motor fuels
      • other durable goods
    • government spending for
      • state and local social services
      • public order and safety
      • federal social services other than Medicare and Social Security
      • highways
      • infrastructure other than highways
      • higher education
  • In 20 years, Medicare expenditures will exceed investment in industrial equipment and business machinery or in intellectual property. They will be larger than the consumption of financial services, food and beverage at home, or food and lodging away from home. Finally, Medicare spending will pass that of Social Security in a mere 14 years.
  • If the lower-limit scenario holds, Medicare spending will surpass spending for shelter and leave consumer medical spending as the only category exceeding Medicare.
  • Within the forecast range, Medicare is 90 percent likely to surpass consumer medical spending.

Source: Computed by author using U.S. Department of Commerce, Bureau of Economic Analysis, National Income and Product Accounts, Table 1.1.5, “Gross Domestic Product”; Table 2.3.5, “Personal Consumption Expenditures by Major Type of Product”; and Table 3.2, “Federal Government Current Receipts and Expenditures.”

If Medicare’s spending trajectory remains unchanged, reductions in the GDP share of some or all other public and private expenditures will be inevitable: the increased expenditures must be funded with spending cuts, tax increases, or larger debt, each with adverse practical effects on individuals.

To fund Medicare exclusively from cuts in other federal spending would require across-the-board reductions of between 30.44 percent and 91.75 percent in both other entitlement and discretionary spending.31 If any functions such as Social Security, Medicaid, food stamps, or national defense were exempted from some or all of the cuts, other functions would need even deeper cuts. The upper-boundary scenario would allow funding only for about 90 percent of the current relative resources of Social Security, national defense, and public safety—and nothing else, including no nonsenior safety net. While some reduction in federal spending might be desirable, simply spending more on Medicare and offsetting it with smaller amounts on other services would not reduce the size of government spending. Many people would also object to the sharp reductions required to achieve the offsets.
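
The scale of the required across-the-board cut follows from dividing the added Medicare share of GDP by the share of GDP going to all other federal spending. A rough sketch, holding total federal spending at the 22.5 percent of GDP reported in Appendix A (a simplifying assumption; the paper’s own computation evidently uses more detailed inputs):

# Rough across-the-board cut needed in non-Medicare federal spending,
# holding total federal spending at 22.5% of GDP (an assumption).
total_federal = 22.5
medicare_now = 3.64
other_spending = total_federal - medicare_now  # ~18.9% of GDP
for medicare_future in (9.00, 19.79):
    added = medicare_future - medicare_now
    print(f"{medicare_future}% scenario: cut others {added / other_spending:.1%}")
# ~28% and ~86%, in the neighborhood of the 30.44% and 91.75% figures above.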

Appendix C: Federal tax increases required to fund Medicare expenditure growth

If the Medicare spending increase were to be funded totally through taxes, on average all federal taxes—personal income, payroll, corporate income, and others—would need to be raised by between 17.36 percent and 36.33 percent over the forecast period. The Medicare Board of Trustees estimates that an increase in the Medicare payroll tax of 0.64 percentage points to 3.54 percent would avoid depleting the funding balance projected for Part A in 2029.32 But Part A payroll taxes and funding balances are only a small part of total Medicare, so even if this payroll tax were passed, additional tax increases averaging between 15.41 percent and 34.06 percent would be needed to cover all of the spending increases.

Higher individual taxes would lower Americans’ standard of living and reduce savings for investment. Higher business taxes would cut investment in plant, equipment, and intellectual property, thereby slowing growth and further eroding the individual standard of living. Slower growth would also mean a smaller GDP, which would translate into a still larger proportion of GDP being consumed by Medicare, further exacerbating the negative effects.

Appendix D: Public debt growth generated by Medicare expenditure growth

During the last three decades, debt has been the primary funding mechanism for increases in Medicare spending. General revenue was allocated under permanent entitlement appropriations without any public attention or vote, and the Treasury simply borrowed what it needed to write the checks to Medicare. Continuing to follow this prescription would raise federal public debt from the current 77.53 percent of GDP to the level of current Greek debt (181.9 percent) within just 13 to 18 years.33

Projecting the exact timing and mode of financial distress from such high levels of debt would be difficult, but one can be reasonably certain that without policy changes to Medicare we are about one decade from significant financial turmoil. This is long before the end of the forecast period, which would bring federal public debt to between 1,056.21 percent and 3,363.38 percent of total GDP.
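
A standard debt-dynamics recursion shows how debt-financed Medicare growth compounds. The sketch below is purely illustrative: the interest rate, GDP growth rate, and deficit path are all assumptions, not the paper’s inputs (note 33 describes those):

# Illustrative debt-to-GDP recursion: d_next = d*(1+r)/(1+g) + deficit,
# with every quantity a share of GDP. All parameter values here are
# assumptions for illustration; the paper's projection differs (note 33).
debt = 0.7753        # current federal public debt, share of GDP
r, g = 0.0396, 0.04  # assumed effective interest rate, nominal GDP growth
deficit = 0.05       # assumed debt-financed gap, share of GDP
years = 0
while debt < 1.819:  # the Greek debt level cited above
    debt = debt * (1 + r) / (1 + g) + deficit
    deficit *= 1.03  # assume the Medicare-driven gap widens 3% per year
    years += 1
print(years)  # about 17 on these assumptions, near the 13-18 year window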

Well short of the catastrophic levels, the added debt would absorb more of the available savings, leaving less for investment in productive capacity. Because the pool of investable funds would be smaller, the market interest rate would be driven higher, making investments more expensive. If less saving is available for investment and investment costs more, we will get less capital and, therefore, less production. Less production will lower the standard of living and slow the rate of GDP growth, further increasing the adverse effects of Medicare spending.

Appendix E: Factors creating Social Security disability criteria

Government payments to people claiming disability have been among the fastest-growing and largest drivers of disproportionate increases in transfer payments. Very recent data show a modest reversal of the trend, owing in part to recent administrative reforms of some of the excesses documented here. These payments go only to the preretirement population. Since the largest federal disability benefit program is run by the Social Security Administration, it is often lumped in with the Old-Age and Survivors Insurance (OASI) benefits paid based on earnings up until retirement, but the two programs are related only in that they are funded by the same payroll tax (although in law each has an assigned portion of the total) and that the Social Security Administration runs both of them.

Disability beneficiaries are also entitled to free Medicare, so a portion of the Medicare taxes also goes to pay for Medicare for the disabled.

The facts of excessive payments

The facts behind the rapid rise in government disability benefits, including through Medicare, are as follows:

  • The number of people receiving government disability benefits is growing more than 5.4 times faster than the population.
  • The Census Bureau’s Current Population Survey asks whether respondents have a “work-limiting disability.” In 2014, 5.4 percent of people aged 35–44 reported that they did have such a limitation. That is slightly less than the 5.6 percent who reported such a limitation in 1984. But over that same period the government more than doubled the number of people being paid disability allowances and entitled to free Medicare.34
  • The Americans with Disabilities Act of 1990 requires that employers make reasonable accommodations to hire disabled individuals, yet the proportion of the working age population drawing disability benefits has risen.
  • Workplace safety continues to improve, reducing disability on the job.35
  • Medical advances have reduced the debilitating effects of many disabilities.
  • The average benefits received per beneficiary rose 1.39 times faster than inflation from 2003 to 2012. Since the basic benefits are escalated by the Consumer Price Index, this means that in terms of actual purchasing power the benefits being awarded have grown.36 While the size of these awards does not directly affect Medicare costs, larger awards indirectly increase the incentive for people to seek disability classifications.
  • We now have only 17 people working and paying the taxes for each person receiving Social Security disability benefits and the corresponding free Medicare, compared to more than 50 working to support a single disability beneficiary in 1975.37
  • Beneficiaries who have been found disabled and “unable to work any job in the national economy” are allowed to earn $13,000 per year without any reduction in their benefit.38 Obviously some of them can work. They just choose not to work enough to lose their benefits.39

Causes of rising disability rolls

The above facts show that there is no reason to believe that the incidence of disability is really rising. In fact, the frequency, severity, and age of onset for most disabling conditions have been improving steadily, and the rate of industrial accidents has fallen dramatically. There are only two possible explanations for the observed increase in the frequency and magnitude of disability payments: increasing fraud by beneficiaries, and/or politicians and bureaucrats systematically liberalizing the criteria for benefits. In fact, both appear to be the case.

The U.S. Senate Permanent Subcommittee on Investigations conducted a scientific survey and evaluation of disability awards over the period 2006–2010. The report documented that at least 25 percent of applications were granted “without properly addressing insufficient, contradictory and incomplete evidence.” The legal standard for a disability finding is “being unable to work any job in the national economy.” This standard was systematically violated in the 25 percent of cases flagged by the investigation.40

The hearings on disability claims are conducted by an examiner. If the claim is denied, it can be appealed to a second-level review. If it is denied a second time, it can be appealed to an administrative law judge. Among the examples in the report, one administrative law judge in Oklahoma City approved $1.6 billion in lifetime benefits in just three years of case review. He approved 90 percent of the 5,400 cases he reviewed, all of which had already been turned down by the initial claim examiner and by a senior reviewing examiner.

The investigation uncovered many cases in which administrative law judges simply cut and pasted copies of medical records from one report to another with no evidence of independent information or review. That is actually perjury. One administrative law judge in West Virginia was indicted for running a scam with a lawyer who would submit hundreds of cases that would be approved mechanically. Both the administrative law judge and the lawyer benefited financially. In another case, more than 70 people were arrested for fraud related to disability claims in Puerto Rico.41

While 70 percent of the third-level reviews confirmed findings of ineligibility, 9 percent of the administrative law judges overturned the denials they reviewed more than 90 percent of the time. The consistency among those individuals was remarkable: the administrative law judges who overturned 90 percent or more of their assigned cases did so year after year. Since each year they received a random set of cases, this consistency points to something either illegal or incompetent. Administrative law judges with lower reversal rates showed substantial year-to-year variation in their determinations, reflecting the different levels of merit in the cases they received each year. This 9 percent of excessively generous administrative law judges added 98,000 extra beneficiaries to Medicare over a six-year period, at a cost of $23 billion in taxpayer money.42

In addition to a general deterioration of standards and bad administration of the rules, there are at least five structural deficiencies that further bias the outcomes.43

  1. The administrative law judge is required to advocate on behalf of the claimant, including the 85 percent of claimants who are represented by a third party. So, the same person both represents the claimant and adjudicates the dispute.
  2. Not only does this dual role bias the outcome, it also places additional work burdens on the administrative law judge, who must invest time in being sure the claimants have all their documentation properly prepared to present their best case. The claimants and their representatives no longer bear that responsibility. This is the third-level appeal, not the initial claim where one might reasonably expect assistance in preparing a request. No one complains if the two earlier denials are overturned because nobody is representing the taxpayer in these third-level proceedings.
  3. Hearing officers and administrative law judges must follow the dictates of a device for determining judgments known as the “medical vocational grid.” The grid is a bureaucratic construct, not something in the law. This framework sets out the rules for making a disability determination. It departs from the fundamental standard of “unable to work any job in the national economy” by setting looser standards for some classes of people, such as people with only a high school education. The theory is that they would have a harder time finding a job, but this whole theory stands the meaning of disability on its head.

    With this grid, the criterion has shifted from whether the claimants are able to work to whether they can find a job—lots of unemployed people can’t find jobs. The criterion is supposed to be whether there are any jobs they can do. There is a big difference.

    Even where some assessment of the ease of finding a job is used, the grid considers education but not experience. A high school graduate with 20 years of progressively responsible positions might actually find a job more easily than a new college graduate without any work experience, yet the grid sets looser standards for the experienced worker. Finally, the grid lowers the eligibility standards for people who don’t speak English well, on the same theory that it would be harder for them to find a job. This criterion is applied even in Puerto Rico, where most business is conducted in Spanish.

  4. It is too easy to shop for a biased administrative law judge who is 90 percent or more likely to award the benefit. If claimants get a rigorous administrative law judge assigned to their case, they can proceed and, if they lose, just file again. An easier and faster abuse is simply to withdraw the case if it is assigned to a rigorous administrative law judge and then refile it, hoping for a better draw. This abuse could easily be stopped by requiring at least five years between filings, including any that are withdrawn: claimants would get the assigned judge, and that would be it. This reform does raise the concern that people whose disease or injury is progressively degenerative may reach disabling levels before their five-year wait is over, but that would be a good disincentive for filing a capricious appeal. For the few hardship cases in which there really is degeneration, the rule would need to require proof of material degeneration before the case is reopened.
  5. Administrative law judges are appointed for life. There is no persuasive justification for this. The lifetime rule is just a bureaucratic construct, not the constitutional rule that applies to federal court judges. Lifetime appointments make it easy for administrative law judges to grow lax in following the program’s requirements. A 10-year term would seem to be the longest appropriate assignment.

Administrative law judges overturned the original denials in 70 percent of cases in 2008 and in 67 percent of cases in 2010. The Social Security Administration claims to have made some improvements in this regard, reducing the rate to only 56 percent in 2013.

In the course of studying this problem, investigators discovered that the Social Security Administration management, the administrative law judges, and their union claimed that no enforcement actions against overly lenient judges could be taken unless actual bribery were proved, because their job descriptions granted them independence. The union objected that any managerial oversight would be political meddling. The fact that these folks have a union is prima facie evidence that they aren’t real judges. They are just senior hearing officers who need to adhere to standards and procedures to protect taxpayer money as well as grant benefits to the truly disabled. Eventually, the Social Security Administration changed the job descriptions, which was one of the improvements it claimed for 2013.44

Physician scholars, such as Steven Snyder at the University of California, San Francisco, Medical School, have observed that the medical community has been aiding and abetting the bureaucratic preferences for giving out more benefits. Patients are increasingly likely to ask their physicians or other caregivers such as chiropractors, acupuncturists, and physical therapists to certify them as disabled. Many diagnoses such as back pain, depression, fatigue, and fibromyalgia are not easily verified objectively and can even be easily faked by scammers. Findings of MRI “abnormalities” are frequently used to justify disability owing to chronic back pain, yet there is no objective evidence that, in fact, these physical structures cause chronic back pain. What is more, vast numbers of people have the same observed abnormalities with no pain whatsoever.45

Stopping excessive awards

We can hope that the modest improvements by the Social Security Administration will stick and become even stronger. We also need to insist on a complete overhaul of the approach and a systematic effort to unwind the bad decisions of recent decades. In addition to addressing the outright fraud and sloppy work habits, the government must return to the principle of disability meeting the legal requirement of “being unable to work any job in the national economy.” The entire set of rules is structurally deficient, such that even those administrators seeking to be just are hamstrung by inappropriate rules. For example, the statutory “unable to work any job in the national economy” has been replaced by the administrative rule “unable to perform a job that is equally physically demanding as jobs held in the past.” So, a steelworker who can no longer lift heavy parts would be granted disability even though the worker would have no physical challenge with electronic assembly work. That is a clear corruption of both the literal statutory language and its intent.

The disability political apparatus has also made it easier to qualify through three administrative modifications to the underlying law. First, claimants can combine two or more nondisabling conditions to claim a benefit even though the net demonstrable effect still does not actually prevent them from working. Second, administrative law judges in the Social Security Administration are required to give more weight to the opinions of the practitioners provided by the applicant than to the judgment of the government’s own experts. Third, applicants are not required to complete a course of remedial therapy that would allow them to regain some or all of their lost functions. Other countries have introduced reforms that require applicants to develop a rehabilitation plan for returning to work with an employer and to demonstrate that they have followed through on it before being approved for disability payments. Such an approach reduced new approved cases by 60 percent in the Netherlands.46

Once people are on taxpayer-financed disability, they almost never admit to recovery and take up productive work. Less than 1 percent of beneficiaries ever leave the scheme to return to work, yet research by Till von Wachter of the University of California, Los Angeles, suggests that at least half of those in the 30–44 age group could, in fact, return to work if they were required to do so.47

The Americans with Disabilities Act (ADA) has placed a specific burden on employers to make reasonable accommodation for people with all sorts of disabilities. This means that someone with a disability that can be accommodated in some job somewhere should not be getting any government payouts for that disability while at the same time employers are spending money to accommodate it. It seems that for some portion of the disability determinations, the implicit qualification criterion has become that work is difficult, painful, annoying, unpleasant, or merely inconvenient.

Appendix F

An illustrative alternative for phasing in Medicare changes that avoid increasing its share of GDP


Source: Author’s computation using elasticity estimate of –0.20 for most of the calculations. See Amanda E. Kowalski, “Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care,” National Bureau of Economic Research, NBER Working Paper no. 15085, June 2009; Joseph P. Newhouse and the Insurance Experiment Group, Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge: Harvard University Press, 1993); and Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Washington: July 13, 2017), p. 163. Medicare Trustee forecasts use the same –0.20 elasticity. For movements from zero out-of-pocket cost to any coinsurance/deductible, an elasticity of –0.35 is used, derived from the results of Katherine Baicker, Sarah L. Taubman, Heidi L. Allen, et al., “The Oregon Experiment—Effects of Medicaid on Clinical Outcomes,” New England Journal of Medicine 368 (May 2, 2013): 1713–22, doi:10.1056/NEJMsa1212321. Baseline data are computed from Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, the downloaded file “2017 Expanded and Supplementary Tables and Figures.zip,” and Center for Medicare and Medicaid Services, “CMS Statistics Reference Booklet,” 2016.

Notes

1. Medicare and Medicaid Board of Trustees, “Table 4: Medicare Enrollment by Part and in Total,” supplementary data tables to 2017 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplemental Medical Insurance Trust Funds (Washington: July 13, 2017).

2. Center for Medicare and Medicaid Services, “2016 CMS Statistics,” 2016, pp. 9-10, based on data from the Social Security Administration, the Office of the Chief Actuary and Centers for Disease Control and Prevention, the National Center for Health Statistics, and the National Vital Statistics System.

3. An average 80-year-old consumed approximately 69 percent more healthcare than an average 70-year-old in 1967, so as people live longer, one might expect that the average cost per person per year would also rise. But higher longevity is the result of better health, so, by 2013, the average 80-year-old consumed only 46 percent more healthcare than the average 70-year-old. Calculated by author from Center for Medicare and Medicaid Services, “Personal Health Care (PHC) Spending, Age and Gender Tables,” https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/Age-and-Gender.html; and Medicare Payment Advisory Commission, Health Care Spending and the Medicare Program, A Data Book (Washington: MedPAC, June 2017), p. 22. On average, spending for beneficiaries during their final year of life has been nearly four times greater than for the rest of the Medicare population. As the Medicare population became older, the age at which these higher terminal expenses were incurred shifted later, so some of the higher cost for older patients was merely a delay in their individual final costs, not a higher annual cost of continuing care. Furthermore, even the cost differential associated with the end of life dropped by about 15 percent from 2000 to 2014. Juliette Cubanski, Tricia Neuman, Shannon Griffin, and Anthony Damico, “Medicare Spending at the End of Life: A Snapshot of Beneficiaries Who Died in 2014 and the Cost of Their Care,” Kaiser Family Foundation, July 2016.

4. National Safety Council, Accident Facts, 1994 edition, as reported by Thomas J. Kniesner and John D. Leeth, “Abolishing OSHA,” Regulation 18, no. 4 (1995): 46-56; and United States Department of Labor, Bureau of Labor Statistics, “Census of Fatal Occupational Injuries 2013,” updated from https://data.bls.gov/PDQWeb/fw.

5. The last item was an addition to the Medicare Provider Payment Modernization Act of 2014. See “The ‘Doc-Fix’ Follies,” Wall Street Journal, March 15, 2014.

6. Medicare and Medicaid Board of Trustees, “2017 Expanded and Supplementary Tables and Figures,” in 2017 Annual Report of the Boards of Trustees. Dollar amounts for 1967 (in 2016 dollars), the deductible as a percentage of average benefit, and the annual rates of change were computed by the author.

7. The four so-called Medicare Savings Programs (MSP) are: Qualified Medicare Beneficiary (QMB)(1989); Qualified Disabled and Working Individual (QDWI)(1990); Specified Low-Income Medicare Beneficiary (SLMB)(1993); and the Qualifying Individual (QI)(1998). While the funding for these programs was taken from general revenue and assigned to the Medicaid program, they increased consumption and thereby raised Medicare’s expenses.

8. The Social Security eligibility-age increase from 65 to 67 has already begun, so there would necessarily be some confusion with two different and changing eligibility ages. Several techniques might be used to limit the confusion in practical terms, but for this discussion we assume they each rise from their current levels to age 67 in 2027 in equal annual steps. Social Security is being raised by two months per calendar birth year. Medicare would need to rise approximately twice that amount to harmonize by 2027.

9. For a scholarly summary of the issue, see David H. Autor, “The Unsustainable Rise of the Disability Rolls in the United States: Causes, Consequences, and Policy Options,” MIT and NBER, November 23, 2011, https://economics.mit.edu/files/7388.

10. Andrew Biggs, “Averting the Disability-Insurance Meltdown,” Wall Street Journal, February 24, 2015.

11. Recent modest administrative adjustments have begun to reverse the trend. If they are sustained and expanded, that will be a good start on the reforms suggested here. Unfortunately, history is not encouraging in that respect. Similar reforms in the 1980s were soon eliminated, and the trend continued. See, for example, Eric Morath, “America’s Hidden Workforce Returns,” Wall Street Journal, January 16, 2019.

12. Biggs, “Averting the Disability-Insurance Meltdown.” For a detailed overview of Dutch and other international systems with more preliminary results, see Richard V. Burkhauser, Mary C. Daly, Duncan McVicar, and Roger Wilkins, “Disability Benefit Growth and Disability Reform in the U.S.: Lessons from Other OECD Nations,” Federal Reserve Bank of San Francisco, Working Paper 2013-40, December 2013.

13. Till von Wachter, Jae Song, and Joyce Manchester, “Trends in Employment and Earnings of Allowed and Rejected Applicants to the Social Security Disability Insurance Program,” American Economic Review 101, no. 7 (December 2011): 3308-29.

14. For this intervention, the percentage reduction in the added expenditure is larger for the upper boundary, while for the other interventions, the percentage reduction is larger at the lower boundary. The other interventions reduce expenditures by similar amounts at both boundaries and thus have a larger effect on the smaller baseline of the lower boundary. But the upper boundary forecast includes a significantly larger forecast for the growth of disabled beneficiaries, so returning to the statutory criteria will have a much larger effect on it.

15. The Kaiser Family Foundation and Health Research Educational Trust, Employer Health Benefits, 2017 Annual Survey (Menlo Park, CA: Kaiser, 2017), Figure 7.8. This is an average across all types of plans that have a general deductible—that is, one deductible that applies to hospitalization, medical, and drug combined. Many private plans do have some separate deductibles, especially for drugs, but these are not represented here.

16. Note that the drug deductible (Part D) is for the base plan maximum deductible. Because Part D allows for some amount of market competition, a majority of drug plans have lower deductibles, or even none.

17. Kaiser, Employer Health Benefits: 2017 Annual Survey, Figure 7.8.

18. For fuller documentation of both the legislative and regulatory impediments to freedom of healthcare choice in the senior market, see Kent Masterson Brown, “The Freedom to Spend Your Own Money on Medical Care: A Common Casualty of Universal Coverage,” Cato Institute Policy Analysis no. 601, October 15, 2007.

19. Patient responsibility for out-of-network costs may be even higher than the out-of-network coinsurance applied against the usual-and-customary charge because the provider is not contractually bound to accept the usual-and-customary amount and may balance-bill the patient for even more.

20. The overall average was calculated by the author from service-specific percentages in Trudy Millard Krause, Maria Ukhanova, and Frances Lee Revere, “Private Carriers’ Physician Payment Rates Compared with Medicare and Medicaid,” Texas Medicine 112, no. 6 (June 2016): e1. The Medicare-private fee differences vary widely by the particular service and also vary by geography and carrier. The differences have also grown over the last two decades. Compare S. Norton and S. Zuckerman, “Trends in Medicaid Physician Fees, 1993-1998,” Health Affairs 19, no. 4 (2000): 222-32; M. E. Miller, S. Zuckerman, and M. Gates, “How Do Medicare Physician Fees Compare with Private Payers?” Health Care Finance Review 14, no. 3 (1993): 25-39; W. Fox and J. Pickering, “Hospital and Physician Cost Shift: Payment Level Comparison of Medicare, Medicaid and Commercial Payers,” Milliman (December 2008); and J. Clemens and J. Gottlieb, “Bargaining in the Shadow of a Giant: Medicare’s Influence on Private Payment Systems,” NBER Working Paper no. 19503, October 2013.

21. See for example, Toshiaki Iizuka, Katsuhiko Nishiyama, Brian Chen, and Karen Eggleston, “Is Preventive Care Worth the Cost? Evidence from Mandatory Checkups in Japan,” NBER Working Paper no. 23413, May 2017.

22. This sensitivity is called “elasticity” by economists. This analysis uses an elasticity estimate of -0.2 for most of the calculations, and Medicare Trustee forecasts use the same value. See Amanda E. Kowalski, “Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care,” NBER Working Paper no. 15085, June 2009; Joseph P. Newhouse and the Insurance Experiment Group, Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge, MA: Harvard University Press, 1993); and Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, p. 163. When dual-eligible beneficiaries go from zero out-of-pocket costs to any coinsurance/deductible, this analysis uses an elasticity of -0.35, as derived from the results of Katherine Baicker, Sarah L. Taubman, Heidi L. Allen, et al., “The Oregon Experiment—Effects of Medicaid on Clinical Outcomes,” New England Journal of Medicine 368 (May 2, 2013): 1713-22, https://www.nejm.org/doi/full/10.1056/nejmsa1212321.

23. Center for Medicare and Medicaid Services, MMCO_National_Profile_CY2012.xlsx, Table 1A: “Medicare and Medicaid Enrollment and Spending by Dual-Eligible Status, Age, and Other Characteristics by Year, CY 2012.”

24. Medicare Payment Advisory Commission, June 2017 A Data Book: Health Care Spending and the Medicare Program, Charts 2-3 and 4-1, http://medpac.gov/docs/default-source/data-book/jun17_databookentirereport_sec.pdf. Calculations of percent difference by author.

25. See, for example, Donald M. Berwick and Andrew D. Hackbarth, “Eliminating Waste in US Health Care,” Journal of the American Medical Association 307, no. 14 (2012): 1513-16, doi:10.1001/jama.2012.362; and Mark Smith, Robert Saunders, Leigh Stuckhardt, and J. Michael McGinnis, Best Care at Lower Cost: The Path to Continuously Learning Health Care in America (Washington: National Academies Press, 2013).

26. The baseline study on this point was Robert H. Brook, John E. Ware, Jr., William H. Rogers, et al., “The Effect of Coinsurance on the Health of Adults: Results from the Rand Health Insurance Experiment,” Rand Corporation, R-3055-HHS, 1984. While they found lower utilization with higher coinsurance, they found no effect on overall health except for reduced outcomes among the very poor and some people suffering from severe chronic conditions. More recently, a focused study of post-myocardial infarction patients found that with non-zero coinsurance they were not as compliant with drug regimens. The results were also lower for some secondary indicators, but not statistically different for the primary indicator of survival. See Niteesh K. Choudhry, Jerry Avorn, Robert J. Glynn, et al., “Full Coverage for Preventive Medications after Myocardial Infarction,” New England Journal of Medicine 365, no. 22 (December 2011): 2088-97. Additional analysis of the experiment is provided by Katherine Baicker, Sendhil Mullainathan, and Joshua Schwartzstein, “Behavioral Hazard in Health Insurance,” National Bureau of Economic Research Bulletin on Aging and Health 1 (2013): 2-3.

27. The reductions from the plan design are smaller here than in Table 2 because the change is applied to a smaller population that has been reduced by the later eligibility age and tighter disability criteria.

28. The increase in Medicare’s share of GDP was 117.89 percent of the increase for total federal spending. It was possible for Medicare’s contribution to exceed the total increase because real expenditures for national defense and some smaller categories declined or rose more slowly than real GDP.

29. The Medicare Board of Trustees acknowledges that their assumed controls will likely create disruptions in later years and need to be replaced. See Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, p. 2.

30. As part of the 2010 Affordable Care Act (ACA), spending for Medicare was nominally cut by $716 billion over the 10-year budget forecast horizon. Since there were no reductions in Medicare benefits (in fact, there were increases in the benefits for preventive care), the reduction was strictly notional on the wish that new schemes for manipulating reimbursements would result in savings. These hypothetical savings were used to offset the added real costs of the ACA subsidies to lower-middle-income insurance purchasers, thereby making the nominal cost of the ACA appear smaller than it really was. The legislative wishes are incorporated into the forecasts without further justification.

31. The discussion here is about reduction of federal spending only. Figure A-1 includes some state and local spending, as well, to show the relative magnitudes of Medicare. While state and local spending would not be reduced explicitly to fund Medicare, the funding of Medicare through higher taxes or debt would likely reduce state and local spending.

32. Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, p. 30. The rates cited here are the combined employee-employer rates.

33. In the short to middle term, the increase in debt is more sensitive to the effective interest rate paid by the Treasury than to the rate of increase in Medicare spending. The upper and lower spending boundaries differ by just one year in the time required to reach the Greek debt level of 181.9 percent. See United States Central Intelligence Agency, “Country Comparison: Public Debt,” https://www.cia.gov/library/publications/the-world-factbook/rankorder/2186rank.html. Both the median and slow increase assumptions for interest rates are relatively conservative. The slower interest rate increase follows Congressional Budget Office projections for rate of change until the effective interest rate reaches the 25th percentile of historical rate levels (3.96 percent). See Congressional Budget Office, The Budget and Economic Outlook: 2017 to 2027 (Washington: CBO, January 2017), Table 1-4. The median rise assumes reaching the historical median (6.12 percent) within 10 years.

34. Biggs, “Averting the Disability-Insurance Meltdown.”

35. National Safety Council, Accident Facts, 1994 edition, pp. 46-56; and United States Department of Labor, Bureau of Labor Statistics, “Census of Fatal Occupational Injuries 2013.”

36. Author calculated inflation comparison from Social Security Administration data, as reported in Damian Paletta, “Government Pulls in Reins on Disability Judges,” Wall Street Journal, December 27, 2013.

37. Computed by author from Social Security Administration, “Ratio of Covered Workers to Beneficiaries,” https://www.ssa.gov/history/ratios.html; and Medicare and Medicaid Board of Trustees, 2017 Annual Report of the Boards of Trustees, “Expanded and Supplementary Tables and Figures,” https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/ReportsTrustFunds/index.html. Also note that the total number of government disability beneficiaries is almost 50 percent higher than these numbers as the result of yet more government disability programs. The net result is that only 12 people work and pay the taxes for each person who benefits from a government disability check.

38. Biggs, “Averting the Disability-Insurance Meltdown.”

39. For detailed analysis of the disincentives to work, see Nicole Maestas, Kathleen J. Mullen, and Alexander Strand, “Disability Insurance and the Great Recession,” American Economic Review Papers and Proceedings 105, no. 5 (May 2015): 177-82.

40. Social Security Disability Programs: Improving the Quality of Benefit Award Decisions, Permanent Subcommittee on Investigations of the Committee on Homeland Security and Governmental Affairs, United States Senate, September 13, 2012.

41. Paletta, “Government Pulls in Reins on Disability Judges.”

42. Mark J. Warshawsky and Ross A. Marchand, “Disability Claim Denied? Find the Right Judge,” Wall Street Journal, March 9, 2015.

43. This discussion builds on an outline suggested by Warshawsky and Marchand, “Disability Claim Denied?”

44. Paletta, “Government Pulls in Reins on Disability Judges.” See also “At-A-Glance: The Significance of Changes to the Position Description of ‘Administrative Law Judge’ in the Social Security Administration,” Association of Administrative Law Judges, which compares the new and old position descriptions. The analytical conclusions represent the interests of the administrative law judge union, but the content is straightforward.

45. Steven Snyder, “Disability: If You Build a Program, They Will Come,” Wall Street Journal, September 18, 2018.

46. Biggs, “Averting the Disability-Insurance Meltdown.”

47. Biggs, “Averting the Disability-Insurance Meltdown.”

John F. Early is president of Vital Few, LLC, a consultancy in mathematical economics, and has twice served as an assistant commissioner at the Bureau of Labor Statistics.

Trump’s First Trade Deal: The Slightly Revised Korea-U.S. Free Trade Agreement

Simon Lester, Inu Manak, and Kyounghwa Kim

While the renegotiation of the North American Free Trade Agreement has received far more attention, a lesser-known U.S. trade deal has also been reworked. In April of 2017, President Trump proclaimed his displeasure with the Korea-U.S. Free Trade Agreement (commonly referred to as “KORUS”), stating, “It was a Hillary Clinton disaster, a deal that should’ve never been made.”1 Trump said he had told the South Koreans, “We’ll either terminate or negotiate. We may terminate.”2 This set the wheels in motion for a relatively low-profile trade renegotiation that became Trump’s first trade deal.

The renegotiation of KORUS provides a useful example of Trump’s trade dealmaking in practice. As we will show below, the renegotiation made only minor changes to the agreement, which could be taken to mean that the reality of Trump’s trade policy does not always match the rhetoric. However, the administration’s concerns about trade with Korea have always been less prominent than its concerns about other trading partners, so the conclusion of the KORUS talks with only small changes may simply reflect the administration’s focus on other areas of trade policy rather than indicate its general approach.

The Original KORUS

The original KORUS grew out of bilateral consultations that began in late 2004, although the idea of a trade agreement between the two countries had been floated as early as the 1980s. A deal was concluded in April 2007, revised the next month to reflect demands from Congressional Democrats, and signed by the parties on June 30, 2007.3 Important features of the agreement were a phase-in period for the removal of most tariffs on bilateral trade, with autos and agriculture the most noteworthy areas of liberalization; a reduction in the burden of various Korean tax and regulatory policies; and the opening up of certain Korean services markets.4

The initial version of the deal faced several hurdles with domestic ratification. Although Korea had significantly opened its agricultural market as part of the negotiations, Korean restrictions on U.S. beef imports had not been fully resolved. Max Baucus, a powerful farm-state senator, objected to the deal until that issue was fixed. The U.S. auto industry also had concerns about the new competition it would face from its Korean counterparts. Finally, presidential elections in Korea led to delays in consideration of the deal, and then came the 2008 U.S. presidential election and the financial crisis. These issues held up ratification for the remainder of the Bush administration and a couple of years into Obama’s first term.5

In December 2010, the two parties agreed to a set of minor changes: U.S. tariff cuts on cars and light trucks were delayed for a few years, and Korea made changes to certain regulatory policies that would help U.S. carmakers with access to the Korean market.6 These changes paved the way for ratification in both Korea and the United States, and the agreement entered into force on March 15, 2012.7

Timeline of the KORUS Renegotiation

President Trump and his Korean counterpart, Moon Jae-in, first spoke about a KORUS renegotiation during the June 2017 U.S.-Korea Summit. Soon after, U.S. Trade Representative Robert Lighthizer requested a special session of the KORUS Joint Committee.8 The special session was held in August but failed to reach a resolution. At that point, press reports suggested that Trump was hinting at a possible U.S. withdrawal from the agreement.9 However, after another meeting in October, the two sides agreed to start the process of amending the agreement.10

The two countries held the first round of talks on possible amendments in early January 2018, focusing on automotive trade and the further opening of Korea’s agricultural market.11 The second round of talks began at the end of that month, occurring just a week after Trump had announced safeguard tariffs that would affect Korean washing machines and solar panels.12 During this tense second round, the United States continued to push for changes concerning the sale of autos in Korea. Meanwhile, Korea made detailed suggestions to reform the investor-state dispute settlement (ISDS) mechanism and raised concerns about the safeguard tariffs on washing machines and solar panels.13

The third round of talks, held in March, coincided with the Trump administration’s announcement of sweeping new tariffs on steel under Section 232 of the Trade Expansion Act of 1962. Korea negotiated an exemption from the tariffs in exchange for agreeing to limit steel exports to the United States. The two sides also discussed further opening the Korean market to U.S. pharmaceuticals. Both governments seemed to take a more diplomatic approach to these talks in order to avoid adding complications to the upcoming inter-Korean and U.S.-North Korea summits.14 On March 28, Korea and the United States released a joint statement announcing that they had “reached an agreement in principle on the general terms of amendments and modifications to the United States-Republic of Korea Free Trade Agreement.”15 The two parties signed the renegotiated trade deal on September 24, 2018.16

Shortly thereafter, Korea completed its domestic procedures to effectuate the amended KORUS: on December 7, 2018, the National Assembly ratified the agreement, voting 180-5 in support of the deal, with 19 abstentions.17 Although the Koreans had hinted that they would demand an exemption from the Trump administration’s possible Section 232 tariffs on all automobiles in exchange for their approval of the new KORUS, the legislation was finalized without addressing this issue.18 Meanwhile, no congressional vote was required in the United States because of the limited scope of the revisions and the absence of changes to any U.S. statutes.

Upon exchanging written notifications that each country had completed its respective legal requirements and procedures, the new KORUS entered into force on January 1, 2019.19

Major Changes to KORUS

KORUS 2.0 is mostly just a tweak of the original KORUS, but it contains a few noteworthy changes. Some issues were addressed as modifications to the original KORUS, while others that were not covered in the original were negotiated as side agreements secured by exchanges of letters between the parties. Changes demanded by the United States included steel export restrictions, a larger quota for U.S. cars exported to Korea that meet U.S. emissions and safety standards instead of Korea’s idiosyncratic rules, an extension of the duration of the U.S. 25 percent tariffs on imported pickup trucks, changes to rules on Korean medicine pricing, and new procedures for Korean customs inspections. There were also several Korean demands that resulted in changes to the investor-state dispute settlement and trade defense mechanism procedures, as well as rules of origin requirements for certain textile products.

Voluntary Export Restraint on Steel from Korea

Regarding the side deals, the biggest (and most negative) economic impact will arise from the export restrictions on Korean steel. Pursuant to these restrictions, Korea will cap steel exports to the United States at 70 percent of the average volume from the past three years on a product-by-product basis.20 This was done in exchange for an indefinite exemption from the Trump administration’s Section 232 national security tariffs on steel. These quotas will lead to some degree of price increase for U.S. consumers, with the amount of the increase dependent on how the measures are implemented, among other factors.
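To make the quota mechanics concrete, the short Python sketch below works through the product-by-product arithmetic. The 70 percent cap and the three-year averaging are taken from the description above; the product categories, tonnages, and base years are hypothetical placeholders for illustration, not actual KORUS figures.

    # A minimal sketch of the KORUS steel quota arithmetic, under the
    # assumptions stated above. Product names, volumes (metric tons),
    # and base years are hypothetical.
    QUOTA_SHARE = 0.70  # cap at 70 percent of the three-year average
    BASE_YEARS = ("2015", "2016", "2017")  # assumed base period

    exports = {
        "hot-rolled sheet": {"2015": 900_000, "2016": 1_000_000, "2017": 1_100_000},
        "steel pipe": {"2015": 400_000, "2016": 500_000, "2017": 600_000},
        "cold-rolled coil": {"2015": 200_000, "2016": 250_000, "2017": 300_000},
    }

    # Each product category gets its own quota: 70 percent of its own
    # three-year average, not of the aggregate.
    for product, volumes in exports.items():
        avg = sum(volumes[year] for year in BASE_YEARS) / len(BASE_YEARS)
        quota = QUOTA_SHARE * avg
        print(f"{product}: 3-year avg {avg:,.0f} t -> annual quota {quota:,.0f} t")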

In anticipation of the quotas, larger Korean steel producers had already been looking to other markets, such as India, for their exports, and some of Korea’s smaller steel producers, such as Seah Steel and Husteel, have considered moving more production to the United States to circumvent the quotas altogether.21

This outcome is troubling because it takes trade policy back to the 1980s and relies on a tool that operates outside current international rules. Using unrelated national security measures to pressure Korea into concessions signals a new approach to trade negotiations that we are likely to see more of from the Trump administration.

Increased Export Quotas and Expansion of Eco-Credits for U.S. Autos

Under the original KORUS, U.S.-based auto manufacturers can export up to 25,000 vehicles per manufacturer per year to Korea that are deemed compliant with Korean safety standards as long as they meet U.S. standards. As part of the renegotiation, the annual quota has now been increased to 50,000 vehicles per manufacturer.22 On its face, this appears to be a good market-opening provision and a positive development for increasing U.S. access to the Korean market. However, the real economic value is unclear. In 2017, U.S. passenger vehicle and light truck exports to Korea totaled only 52,687 units; to put this figure in perspective, Canada is the leading destination for U.S. auto exports, with 917,669 units, and China is second at 262,527 units.23 Furthermore, Ford and General Motors each shipped fewer than 10,000 vehicles to Korea in 2017.24 Given the low volume of U.S. auto exports to Korea, increasing the quota will probably not have much impact.

In addition, most U.S. automobiles will be exempt from Korea’s stricter CO2 emission requirements. To achieve this, the cap on eco-credits that U.S. manufacturers can use to “pay” for increased CO2 emissions will be raised to match the discrepancy between the U.S. and Korean emission standards.25 Korea will also continue to provide leniency on both fuel economy and greenhouse gas emissions regulations for U.S. manufacturers that sell only small quantities of vehicles in Korea. As with the increased quota for autos meeting U.S. safety standards, given the low level of U.S. exports to Korea, this change is likely to have a minimal effect on trade.

Delayed Phaseout of Tariffs on Light Trucks from Korea

While the auto provisions noted above could open Korea’s market a bit to U.S. exports, on trucks the Trump administration has moved in the direction of greater protectionism. Korea agreed to a U.S. demand to extend a 25 percent U.S. tariff on light-truck imports until 2041 (the tariff was supposed to be phased out by 2021 under the original KORUS). Because Korea does not currently export trucks to the United States, this will have no immediate impact on the market. However, the change could derail any export plans Korean truck producers might otherwise have pursued. Ambassador Lighthizer has said, “The Koreans don’t ship trucks to the United States right now and the reason they don’t is because of this tariff,” and, “They were going to start next year — we would have seen massive truck shipments. So, that’s put off for two decades.”26 Along the same lines, in a study published in June 2018, the U.S. International Trade Commission estimated that the extension of the duties “could avoid an increase of 59,000 units in light truck imports” and “7,600 units in medium/heavy truck imports from Korea” over the 20-year extension period.27 Although the actual plans of Korean automakers were unclear, the tariff extension certainly limits their options for producing trucks for the U.S. market and keeps imported light trucks out of reach for U.S. consumers for another 20 years.28

Korean Medicine Pricing

The Pharmaceutical Research and Manufacturers of America has long complained that Korea’s national health insurance pricing entities — the Health Insurance Review and Assessment Service and the National Health Insurance Corporation — price imported drugs below market levels.29 In this regard, the association has claimed that “Korea’s pricing policies severely devalue U.S. intellectual property and favor Korea’s own pharmaceutical industry at the expense of U.S. companies.”30 According to the U.S. Trade Representative’s Office, as part of the KORUS renegotiation, “Within 2018, Korea will amend its Premium Pricing Policy for Global Innovative Drugs to make it consistent with Korea’s commitments under KORUS to ensure non-discriminatory and fair treatment for U.S. pharmaceutical exports.”31 In essence, the amended KORUS was supposed to ensure that Korea would bring its pharmaceutical policies in line with what was originally agreed. Korea made the amendments as scheduled, but criticism of the new rules has emerged from both domestic and foreign pharmaceutical companies, and the policy may continue to be contested.32

Korean Customs Procedures

Another KORUS change targets red tape involving customs procedures. Korean customs, as compared to U.S. customs, traditionally demands more detailed documentation, a practice that acts as a nontariff barrier to trade. Whereas U.S. Customs and Border Protection places scrutiny primarily on Tier 1 suppliers (direct suppliers to original equipment manufacturers) as long as certificates exist for producers farther down the supply chain, the Korean Customs Service often demands significantly more documentation, even from suppliers as far removed as the Tier 3 level (suppliers of raw material).33 The KORUS renegotiation has produced a list of eight principles designed to reduce this customs slowdown and calls for the creation of a working group to monitor these issues.34

Other Notable Changes and Omissions

Although the majority of KORUS 2.0 amendments were designed to satisfy U.S. demands, three smaller changes were made at the request of Korea. First, the investor-state dispute mechanism has been revised in minor ways and now largely resembles the rules in the Comprehensive and Progressive Agreement for Trans-Pacific Partnership. ISDS is an arbitration process that allows foreign investors to bring claims against governments before an ad hoc panel. Some Korean officials are dissatisfied with the burden this system has placed on their government. Korea currently faces a number of ISDS claims that put it at risk of upward of $50 billion in damages. Korea recently lost its first ISDS case, brought by Iran’s Dayyani Group, after which the Korean government was required to pay 73 billion won (approximately $64 million).35 This, among other factors, has reduced Korea’s support for ISDS and led it to seek revisions.

Second, the KORUS amendments also seek to promote transparency in antidumping and countervailing duty proceedings.36 The renegotiated terms are a direct response to the frequent use of this type of import restriction by the United States. While this change may not do much to curtail U.S. recourse to these trade remedies, improving transparency in the process is a net positive result.

Third, Korea asked for modifications to rules-of-origin requirements for three product categories of textile inputs that are not available in either Korea or the United States and thus have to come from other countries.37 This change was requested because the current “yarn-forward” rules allow a textile product to qualify for a free-trade agreement’s lower tariffs only if it is made of yarns and fabrics from one of the free-trade-agreement parties. The United States favors yarn-forward rules in its trade agreements because they restrict inputs from other countries.38 The United States agreed to expedite its domestic commercial-availability review process and to make rule changes in the Specific Rules of Origin for Textile and Apparel Goods (Annex 4-A) if it determines that commercial availability does not exist. This would be a welcome development in relaxing stringent yarn-forward rules that impede the most efficient ways of manufacturing textiles and clothing.

Finally, and notably, the agreement lacks provisions addressing currency manipulation, which the United States has sought in other recent trade negotiations. Initially, it appeared that the United States was pushing for KORUS provisions similar to those agreed to in a side letter to the Trans-Pacific Partnership, which the United States helped negotiate but from which it later withdrew.39 The Trump administration was later able to include currency provisions in the renegotiated North American Free Trade Agreement, known as the United States-Mexico-Canada Agreement, which has not yet been ratified by Congress.40 In spite of early talk about a KORUS currency chapter,41 the final renegotiated KORUS says nothing about currency issues. However, Korea has stated that it will begin disclosing its foreign exchange transactions.

Conclusion

Overall, the KORUS renegotiation is a minor tweak to the U.S.-Korea trade relationship rather than the wholesale revolution that Trump and his trade advisers portray it to be. That is probably for the best. However, concerns about KORUS have been less prominent for the Trump administration than concerns about other trade relationships in which the United States may take more aggressive actions. The escalating U.S.-China trade conflict, the administration’s persistent use of various unilateral tariffs, and its blocking of nominations to the World Trade Organization’s highest court are taking center stage. The resolution of these hot-button issues will reveal more about whether the administration can figure out a way to put together a coherent trade strategy that does not unravel decades of trade liberalization.

Notes

1 Philip Rucker, “Trump: ‘We May Terminate’ U.S.-South Korea Trade Agreement,” Washington Post, April 28, 2017.

2 Rucker, “Trump: ‘We May Terminate’ U.S.-South Korea Trade Agreement.”

3 Jeffrey J. Schott, “Why the Korea-United States Free Trade Agreement Is a Big Deal,” SERI Quarterly 4, no. 3 (2011): 24.

4 Jeffrey J. Schott, “The Korea-US Free Trade Agreement: A Summary Assessment,” Policy Brief no. PB07-7, Peterson Institute for International Economics, 2007, pp. 2-9.

5 Schott, “Why the Korea-United States Free Trade Agreement Is a Big Deal,” pp. 26-27.

6 Jeffrey J. Schott, “KORUS FTA 2.0: Assessing the Changes,” Policy Brief no. PB10-28, Peterson Institute for International Economics, 2010, p. 1.

7 “As KORUS Enters into Force, No Timelines to Tackle Drug, Beef Problems,” Inside U.S. Trade, March 15, 2012.

8 Robert Lighthizer, letter to Korean Minister of Trade, Industry, and Energy, July 12, 2017, https://ustr.gov/sites/default/files/files/Press/Releases/USTR%20KORUS.pdf.

9 Steve Holland, “Trump Hints at Withdrawal from U.S.-South Korea Free Trade Deal,” Reuters, September 2, 2017. Around this time, North Korea announced the successful test of a nuclear weapon that could be loaded onto a long-range missile, which may have influenced U.S. and South Korean thinking about trade issues. “North Korea Nuclear Test: Hydrogen Bomb ‘Missile-Ready,’” British Broadcasting Corporation (BBC) News, September 3, 2017.

10 “U.S., Korea Agree to Tackle KORUS Implementation Issues, Amendments,” Inside U.S. Trade, October 4, 2017; and “U.S., Korea Agree to Discuss FTA Amendments,” Sandler, Travis, and Rosenberg Trade Report, October 6, 2017.

11 “First Round of Talks on Renegotiating KORUS FTA Take Place,” The Economist, January 9, 2018.

12 “Korea, US to Hold 2nd Round of FTA Renegotiation Talks Next Week,” Korea Herald (Seoul), January 26, 2018.

13 Hyunjoo Jin, “South Korea Complains to U.S. about Tariffs on Washing Machines, Solar Panels,” Reuters, February 1, 2018; and “U.S., Korea Continue Talks on KORUS Implementation and Revision,” Sandler, Travis, and Rosenberg Trade Report, February 2, 2018.

14 Jane Chung and Christine Kim, “How Seoul Raced to Conclude U.S. Trade Deal ahead of North Korea Denuclearization Summit,” Japan Times (Tokyo), March 20, 2018.

15 Office of the United States Trade Representative, “Joint Statement by the United States Trade Representative Robert E. Lighthizer and Republic of Korea Minister for Trade Hyun Chong Kim,” press release, March 28, 2018.

16 Donald J. Trump and Jae-in Moon, “Joint Statement on the United States-Korea Free Trade Agreement,” White House, Statements and Releases, September 24, 2018.

17 “National Assembly Ratifies Revised S. Korea-U.S. Free Trade Deal,” Yonhap News Agency (Seoul), December 7, 2018.

18 Kwanwoo Jun, “Trump’s ‘Great Deal’ with South Korea Jeopardized by Car Tariff Dispute,” Wall Street Journal, August 7, 2018.

19 Office of the United States Trade Representative, “Protocol between the Government of the United States of America and the Government of the Republic of Korea Amending the Free Trade Agreement between the United States of America and the Republic of Korea,” September 3, 2018.

20 White House, “President Donald J. Trump Is Fulfilling His Promise on the U.S.-Korea Free Trade Agreement and on National Security,” Fact Sheets, March 28, 2018.

21 Jane Chung and Yuka Obayashi, “Trumped: How Seoul’s U.S. Trade ‘Coup’ Left Korea Steel in Limbo as Japan Marches On,” Reuters, September 13, 2018; and Shin Eun-jin, “Korean Steelmaker to Bolster U.S. Output amid Tariff Wars,” Chosun Ilbo (Seoul), September 28, 2018 (in Korean).

22 Office of the United States Trade Representative, “Protocol between the Government of the United States of America and the Government of the Republic of Korea.”

23 International Trade Administration, “US Exports of New Passenger Vehicles and Light Trucks $US.”

24 Hyunjoo Jin and Joyce Lee, “U.S., South Korea Revise Trade Deal with Quotas on Korean Steel,” Reuters, March 26, 2018.

25 Office of the United States Trade Representative, “Protocol between the Government of the United States of America and the Government of the Republic of Korea Amending the Free Trade Agreement between the United States of America and the Republic of Korea.”

26 “Lighthizer: US Strikes 3-Part Trade Agreement with South Korea,” CNBC, March 28, 2018.

27 United States International Trade Commission, “U.S.-Korea FTA: Advice on Modifications to Duty Rates for Certain Motor Vehicles,” Publication no. 4791, June 2018, pp. 10-11.

28 Hyundai is planning to sell a new pickup truck, called the Santa Cruz, in late 2019. Jinwoo Park, “Hyundai Plans the First Appearance of Pickup Trucks in the Second Half of This Year,” ITChosun (Seoul), January 17, 2019 (in Korean).

29 International Trade Administration, “2016 Top Markets Report Pharmaceuticals: Country Case Study, South Korea.”

30 “PhRMA 2018 Special 301 Submission Calls for Urgent Action to Address Serious Access and IP Barriers,” PhRMA, press release, February 8, 2018.

31 Office of the United States Trade Representative, “New U.S. Trade Policy and National Security Outcomes with the Republic of Korea,” fact sheet, March 28, 2018.

32 Seungduk Lee, “The Amended Pricing Policy of Global Innovative New Drugs Is Implemented as Originally Planned without Further Change,” Yakupnews (Seoul), January 2, 2019 (in Korean); and Jihyun Lee, “The Amended Pricing Policy of Global Innovative New Drugs Lost Its Original Intent Due to the KORUS FTA; Multinational Firms Oppose the Policy Again,” Hankyung (Seoul), January 4, 2019 (in Korean).

33 Larry Ordet, “US, Korea Get Tough on Verifying Compliance with FTA Claims,” Sourcing Journal, May 13, 2014.

34 Office of the United States Trade Representative, “Protocol between the Government of the United States of America and the Government of the Republic of Korea.” See “Attachment: Customs Principles under the Free Trade Agreement between the United States of America and the Republic of Korea.”

35 “S. Korea to File Lawsuit against Earlier Ruling in Favor of Iranian Firm,” Yonhap News Agency (Seoul), July 4, 2018.

36 Office of the United States Trade Representative, “Protocol between the Government of the United States of America and the Government of the Republic of Korea,” part 3(a).

37 These include certain viscose rayon staple fibers classified in subheadings 5504.10 or 5507.00; certain textured and nontextured cuprammonium rayon filament yarns classified in subheading 5403.39; and certain cashmere yarns classified in heading 51.08.

38 U.S. Customs and Border Protection, “Textile and Apparel Products: Rules of Origin,” last modified May 29, 2014.

39 The United States pulled out of the Trans-Pacific Partnership in January 2017. The agreement was renamed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership and went into effect on December 30, 2018, among the remaining 11 members: Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, and Vietnam.

40 Office of the United States Trade Representative, “USMCA Chapter 33: Macroeconomic Policies and Exchange Rate Matters.”

41 David Lawder, “U.S., South Korea to Revise Trade Pact with Currency Side-Deal, Autos Concessions,” Reuters, March 28, 2018; and Congressional Research Service, “U.S.-South Korea (KORUS) FTA,” December 28, 2018.

Simon Lester is associate director, Inu Manak is a visiting scholar, and Kyounghwa Kim is a visiting scholar at the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies.

Is This Time Different? Schumpeter, the Tech Giants, and Monopoly Fatalism

Ryan Bourne

Growing numbers of legislators and policy experts charge that tech firms such as Amazon, Google, Facebook, Apple, and Microsoft are “monopolies” with the potential power to harm consumers. Many economists, lawyers, and politicians say that economic features of these companies’ product markets — such as network effects, economies of scale, data collection, tying of complementary goods, or operating online marketplaces — create unfair competition or insurmountable entry barriers for new competitors. They conclude that “forward-looking” antitrust policy is needed to prevent persistent market dominance from undermining consumer welfare.

Economist Joseph Schumpeter warned against such monopoly fatalism. He recognized that the most important long-term competitive pressure comes from new products cannibalizing incumbent businesses through marked product quality improvements. An antitrust policy that second-guesses the future based on the present ignores this unpredictable margin of competition, to the detriment of consumers.

Over the past century, large businesses operating in industries similar to today’s tech firms were regularly labeled as unassailable monopolies. Retailers, social networks, mobile phone producers, camera manufacturers, and internet browser and search engine companies have all been thought likely to dominate their sectors perpetually, based on similar economic reasoning to that heard about tech companies today.

Yet historical case studies of the Great Atlantic and Pacific Tea Company, Myspace, Nokia, Kodak, Apple’s iTunes, Microsoft’s Internet Explorer, and more show that none of these features ensured continued dominance. All these businesses saw their market shares disintegrate in the face of innovative new products and companies, as Schumpeter theorized.

This suggests that we should be extremely skeptical about predictions of entrenched monopoly power for Amazon, Google, Facebook, Apple, and Microsoft today. Basing antitrust policy on overcoming market features that “tip” markets toward one-firm dominance or legislating to prevent highly speculative “future harms” is a fool’s errand.

Introduction

Tech firms such as Amazon, Google, Facebook, Apple, and Microsoft are regularly and pejoratively referred to as “monopolies” — implying that their dominance harms consumers.1

Economists find the certainty of such pronouncements troubling.

The companies are no doubt extremely valuable (see Table 1). But they have become profitable by offering free or inexpensive high-quality products popular with users and customers.2 The businesses generate vast amounts of consumer surplus, and their true value is undercounted in conventional GDP estimates.3

Even assessing whether the companies have dominant positions is difficult, because this largely depends on how one defines the relevant market. Is Google, for example, competing in the market for advertising revenue, digital advertising revenue, or user search engines? Is Facebook an advertising space seller or a social network? Is Amazon a retailer in individual product lines, an online digital retailer, a marketplace platform, or all three?

The broad sectors in which the firms operate appear contestable over long periods, with dynamic innovation and competition.4 But economic phenomena, such as network effects, economies of scale, and access to extensive data, can create “winner-take-all” markets that tip toward one company being persistently successful for a period.5 Even in sectors or subsectors where the firms appear to have very high market shares, then, it’s unclear whether this is an efficient outcome given available technologies, or something that poses a genuine future threat to consumers.

What we know for certain is that these tech companies engage in extensive research and development spending and are continually diversifying into new product markets.6 All regularly outline their fears of being disrupted by insurgent firms and technologies. They compete with one another in serving nonconsumers or the low ends of markets.7 None of this behavior would be expected from entrenched monopolies planning to harm consumer welfare.

Nevertheless, the tech firms occupy “psychological monopoly” status in our political discourse. Commentators seem unable to perceive the possibility of viable substitutes or competitors to the firms at a similar scale either now or in the future.8 Many lawmakers, lawyers, economists, and commentators worry that the large size, value, and extensive conglomerate-like activity of these companies brings a future threat of higher prices, less innovation, and worse customer experiences if left untouched by antitrust authorities.

In her influential article, “Amazon’s Antitrust Paradox,” lawyer Lina Khan explicitly argues that “the current market is not always a good indication of competitive harm” and that antitrust authorities should “ask what the future market will look like.”9 In particular, Khan worries that the current “consumer welfare standard” interpretation of antitrust law — focusing on the short-term quality, price, and output effects of firm behavior — cannot capture the potential longer-term harm to consumers. Given the future advantages of building a large network or an online intermediary platform, Khan believes companies such as Amazon have an incentive to pursue growth over near-term profits. The result is that they deliver what looks like a great service in a contemporary sense, but their dominance can undermine the competitive process in the long run. This need for antitrust or competition policy to be forward looking was recently echoed by economist Jason Furman in a digital competition review for the UK government.10

It is beyond the scope of this analysis to critique Khan’s work, or indeed the broader intellectual movement pushing for antitrust policy to consider a host of other policy objectives, including but not limited to “rising inequality, employee wage concerns, and the concentration of political power.”11 Much of the current debate is baffling, with different schools arguing about how to interpret existing laws rather than determining what the law should be from first principles. Instead, this paper will limit its focus to one aspect: the call for antitrust policy to be forward looking.

Given that a quarter century ago Facebook, Google, and Amazon did not even exist, such a task is unenviable. As the great Austrian political economist Joseph Schumpeter argued, it is difficult to predict future developments of companies and technologies. The only way to properly judge the effectiveness of market capitalism and associated policy is to review it retrospectively. In Capitalism, Socialism, and Democracy, Schumpeter writes, for example:

Since we are dealing with a process whose every element takes considerable time in revealing its true features and ultimate effects, there is no point in appraising the performance of that process ex visu [from its appearance] of a given point of time; we must judge its performance over time, as it unfolds through decades or centuries.12

Schumpeter is famous for coining the term “creative destruction” to describe the process through which firms innovate to capture consumers, in turn achieving market share, only to be eventually usurped themselves. Too often, he argues, we think about market competition too statically. We utilize a textbook understanding of it — a notion of similar firms competing for sales of largely undifferentiated products. Schumpeter recognized that what really drives the economy over time is the development of “the new commodity, the new type of organization — competition which commands a decisive cost or quality advantage and which strikes not at the margins of the profits and the outputs of the existing firms but at their foundations and their very lives.”13

The rest of this paper uses Schumpeter’s astute observations and historical examples to warn against monopoly or technological fatalism in relation to tech firms today.

Looking back over the past century, it reviews case studies of businesses in industries related to today’s tech giants (retail, social networks, mobile, photography, music, browsers and search engines), businesses that were themselves widely considered unassailable “monopolies” or potential monopolies and that in some cases faced antitrust lawsuits or investigations as a result. In many of the examples, there are uncanny parallels in the economic arguments used to justify policy action. Problems identified include the supposedly insurmountable barriers created by network effects, economies of scale, predatory pricing, bundling the sale of a product to a complementary good (tying), or a company’s acting as an intermediary marketplace for competitors to its own products. In every case, the monopoly pessimism associated with these claims ultimately proved ill founded: new technological innovations or rival competitors with differentiated products knocked the firms from their supposedly dominant position.

When it comes to industrial change, past performance is not necessarily an accurate guide to future outcomes. This time the tech firms might buck historical trends, as Furman argues, owing to their scale, the enhanced ease of data collection, and their acquisition strategies. But historical research suggests that press coverage of the emergence of supposed monopolies is much more extensive than that of the same businesses’ disappearance.14 The past century is replete with warnings of “this time is different.” Fears of entrenched monopoly power echo through time, often using near-identical arguments to those used against the tech giants today.

The Great Atlantic and Pacific Tea Company (A&P)

The Great Atlantic and Pacific Tea Company — A&P — was the Amazon of its day.15 The company revolutionized retail, pushing the grocery sector toward the chain-store model that evolved into the modern supermarket. In doing so, it was accused of all the charges currently leveled at Jeff Bezos’s tech company. These included accusations that it was harming local economies by usurping sole-proprietor retailers, engaging in predatory price cutting, giving preference to its own products over those of rivals within its marketplace, and exerting “unfair” monopsony pressure on wholesalers and suppliers.

In the 19th and very early 20th centuries, customers purchased their groceries from small, independent (often specialist) retailers, such as butchers and bakers. Goods were usually sold on credit and often delivered to customers, having been obtained wholesale by retailers from (often corrupt) jobbers and middlemen. This low-volume, high-cost distribution model meant relatively high grocery prices for customers.

A&P completely overturned this. It standardized its stores and went about vertically integrating food production in areas such as bakeries, canneries, and dairy plants, producing its own brand of products. It founded its own distribution network of trucks and shifted to a cash-and-carry model. For products it did not produce, it bought directly and in bulk from food producers, obtaining discounts for regular, predictable business and savings from cutting out the middlemen. Consumers overwhelmingly benefited from these cost reductions through lower retail prices. One estimate suggests chain-store prices were 4.5 to 14 percent lower than traditional grocers’.16

As a result, throughout the 1920s and early 1930s, A&P (and, indeed, other chain stores that replicated its techniques) saw explosive growth. Marc Levinson’s biography of A&P estimated that by 1929, it “owned nearly 16,000 grocery stores, 70 factories, and more than 100 warehouses. It was the country’s largest coffee importer, the largest butter buyer, and the second-largest baker.”17 Elsewhere, its store numbers reportedly increased from 4,244 in 1919 to 14,926 by 1935 — leaving it with more stores than its next four chain-store competitors combined (see Figure 1).18 Over the same period, the top five chain stores’ combined market share increased from just 4.2 percent in 1919 to 25.7 percent in 1935.

As A&P and others blew away competitors and displaced or squeezed wholesalers, they suffered blowback that would sound familiar today to those who have followed the media coverage of Amazon’s impact on bookstores and other retailers. As early as 1928, chain stores such as A&P were slammed by the Virginia Wholesale Grocers’ Association for “their effort to create monopoly, by attempting to freeze out the independent wholesaler and retailer with indiscriminatory cut prices of standard advertised merchandise and advertising these prices as bait to the public, thereby monopolizing local business.”19 Critics denounced the effects of chain stores on traditional grocery sellers, and lamented A&P’s market power and its use of hard sales data collected from across the country, which allowed it to vary the products held in its stores by region.

By 1936, lobbying by traditional retailers and wholesalers for protection had produced a policy backlash. State and local taxes and price control laws had been imposed on chain stores with the intention of propping up independent retailers and wholesalers. The Robinson-Patman Act of that year — originally known as the Wholesale Grocer’s Protection Act — prohibited wholesalers from offering different prices to different buyers except in ill-defined circumstances, disabling the consumer-friendly deals A&P could demand down the supply chain, in turn hitting A&P’s profitability and raising retail prices. Much as some commentators advocate singling out the tech giants today, legislators pushed for an industry-specific chain-store tax in 1938 and 1939, which, had it passed, almost certainly would have raised consumer prices significantly, or else driven A&P out of business entirely.20

It is now widely acknowledged that the Robinson-Patman Act, though still on the books, is mostly unenforced, and for good reason. By the time of its introduction, A&P and other chain stores were already trying to keep up with the disruption to their profits from proliferating big-box supermarkets and were themselves shifting toward that model. The barrage of economic illiteracy from antitrust authorities did not help them make that transition.

In 1940, A&P came under more pressure from the government for engaging in “price discrimination” between regions.21 By 1946, A&P had been found by federal judge Walter C. Lindley to be in violation of the Sherman Act, not because it had actually raised prices or excluded competition, but because the “power exists to raise prices and exclude competition,”22 which stemmed from A&P’s dominance in some cities. The company had been criminally prosecuted on spurious economic grounds relating to supposed predatory pricing, its buying power over wholesalers, and its vertical integration. There was no evidence this had harmed consumers, but it had harmed its competitors.

A&P fought back, and in 1949 took out advertisements in 2,000 newspapers asking, “Do the American people want A&P out of business?” One ad stated:

Do they want to continue to enjoy lower prices and better living, or do they want to break up A&P? … Nobody has ever shown we have anything even approaching a monopoly of the food business anywhere. Nobody has ever said we charge too high prices — just the opposite… . If the antitrust lawyers succeed in destroying A&P, the way will be cleared for the destruction of every other efficient large-scale distributor.23

The case against A&P was upheld, though, and the federal government sought the company’s breakup before eventually settling for A&P’s closing parts of its brokerage business that sold products to rivals (sales that were, for some reason, deemed unfair).

A long and winding downturn in fortunes followed. Some pin this failure on the antitrust proceedings distracting the company from its core business. But the truth is chain stores themselves were disrupted by big-box, warehouse-like supermarkets; the rise of television, which promoted national brands for certain products; and the significant reduction in transportation and refrigeration costs in the postwar era, which changed the types of stores consumers preferred. Creative destruction then came in the form of stores embedded within shopping center locations, the introduction of nongrocery products to supermarkets, and then later again, with the rise of IT, big data, and revolutions in logistics. Over time, A&P simply failed to keep up with these changes and was disrupted in the same way it had disrupted the grocery retailers of the early 20th century. The company filed for Chapter 11 bankruptcy twice — in 2010 and 2015.24

Myspace

“Will Myspace ever lose its monopoly?” asked Victor Keegan in the Guardian’s technology section in early 2007.25 The journalist was riffing off a TechNewsWorld article by John Barrett that claimed Myspace was not just a monopoly, but a natural one.26

The arguments for such claims were similar to those made about Facebook today. Keegan and Barrett argued that social networks inevitably tend toward monopoly because of the extensive network effects associated with social media. The time invested in uploading content, coupled with the product’s utility rising with the number of users on the network, supposedly made Myspace’s dominant position unassailable.

This was particularly true, Barrett argued, because Myspace had more unique users than other social media platforms at the time, including Yahoo 360, Friendster, and Facebook. Keegan even implied (ironically, given trends since) that the time and effort required by social network users to upload content meant that social network websites were much “stickier” than search engines such as Google, where just one click could take someone to a competitor’s site.

Myspace had been founded in 2003 and quickly saw a rapid expansion of users. The website was a social network built around individual profiles, networks of friends, and opportunities to embed or link to music.

Observing its explosive growth, Rupert Murdoch’s NewsCorp bought the site in 2005 for $580 million, and just a few months after the acquisition agreed to a $900 million advertising revenue contract with Google. The Financial Times reported that “within 15 months of the acquisition, revenues had leapt from about $1m a month to $50m a month.” By June 2006, the site was the most visited in the United States, overtaking Google.27

In early 2008, the web measurement firm Hitwise estimated that Myspace enjoyed 73.4 percent of all traffic on social networking sites.28 At its December 2008 peak, the site attracted 75.9 million monthly unique visitors in the United States alone — about a quarter of the country’s entire population.29 So widespread was the perception of its market dominance that LiveUniverse (a company that produces and distributes corporate videos) brought and lost a case against Myspace alleging it had monopoly power. LiveUniverse accused Myspace of engaging in exclusionary conduct simply for refusing to deal with the firm (a charge sometimes invoked against today’s tech giants).30

Yet by the time Keegan and Barrett penned their articles, there was a new competitor on the rise. By 2008, Myspace had already been overtaken in the number of worldwide users by Facebook.31 By May 2009, Myspace had been overtaken by Facebook in unique U.S. visitors too. The Financial Times estimated that Myspace’s overall market share fell to just 30 percent by the end of 2009.32

A more user-friendly interface on Mark Zuckerberg’s site, and a less cluttered advertising space allowing more onsite innovation, spurred Facebook’s rapidly rising user numbers. Facebook also adopted an email address importer tool that boosted user rates, accelerating its own network effects. Since that time, Myspace has never really recovered.

By 2016, Myspace was estimated to have just 15 million unique global visitors per month, 5.5 million of them from the United States.33 In February 2018, global monthly traffic fell further, to just 7.6 million visits per month.34 Myspace was back in the news in March 2019 when it announced that a server migration error had lost “any photos, videos and audio files” uploaded to the site before 2016.35

Network effects certainly make competing with existing firms more difficult for those producers selling very similar products or services at a given point in time. But while those effects might tip a market toward one firm enjoying extraordinarily high market share, the Myspace example shows that network effects need not create insurmountable monopolies, not least because competition can still occur “for the market.”

These days, accusations of the “monopolization” of social networking by Facebook are undermined by extensive evidence of users multihoming (i.e., actively using a number of platforms at once). Facebook and others are constantly looking for new ways to improve their offerings to maintain active users too, including recent promises about improving privacy.

Importantly, the Myspace history shows that the very network effects that lead to massive growth can also lead to a rapid demise when a superior product comes along. All social networks face a difficult balancing act between providing an attractive and innovative user experience, on the one hand, and monetizing the platform by competing for the real “customers” — digital advertisers — on the other. The Myspace example shows the degree of interdependence between the two. Getting the balance wrong can have significant consequences.

Nokia

In discussion about the tech giants, Apple’s dominance in the U.S. mobile vendor market is often taken for granted.36 It shouldn’t be. Just 12 years ago, on November 13, 2007, Forbes ran a front cover entitled “One Billion Customers — Can Anyone Catch the Cell Phone King?”37 The article was referring not to Apple, but to the growing global dominance of mobile handset company Nokia.

In 2007, the Finnish firm sold approximately 430 million mobile handsets worldwide — estimated to be equal to the volume sold by Motorola, Samsung, and Sony Ericsson combined. It self-reported that it had a 40 percent market share of the global handset market, including over half the smartphone market.38

Though its U.S. footprint at the time was much smaller, the company itself had grand plans to expand into internet services on its handsets and become as big a global brand as Google or Yahoo! The Forbes story confidently pronounced that given its investment in location services and other apps, “no mobile company will ever know more about how people use phones than Nokia.” Today, it is the tech giants’ extensive data collection, through mobile devices and other sources such as Amazon’s Echo speakers, that regularly causes gnashing of commentators’ teeth.

“Mobile Monopoly?” Germany’s Der Spiegel asked in January 2008, as it reported “Nokia Rockets Past Rivals.”39 According to the magazine, the sheer volume of phones sold by Nokia was creating economies of scale that would act as a significant barrier to entry for rivals. The higher profits generated through these unit cost savings gave Nokia “more money to invest in research and development,” it was said, making it “very difficult for competitors to manufacture as many different models of phones as cheaply and still make a profit.”

That, of course, was written just after the launch of the Apple iPhone, described in those articles as a “wild card” in the mobile-phone industry. The iPhone was a much more expensive product than Nokia’s top-of-the-range N95 at the time. But it was becoming increasingly recognized that Nokia’s operating system was no match for Apple’s app-based platform.40

Nokia’s strength was in hardware, but Apple advanced into the sector by shifting the key dimension of competition toward software. Nokia had become a market leader, launching the first smartphone in the 1990s and plowing money into research and development, even producing prototypes for internet-enabled touchscreen technologies. But it did not foresee the importance of apps to the appeal of the phones until it was too late.41 That recognition led to a host of managerial recriminations and soul-searching. Yet Nokia didn’t have the technological competence in software to counter Apple’s iPhones and Samsung’s Android phones.42

Looking at the global market shares of these three firms over the past decade shows this clearly. In Q4 2009, Nokia still had a 38.6 percent global market share in sales.43 Apple had accelerated to 16.1 percent, while Samsung was a bit player with just 3.3 percent. By Q1 2012, though, Nokia had dropped to just 8.2 percent of the global market, while Apple and Samsung combined had 53.3 percent of the market (see Figure 2). Since then, the large Chinese players such as Huawei and Xiaomi have diluted these global market shares, but Apple and Samsung were still estimated to have a combined global market share of 36.9 percent in Q4 2018. In the United States, their combined market share is higher still, at 79 percent (55 percent Apple and 24 percent Samsung).

Microsoft bought Nokia’s handset business in 2013, at a time when Nokia had just 3 percent global market share and its market capitalization had fallen to a fifth of what it was in 2007.44 The story of how the company was completely usurped by Apple and Samsung suggests that present economies of scale are no barrier to a fundamentally better product outcompeting on the strength of its quality or new features, irrespective of price. Competition in the mobile market used to occur along the hardware dimension. Now it occurs primarily along the software dimension on smartphones. Who knows how the market will develop in the future?

Kodak

So dominant had Kodak been in the film- and photo-processing business through much of the 20th century that eventually a beautiful image or scene was commonly referred to as “a Kodak moment,” after one of the company’s advertising campaigns.45 Modern echoes of this can be found in the internet search engine market, where searching online is commonly described with the verb “to Google.”

As with Google and search engines, Kodak really was the personal photography industry for a sustained period. It was the first to pioneer mass-market cameras and set up a business model that incorporated the whole film, photo development, and printing value chain. In 1976, Kodak was estimated to have 90 percent of the U.S. film market and 85 percent of the market for cameras. The firm had built a successful model predicated on film and processing sales delivering high revenue, allowing the company to sell camera units at relatively low prices.46

This dominance in film was particularly long-lived. As far back as 1923, the Federal Trade Commission had filed a complaint against Kodak on grounds of conspiracy in restraint of trade. A contemporaneous Time article remarked that “the company had manufactured and sold, up to March 1920, 94 percent of all film, and sold 96 percent of all film, produced in the United States.”47 The company was also regularly described as a monopoly. In 1978, a federal jury even labeled the company a monopolist in the color print paper amateur photography business, albeit without finding the company guilty of obtaining that position unlawfully.48

Kodak’s domestic position in photography was even more dominant than Apple’s position in the mobile vendor market today. Yes, Kodak faced challenges from other competitors such as Fujifilm and camera manufacturers such as Olympus and Nikon. But, like Apple, Kodak was an innovative firm, developing new products in competitive submarkets, including instant and single-use cameras.

Fujifilm, in particular, began competing aggressively against Kodak in the late 1980s and 1990s, capitalizing on the rise of big-box retailers replacing film and photography stores. These new retailers were squeezing manufacturers and demanding a broader range of products for their shelves. The Japanese company engaged in robust marketing and generated extensive price competition in the late 1990s, which ate into Kodak’s market share.49

But it was a new type of product — the consumer digital camera — that completely revolutionized the industry and led to a decline in Kodak’s fortunes. Rather than sending off a raft of film images to be developed, this new technology meant customers were now able to review shots on camera, upload to a computer to save or edit, and ultimately print or share with others through email or the internet. Digital photography was instantaneous and safer to store. Complementary services sprang up, delivering inexpensive printing, copying, or sharing of digital images from a range of photofinishing companies, many of which developed products better suited to digital material than Kodak.

Kodak was never able to fully embed itself in this digital marketplace, despite one of its engineers — Steve Sasson — having invented the first digital camera back in 1975. In fact, the company had engaged in extensive research and development into the digitization of photography for decades, but it used the insights to serve niche high-end markets rather than the mainstream amateur photography industry.50 The story here is not about consumers no longer wanting to take pictures, just as Nokia’s fall was not evidence of consumers giving up on mobile phones. What happened with Kodak was that the technology had shifted from chemical film to digital electronics, in turn creating demand for new types of products.51 So dependent had Kodak’s business model been on film and film processing that the company found it hard to shift corporate culture and truly embrace the digital world. The specialists it employed were, overwhelmingly, experts in film- and photo processing. Indeed, there appears to have been internal resistance to going all in on the digital technology that would cannibalize the traditionally profitable business.52 That reluctance, plus underestimation of the growth prospects of the digital market, left Kodak playing catch-up to rivals when it made its big push into the digital market in 2001. It was a classic case of an incumbent being slow and misjudging the scale of changing consumer demands.

Under pressure from the digital threat, Kodak had cut film costs and cycle times, but by 1997 digital camera sales were exploding for new competitors, especially those from Japan. Film camera sales had peaked in 2000 before starting a precipitous decline. They were overtaken by digital camera sales by 2005 (Figure 3 shows similar trends for shipments rather than sales, albeit with the overtaking by digital cameras occurring earlier).53 By then, Kodak found itself in a crowded market. It didn’t appear to appreciate how the internet would shape the industry either, further depressing the need for film or even the digital kiosks the company had invested in within traditional stores.

Since the middle of the first decade of the 2000s, of course, the industry has been completely disrupted once again by smartphones containing digital cameras and associated apps such as Snapchat and Instagram. Digital camera shipments worldwide peaked in 2010 and sales have fallen by 80 percent since.54 In the United States, industry estimates show that digital camera sales volumes themselves have fallen from 14.5 million in 2010 to 4.8 million in 2018.55

At the start of 2012, Kodak filed for bankruptcy.56 It announced that it would leave the digital photo capture market to focus on the business printing market. The company sold some of its digital imaging patents. Kodak is now back out of bankruptcy and focuses largely on business packaging, printing, and other professional services, but it has recently invested in new digital imaging and touchscreen technologies and a blockchain cryptocurrency for photographers.57

It is difficult for us to imagine Apple and the iPhone not dominating the mobile phone market or Google the search engine market. As with Kodak and photography, they are currently synonymous with their industries. But technological change can completely revolutionize a sector, leaving behind existing firms that have developed around “tried and tested” business models.

iTunes

Apple itself has already seen the rise, fall, and rise again of its “dominance” in the music purchase sector. “Who Will Break iTunes’ Monopoly?” asked Talia Soghomonian in the British music magazine NME in 2010.58 At the time, this was a common refrain.

The idea that Apple had monopoly power in the digital music download market through its music store, iTunes, had been building for four years. In 2006, technology podcaster Paul Thurrott said, “Apple should be stopped before the abuses get too great and harm too many consumers. That the US DOJ is publicly defending this company and its practices in Europe is, of course, insane.”59 By 2010, the Department of Justice had opened an inquiry into Apple’s online music presence and digital marketing tactics.60

iTunes had launched in 2003 as an online store where people could buy and download music at the individual song level. In an interview with Rolling Stone, Apple cofounder Steve Jobs dismissed the prospect of serious competition from a subscription music model, saying, “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.”61

Jobs had good reason to be confident. The iTunes product itself was revolutionary. It allowed an individual to purchase a track or album to play as he or she wished on a computer or on a personal iPod, or to burn onto a homemade CD. Importantly, record labels were willing to sign up for their music to be placed in the store because iTunes files could be digitally protected from unauthorized redistribution, reducing the problem of piracy. iTunes also contained a library of audiobooks and other complementary features that encouraged use.

The type of music file downloadable on iTunes meant that the only portable device the music could be played on was Apple’s iPod — a form of tying that rivals resented and that was subject to an antitrust suit launched in 2005 and eventually dismissed in 2014.62 But despite the iPod’s limited compatibility across device types, consumers appeared to love the product. Several competitors tried and failed to eat into its market share through 2006.63

The iTunes store had seen explosive growth by that time. It had a market share of anywhere between 72 and 88 percent in the digital music download market.64 The Apple model, which charged 99 cents per song for downloads, seemed incredibly popular with customers. Some economists claimed Apple had developed the optimal pricing model, one that would win out against bundled subscription services such as Napster.65

As with modern tech companies, how one viewed iTunes’ success depended on how one defined the market under discussion. Digital music sales, although rapidly growing, still had to compete with physical CDs and records. The market was clearly contestable too, given that companies such as Rio, Creative Technologies, and Dell had tried to launch rival MP3 players, and eMusic, Napster, MSN Music, and Yahoo Music all ran rival song download stores with much smaller market shares.

But that didn’t stop widespread concern about some of Apple’s practices. French authorities objected to the bundling of the listening device (iPod) with the music store (iTunes).66 Similar concerns relating to this type of tying occur today. Apple is regularly lambasted by developers for disabling substitutes to the app store on the iPhone. Though Apple itself largely pioneered the model, critics regularly assert that this bundling and exclusivity, combined with the iPhone’s high market share, gives Apple substantial power over sellers, allowing it to take a cut of developer revenue as an “unlawful monopoly.”67

Yet the history of iTunes shows that technological change itself can force unbundling. In fact, 2010 was curious timing for an antitrust investigation into a potential monopoly of iTunes linked to the iPod. By that time, although iTunes sold 25 percent of all music in the United States, its market share for digital music sales had fallen slightly, to 70 percent.68 This subsequently fell further to 64 percent by 2012.69 A whole new form of disruption was well advanced through online streaming and subscription services such as Pandora and Spotify, and through people listening to music on smartphones.70

The subscription model now completely dominates the online music purchase market. In response to rival firms, Apple launched its own streaming service, Apple Music, in 2015. Though it has since become the largest player in this market by U.S. monthly users, there is no sign of any impending monopoly. As of March 2018, Apple Music had 49.5 million monthly U.S. users, compared with Spotify’s 47.7 million, Pandora’s 36.8 million, SoundCloud’s 34.2 million, and Google Play’s 21.9 million.71 These products have been complemented by the rise of smart speakers, such as Amazon Echo and Google Home.

So dramatic has the change in the industry been that by 2018, music streaming services contributed three-quarters of total U.S. music industry revenue, if one tots up both premium subscription services and ad-supported revenue from sites such as YouTube. Digital downloads, which iTunes dominated, now make up just 11 percent of total music sales revenue, a collapse from 42 percent just five years before.72 Revenue from physical product sales once again exceeds digital downloads (see Figure 4).

Last year there were widespread rumors that Apple would completely shut down iTunes sometime this year.73 People today listen to music on their phones and when they are on the move, and they want access to huge libraries of songs on demand. New technologies to deliver that and services to provide it completely overhauled the music purchase sector that iTunes had dominated.

Netscape and Internet Explorer

Back in 1996, around 90 percent of internet users used variants of one internet browser: the Netscape Navigator.74

The company that launched that product — the Mosaic Communications Corporation — had developed the first browser using clickable buttons rather than text commands. This proved an incredibly popular innovation. Recognizing the potential for huge success, Mosaic released the first Netscape Navigator browser in December 1994. By August 1995, the company had launched a successful IPO. Users commonly referred to surfing the world wide web as “using Netscape.” Cofounder Marc Andreessen appeared on the cover of Time under the heading, “The Golden Geeks.”75 The company seemed unstoppable.

Yet by 2001, Netscape had a global market share of just 12 percent. It had been completely usurped by Microsoft’s Internet Explorer (IE), which by then had a global browser market share of nearly 88 percent.76 That interim period became retrospectively known as the “browser wars” as Microsoft and Netscape competed extensively to add features in developing a better product to capture the market.

Software giant Microsoft was ultimately able to invest more resources in its offering. By bundling IE into its Windows operating system, the company effectively set IE as the pre-installed default browser, which gave it a significant competitive edge over rivals whose products had to be downloaded or purchased and installed separately. The rising market share of IE earned Microsoft the attention of the Department of Justice’s (DOJ) antitrust division by 1998. The DOJ charged the company with violating the Sherman Act by maintaining a monopoly and engaging in supposedly anti-competitive practices, not least by bundling IE with its Windows operating system.

Initially, the D.C. District Court ordered the breakup of Microsoft into an operating system unit and a unit for other software. But by 2004, the DOJ and Microsoft had settled, with the company agreeing to disclose its application programming interfaces and protocols for three years. The last Netscape browser, Netscape Navigator 7, was released in 2003, at a time when IE still dominated.77

Today, the notion that continued dominance for Microsoft in the browser market was inevitable seems almost quaint. Yet even in 2006, experts such as Harvard Business School professor Pai-Ling Yin believed that sustained monopolization by the software giant in the browser market was likely. Her research, written with Timothy Bresnahan, had concluded that it was Microsoft’s advantage in tying and distributing IE that had won the browser wars, rather than some technological superiority of the product. Crucially, they believed that Microsoft had launched IE just at the right time, with the explosion of the PC market increasing the importance of network effects in browsers and making it more difficult for other competitors to do to IE in the future what Microsoft had done to Netscape.78

In an interview for Harvard Business School’s Working Knowledge series, Yin explained that a large “second-mover” into the browser market, such as Microsoft, had a window of opportunity to compete with Netscape in the mid-to-late 1990s because the market was still growing substantially. This allowed Microsoft to focus on winning new users rather than on switchers.79 Through the complementary Windows operating system, Microsoft was able to establish itself as a default browser on PCs. This slowed the rise of Netscape and allowed Microsoft to obtain critical mass.

Yin thought it would be extraordinarily difficult to displace IE’s market share in a mature mass market. Once IE had achieved dominance, companies chose to optimize their websites in IE, meaning that the user experience was worse for many major sites on alternative browsers. It was also costly for webmasters to write code for different types of browsers, leading them to focus on IE, which had more end users. These indirect network effects, thought Yin, therefore represented a large barrier to entry. When asked, “So Firefox and other new browsers, no matter that they have new features and refinements that IE lacks, remain at a competitive disadvantage?” Yin responded, “Game over.”

But it wasn’t “game over.” In fact, it was from roughly 2006 onward that new competition in the browser market really took off. By July 2008, Mozilla Firefox had been eating into IE’s market share in the desktop browser market, as had Apple’s Safari. Then Google Chrome was released. By the end of 2019 it is estimated that Chrome will have a global market share of close to 64 percent, followed by Safari with 15 percent, and IE with just 3 percent (see Figure 5 below).

By 2016, Microsoft had stopped offering support for Internet Explorer versions 7 through 10 on its operating systems.80 Late last year, it effectively announced it would cease developing its own browser technology, instead adopting the Chromium project, the technology that underpins Google Chrome.

How did Chrome become so dominant? Like Microsoft before it, Google had brand recognition and a complementary product, in this case its search engine, which allowed it to reach users directly and encourage them to download Chrome. This gave Google a clear path into the market and an opportunity to break down some of the network effects described above. Microsoft, in a comfortable and dominant position, saw little incentive to innovate, allowing Google the opportunity to develop a browser that was clean and integrated with the company’s other services. Microsoft had rested on its laurels, and it subsequently found itself behind the curve on cloud computing, mobile browsing, and collaborative browser-based software.

These days Google’s web presence, including its presence in the browser and search engine markets, is a cause of much consternation. Yet the Netscape and IE examples suggest that sustained dominance based on product complementarity is not inevitable.

As former Mozilla chief technology officer Andreas Gal put it in 2017:

Browsers are what the Web looked like in the first decades of the Internet. Mobile disrupted the Web, but the Web embraced mobile and at the heart of most apps beats a lot of JavaScript and HTTPS and REST these days. The future Web will look yet again completely different. Much will survive, and some parts of it will get disrupted.81

Other Examples

The case studies so far demonstrate that monopoly and technological fatalism are not new features of discussions about dominant businesses. In retail, social networks, mobile phones, cameras, music, and web browsing — industries in which today’s tech giants operate — companies assumed to have entrenched dominance have themselves been overwhelmed by the process of creative destruction Schumpeter described.

Nor are these cherry-picked stories. There are numerous other examples in related industries where the same dynamic prevailed:

  • Xerox, for example, invented the first modern photocopier in 1960 and then dominated the sector, with nearly 100 percent of the market in 1970.82 So complete was the firm’s dominance that photocopying informally became known as “xeroxing.” In 1973, an antitrust complaint filed by a rival alleged that Xerox had violated the Sherman and Clayton Acts.83 A five-year struggle cost the company millions of dollars. But the eventual settlement came about just as IBM, Eastman Kodak, Canon, Minolta, Ricoh, and others entered the market with smaller and cheaper machines, following the expiration of Xerox’s patents. By 1976, Xerox’s market share had fallen to 59 percent, and by 1978 it had fallen to 54 percent.84 Since then, of course, the margins of competition in the industry have been disrupted again by digital copiers, home printers, computers, e-mail, and instant cameras.
  • Yahoo! used to dominate the search engine market. These days, a lot of people labor under the misapprehension that Google “invented” search. In fact, Google was the 35th search engine to enter the sector. Though the firm was founded in 1998, until 2000 Yahoo! was by far the most popular search engine, with around 34 percent of all unique search engine users in August 1997.85 In 1998, Fortune wrote up “How Yahoo! Won the Search Wars.”86 Yet by 2000 it was obvious that Google would overtake Yahoo! due to better technology that took account of cross-references and the popularity of pages for searching, unlike Yahoo!’s card catalogue-like system.87 By June of that year, Yahoo! and Google had come to an agreement that Yahoo! would use Google’s search results.88 Google today is estimated to have 92 percent of the worldwide search engine market share, with Bing at 3 percent, and Yahoo! at just 2 percent (see Figure 6).89
  • AOL was thought to have a “monopoly” in instant messaging at the turn of the millennium, with an estimated 90 percent share of that market.90 Despite Microsoft, Yahoo!, Tribal Voice, and iCast all developing their own services, network effects meant that the market tipped toward AOL’s AIM. Other firms were greatly concerned by this: more than 40 companies asked the Federal Communications Commission to “encourage” AOL to make its network compatible with others as a condition for approving its merger with Time Warner.91 As with firms such as Facebook today, a “free” service with extensive network effects was the cause of considerable consternation. But such fears were misplaced. We’ve since seen the rise and fall of MSN Messenger and Myspace, which contained an instant messenger service. In 2008 Facebook Chat (later named Facebook Messenger) was launched; then later came WhatsApp, iMessage, GChat, WeChat, Snapchat, and Slack.92 AIM simply got blown away by new forms of competition.
  • IBM was the subject of a 13-year antitrust lawsuit that was ultimately dismissed “without merit” in 1982.93 Analysts debate the extent to which this focus on the company aided the entry of competitors into its markets. What is perhaps less well-known is that for two years between 1976 and 1978, the Federal Trade Commission investigated whether IBM had monopolized the “office typewriter industry” in making, purchasing, renting, and repairing office typewriters and parts.94 Ultimately, it decided to take no action. But that an investigation was even taking place shows the inherent danger of trying to forecast market trends. After all, it was around this time that personal computers were just taking off and, with them, word processors.

Conclusion

Two important lessons can be drawn from the case studies presented here.

First, the predictions of unassailable market dominance that we hear in relation to today’s tech giants, often explained by appeals to economic phenomena such as network effects, economies of scale, tying of products, or other cost barriers to entry, have been heard many times before in similar industries. The forecasts have proven ill-founded. The predictions of sustained dominance by Amazon, Google, Facebook, Apple, and others should therefore be taken with extreme skepticism. Yes, the nature of technologies and markets can result in one firm enjoying large market share, sometimes persistently. But this does not mean that the firm’s dominant position will endure, or that the firm’s dominance is bad for consumers — either now or in the future. As Schumpeter understood, the most important margin of competition in the long term is not having many firms deliver very similar products at a single point in time, but rather innovations that entirely change the type of products demanded.

Second, shaping antitrust policy to deal with highly speculative “future harms” is likely to be a fool’s errand. It is almost impossible to predict market evolutions or technological transformations. But a host of commentators, lawyers, and economists try to do it anyway, often claiming that, left unimpeded by authorities, present companies have such an overwhelmingly dominant position that consumers are at risk of higher prices and dramatic welfare costs through reduced innovation.

None of the above analysis suggests, of course, that the tech giants are incapable of anti-competitive behavior or harming consumer welfare. However, history serves as a warning that extrapolation of the future based upon the present could lead to wasteful lawsuits absorbing resources that could otherwise fund innovative products or product features. If today’s monopoly fatalism leads to associated regulatory clampdowns too, such as treating incumbent firms as public utilities, it might even entrench existing positions and deter entry into sectors that over longer periods would otherwise be incredibly dynamic.95

Notes

1 For example, “Amazon is a monopoly, a product of this new and twisted Gilded Age” quoted from Ross Barkan, “Amazon’s Retreat from New York Represents a Turning Point,” Guardian, February 14, 2019; Ryan Cooper, “Google Is a Monopoly — and It’s Crushing the Internet,” The Week, April 21, 2017; David Meyer, “Why Facebook Is Impervious to Damage — and What’s Needed to Rein It In,” Fortune, January 31, 2019; and Denise Hearn, “Canadian, U.S. Regulators Asleep at the Switch as Monopolies Thrive,” Globe and Mail, February 25, 2019.

2 The five — Apple, Amazon.com, Alphabet (which owns Google), Microsoft, and Facebook — were the five largest companies in the world by market value in 2018 (in billions of U.S. dollars) according to Statista, “The 100 Largest Companies in the World by Market Value in 2018.”

3 Ryan Bourne, “Every Day, We Vote with Our Clicks That We Value Facebook,” City A.M., February 5, 2019; and Erik Brynjolfsson et al., “GDP-B: Accounting for the Value of New and Free Goods in the Digital Economy,” NBER Working Paper No. 25695, March 2019.

4 David Evans, “Why the Dynamics of Competition for Online Platforms Leads to Sleepless Nights, But Not Sleepy Monopolies,” SSRN (website), August 23, 2017.

5 Jason Furman et al., “Unlocking Digital Competition: Report of the Digital Competition Expert Panel,” HM Treasury, United Kingdom, March 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

6 Strategy&, “2018 Global Innovation 1000 Study,” PricewaterhouseCoopers.

7 Nicolas Petit, “Technology Giants, the Moligopoly Hypothesis and Holistic Competition: A Primer,” SSRN (website), October 20, 2016.

8 Jill Sundie, Betsy Gelb, and Darren Bush, “Economic Reality vs. Consumer Perceptions of Monopoly,” Journal of Public Policy & Marketing 27, no. 2 (Fall 2008): 178-81.

9 Lina Khan, “Amazon’s Antitrust Paradox,” Yale Law Journal 126, no. 3, Note (January 2017): 710-805.

10 Furman et al., “Unlocking Digital Competition: Report of the Digital Competition Expert Panel.”

11 Joshua Wright et al., “Requiem for a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust,” George Mason Law & Economics Research Paper No. 18-29, Arizona State Law Journal, 2019.

12 Joseph A. Schumpeter, Can Capitalism Survive? Creative Destruction and the Global Economy (New York: Harper Perennial, 2009). Originally published as Capitalism, Socialism, and Democracy.

13 Schumpeter, Can Capitalism Survive? Creative Destruction and the Global Economy, p. 45.

14 Dirk Auer and Nicolas Petit, “Two Systems of Belief about Monopoly: The Press vs. Antitrust,” Cato Journal 39, no. 1 (Winter 2019).

15 Timothy Muris and Jonathan Nuechterlein, “Antitrust in the Internet Era: The Legacy of United States v. A&P,” George Mason Law & Economics Research Paper No. 18-15, May 29, 2018.

16 R. S. Tedlow, New and Improved: The Story of Mass Marketing in America (New York: Basic Books, 1990).

17 Marc Levinson, “Monopoly in Chains: Antitrust and the Great A&P,” CPI Antitrust Chronicle 12 (2011).

18 Paul Ellickson, “The Evolution of the Supermarket Industry: From A&P to Walmart,” chapter 15 in Handbook on the Economics of Retail and Distribution (Cheltenham: Edward Elgar Publishing, 2011), pp. 368-91.

19 Aaron Hardy Ulm, “Chain-Store ‘Issue’ Arises: Retailer and Wholesaler Seeking Government Aid in Struggle against ‘Menace,’” Barron’s, June 4, 1928.

20 Arundel Cotter, “Legislating against the Consumer: Price-Fixing Laws Raise Retail Prices — Threat of New Patman Bill,” Barron’s, January 9, 1939; and Thomas V. DiBacco, “Depression Tale: Putting the Chain Stores in a Cage,” Wall Street Journal, March 5, 1985.

21 “A.&P. Defends Variations in Prices between Cities,” Wall Street Journal, May 2, 1940.

22 “A.&P. Decision,” Washington Post, October 3, 1946.

23 “A&P Ad Puts Anti-Trust Case before Public,” Los Angeles Times, September 21, 1949.

24 “A.&P. Files for Bankruptcy,” New York Times, December 12, 2010; and “A&P Grocery Chain Files for Bankruptcy Again,” USA Today, July 20, 2015.

25 Victor Keegan, “Will MySpace Ever Lose Its Monopoly?,” Guardian, February 8, 2007.

26 John Barrett, “MySpace Is a Natural Monopoly,” Tech News World, January 17, 2007.

27 Stefanie Olsen, “Google’s Antisocial Downside,” CNET.com, December 19, 2006.

28 Josh Catone, “Hitwise: MySpace Takes 3/4ths of US Social Network Traffic,” Readwrite.com, May 6, 2008.

29 Jasper Jackson, “Time Inc. Buys What Is Left of MySpace for Its User Data,” Guardian, February 11, 2016.

30 LiveUniverse, Inc. v. MySpace, Inc., 304 Fed. Appx. 554 (9th Cir. 2008).

31 Michael Arrington, “Facebook No Longer the Second Largest Social Network,” TechCrunch, 2008.

32 “The Rise and Fall of MySpace,” Financial Times, December 4, 2009.

33 Jeremy Barr, “Does MySpace Have Any Distribution Juice Left for Publishers?,” AdAge, April 28, 2016.

34 Martin Armstrong, “Myspace Isn’t Dead,” Statista, March 18, 2019, https://www.statista.com/chart/17392/myspace-global-traffic/.

35 Zoe Kleinman, “MySpace Admits Losing 12 Years’ Worth of Music Uploads,” BBC, March 18, 2019.

36 According to data on StatCounter, in March 2019 Apple had a market share of just under 55 percent in the mobile vendor market, with Samsung a distant second with 24 percent. http://gs.statcounter.com/vendor-market-share/mobile/united-states-of-america.

37 Nick Whigham, “How Some of the Once Unstoppable Tech Giants of Yesteryear Met Their Demise,” news.com.au, November 20, 2017; and Bruce Upbin, “The Next Billion,” Forbes, October 26, 2007.

38 Tony Smith, “Nokia Grabs 40% of Phone Market for First Time,” The Register, January 24, 2008; and Phil Goldstein, “Report: Nokia’s Smartphone Market Share Dropping,” FierceWireless, March 11, 2009.

39 Jack Ewing, “Nokia Rockets Past Rivals,” Der Spiegel, January 25, 2008.

40 Yves Doz, “The Strategic Decisions That Caused Nokia’s Failure,” Insead (website), November 23, 2017.

41 Doz, “The Strategic Decisions That Caused Nokia’s Failure.”

42 “Why Did Nokia Fail and What Can You Learn from It?,” Brand Minds, Medium, July 23, 2018.

43 Statista, “Global Market Share Held by Leading Smartphone Vendors from 4th Quarter 2009 to 4th Quarter 2018.”

44 James Surowiecki, “Where Nokia Went Wrong,” New Yorker, September 3, 2013.

45 “Kodak Moment,” Wiktionary.org.

46 Henry C. Lucas, Jr., The Search for Survival: Lessons from Disruptive Technologies (Santa Barbara, CA: Praeger, 2012).

47 “A Kodak Monopoly,” Time, May 12, 1923.

48 “U.S. Jury Finds a Kodak Monopoly in Amateur Photography Business,” New York Times, January 22, 1978.

49 Dave Lehmkuhl et al., “Kodak: The Challenge of Consumer Digital Cameras,” University of Michigan Business School.

50 Jordan Crook, “What Happened to Kodak’s Moment?,” TechCrunch, 2012.

51 Michael Hiltzik, “Kodak’s Long Fade to Black,” Los Angeles Times, December 4, 2011.

52 Lehmkuhl et al., “Kodak: The Challenge of Consumer Digital Cameras.”

53 Lucas, The Search for Survival: Lessons from Disruptive Technologies.

54 Felix Richter, “Digital Camera Sales Dropped 84% since 2010,” Statista, February 13, 2019.

55 Statista, “Digital Cameras,” https://www.statista.com/outlook/15010400/109/digital-cameras/united-states#market-arpu.

56 Michael J. De La Merced, “Eastman Kodak Files for Bankruptcy,” New York Times, January 19, 2012.

57 Quentin Hardy, “At Kodak, Clinging to a Future beyond Film,” New York Times, March 20, 2015.

58 Talia Soghomonian, “Who Will Break iTunes’ Monopoly?,” NME.com, May 28, 2010.

59 Daniel Eran, “Myth 4: The iTunes Monopoly Myth,” Roughly Drafted (blog), December 30, 2006.

60 Brad Stone, “Apple Is Said to Face Inquiry about Online Music,” New York Times, May 25, 2010.

61 Jeff Goodell, “Steve Jobs: Rolling Stone’s 2003 Interview,” Rolling Stone, October 6, 2011.

62 Christian de Looper, “History of the Apple iPod Anti-Trust Case,” Tech Times, December 19, 2014; and Dominic Timms, “Apple iPod and iTunes Accused of Music Monopoly,” Guardian, January 6, 2005.

63 Rachel Rosmarin, “iPod Killers That Didn’t,” Forbes, October 23, 2006.

64 Devin Leonard, “Rockin’ Along in the Shadow of iTunes,” Fortune, February 13, 2007; and Paul Boutin, “Live from the Steve Jobs Keynote — ‘It’s Showtime,’” (“And the higher figure of 88% came from Steve Jobs’ mouth himself in a Keynote speech”), Engadget.com, September 12, 2006.

65 Albert Lin, “Understanding the Market for Digital Music,” Stanford Undergraduate Research Journal 4 (Spring 2005): 50-56.

66 Sundie, Gelb, and Bush, “Economic Reality vs. Consumer Perceptions of Monopoly.”

67 Harrison McAvoy, “Developers Are Our Best Bet to Stop Apple’s App Store Monopoly,” Next Web, February 23, 2019.

68 NPD Group, “Amazon Ties Walmart as Second-Ranked U.S. Music Retailer, behind Industry-Leader iTunes,” press release, May 26, 2010.

69 NPD Group, “iTunes Continues to Dominate Music Retailing, But Nearly 60 Percent of iTunes Music Buyers Also Use Pandora,” press release, September 18, 2012.

70 NPD Group, “Streaming Music Is Gaining on Traditional Radio among Younger Music Listeners,” press release, April 2, 2013.

71 Statista, “Most Popular Music Streaming Services in the United States as of March 2018, by Monthly Users (in Millions).”

72 Joshua P. Friedlander and Matthew Bass, “2018 RIAA Shipment & Revenue Statistics,” Recording Industry Association of America, 2019.

73 Paul Resnikoff, “Apple Is Shutting Down iTunes Music Downloads on March 31, 2019, Sources Say,” Digital Music News, April 6, 2018; and Dion Dassanayake, “iTunes Is NOT Shutting Down — Apple Rubbishes Rumours That It Will Scrap Music Player,” Daily and Sunday Express, April 14, 2018.

74 John Naughton, “Netscape: The Web Browser That Came Back to Haunt Microsoft,” Guardian, March 22, 2015; and Ed Kubaitis, “Browser Statistics for April 1996,” Engineering Workstations, University of Illinois at Urbana-Champaign.

75 James Collins, “Netscape’s Marc Andreessen,” Time, February 19, 1996.

76 “Microsoft’s Share of Browser Market Continues to Rise: Now More Than 87 Percent,” WebSideStory, accessed with the Internet Archive.

77 “Firefox’s Share of Browsers Market Grows 34 Percent in One Month, According to WebSideStory,” press release, December 13, 2004.

78 Timothy F. Bresnahan and Pai-Ling Yin, “Economic and Technical Drivers of Technology Choice: Browsers,” Harvard Business School Working Knowledge, August 12, 2005.

79 Sara Grant, “Lessons from the Browser Wars,” Harvard Business School Working Knowledge, April 10, 2006.

80 Klint Finley, “The Sorry Legacy of Internet Explorer,” Wired, January 12, 2016.

81 Andreas Gal, “Chrome Won,” AndreasGal.com, May 25, 2017.

82 Victor K. McElheny, “Xerox Fights to Stay Ahead in the Copier Field,” New York Times, February 21, 1977.

83 SCM Corp. v. Xerox Corp., 463 F. Supp. 983 (D. Conn. 1978).

84 Larry Schweikart, The Entrepreneurial Adventure: A History of Business in the United States (Cengage Learning, 1999).

85 Neil Gandal, “The Dynamics of Competition in the Internet Search Engine Market,” UC Berkeley Center for Competition Policy Working Paper No. CPC01-17, February 19, 2004.

86 Randall E. Stross, “How Yahoo! Won the Search Wars,” Fortune, March 2, 1998.

87 Gil Press, “Why Yahoo Lost and Google Won,” Forbes, July 26, 2016.

88 “Yahoo! Selects Google as Its Default Search Engine Provider,” press release, Google, June 26, 2000.

89 “Search Engine Market Share, United States of America,” StatCounter Global Stats, http://gs.statcounter.com/search-engine-market-share/all/united-states-of-america.

90 Joe Salkowski, “AOL May Also Have Monopoly,” Chicago Tribune, June 19, 2000.

91 “AOL’s Instant Messaging Monopoly?,” editorial, Wired, December 26, 2000.

92 Jeff Desjardins, “The Evolution of Instant Messaging,” VisualCapitalist.com, November 17, 2016.

93 Alan Reynolds, “The Return of Antitrust?,” Regulation 41, no. 1 (Spring 2018): 24-30.

94 Agis Salpukas, “I.B.M. Says F.T.C. Has Ended Its Typewriter Monopoly Study,” New York Times, February 3, 1978.

95 Elizabeth Warren, “Here’s How We Can Break Up Big Tech,” Medium Business, March 8, 2019.

Ryan Bourne occupies the R. Evan Scharf Chair for the Public Understanding of Economics at the Cato Institute.

Immigration Wait Times from Quotas Have Doubled: Green Card Backlogs Are Long, Growing, and Inequitable


David Bier

During his presidential campaign, Donald Trump repeatedly promised that although he would build a border wall, it would have a door open to those willing to come to America legally. This policy analysis shows how badly America needs that new door by providing the first calculation of how outdated quotas have increased the average wait times for immigrants. Since 1991, when the current quotas went into effect, time spent waiting to apply for a green card (i.e., legal permanent residence) has doubled for applicants immigrating through the family-sponsored and employment-based quota categories — from an average of 2 years and 10 months to 5 years and 8 months.

More than 100,000 legal immigrants — 28 percent of those in the family-sponsored and employment-based lines subject to quotas — waited a decade or more to apply for a green card in 2018, up from 3 percent in 1991. By contrast, 31 percent had no wait at all from the quotas in 1991, while just 2 percent had no wait in 2018. The quota system also imposes limits on the number of green cards for individual nationalities, causing longer waits for applicants from countries with the highest demand. Indians averaged the longest wait because of quotas — over 8 years and 6 months.

Behind those immigrants who applied for green cards in 2018 stand nearly five million people waiting in the applicant backlog. Without significant reforms, wait times will become impossibly long for these immigrants. Altogether, about 675,000 would-be legal immigrants — 14 percent of those waiting in 2018 — would die without seeing a green card if they refused to give up and stayed in the line indefinitely. It will take decades and — in some categories — a half century or more to process everyone else waiting now.

Long waits separate American families and artificially suppress lawful migration to the United States of workers whose skills contribute greatly to the U.S. economy. Nearly three decades have passed since Congress last updated the legal immigration system. During that time, the U.S. economy has doubled, and its population has grown by one-third. Entire new industries have formed that need workers. Congress should reform the antiquated quotas, enact a limit on wait times, and keep these pathways viable for legal immigrants in the 21st century.

Introduction

Legal immigrants to the United States can face two different types of waits. Every immigrant must deal with the first type: the time it takes for the government to process petitions and applications for green cards (i.e., legal permanent residence). By itself, the administrative processing wait generally took more than a year and a half in 2018 — first to wait for an approval for the immigrant’s sponsor and then for an approval for the immigrant.1 But a third of all legal immigrants face a second type of wait between their sponsor’s petition and their own application: the time it takes for a green card to become available under the immigration quotas. Because Congress limited the number of green cards for certain types of immigrants, not everyone who receives an approval after the first wait can apply for a green card immediately. Like customers at a deli, they wait for their number to be called.

This policy analysis describes the second type of wait: the one caused by the unavailability of green cards due to quotas, not bureaucratic delays. The immigration categories with quotas and waiting lists are the “preference categories.” The preference quota categories account for a third of all permanent immigration to the United States — about 366,000 slots annually.2 These immigration lines are known as preference categories because the system prioritizes applicants according to different family and employment “preferences.”3 Table 1 lists each preference, along with its category limits. The law also limits the number of green cards that any single nationality may receive: no more than 7 percent of the total (25,620), plus any unused green cards distributed to nationals on a first-come, first-served basis in a given category.4

These nationality-based quotas are known as the country limits. The country limits result in each nationality waiting in lines that move at different speeds within each category. The wait time for Mexican siblings of U.S. citizens is different from that of Filipino siblings of U.S. citizens, and both wait times differ from those of Mexican or Filipino spouses of legal permanent residents. For the most part, just four nationalities — Indians, Chinese, Filipinos, and Mexicans — reach the country limits. When a nationality reaches the country limit, nationals of other countries pass them in the line.

Each month, the State Department publishes the Visa Bulletin, which informs immigrants who entered the line before a certain date that they may now apply for a green card. For example, in October 2018, the date for Mexican-born siblings of U.S. citizens was January 22, 1998, meaning that Mexican-born siblings had waited about two decades for the chance to apply for a green card. In October 1991, the date for this category was January 1, 1979, meaning that immigrants applying for green cards in that category had, at that time, waited only about 12 years.5 The average for the entire year provides the basis for the estimates below. The current quotas went into effect in October 1991, so estimates for 1991 are based on October to December of that year.
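These waits are simple date arithmetic: the gap between the bulletin month and the published cutoff date. The sketch below, a minimal illustration rather than the study's actual code, reproduces the two figures just cited; the function name is ours.

```python
from datetime import date

def quota_wait_years(bulletin_month: date, cutoff: date) -> float:
    """Approximate quota wait: the time between an applicant's priority
    date (the published cutoff) and the bulletin month in which that
    date becomes current."""
    return (bulletin_month - cutoff).days / 365.25

# Mexican-born siblings of U.S. citizens (F4), per the bulletins cited above
print(round(quota_wait_years(date(2018, 10, 1), date(1998, 1, 22)), 1))  # ~20.7 years
print(round(quota_wait_years(date(1991, 10, 1), date(1979, 1, 1)), 1))   # ~12.7 years
```

Averaging these monthly gaps over each year, weighted by category and country of birth, yields the estimates reported below.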


Current Wait Times by Category

The average wait time to apply for a green card in all preference categories has doubled since 1991. Although the waits vary across categories, Figure 1 shows the average wait time for all preference immigrants — family-sponsored and employment-based — who applied for a green card in 1991 and 2018 (weighted based on category and country of birth). From 1991 to 2018, the average immigrant in the preference categories waited 4 years and 10 months for a green card. The average wait for all preference immigrants grew from about 2 years and 10 months in 1991 to about 5 years and 8 months in 2018 — a 97 percent increase. Overall wait times for immigrants have grown much longer over the past three decades.

The overall averages disguise significant variation among individual applicants in the backlog. In 1991, 31 percent of immigrants in the preference categories had no wait at all due to the quotas (Figure 2). In 2018, that share had fallen to just 2 percent. In 1991, just 3 percent of applicants waited a decade or more to apply for a green card. By 2018, 28 percent waited a decade or more, and 41 percent waited at least five years. Applicants with exceptionally long waits have become normal in America’s legal immigration system.

The variance in outcomes for individuals in the backlog stems from two sources: different quotas for each category (“category limits”) and identical quotas for each nationality in each category (“country limits”). Both limitations fail to align the supply of green cards with demand for them. This failure produces wildly differing outcomes depending on what category the immigrant is in (i.e., who is sponsoring them) and where the applicant was born.

Figure 3 shows the average wait times for family preference immigrants in 1991 and 2018. The average wait for all family preference immigrants in 2018 was about 8 years and 1 month, up from about 4 years and 3 months — an 88 percent increase. While the average wait for family-sponsored immigrants nearly doubled, the waits for unmarried adult children of citizens (F1) and those for married adult children of citizens (F3) increased tenfold and sixfold, respectively. In absolute terms, waits for F3 rose the most — by an additional 11 years and 5 months. Meanwhile, the waits for spouses and minor children of legal permanent residents (F2A) actually declined. The category for siblings of adult U.S. citizens (F4) had the longest average wait in 2018: 14 years and 7 months.

The average wait time in the employment-based categories grew more than sevenfold — from just 3 months in 1991 to 1 year and 9 months in 2018 (Figure 4). Only the EB3O category for workers without a college degree saw a decrease in the wait since 1991. The other five categories saw their wait times increase. Among the employment-based categories, bachelor’s degree holders employed by U.S. businesses waited longest: 2 years and 4 months for a green card in 2018. The next-longest average wait was in the EB5 category for investors creating at least 10 jobs, who waited an average of 1 year and 8 months in 2018.

Current Wait Times by Nationality

The country limits — which cap the number of green cards for any particular nationality at 7 percent of the total number — artificially inflate the longest waits, while artificially deflating the average wait. This deflation effect happens because, once a nationality bumps up against the country limit, nationals from other countries pass them in the line. For example, because Indians have reached the country limits in the EB2/EB3 categories for employees of U.S. businesses with bachelor’s and master’s degrees, the law requires them to wait about a decade, while applicants from all other countries except China may apply for their green cards almost immediately, cutting ahead of Indians in the line. Under this inequitable system, the longest wait can grow much longer, but the average wait only increases slightly, since 93 percent of the line may be unaffected by the limits. Ever greater numbers of applicants pile up in the line for the nationalities at the country limit, while nationals of other countries apply for green cards in roughly the same amount of time.

Paradoxically, the longest waits in the employment-based preferences can grow, even while the average wait time actually shortens. This can happen because the law allows nationalities in those categories to receive green cards above their country limits if not all the green cards in the category would otherwise be used. If a nationality goes above the country limit in one year and then more applicants apply from other countries in the next year, the new applicants can cut into the greater numbers that the nationality with the longest wait was previously receiving. Thus, the share of applicants with no wait time increases, while the share with the longest wait time decreases. The result is a shorter average wait time for all applicants but a much longer one for those with the longest wait. From 2017 to 2018, for example, the longest wait in the EB5 category for investors in U.S. businesses grew from 2 years and 6 months to 3 years and 4 months, yet the average wait fell from 1 year and 11 months to 1 year and 7 months because the share of EB5 green cards for Chinese investors dropped from 75 percent to 48 percent.6
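The arithmetic behind this paradox is easy to see with stylized numbers. The shares and waits below are invented for illustration, loosely mimicking the EB5 pattern just described; they are not the study's data.

```python
def avg_wait(groups):
    """Issuance-share-weighted average wait across nationality groups (months)."""
    return sum(share * wait for share, wait in groups)

# (share of green cards, wait in months): capped nationality vs. everyone else
year1 = [(0.75, 30), (0.25, 2)]   # capped nationality gets 75% of issuances
year2 = [(0.48, 40), (0.52, 2)]   # its share falls even as its wait lengthens
print(avg_wait(year1))  # 23.0 months
print(avg_wait(year2))  # ~20.2 months: the average falls while the longest wait grows
```

Because the average is weighted by who actually receives green cards, shifting issuances toward the short-wait group pulls the average down even as the capped nationality's own wait keeps growing.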

The country limits generally affect only four nationalities: Chinese, Indians, Mexicans, and Filipinos.7 Figure 5 highlights the disparity between the average wait times for the top four nationalities and the waits for all other nationalities in 1991 and 2018. The waits grew the most for Indians — by 4 years and 6 months since 1991 — followed by Mexicans, whose waits increased by 3 years and 2 months. The average wait for all other nationalities increased by 2 years and 4 months since 1991. In 2018, Indians also waited the longest: 8 years and 6 months — nearly double the average wait of 4 years and 6 months for all nationalities not at the country limits.

The category limits and country limits operate together to create even more widely varying outcomes across the entire immigration system. Figure 6 shows all preference immigrants in categories with waits longer than the average for all categories (5 years and 8 months). Filipino siblings of adult U.S. citizens (F4) who applied for green cards in 2018 waited the longest — 23 years. They originally entered the line for green cards in 1995. Just behind them were F3 Filipino and Mexican adult married children of U.S. citizens, who each waited more than 22 years for their green cards. The longest employment-based lines were for Chinese and Indian employer-sponsored immigrants lacking a bachelor’s degree (EB3O), Indian professionals with a college degree (EB3), and Indian advanced-degree holders (EB2), who all waited about a decade to apply for their green cards in 2018.

Figure 7 shows how wait times have increased since 1991 for nationalities with the longest wait in each category in 2018. The largest increase — 20 years and 7 months — occurred for F1 Mexican unmarried adult children of U.S. citizens, whose wait time rose from 4 months in 1991 to 20 years and 11 months in 2018. In the employment-based categories, EB3 Indian employees of U.S. businesses saw their wait increase more than any other EB category, from no wait to 10 years and 6 months.

Current Backlogs

The lengthy wait times cause many applicants to pile up in a backlog awaiting their chance to apply for green cards. The most recent statistics on the number of approved applicants indicate that about 4.7 million applicants are waiting for green cards because of the quotas — 83 percent in the family preferences and 17 percent in the employment-based preferences (Table 2).8 One category — siblings of adult U.S. citizens — accounts for half the entire backlog. As Table 2 shows, there is a significant mismatch between the share of available green cards in each line and the share of applicants in each line.

Table 3 shows the backlogs by nationality. Mexican applicants account for 28 percent of the backlog in the preference categories. Indians account for 19 percent, and another 19 percent were born in the Philippines, China, or Vietnam. Applicants from all other countries amount to about a third of the total. While the distribution in the family preference categories is similar, the employment-based backlogs are filled almost entirely by people born in India (78 percent) or China (17 percent).

The backlog has grown significantly since 1991. While only partial data is available, the number of people waiting for immigrant visas abroad — primarily family-sponsored immigrants — has grown from 2.9 million in 1992 to 3.7 million in 2017.9 These numbers do not include people waiting for green cards in the United States — primarily employer-sponsored immigrants who work on temporary visas while their green card applications are pending. Based on the increases in wait times for these categories, the backlogs for these types of immigrants have also grown significantly.

Projected Future Wait Times

Whereas it may have taken immigrants an average of 5 years and 8 months to immigrate in 2018, the backlogs mean that immigrants who are applying for the first time right now may have to wait much longer. The government makes no attempt to estimate these future waits. Table 4 highlights how long it would take to process everyone currently in the backlogs by nationality and category if everyone stays in the line. As it shows, applicants in several lines face multidecade waits if they stick it out indefinitely. In fact, EB2/EB3 Indian employees of U.S. businesses who entered the line in 2018 have an impossible half-century-long wait, and Mexican and Filipino married adult children of U.S. citizens and Mexican siblings of U.S. citizens face a full century in the backlog.

The waits are so long that many people waiting for green cards will die before they can even apply. Table 4 also shows how many applicants would die waiting based on the average age distribution of immigrants in 2018 and the average mortality rate by age.10 As the population grows older, the death rate increases with each passing year until all immigrants have either received green cards or died. Altogether, about 675,000 would-be legal immigrants — 14 percent of those waiting in 2018 — will die without seeing a green card if they refuse to give up and stay in the line indefinitely.
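The method described for Table 4 can be sketched as a simple cohort simulation: issue a fixed number of green cards each year, age the rest of the line, and apply age-specific death rates until the line empties. Everything in the sketch below is an invented illustration; the age distribution, mortality curve, and issuance rate are our assumptions, not the study's inputs.

```python
def deaths_while_waiting(cohort, mortality, issued_per_year):
    """Count deaths in line under the method described above. `cohort`
    maps age -> applicants still waiting; `mortality` maps age -> annual
    death rate. Green cards are assumed to go out pro rata across ages."""
    deaths = 0.0
    while sum(cohort.values()) > 1:
        total = sum(cohort.values())
        grant_share = min(issued_per_year / total, 1.0)
        nxt = {}
        for age, n in cohort.items():
            waiting = n * (1 - grant_share)            # still in line this year
            died = waiting * mortality.get(age, 0.3)   # fallback rate at very old ages
            deaths += died
            nxt[age + 1] = nxt.get(age + 1, 0.0) + (waiting - died)
        cohort = nxt
    return deaths

# Invented inputs: a 100,000-person line, 3,000 green cards a year, and a
# toy Gompertz-style mortality curve rising with age.
line = {30: 40_000.0, 45: 35_000.0, 60: 25_000.0}
rates = {a: 0.0005 * 1.09 ** (a - 30) for a in range(30, 111)}
print(f"{deaths_while_waiting(line, rates, 3_000):,.0f} projected deaths in line")
```

As in the study's projection, the toll compounds with queue length: slow the issuance rate and the share dying in line climbs.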

Those near the back of the longest lines will have to find another way to receive permanent residence — for example, by marrying a U.S. citizen and thus bypassing the quota categories. Of course, many immigrants will give up rather than wait for a green card that may never come. To account for attrition, Figure 8 projects how long the average preference immigrant will have waited to apply for a green card in 2038, assuming that the linear trends from 2008 to 2018 continue. If current trends continue, the average wait will increase from 5 years and 8 months in 2018 to 7 years and 8 months in 2038.

Waits for specific nationalities will grow even more disproportionate under current trends. Nationalities in about a dozen categories will have waited multiple decades for a green card in 2038 (Figure 9). This means that immigrants entering those lines in 2018 will likely not apply for their green cards until 2038 or later. For F3 Mexican and Filipino married adult children of U.S. citizens, the wait is projected to rise to 36 years, meaning that applicants who applied in 2003 or later will still be waiting for their green cards in 2038. The share of immigrants receiving green cards under the quotas who wait more than two decades will rise from 3 percent to 15 percent by 2038.

Detailed Projected Future Waits for EB2/EB3 Categories

The current trends could change, so it is worth making a more detailed assessment of how wait times might evolve in a couple of specific categories. The following factors all affect how long it will take to process everyone in these backlogs for any particular nationality: (1) marriages, (2) children, (3) deaths, (4) abandonment of applications, and (5) the number of green cards made available for each nationality. Getting married could increase or decrease the waits. Because the law gives spouses of immigrants the same place in line as the primary applicant, getting married to a noncitizen would increase the backlog. On the other hand, if the spouse is a U.S. citizen, the spouse can sponsor the immigrant for a green card immediately, which would reduce the backlog.

Children also have an equivocal effect on future wait times. The law entitles children under the age of 21 to the same place in line as their parents. This means that, in cases where the child turns 21 before the parent is able to apply for a green card, the child loses eligibility, reducing the wait times (at least for the parent — for the child, the wait becomes infinite, as he or she will have lost eligibility entirely). Children born in the United States are U.S. citizens, and if their parents remain in the country legally until those children turn 21, the children can then sponsor their parents for green cards immediately, which would also reduce the backlog. On the other hand, giving birth to children outside the United States would increase the backlog because those children would be entitled to the same place in line as their parents. Deaths and abandoned applications obviously reduce the backlog, while the availability of green cards for a particular nationality under the country limit could increase or decrease the projected waits, depending on whether a greater or lesser number of green cards is made available in future years than recently.

To use a concrete example, current EB2 and EB3 immigrants from India — employees of U.S. businesses with master’s or bachelor’s degrees, respectively — have waited 9 and 10 years, respectively. However, about 543,152 applications have been approved for Indian immigrants in the EB2 and EB3 lines, and nearly all of them are working in the United States on work visas that can be renewed indefinitely. About 80 percent of them are in the EB2 line, but because all EB2 applicants can also qualify under EB3 — as EB2 immigrants have both a master’s and a bachelor’s — the lines will tend to equalize over time (as they have already). For this reason, it is worth treating them as a single category for purposes of projecting future wait times.

Marriages will have little effect on the EB2/EB3 backlog since most EB2/EB3 employees in the backlog are already married to foreign spouses. Moreover, while marriages to U.S. citizens decrease the backlog, marriages to foreign spouses, which are particularly common among Indian nationals, increase it. Children can have a similarly equivocal effect depending on their places of birth, but the fact that children “age out” of eligibility for derivative permanent residence through their parents’ petitions will reduce the backlog by about 45,000. Deaths will also have only a relatively small effect in the EB2 and EB3 categories over the next couple of decades, though not over the next 50 years (as Table 4 above shows) because most employment-based immigrants are in their prime working years.

The two factors that could most dramatically change the length of future waits for Indian employees of U.S. businesses — at least over the next several decades — are abandoned applications and the availability of green cards for Indians. Because the EB categories allow nationalities to move above the country limit if not all the green cards would otherwise be used, it is impossible to know exactly how many green cards Indians will receive annually going forward. Because the EB2/EB3 lines for India cumulatively used about 10,000 green cards in 2018 — higher than the country limit of 4,900 — the number of green cards for Indians could decrease in the future if demand in the EB2 and EB3 categories rises among other nationalities.

The rate of abandoned applications must be inferred indirectly. Abandoned applications would include deaths, marriages to receive green cards, and emigration due to discouragement. An I-140 petition for employer-sponsored workers starts the employment-based preference green card process, after which point the worker must wait for a green card number to become available. Since 2002 — before the EB2/EB3 backlog built up — there have been about 460,000 more I-140 petitions for employment-based workers than green cards issued. As of April 20, 2018, however, there were just 372,089 non-abandoned pending petitions — a difference of about 89,000.11 This implies an annual abandonment rate of about 4.75 percent among those who entered the backlog for any amount of time. This rate applies only to the primary applicants. The rate will be much higher for their children, since they drop out when their parents leave or when they themselves turn 21 and lose eligibility. About half of all the children in the backlog in 2018 will end up aging out.12 More broadly, the total abandonment rate could increase in the future if the waits grow much longer and more people give up and leave the country.
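The inference can be made explicit with the numbers in the paragraph above. Converting the cumulative attrition share into an annual rate requires an assumption about how long the average petition has been pending; the roughly 4.3 years used below is our back-of-envelope figure to reconcile the two numbers, not something the study states.

```python
entered_backlog = 460_000   # I-140 approvals in excess of green cards issued since 2002
still_pending   = 372_089   # non-abandoned pending petitions as of April 20, 2018
cumulative = 1 - still_pending / entered_backlog
print(f"cumulative attrition: {cumulative:.1%}")   # ~19.1%

# An annual rate of ~4.75% compounded over an assumed ~4.3-year average
# pending duration reproduces that cumulative share (our assumption).
annual, avg_years_pending = 0.0475, 4.3
print(f"implied cumulative: {1 - (1 - annual) ** avg_years_pending:.1%}")  # ~18.9%
```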

These facts lead to three main scenarios for future wait times. At the high end, scenario 1 sees green card issuances at the country limit (4,900 annually) and the same rate of abandoned applications (4.75 percent annually). Under this scenario, it would take 36 years to process the backlog. In the middle, scenario 2 sees the same rate of abandoned applications but green card issuances above the country cap (10,000 annually). Under the midrange scenario 2, it would take 26 years to process the backlog. At the low end, scenario 3 sees the higher rate of green card issuances, but the rate of abandoned applications gradually rises at 0.2 percent annually to almost double the current rate (9.4 percent annually). Under the low-end scenario 3, it would take 24 years to process the backlog.
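A minimal simulation makes the three scenarios concrete. It is a sketch under stated assumptions, not the study's model: the starting backlog is the 543,152 approved applicants less the roughly 45,000 expected age-outs, and attrition is applied before each year's issuances. It lands within a year or two of the published figures; the residual gaps reflect ordering and rounding choices the study does not spell out.

```python
def years_to_clear(backlog, green_cards_per_year, attrition,
                   attrition_growth=0.0, attrition_cap=1.0):
    """Years until a backlog is fully processed, with annual attrition
    (abandonment, deaths, marriages out of the line) applied before each
    year's green card issuances."""
    years = 0
    while backlog > 0:
        backlog *= (1 - attrition)
        backlog -= green_cards_per_year
        attrition = min(attrition + attrition_growth, attrition_cap)
        years += 1
    return years

start = 543_152 - 45_000  # approved EB2/EB3 Indian applicants, less expected age-outs
print(years_to_clear(start, 4_900, 0.0475))                 # scenario 1 (study: 36 years)
print(years_to_clear(start, 10_000, 0.0475))                # scenario 2 (study: 26 years)
print(years_to_clear(start, 10_000, 0.0475, 0.002, 0.094))  # scenario 3 (study: 24 years)
```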

Figure 10 provides these three projections compared to the 10- and 15-year trends for wait times for Indians in the EB2 and EB3 lines over the next 20 years. In the low-end scenario — with high attrition and high green card issuances — the wait would increase to 21 years and 3 months by 2038, meaning that people who applied in 2017 and 2018 would still be waiting at that time. In the midrange scenario, it would increase to 22 years and 3 months, and in the high-end scenario it would increase to 24 years and 5 months. The high-end and low-end scenarios closely align with the 15-year and 10-year linear trends, respectively. All the projections fall within a range of less than 5 years. This provides some independent support for the projections based on the current linear trends reported in Figure 9.

The particularities of different categories, however, could result in strange departures from the current trends. For example, waits for EB5 investors from China will almost certainly increase far more than current trends predict. This is because Congress, in 1991, effectively reduced the Chinese EB5 country limit to zero.13 However, because nationalities may exceed their country limits in order to allow for the use of all available green cards, Chinese investors still managed to receive about half the green cards in the category in 2018.14 But the waits for Chinese have already forced firms seeking EB5 investment to look for investors elsewhere, causing demand from the rest of the world to rise. From 2014 to 2018, the share for Chinese fell from 85 percent to 48 percent.15 If the rest of the world continues to increase its share, the 65,953 Chinese investors and their families could be completely shut out of the EB5 program forever.

Why Wait Times Matter

Lengthy wait times result in several interrelated problems. Wait times reduce the liberty of Americans to associate with people born in other countries. The waits separate U.S. citizens from their family members, prevent U.S. businesses from employing or fully utilizing the skills of foreign workers, and keep U.S. firms from receiving capital from foreign investors. Simultaneously, wait times artificially depress the rate of legal immigration to the United States. Relative to its population, America already admits immigrants at a lower rate than much of the developed world, and its net immigration rate and immigrant share of the population ranked in the bottom third of wealthy countries from 2015 to 2017.16

By hampering America’s ability to compete for labor and capital around the world, wait times injure the U.S. economy. Every year that EB5 investors wait is a year in which the United States loses out on billions of dollars in foreign direct investment that grows the economy and increases demand for U.S. workers.17 Both family-sponsored and employment-based immigrants generally have higher college and high school graduation rates than the U.S. public, meaning that legal immigrants are increasing the U.S. skill level.18 According to a 2016 analysis from the National Academies of Sciences, Engineering, and Medicine, better-educated immigrants contribute significantly more in taxes than they receive in benefits, making preference immigrants a net positive to the U.S. Treasury.19 The same analysis concluded that immigrants make the GDP of the United States larger by 11 percent annually — about $2.2 trillion in 2018.20

Particularly lengthy wait times cause some foreign students to leave the United States rather than pursue green cards. As one recent study found, “The stay rate of Chinese graduates [of U.S. universities] declines by 2.4 percentage points for each year of delay, while Indian graduates facing delays of at least five and a half years have a stay rate that is 8.9 percentage points lower.”21 Because foreign students are highly skilled, higher rates of departure result in fewer startups, fewer patents, and less innovation — all of which high-skilled immigrants do at higher rates than the U.S.-born population.22

The country limits exacerbate these trends by concentrating the wait times among certain nationalities. Moreover, they perversely distort the labor market by making people with more experience and skills wait longer than other immigrants. In fact, the country limits depress the average wage offer for new employment-based immigrants by $11,592 in the EB2 and EB3 categories because the average wage offers for Indian and Chinese nationals are $27,649 and $20,750 higher, respectively, than those for other immigrants (Figure 11).23 America benefits from immigrants of all skill levels, but the market should determine which immigrants the economy needs, not centralized government planning based on applicants' birthplaces.

Finally, foreigners who wish to permanently immigrate to the United States have very few options to do so legally. Some nationalities that are underrepresented in the U.S. immigration system can apply for the diversity visa lottery if they meet the work requirements or have a high school degree. Refugees can hope for a resettlement referral to the United States. But the odds of winning the lottery or getting a referral were just 0.2 percent in 2017.24 Most other immigration channels — like asylum and various forms of relief from deportation — are limited for people already in the United States.

Except for spouses, minor children, and parents of adult U.S. citizens (who face no numerical limits), all other legal immigrants must use the quota system. A main reason that Congress increased the quotas for the preference categories in the Immigration Act of 1990 — particularly for family-sponsored immigrants — was that it believed the higher quotas would provide an alternative to illegal immigration.25 Time has proven this theory correct. Immigrants use the preference categories as an alternative to illegal immigration and as a pathway to correct illegal status.26 Wait times undermine the goal of reducing illegal immigration, while also damaging the economy and separating Americans from their families.

Policy Solutions for Wait Times

The United States should adopt four simple reforms to prevent the wait times from growing further. First, Congress should end the country limits. Micromanaging immigration flows in this way results in highly inequitable outcomes. Similarly situated employees of U.S. businesses or family members of U.S. citizens wind up with waits that diverge wildly for no reason other than that one immigrant was born in a country with higher demand than the other. Legal immigration should be a first-come, first-served process without consideration of an immigrant’s nationality.

Removing the country limits would equalize wait times among nationalities, eliminating the extremely long waits for certain immigrants. Repealing the country limits, for example, would make the average time to process everyone in the EB2 and EB3 lines six or seven years — using the same assumptions about abandonment rates as above — compared to 24 to 36 years for Indians and roughly zero for almost everyone else except Chinese. That would produce a fairer process and give all immigrants a reason to advocate for additional reforms. As noted earlier, repealing the country limits would raise the average wage offer of green card recipients by eliminating long waits for the more experienced workers in the backlog. That would improve economic efficiency by ending discrimination against immigrants who would be more productive.
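
As a rough check on the six-or-seven-year figure, the simulation sketched earlier can be rerun with the same assumed backlog and abandonment rate but with the full worldwide EB2/EB3 quota, approximately 80,000 green cards a year including derivatives (each of the two categories receives 28.6 percent of the 140,000 employment-based total):

    print(years_to_clear(600_000, 80_000, 0.0475))  # prints 7: about six to seven years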

Second, Congress should link the family preference quotas to population growth and the employment quotas to economic growth. Hard caps make little sense in a world that is constantly changing. Since Congress passed the Immigration Act of 1990, which set the current quotas, the U.S. population has grown by a third and the U.S. economy has more than doubled in size.27 Linking the overall family preference quotas to population growth makes sense because the need for family-sponsored green cards grows with the population. The employment-based preference quotas, by contrast, should be tied to growth in U.S. GDP, because the economy's demand for workers can keep growing even if population growth stalls or reverses.

Third, Congress should explicitly exempt derivative applicants — spouses and minor children of primary applicants (e.g., employees of U.S. businesses) — from the quotas. Current law entitles these immediate family members to the same status as their parent or spouse. In 2017, about 45 percent of all green cards in the preference categories went to derivatives, not the primary applicant.28 It makes no sense to lengthen wait times for primary applicants simply because they marry someone or have children while they are waiting for a green card.

Had derivatives not been counted against the green card limits since 1991, the waits and backlogs would never have developed. For example, the EB2/EB3 backlog has grown to nearly 600,000 applicants — primary and derivative — which is 810 percent larger than the total number of green cards issued annually in those categories. Yet an average of 52,000 green cards per year have gone to spouses and children of the employees (about 800,000 in total), more than the entire current backlog. Had the administration not counted those applicants against the quotas, the backlog would never have formed. In 2017, excluding spouses and children of preference category immigrants from the count would have increased overall legal immigration by roughly 318,000, or 28 percent.
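
The arithmetic behind this counterfactual is simple: the green card numbers consumed by derivatives over the period exceed the entire backlog that accumulated. A back-of-the-envelope sketch using the approximate figures above:

    derivative_cards = 800_000  # cumulative green cards to EB spouses and children
    backlog = 600_000           # approximate EB2/EB3 backlog, primary plus derivative
    print(derivative_cards > backlog)  # True: exempting derivatives would have freed
                                       # more numbers than the backlog ever held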

Fourth, Congress should enact a guarantee that immigrants will not have to wait longer than five years for a green card. If the measures above do not prevent wait times from creeping back up, the law should automatically grant a green card to someone who has waited for five years or more. That would preserve the viability of each immigration category and prevent immigrants from abandoning the legal option entirely.

Conclusion

The average immigrant’s wait time for a green card was nearly twice as long in 2018 as it was in 1991 when the quotas were first implemented. The share of those waiting more than a decade increased nearly tenfold, and many immigrants already wait more than 20 years because of the quotas. Wait times for immigrants will continue to grow year after year as a result of America’s antiquated legal immigration quotas, and many immigrants who are applying right now will not see their green cards for decades, if ever. Today, almost five million immigrants are waiting for green cards in a fundamentally broken legal system.

The current quotas fail to respond to changes in America's population or economy, and the waits reflect this disconnect. The country limits create massive inequities between identical immigrants who happen to have different birthplaces; such limits have no place in a modern immigration system. Congress should eliminate the country quotas, exempt spouses and minor children from the overall quotas, and link the remaining quotas to population and economic growth. America needs a flexible and adaptive immigration system for the 21st century.

Notes

1 Combined processing times for I-140 Petition for Alien Worker (no premium processing) and an EB I-485 Adjustment of Status was 17 months. Combined processing times for I-130 Petition for Alien Relative and an FB I-485 Adjustment of Status was 20.6 months. U.S. Citizenship and Immigration Services, “Historical National Average Processing Time for All USCIS Offices.”

2 2017 Yearbook of Immigration Statistics, Table 6, “Persons Obtaining Lawful Permanent Resident Status by Type and Major Class of Admission,” Department of Homeland Security.

3 Under U.S. immigration law, only spouses, minor children, and parents of adult U.S. citizens, as well as a few humanitarian categories, receive green cards without a quota — they face only the first wait. While refugees and diversity lottery applicants do have quotas, the government only accepts applications from the number it plans to admit each year. As a result, no backlog of applicants develops for them. For these reasons, the only major categories of immigrants who deal with quota-caused waits are those entering through the family-sponsored and employment-based “preference” categories.

4 8 U.S. Code § 1152(a).

5 If moving forward the date to apply causes an unexpectedly large surge in applicants — more than the quotas can accommodate — the government will occasionally move the date back suddenly several years. This obviously does not mean that the wait suddenly jumped several years in a single month. It is just the government’s way to stop new applications. To account for this type of movement in the final action dates, this policy analysis uses the annual average to smooth out the changes in the priority dates.

6 The U.S. Department of State released these figures early in response to a lawsuit affecting EB-5 investors. Other 2018 green card numbers are estimated using 2017 figures. Charles Oppenheim, “Declaration of Charles W. Oppenheim,” Feng Wang, et al., v. Michael Pompeo, et al., No. 18-1732, August 24, 2018, p. 4.

7 Vietnam in the EB5 category and Guatemala, Honduras, and El Salvador in the EB4 category are also affected.

8 U.S. Department of State, National Visa Center, “Annual Report of Immigrant Visa Applicants in the Family-Sponsored and Employment-Based Preferences Registered at the National Visa Center as of November 1, 2018.”

9 Cornelius D. Sully, “Various Determinations of Numerical Limits on Immigrants Required under the Terms of the Immigration and Nationality Act as Amended by the Immigration Act of 1990,” Center for Migration Studies, In Defense of the Alien, vol. 16 (1993), p. 13; and U.S. Department of State, “Annual Report of Immigrant Visa Applicants in the Family-Sponsored and Employment-Based Preferences Registered at the National Visa Center as of November 1, 2018.”

10 For employment-based immigrants, this assumes the age distribution of H-1B workers, and for family-sponsored, the age distribution of all legal immigrants in 2017. Number of deaths account for “aging out” in the employment-based lines, but not the family-sponsored lines since every aged-out child will likely be replaced by a child born abroad. This replacement rarely happens in the EB lines because EB workers are almost entirely already in the United States in temporary statuses, so their new children are born in the United States. U.S. Citizenship and Immigration Services, “Characteristics of H-1B Specialty Occupation Workers,” Department of Homeland Security, April 9, 2018; and 2017 Yearbook of Immigration Statistics, Table 8, “Persons Obtaining Lawful Permanent Resident Status by Sex, Age, Marital Status, and Occupation: Fiscal Year 2017.” Deaths estimated based on mortality rates from Felicitie C. Bell and Michael L. Miller, “Life Tables for the United States Social Security Area 1900-2100,” Actuarial Study No. 120, Social Security Administration, August 2005.

11 Green cards for employment-based applicants from Yearbook of Immigration Statistics, 2002-2017. Approved I-140 petitions estimated based on partial year statistics for 2007 and based on total completions for 2008-2011. For I-140 petitions from 2002 to 2007: Citizenship and Immigration Services Ombudsman, “Annual Report 2007,” June 11, 2007. For I-140 petitions from 2008 and 2009: Citizenship and Immigration Services Ombudsman, “Annual Report 2010,” June 30, 2010. For I-140 petitions from 2010 to 2011: U.S. Citizenship and Immigration Services, “Fiscal Year 2011 Highlights Report.” For I-140 petitions from 2012 to 2017: U.S. Citizenship and Immigration Services, “Data Set: All USCIS Application and Petition Form Types,” October 30, 2018.

12 Using the age distribution for children from 2017 Yearbook of Immigration Statistics, Table 8, “Persons Obtaining Lawful Permanent Resident Status by Sex, Age, Marital Status, and Occupation: Fiscal Year 2017.”

13 The Chinese Student Protection Act of 1991, Pub. L. 102-404.

14 The U.S. Department of State released these figures early in response to a lawsuit affecting EB-5 investors. Other 2018 green card numbers are estimated using 2017 figures. Charles Oppenheim, “Declaration of Charles W. Oppenheim,” Feng Wang, et al., v. Michael Pompeo, et al., No. 18-1732, August 24, 2018, p. 4.

15 U.S. Department of State, “Table V (Part 3) Immigrant Visas Issued and Adjustments of Status Subject to Numerical Limitations Fiscal Year 2014,” https://travel.state.gov/content/dam/visas/Statistics/AnnualReports/FY2014AnnualReport/FY14AnnualReport-TableV-PartIII.pdf.

16 David Bier, “America Is One of the Least ‘Generous’ Countries on Immigration,” Cato at Liberty (blog), Cato Institute, January 30, 2018.

17 For a review of studies on EB5 investment see Carla N. Argueta and Alison Siskin, “EB-5 Immigrant Investor Visa,” Congressional Research Service, April 22, 2016, p. 15.

18 David Bier, “Family and Diversity Immigrants Are Far Better Educated Than U.S.-Born Americans,” Cato at Liberty (blog), Cato Institute, January 25, 2018.

19 National Academies of Sciences, Engineering, and Medicine (NASEM), The Economic and Fiscal Consequences of Immigration (Washington: National Academies Press, 2017), p. 349.

20 NASEM, The Economic and Fiscal Consequences of Immigration, p. 215.

21 Shulamit Kahn and Megan MacGarvie, “The Impact of Permanent Residency Delays for STEM PhDs: Who Leaves and Why,” NBER Working Paper No. 25175, October 2018.

22 Alex Nowrasteh, “Boost Highly Skilled Immigration,” Cato Online Forum, Cato Institute, November 17, 2014.

23 David Bier, “Higher-Paid Immigrants Forced to Wait Longer Due to Per-Country Limits,” Cato at Liberty (blog), Cato Institute, October 22, 2018.

24 U.S. Department of State, “Diversity Visa Program, DV 2016-2018: Number of Entries Received during Each Online Registration Period by Country of Chargeability,” November 12, 2018, https://travel.state.gov/content/dam/visas/Diversity-Visa/DVStatistics/DV%20AES%20statistics%20by%20FSC%202016-2018.pdf; U.S. Department of State, Refugee Processing Center, “Admissions & Arrivals,” October 31, 2018; and United Nations High Commissioner for Refugees, “Population Statistics.”

25 For example, Sen. Phil Gramm, 101 Cong. Rec. 7789 (July 12, 1989): “We have tremendous illegal immigration in this country which has not been stopped and yet we are here setting up arbitrary limits that prevent people who came here legally, who have been successful, who have achieved the American dream, from bringing their kinfolk to America. I do not think that is right. I do not think it makes any sense. And I do not think that this is a very bold or daring amendment in terms of doing injustice to the bill before us. I think it is a simple, straightforward amendment. It says that when you reach the point of only 216,000 people left to come in under family preference, after you take out the immediate family, you do not let it go any lower.” The amendment was adopted, and the final bill adopted the 226,000 floor for family preference green cards.

26 Douglas Massey and Nolan Malone, “Pathways to Legal Immigration,” Population Research and Policy Review 21, no. 6 (2002): 473-504.

27 U.S. Census Bureau, “U.S. and World Population Clock”; World Bank, Data, “Population, Total for United States”; and U.S. Bureau of Economic Analysis, Data, “Gross Domestic Product.”

28 Yearbook of Immigration Statistics, 1991-2017.

David J. Bier is an immigration policy analyst at the Cato Institute’s Center for Global Liberty and Prosperity.

Closing Pandora's Box: The Growing Abuse of the National Security Rationale for Restricting Trade

Simon Lester and Huan Zhu

Over its first two years, the Trump administration has aggressively reshaped U.S. trade policy. One of its most controversial initiatives is the expansive use of national security to justify imposing tariffs and quotas. Section 232 of the Trade Expansion Act of 1962 gives the president authority to restrict imports on this basis after an investigation by the Department of Commerce. The administration has already done so for steel and aluminum and is now threatening similar actions on automobiles. The World Trade Organization (WTO) has a special exception for such measures, so there is at least an argument that they are permitted under international law.

However, the administration has taken what was previously considered a narrow and exceptional remedy and broadened it to serve as a more general tool to protect domestic industries. In the domestic arena, there have been court challenges against the tariffs imposed under Section 232 and against the constitutionality of Section 232 itself. In addition, legislation has been introduced in Congress to rein in the president’s authority by requiring congressional approval of tariffs or other import restrictions before they can go into effect. Internationally, many U.S. trading partners responded immediately to the steel and aluminum tariffs with tariffs of their own, and both the U.S. tariffs and the retaliatory tariffs are the subject of litigation that will test the limits of the WTO’s dispute settlement process and the trading system itself.

This study argues that WTO dispute settlement cannot easily resolve disputes of this kind and suggests an alternative mechanism to handle these issues. Instead of litigation, a rebalancing process like the one used in the context of safeguard tariffs and quotas should be utilized for national security measures. Safeguards are a political safety valve that allows the trading system to pursue broad-based liberalization by providing the flexibility to protect domestic industries under certain conditions (ideally, by offering compensatory liberalization elsewhere). By adopting a similar political arrangement for national security trade restrictions, the overall balance in the system can be preserved, permanent damage to the WTO dispute system avoided, and a potentially destructive loophole kept closed.

Introduction

The Trump administration has raised tariffs under a variety of pretenses, but one of the most controversial has been the invocation of national security under Section 232 of the Trade Expansion Act of 1962. So far, only steel and aluminum imports have been assessed tariffs under this statute, but the administration soon may announce tariffs on automobiles and automobile parts, as well as on uranium and titanium sponges.

The administration has already received some strong pushback domestically to the steel and aluminum tariffs. There have been federal court challenges both to the tariff measures and to the constitutionality of the Section 232 statute itself. Meanwhile, Congress is considering various bills to rein in the president’s authority in this regard (Congress delegated some of its constitutional power over tariffs via the Section 232 statute and could take some of it back through new legislation). Congressional action would be the simplest and most straightforward way to restrain the Trump administration’s trade restrictions, but the political hurdle of convincing a Republican Senate to do this appears to be significant.

Beyond the domestic aspects of Section 232, there is also an international crisis over the Trump administration’s invocation of national security to justify tariffs. Many governments consider these actions to be in bad faith and a threat to the world trading system. Trade agreements involve a carefully balanced set of commitments to lower tariffs and other trade barriers. If countries can adopt protectionist measures simply by invoking national security, the trade liberalization achieved through such agreements may start to unravel.

To preserve the system, governments should consider new international trade rules to address trade barriers that have been justified as national security measures. The original drafters of the national security provisions of trade agreements recognized the sensitivity of this issue and hoped for the good-faith application of such measures. But good faith seems to be disappearing from the trade policy world, and additional rules may be needed. In this regard, rules that allow for national security trade barriers but that encourage trade liberalization for other products and services as compensation could prevent a spiral of protectionism and maintain the stability of the trading system.

History of the GATT/WTO Security Exception

From the earliest proposals for an international trade organization, it was clear that the General Agreement on Tariffs and Trade (GATT) would include some sort of exception for security concerns. The specific wording evolved during negotiations, but in the final text of the GATT, Article XXI, titled “Security Exception,” explained that nothing in the agreement shall prevent a government from “taking any action which it considers necessary for the protection of its essential security interests.” When the WTO was created and trade rules were expanded to cover trade in services and intellectual property, the security exception was included for those areas as well.1

For most of the history of the GATT/WTO, governments have been careful to invoke national security only when it was genuinely applicable. The original negotiators recognized the political difficulties that would arise and the potential for abuse, and governments presumably kept these concerns in mind over the ensuing decades.2 In one of the most comprehensive articles on this exception, written in 2011, legal scholar Roger Alford noted, “Member States have exercised good faith in complying with their trade obligations” as “invocations of the security exception have only been challenged a handful of times, and those challenges have never resulted in a binding GATT/WTO decision.” Alford recounted the few instances when tensions over Article XXI arose, including over export controls for Eastern Europe during the Cold War, an embargo of Argentina led by the European Community related to the Falklands War, and the U.S. embargoes on Nicaragua and Cuba.3 As a result of governments’ good-faith efforts, the GATT/WTO system has been able to avoid both major conflict over this issue and having to decide what Article XXI actually means.

The long period of harmony over Article XXI seems to be ending. A WTO dispute between Ukraine and Russia has provided the first WTO panel interpretation of the provision, but the more serious controversy will arise over the U.S. tariffs recently imposed by the Trump administration on imports of steel and aluminum.

The Trump Administration’s Aggressive Use of Section 232

Overview of Section 232

Section 232 of the Trade Expansion Act of 1962 gives the president the authority to adjust imports on national security grounds.4 A decision to impose restrictions is based on an investigation by the Department of Commerce, which includes consultations with the Secretary of Defense. The Department of Commerce investigation can be self-initiated, or it can take place at the request of any U.S. department or agency or at the request of the domestic industry that stands to benefit from the restrictions.

During a Section 232 investigation, the Department of Commerce considers a number of factors, including domestic production needed for national defense requirements, the capacity of domestic industries to meet such requirements, and how the importation of goods affects such industries and affects the capacity of the United States to meet national security requirements. The department must also take into consideration the impact of foreign competition on the economic welfare of individual domestic industries. These factors make clear that the national security justification under the statute is tied closely to economic considerations.

The statute provides that the investigation shall last no longer than 270 days, after which the Secretary of Commerce must submit a report to the president recommending action or inaction.5 Within 90 days of receiving the report, the president must decide whether to act and may either follow the Department of Commerce’s recommendations or take other action.6 Generally speaking, these actions take the form of tariffs or quotas.

To date, there have been 31 Section 232 investigations. In 16 cases, the Department of Commerce determined that the goods did not threaten to impair national security. In 11 cases, the Department of Commerce found that the imported goods threatened to impair national security and provided recommendations to the president. (In 8 of these 11 cases, the president took action.) One case was terminated at the petitioner’s request before a conclusion was reached. Three investigations are still pending.7

The first 24 cases occurred from 1963 to 1994. After that, the mechanism fell into disuse. There was a case brought in 1999 and one in 2001, but then nothing for 16 years. Since President Trump took office in January 2017, there have been five Section 232 investigations, on steel, aluminum, autos and auto parts, uranium, and titanium sponges. The Trump administration’s tariffs on steel and aluminum were the first and second times that trade restrictions have been imposed under this law for a product other than oil or petroleum.8 In the two years since Trump’s election, his administration has clearly tried to expand the scope of this previously narrow remedy.

Both Congress and private actors have tried to push back against the administration’s aggressive use of Section 232. Multiple bills are under consideration in Congress, and court challenges have been initiated against specific tariffs and against the Section 232 statute itself.9 These efforts could lead to a more appropriate allocation of powers between Congress and the president on trade and national security issues. However, as will be seen later, they would not necessarily address the international aspects of trade restrictions that are based on national security, which can arise even without an executive branch that is willing to push the boundaries of the law in order to pursue protectionist policies.

The Section 232 Actions on Steel and Aluminum

Trump’s enthusiasm for heavy manufacturing in general, and for steel and aluminum in particular, was evident during his election campaign. “We are going to put American steel and aluminum back into the backbone of our country,” Trump vowed at a 2016 campaign rally in a former steel town in Pennsylvania.10 Steel and aluminum were at the center of his America First trade policy.

After Trump took office, it quickly became clear that the administration might impose broad tariffs on steel and aluminum imports, using Section 232 as the vehicle. In April 2017, Trump instructed the Department of Commerce to initiate investigations on the national security threat posed by steel and aluminum imports.11 The department immediately initiated Section 232 investigations on steel and aluminum and sought public comments.12

In January 2018, the department issued its reports. It concluded that the importation of certain types of steel and aluminum products threatened to impair the national security of the United States and recommended that the president reduce imports through tariffs or quotas, suggesting three options each for steel and aluminum. For steel it recommended a tariff of 24 percent on all steel imports; a tariff of 53 percent or more on steel imports from 12 countries, plus a quota for all other nations that equaled their exports to the United States in 2017; or a quota of 63 percent of each country’s 2017 steel exports to the United States. For aluminum it recommended a tariff of 7.7 percent on all aluminum imports; a tariff of 23.6 percent on aluminum imports from five countries, plus a quota for all other nations that equaled their exports to the United States in 2017; or a quota of 86.7 percent of each country’s 2017 aluminum exports to the United States.13

On March 8, 2018, Trump issued two proclamations that imposed a 25 percent tariff on steel products and a 10 percent tariff on aluminum products; they were set to take effect on March 23, 2018. Some countries negotiated export quotas to avoid the tariffs, and others received temporary tariff exemptions, but as of June 1, 2018, the tariffs were being imposed on most U.S. trading partners.14 The tariffs have been estimated to apply to $44.9 billion worth of steel and aluminum imports.15

In terms of the actions’ actual purpose, there were reasons to doubt the claimed national security justification: the Defense Department itself was skeptical of the tariffs’ value. Then secretary of defense James Mattis expressed concern that tariffs would sabotage relationships with key allies.16 He also acknowledged that the military’s requirements for steel and aluminum could be satisfied with about 3 percent of domestic production, casting doubt on the concerns about the impact of imports and on the justification of the Section 232 actions.17

Beyond national security, a number of explanations have been offered by Trump to justify the tariffs. At times, he has emphasized that the tariffs would protect the U.S. economy and jobs.18 He has also linked the tariffs to trade negotiations, suggesting that the tariffs have forced U.S. trading partners to the negotiating table.19 A further explanation is that the tariffs are being used to combat unfair trade practices.20 Ultimately, we do not know the true motivation of Trump for these tariffs, and views may vary within the administration. But it is worth noting that Trump often makes it clear that he simply likes tariffs.21

Many U.S. trading partners responded quickly to the imposition of the Section 232 tariffs by imposing retaliatory tariffs. Their argument was that the Section 232 measures are not really about national security but are in fact more like a safeguard measure designed to protect domestic industries from injury caused by imports. As a result, the special rebalancing provisions of the Safeguards Agreement (discussed in more detail below) apply here and justify immediate retaliation.22

In addition to the retaliatory tariffs, from April to August 2018 nine governments requested consultations at the WTO, which is the first step in WTO litigation. From November 2018 to January 2019, dispute settlement panels were established to hear the cases. In late January, the panels were appointed, and litigation will soon begin.23

The complainants’ legal claims are fairly straightforward, focusing on GATT Article I (MFN treatment) and GATT Article II (tariff commitments). As discussed in the next section, the U.S. defense constitutes a serious threat to the system, as the United States has invoked GATT Article XXI. As repeatedly stated by the United States at the relevant meetings of the WTO’s Dispute Settlement Body (DSB), in the U.S. view, after Article XXI is invoked the panel cannot even hear the case.24

While the steel and aluminum tariffs have caused great friction, an even bigger test of Section 232 lies ahead: the Department of Commerce has completed a Section 232 investigation on imports of automobiles and auto parts, and Trump is considering whether to take action against imports of these products based on the allegation that they are a national security threat.25 The value of trade potentially affected would be much larger than that of steel and aluminum. It is estimated that the Section 232 auto tariffs could cover more than $200 billion of auto and auto parts imports.26 Some U.S. trading partners have already warned that they will retaliate if tariffs are imposed.27

The Threat to the WTO Dispute Settlement Mechanism

The administration’s use of Section 232 presents a challenge to the WTO dispute settlement system, and even to the WTO itself, because of the invocation of GATT Article XXI. WTO dispute settlement has had success over the years in adjudicating core trade issues such as ordinary tariffs, trade remedy tariffs, and regulatory trade barriers. It cannot induce governments to remove the measures that violate WTO rules in every case, but it has a fairly good record here. However, there are limits to what can be achieved, and it is clear that some sensitive measures cannot be dealt with through WTO litigation. National security measures pretty clearly fall into this category, and thus litigation of these measures has been carefully avoided over the years. But after decades of restraint over litigating the scope and meaning of Article XXI, the Section 232 measures threaten to undermine the system: a WTO ruling that takes the U.S. view would open a Pandora’s box, inviting a proliferation of national security justifications for trade restrictions, while a ruling that rejects the U.S. view risks prompting the Trump administration to pull out of the WTO.

The problem with applying and interpreting Article XXI in these cases is part legal and part political. In terms of the law, there is no simple answer on the provision’s meaning. The use of the word “considers” in subparagraphs (a) and (b) of Article XXI gives the provision a self-judging nature, but the question is how far to take this. Alford describes the interpretive possibilities as follows:

According to one interpretation, a Member State can decide for itself whether a measure is essential to its security interests and relates to one of the enumerated conditions. Another interpretation would recognize a Member State’s prerogative to determine for itself whether a security exception is applicable, but would impose a good faith standard that is subject to judicial review. Under a third interpretation, a Member State can decide for itself whether “it considers” a measure to be “necessary for the protection of its essential security interests,” but the enumerated conditions are subject to judicial review.28

Questions about the scope of the exception were raised during the GATT negotiations, but they are not easy to resolve as an interpretive matter.29

This legal uncertainty is reflected in a political divide. Two leading powers, the United States and Russia, take one view of the provision’s interpretation, while most of the WTO membership takes another (as made clear by the parties’ submissions in a recently decided WTO case called Russia—Traffic in Transit). On one side, the United States and Russia argued that the WTO security provisions are nonjusticiable, meaning it is left entirely to governments to decide whether to impose trade restrictions for this purpose. In their view, once a party has invoked Article XXI, the WTO panel can no longer hear the case.30 In contrast, other members believe that WTO panels must engage in some degree of scrutiny of measures for which Article XXI has been invoked.31

The WTO panel in the Russia—Traffic in Transit case recently provided the first word on the issue of interpretation of GATT Article XXI, taking the view that the provision is not entirely self-judging and leaving room for some panel scrutiny.32 Other ongoing WTO panels that are hearing cases on similar issues may approach the interpretation of this provision similarly, but it is possible that there will be some variation in approaches. The ­Russia—Traffic in Transit panel report was not appealed, which means that the Appellate Body has not considered the issue. At some point in the future, the Appellate Body may provide additional clarification. The state of the Appellate Body reappointment process adds some complexity here. Currently, the United States is blocking the appointment of new Appellate Body judges, which has created a backlog of appeals and the possibility that by the end of the year there will not be enough people on the Appellate Body to hear cases.33

However, a problem larger than figuring out the proper interpretation of the provision looms: if a WTO panel or the Appellate Body were to rule that Article XXI did not justify the U.S. steel and aluminum tariffs, would the United States comply with the ruling? Given the U.S. rhetoric on the issue, it seems unlikely.34 (Worse yet, the Trump administration may pull out of the WTO. It has long complained that the organization’s dispute-settlement rulings are unfair to the United States.)35 In the event of noncompliance, the only remedy is for the DSB to authorize a suspension of concessions under which the complainants could impose tariffs or other retaliation of their own, but most of the complainants have already retaliated, relying on the legal theory that the U.S. measures are safeguard measures and that rebalancing under Safeguards Agreement Article 8 is permitted immediately.36 As a matter of law, such an assertion has little basis and further undermines confidence in the system.37 Responding to violations of the rules with other violations of the rules leaves everyone wondering if the rules have any value.

As a result, it is unclear how WTO dispute settlement can help in this case. Trump’s Section 232 actions called attention to the possibility of a broad national security loophole and triggered a response that could be characterized as abuse of the safeguards-rebalancing rules. In this environment there is a real worry that the system will no longer function.

While rebalancing as practiced by U.S. trading partners here may fail to solve the problem, the concept may nevertheless offer a way forward for this kind of dispute. Adapting it for use directly in the context of national security could provide a solution to the impasse. An attempt to expand the existing safeguard rules for rebalancing beyond their scope undermines the rule of law, but a new rebalancing regime designed specifically for the national security context could help restore it.

Rebalancing under the Safeguards Agreement

The idea of some type of rebalancing in response to safeguard measures originates in the reciprocal trade agreements negotiated by the United States and other countries in the 1930s. The first modern safeguard provision appeared in the United States-Mexico Reciprocal Trade Agreement of 1942. It provides that when a country will “withdraw or modify a concession” as a safeguard to protect domestic industry, “it shall give notice in writing to the Government of the other country as far in advance as may be practicable and shall afford such other Government an opportunity to consult with it in respect of the proposed action”; if no agreement is reached, the other government “shall be free within thirty days after such action is taken to terminate this Agreement in whole or in part on thirty days’ written notice.”38 The consultations provide an opportunity for the parties to reach agreement on compensation, for example, lowering tariffs on other products.39

This idea was carried over to the GATT negotiations, where the United States proposed the initial text. At this point, “terminat[ion]” was replaced with “suspension of obligations or concessions” as the appropriate response when compensation could not be agreed on.40 The provision was refined further during the negotiations, and the London Draft of the GATT refers to suspension of “substantially equivalent obligations or concessions.”41 In the final version of the GATT, the relevant provisions appear in Article XIX, paragraphs 2 and 3.42

Practice under the GATT suggests that compensation was used extensively early on but tapered off over the years. As of 1987, there had been 20 instances of agreement or offers of compensation (10 cases during 1950-1959, 8 in 1960-1969, 1 in 1970-1979, and 1 in 1980-1987).43

During the Uruguay Round of trade negotiations, the specific requirements for rebalancing were elaborated further in the Safeguards Agreement. Under Article 8 of the agreement, a government proposing to apply a safeguard measure or seeking an extension of one shall try to maintain a substantially equivalent level of concessions and other obligations, and in order to achieve this objective, “the Members concerned may agree on any adequate means of trade compensation for the adverse effects of the measure on their trade.”44 If compensation cannot be agreed on, retaliation is permitted almost immediately in cases where the justification for the safeguard measure is based only on a relative increase in imports, but it has to wait three years if there has been an absolute increase in imports.45

Why Rebalance at All?

The basic idea behind rebalancing is as follows. When countries negotiate trade agreements, the concessions and other obligations they take on—including commitments to reduce tariffs, commitments to avoid certain protectionist domestic laws, and various other requirements—are part of an overall balance. Roughly speaking, each side accepts a particular degree of liberalization or other obligations, which constitutes the balance that was agreed to.

There are times when things get out of balance, however. One example is when a government that is a party to the agreement believes that another party has taken actions that violate the agreement. After adjudication of the dispute, if a violation is found, the offending government can remove or modify the measure or offer some sort of compensation. If it does neither, it will be subject to trade retaliation by the complaining government in an amount equivalent to the effect of the violation. In this way, balance is restored.

In some circumstances, adjudication is not first required. In the context of safeguards, the very nature of the measure indicates that the balance has been upset. If a government imposes a tariff or quota as a safeguard measure, with rare exceptions that measure will constitute withdrawal or modification of a tariff concession or breach of the obligation not to impose quotas. When that happens, the balance needs to be restored. Ideally, rebalancing would take place through compensation in the form of trade liberalization in other areas by the government imposing the safeguard measure. However, when compensation cannot be worked out, the affected countries are allowed to raise their own tariffs in an equivalent amount. Such a scenario may not be ideal, but it acts as a deterrent against the abuse of safeguard measures.

A Rebalancing Proposal for National Security

Under WTO rules, governments may impose tariffs and other trade restrictions beyond what was agreed for a variety of reasons, including for temporary protection as safeguards; as a response to dumping or subsidies; for environmental, public morals, or public health reasons; or in support of national security. Whether to make rebalancing available is a political and policy decision. Traditionally, immediate rebalancing has been available only for safeguards, but the case could be made for rebalancing in other contexts too.

In the national security context, there are several arguments for allowing a similar kind of rebalancing. First, retaliation is already happening. In the case of the Section 232 tariffs, as noted above, a number of governments have declared the measures to be safeguard measures and have applied retaliatory tariffs. Instituting rebalancing rules for these cases would provide an opportunity to replace retaliatory tariffs with compensatory liberalization, an option unavailable under the current retaliatory tariffs because the United States does not accept that the safeguards rules even apply. In addition, in circumstances when compensation is impossible, rebalancing would formalize the retaliation process and make it more orderly, limiting the possibility of a trade war that spirals out of control.

Second, as explained earlier, WTO dispute settlement probably cannot help here. A ruling that the Section 232 measures violate GATT obligations and are not justified under Article XXI is unlikely to make the United States comply, and retaliation is already being imposed by many countries even without authorization.

Third, national security measures are like safeguard measures in the sense that there is often no debate about their consistency with the rules. It is acknowledged that they violate the rules, and national security is offered as the excuse. This makes national security more like safeguard measures than, say, environmental regulations, where the responding party generally argues that the regulation is not in violation.

Finally, rebalancing would afford an important benefit by limiting the abuse of the provisions. A full WTO dispute proceeding typically lasts from two to four years, depending on the complexity of the case. National security measures are particularly susceptible to abuse due to the vagueness of the national security exception’s language, and rebalancing would reduce the time that governments can impose import restrictions for national security purposes without any response from trading partners.

Rebalancing of national security measures can draw on principles from the safeguards arena but would have its own characteristics and a different focus.

One of the primary goals of national security rebalancing would be transparency. As things stand now, governments can impose trade restrictions for protectionist purposes and invoke Article XXI only later, during litigation. It would be preferable to have all national security trade restrictions notified as such immediately, to foster proper debate and discussion. Bringing these cases to light early, and having WTO members think carefully about the proper scope of the exception, would be of great value. To this end, the national security rebalancing rules should encourage notification and explanation of national security tariffs by offering more time before rebalancing can be applied when restrictions have been notified. For example, rebalancing could be immediate when Article XXI is invoked during litigation without any prior notification or explanation, but could be delayed six months to a year when notification has been given.

To help oversee the discussions, a WTO Committee on National Security Measures should be formed to examine these measures and any proposed rebalancing. Members should meet regularly to consider the practice in this area.

Compensation is the preferred approach to rebalancing. Ideally, governments that impose tariffs or other restrictions on specific products for national security purposes would offer to reduce tariffs or restrictions on other products or services. Adding services as a compensation option may be significant. One of the reasons compensation has worked less well in recent years in the safeguards context is that as tariff levels have decreased, it has become harder for countries invoking safeguards to find alternative products on which they could give meaningful concessions.46 Adding services to the mix would open a wide range of compensation possibilities, especially considering how few services commitments most countries have made and thus how much potential exists for additional liberalization.

Negotiations over the extent of the compensation will never be easy, but they can be facilitated through carefully designed rules. For example, there could be a requirement that in order to impose an import restriction for national security reasons, a government must identify three products or services for which it would consider negotiating compensatory liberalization.

When compensation cannot be agreed upon, however, retaliation designed to restore balance is a possibility. To prevent abuse, a quick arbitration process should be established for determining whether any retaliation is commensurate with the economic impact of the national security restrictions in question.
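
Such an arbitration would need a benchmark for what counts as commensurate. One simple candidate, resembling the way the retaliating governments calibrated their steel and aluminum responses, is the extra duty burden the national security measure imposes. The following is a minimal sketch with invented figures, not a description of any existing WTO methodology:

    # Hypothetical rebalancing arithmetic; all figures are invented for illustration.
    affected_imports = 10e9  # annual imports hit by a national security tariff
    tariff_rate = 0.25       # the tariff imposed
    duty_burden = affected_imports * tariff_rate  # about $2.5 billion a year
    # Retaliation (or compensatory liberalization) would be capped at an
    # equivalent burden, e.g., a 10 percent duty on $25 billion of the
    # restricting country's exports, or tariff cuts of the same value elsewhere.
    print(f"${duty_burden / 1e9:.1f} billion per year")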

Conclusion

Not every dispute can be resolved through litigation. U.S. constitutional law recognizes this with its political question doctrine, under which courts decline to decide issues better left to the political branches. A similar principle may be appropriate for certain international trade disputes.

The proposals outlined here are designed to help provide a political solution to disputes over trade restrictions based on national security. They are fairly straightforward as a policy matter, although much more debate is needed.

The politics are more complicated, of course. The Trump administration is the main party pushing the boundaries of national security restrictions, so for the time being the United States is unlikely to be open to any reforms. The views of a future U.S. administration are uncertain but may not differ considerably from the current position.

As a result, any hope for change may have to come from other governments as they negotiate bilaterally, regionally, or on a plurilateral basis with countries that are interested in pursuing this idea. Governments that are concerned about the abuse of national security measures can incorporate provisions along these lines in agreements they sign that do not involve the United States. In this way, the norm can spread, with the hope that its usefulness will be demonstrated and with the aim of eventual inclusion in a multilateral agreement.

Notes

1. See Article XIV bis, General Agreement on Trade in Service, and Article 73, Agreement on Trade-Related Aspects of Intellectual Property Rights.

2. Simon Lester, “The Drafting History of GATT Article XXI: The U.S. View of the Scope of the Security Exception,” International Economic Law and Policy Blog, March 11, 2018; Simon Lester, “The Drafting History of GATT Article XXI: Where Did ‘Considers’ Come From?,” International Economic Law and Policy Blog, March 13, 2018.

3. Roger P. Alford, “The Self-Judging WTO Security Exception,” Utah Law Review 3 (2011): 697, 706-25. See also Tania Voon, “The Security Exception in WTO Law: Entering a New Era,” American Journal of International Law 113 (2019): 45-50.

4. 19 U.S.C. §1862.

5. 19 U.S.C. §1862(b)(3).

6. 19 U.S.C. §1862(c).

7. Congressional Research Service, “Section 232 Investigations: Overview and Issues for Congress,” April 2, 2019.

8. In addition, in a case on machine tools that was initiated in 1983, a formal decision on the Section 232 case was deferred, and the president “instead sought voluntary restraint agreements starting in 1986 with leading foreign suppliers and developed a domestic plan of programs to help revitalize the industry.” Congressional Research Service, “Section 232 Investigations: Overview and Issues for Congress,” Table B-1, April 2, 2019.

9. Legislative proposals aimed at restricting presidential power under Section 232 include the Bicameral Congressional Trade Authority Act of 2019, sponsored by Senator Pat Toomey and others (Bicameral Congressional Trade Authority Act of 2019, S.287/H.R.940, 116th Cong. [2019]); and the Trade Security Act of 2019, sponsored by Senator Rob Portman and others (Trade Security Act of 2019, S.365/H.R.1008, 116th Cong. [2019]). With regard to the courts, the Swiss company Severstal filed a case challenging the Section 232 steel tariffs, but after the Court of International Trade rejected a motion for a temporary restraining order, the parties filed a joint motion to dismiss. Inside U.S. Trade, “CIT Judge Unconvinced Severstal Can Succeed on Merits in 232 Challenge,” InsideTrade.com, April 5, 2018. In addition, the American Institute for International Steel brought a case claiming that the Section 232 statute is unconstitutional, which is currently pending before the Court of International Trade. Inside U.S. Trade, “In Steel Case, CIT Judges Probe Broad Executive Powers under Section 232,” InsideTrade.com, December 21, 2018.

10. Susan Jones, “Trump: ‘Put American Steel and Aluminum Back into the Backbone of Our Country,’” CNSNews, June 29, 2016.

11. Administration of Donald J. Trump, “Memorandum on Steel Imports and Threats to National Security,” April 20, 2017; Administration of Donald J. Trump, “Memorandum on Aluminum Imports and Threats to National Security,” April 27, 2017.

12. Department of Commerce, “Notice of Request for Public Comments and Public Hearing on Section 232 National Security Investigation of Imports of Steel,” 82 Fed. Reg. 19205, April 26, 2017; Department of Commerce, “Notice of Request for Public Comments and Public Hearing on Section 232 National Security Investigation of Imports of Aluminum,” 82 Fed. Reg. 21509, May 9, 2017.

13. Office of Public Affairs, U.S. Department of Commerce, “Secretary Ross Releases Steel and Aluminum 232 Reports in Coordination with White House,” press release, February 16, 2018.

14. Congressional Research Service, “Section 232 Investigations: Overview and Issues for Congress,” Table D-1, April 2, 2019.

15. Sherman Robinson et al., “Trump’s Proposed Auto Tariffs Would Throw U.S. Automakers and Workers under the Bus,” Peterson Institute for International Economics, May 31, 2018.

16. Ellen Mitchell, “Trump Tariffs Create Uncertainty for Pentagon,” The Hill, March 11, 2018.

17. Mitchell, “Trump Tariffs Create Uncertainty for Pentagon.”

18. Donald J. Trump (@realDonaldTrump), “We must protect our country and our workers. Our steel industry is in bad shape. IF YOU DON’T HAVE STEEL, YOU DON’T HAVE A COUNTRY!,” Twitter post, March 2, 2018, 5:01 a.m.

19. Andrew Mayeda, “Trump Turns Steel Tariffs into NAFTA Bargaining Chip,” Bloomberg.com, March 6, 2018.

20. A White House fact sheet explained, “President Donald J. Trump is addressing global overcapacity and unfair trade practices in the steel and aluminum industries by putting in place a 25 percent tariff on steel imports and 10 percent tariff on aluminum imports.” White House, “President Donald J. Trump Is Addressing Unfair Trade Practices That Threaten to Harm Our National Security,” Fact Sheet, March 8, 2018.

21. Donald J. Trump (@realDonaldTrump), “I am a Tariff Man. When people or countries come in to raid the great wealth of our Nation, I want them to pay for the privilege of doing so. It will always be the best way to max out our economic power. We are right now taking in $billions in Tariffs. MAKE AMERICA RICH AGAIN,” Twitter post, December 4, 2018.

22. Canada imposed 10-25 percent tariffs on approximately $12.05 billion of U.S. exports. Mexico imposed tariffs ranging from 7 to 25 percent on $3.52 billion of U.S. exports. The European Union imposed 10-25 percent duties on $2.91 billion worth of U.S. products. China imposed 15-25 percent tariffs on $2.52 billion worth of U.S. products. Russia and Turkey also imposed tariffs on selected U.S. products, ranging from 4 to 140 percent. See Congressional Research Service, “Section 232 Investigations: Overview and Issues for Congress,” April 2, 2019, figure 5; International Trade Administration, “Current Foreign Retaliatory Actions.”

23. Simon Lester, “Panels Composed in the Section 232/Retaliation Cases,” International Economic Law and Policy Blog, January 28, 2019.

24. World Trade Organization, “Panels Established to Review U.S. Steel and Aluminum Tariffs, Countermeasures on U.S. Imports,” November 21, 2018.

25. David Lawder and David Shepardson, “U.S. Agency Submits Auto Tariff Probe Report to White House,” Reuters, February 17, 2019.

26. Robinson et al., “Trump’s Proposed Auto Tariffs Would Throw U.S. Automakers and Workers under the Bus.”

27. Doug Palmer and Megan Cassella, “U.S. Allies Warn of Retaliation If Trump Imposes Auto Tariffs,” Politico, July 19, 2018.

28. Alford, “The Self-Judging WTO Security Exception.”

29. Lester, “The Drafting History of GATT Article XXI: The U.S. View of the Scope of the Security Exception”; Lester, “The Drafting History of GATT Article XXI: Where Did ‘Considers’ Come From?”; Lester, “More GATT Article XXI Negotiating History,” International Economic Law and Policy Blog, May 1, 2018.

30. Russia states that “neither the Panel nor the WTO as an institution has a jurisdiction” over the dispute. Russia’s first written submission, para. 7, cited in “European Union Third-Party Written Submission, Russia—Measures Concerning Traffic in Transit (DS512),” para. 10, November 8, 2017. Along the same lines, the United States argues, “The text of Article XXI, establishing that its invocation is non-justiciable, is supported by the drafting history of Article XXI. In particular, certain proposals from the United States during that process demonstrate that the revisions to what became Article XXI reflect the intention of the negotiators that the defence be self-judging, and not subject to the same review as the general exceptions contained in GATT 1994 Article XX.” “Responses of the United States of America to Questions from the Panel and Russia to Third Parties, ­Russia—Measures Concerning Traffic in Transit (DS512),” para. 3, ­February 20, 2018.

31. For instance, the EU argues that “Article XXI of GATT 1994 is a justiciable provision and that its invocation by a defending party does not have the effect of excluding the jurisdiction of a panel.” “European Union Third-Party Written Submission, Russia—Measures Concerning Traffic in Transit (DS512),” para. 21, November 8, 2017; and Australia argues, “[T]his deference to Russia does not preclude the Panel from undertaking any review of Russia’s invocation of Article XXI(b) or dispense with the Panel’s obligation to undertake an objective assessment of the matter before it, including an objective assessment of the facts of the case.” “Australia’s Third-Party Executive Summary, Russia—Measures Concerning Traffic in Transit (DS512),” para. 30, Feb­ruary 27, 2018.

32. WTO Panel Report, “Russia—Measures Concerning Traffic in Transit,” WT/DS512/R, adopted April 26, 2019.

33. James Bacchus, “How to Solve the WTO Judicial Crisis,” Cato at Liberty (blog), August 6, 2018.

34. In a recent DSB meeting, the United States reiterated that its invocation of Article XXI should not be reviewed by the panel: “[A WTO review] would undermine the legitimacy of the WTO’s dispute settlement system and even the viability of the WTO as a whole.” Inside U.S. Trade, “Azevêdo: Challenging U.S. 232 Tariffs at WTO a ‘Risky’ Strategy,” InsideTrade.com, December 6, 2018.

35. Gina Chon, “Trump’s Anti-WTO Rhetoric Hurts America First,” Reuters.com, December 11, 2017.

36. For an overview of rebalancing under the Safeguards Agreement, see Matthew R. Nicely and David T. Hardin, “Article 8 of the WTO Safeguards Agreement: Reforming the Right to Rebalance,” St. John’s Journal of Legal Commentary 23 (2008): 699.

37. Simon Lester, “How to Determine If a Measure Constitutes a Safeguard Measure,” International Economic Law and Policy Blog, August 15, 2018.

38. United States of America and Mexico, Reciprocal Trade Agreement, article XI, para. 2, December 23, 1942, 57 Stat. 833 (1943), E.A.S. No. 311.

39. John Jackson, World Trade and the Law of GATT (Charlottesville, VA: Michie Company, 1969), p. 565.

40. Suggested Charter for an International Trade Organization of the United Nations, article 29, para. 2, Publication 2598, Washington: Department of State.

41. London Draft of a Charter for an International Trade Organization, article 34, para. 2, Report of the First Session of the Preparatory Committee, UN Conference on Trade and Employment, UN Doc. E/PC/T/33 (Oct. 1946).

42. GATT, article XIX, paras. 2 and 3, April 15, 1994, 1867 U.N.T.S. 187.

43.“Drafting History of Article XIX and Its Place in GATT,” Background Note by the Secretariat, MTN.GNG/NG9/W/7, para. 22, September 16, 1987; and GATT Analytical Index, p. 525.

44. Article 8, para. 1, Agreement on Safeguards, April 15, 1994, WTO Agreement, Annex 1A.

45. Article 8, para. 3, Agreement on Safeguards states, “The right of suspension referred to in paragraph 2 shall not be exercised for the first three years that a safeguard measure is in effect, provided that the safeguard measure has been taken as a result of an absolute increase in imports and that such a measure conforms to the provisions of this Agreement.”

46. John Jackson, The World Trading System (Cambridge, MA: MIT Press, 1994), p. 168; Chad Bown and Meredith Crowley, “Safeguards in the World Trade Organization,” February 2003. (“Although compensation for safeguard measures was often ­negotiated in the 1960s and 1970s, as tariff rates fell and more products came to be freely traded, as a practical matter, it became difficult for countries to agree on compensation packages”); see also Matthew R. Nicely and David T. Hardin, “Article 8 of the WTO Safeguards Agreement: Reforming the Right to Rebalance,” St. John’s Journal of Legal Commentary 23 (2008): 699, 716.

Simon Lester is associate director and Huan Zhu is a research associate at the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies.

The Community Reinvestment Act in the Age of Fintech and Bank Competition


Diego Zuluaga

We have serious reservations as to whether any regulatory agency could have the wisdom necessary to administer such a system to the maximum benefit of competing economic interests.
— Robert Bloom, acting Comptroller of the Currency, March 28, 19771

The Community Reinvestment Act (CRA) requires banks to lend to low- and moderate-income (LMI) households in the areas where they take deposits. But it has become obsolete.

When the CRA came into force in 1977, banks were the main source of loans for home buyers and small businesses, and restrictions on bank branching posed a high barrier to competition. Today’s competitive environment is much changed. The removal of branching restrictions has allowed banks to expand and consolidate — leading to a 77 percent increase in the number of bank offices since the CRA’s passage. Furthermore, a growing share of mortgage and small-business lending now comes from financial institutions that are not subject to the CRA. In fact, LMI borrowers represent a larger share of these institutions’ borrowers than they do for banks, which are subject to the CRA.

Meanwhile, mounting evidence suggests the CRA is either ineffective or damaging. Before the financial crisis, community groups touted the act’s influence in lowering lending standards. Empirical research also shows that banks’ risk taking increases ahead of their CRA evaluations — contravening the CRA’s requirement that lending be consistent with bank safety and soundness. In cases where CRA lending is not riskier, evidence suggests that banks may be “skimming the top” — lending to high-income residents of low-income communities, thus meeting their regulatory mandate but failing to reach the people the CRA intends to help.

There is a strong case for repealing the CRA in favor of alternative policies that better achieve its goals. It would be a mistake to expand the CRA to cover online (fintech) lenders and credit unions, which already serve LMI borrowers as well as, or better than, many lenders that are subject to the act. If the CRA remains in place, its regulations should change to allow banks to trade their CRA obligations in order to encourage lender specialization and efficiency.

Introduction

The Community Reinvestment Act (CRA) is a 42-year-old statute that requires depository institutions “to demonstrate that their deposit facilities serve the convenience and needs of the communities in which they are chartered to do business … consistent with the safe and sound operation of such institutions.”2 The CRA ostensibly seeks to improve the welfare of low- and moderate-income (LMI) Americans by assessing and rating depository institutions on the basis of how much they lend to, invest in, and serve the communities in which LMI Americans live. Racial minorities were, and continue to be, disproportionately represented among LMI communities, so the CRA is considered part of the anti-discrimination legislation of the late 1960s and 1970s.3

For its first 18 years of existence, the CRA was “a vague statement of principle without much real-world effect.”4 Notably, a series of investigative articles in the Atlanta Journal-Constitution in 1988 documented large and persistent differences in the amount of bank credit extended to majority black communities compared to majority white ones.5 The reports uncovered evidence of redlining: the denial of services to poor and minority geographic regions.6 It was only after 1995, when changes to CRA enforcement shifted the focus from banks’ ex ante lending commitments to actual lending outcomes, that bank lending and other activities in LMI communities appeared to increase.7

But whether this increase was consistent with the safe and sound operation of banks is unclear. There is evidence that CRA-regulated institutions engage in significantly riskier lending in advance of CRA assessments, compromising their safety and soundness.8 This paper shows that, because such risky lending results in higher rates of default and harms the financial well-being of the borrowers who struggle to repay their loans, it is not clear that LMI borrowers benefit from the CRA.

There are still other reasons to question the present-day usefulness of the CRA. The 1977 act was inspired by a long-standing American tradition of bank localism, which has since ceased to characterize the U.S. banking system. With the removal of branching restrictions and statutory ceilings on savings and demand deposits, the concern that motivated the CRA’s drafters to mandate local credit extension, namely that potential borrowers would face few alternative suppliers, has become moot. Additionally, the use of CRA ratings in regulators’ deliberations on bank mergers creates incentives for inefficient lending and distracts attention from more important factors for consumer welfare, such as local bank competition.9

This paper argues for repealing the CRA, making the case that the act remains ill-defined in its policy objectives and arbitrary in its assessment practices, and that it is liable to harm borrowers and bank depositors. By contrast, reductions in regulatory barriers to branching and the growth of online lenders have significantly increased LMI Americans’ access to banking services, indicating that competitive markets can more efficiently achieve the CRA’s goals of serving these communities. The evidence presented here indicates that the case for outright repeal of the act is quite strong: short of repeal, its current system of ambiguous and bureaucratic assessment should at least be replaced with a system of tradable obligations related to the lending, investment, and service provision that the CRA seeks to encourage.10

A Brief Overview of the Community Reinvestment Act

Metrics and Requirements

The CRA applies to all insured depository institutions except credit unions.11 It is enforced by a depository institution’s primary regulator, which may be the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (FRB), or the Federal Deposit Insurance Corporation (FDIC).12 As of 2018, the FDIC was the primary regulator for 3,617 depository institutions out of the 5,644 institutions subject to the CRA. The OCC and FRB conducted CRA examinations of 1,210 and 817 institutions, respectively.13 Since 1990, regulators’ CRA reports have been available to the public, enabling activist groups to use CRA ratings to oppose a bank’s expansion or merger on the grounds that it has failed to satisfy its obligation to meet the needs of the communities where it conducts business.14

CRA examiners use multiple measures to evaluate a bank’s lending to low- and moderate-income borrowers within a given assessment area.15 Assessment areas consist of one or more metropolitan statistical areas or metropolitan divisions where an institution subject to the CRA has its main office, branches, and deposit-taking automated teller machines (ATMs), as well as surrounding areas where the institution has originated or purchased a substantial portion of its loans.16 LMI borrowers are those whose incomes fall below 80 percent of the median income in the metropolitan area where a bank branch is located, as well as those who live in census tracts with a median income that is 80 percent or less of the area median, as determined by the Census Bureau.17
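Stated mechanically, both LMI tests reduce to a simple 80 percent threshold rule. The sketch below illustrates the two definitions just described; the function names and dollar figures are illustrative only, not drawn from any regulatory system.

def is_lmi_borrower(borrower_income, area_median_income):
    """A borrower is LMI if income falls below 80 percent of the
    median income of the surrounding metropolitan area."""
    return borrower_income < 0.80 * area_median_income

def is_lmi_tract(tract_median_income, area_median_income):
    """A census tract is LMI if its median income is 80 percent or
    less of the area median."""
    return tract_median_income <= 0.80 * area_median_income

# Example: a borrower earning $45,000 in a metro area with a median
# income of $60,000 falls below the $48,000 threshold and counts as LMI.
assert is_lmi_borrower(45_000, 60_000)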

In 1995, CRA regulations underwent a series of significant revisions that shifted the focus of its assessments from processes — how a bank planned to increase lending to LMI communities — to outcomes — how much of its lending and other activities actually went to LMI borrowers and communities.18 The revised regulations also created separate assessment tiers for banks of different asset sizes. In 2005, those tiers were indexed to the consumer price index.19 As of January 1, 2019, institutions with less than $321 million in assets qualified as “small banks” for CRA examination purposes. Those with more than $321 million but less than $1.284 billion in assets were designated “intermediate small” banks.20 Currently, examinations for these small and intermediate small banks are intended to be less onerous, and less frequent, than those for larger banks. Additionally, banks that receive high CRA examination grades (regardless of size) are rewarded with longer periods between examination cycles. The time between examinations can thus range from 12 to 60 months, depending on the bank.21

Since the 1995 revisions, these examinations have taken the form of a one-, two-, or three-pronged test, graduated according to banks’ size. For banks with assets above the intermediate-small threshold, CRA regulators use the full three-pronged test, which evaluates the lending, investment, and services that banks provide to LMI customers and communities. The lending test evaluates the volume and distribution of an institution’s loans across borrowers’ income levels and geographic regions.22 The investment test examines the institution’s community development investments, such as activities that revitalize low-income geographic regions and disaster areas.23 The service test evaluates the geographic distribution of a bank’s branches and ATMs, as well as how effectively the bank’s services promote community development.24 There is some overlap in the activities that each of the three tests is meant to evaluate, and CRA regulations recognize this overlap by excluding activities counted under the lending or service tests from consideration in the investment test.25 The three tests apply only to depository institutions above the regulatory thresholds for small and intermediate small banks. Small banks are evaluated according to their lending performance only; intermediate small banks are assessed on both their lending and community development activities.26

Depository institutions subject to the CRA receive a rating according to their performance on each of the relevant tests. Each rating, in turn, is based on a qualitative assessment of the institution’s performance on the test’s different dimensions. For example, an institution’s rating is “outstanding” if, among other behaviors, it exhibits “excellent responsiveness to credit needs in its assessment area.”27 However, institutions are downgraded a notch, to “high satisfactory,” if regulators deem their responsiveness to local credit needs merely good.28 Perhaps unsurprisingly, the Treasury has criticized CRA ratings for lacking clear guidelines and leaving unexplained the criteria by which a bank can meet each level of performance.29

Table 1 reproduces the number of points awarded by CRA regulators for a given level of performance under each of the assessment tests. The lending test is the most heavily weighted and therefore the most important: for each level of performance, it counts at least as much as the other two categories combined.30

Table 2 shows how each aggregate point score translates into an overall CRA rating. The preponderance of the lending test means that no institution can receive an overall “satisfactory” rating unless it scores at least “low satisfactory” on the lending test.31
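Because Tables 1 and 2 are not reproduced in this text, the point values in the sketch below follow those published in the interagency CRA regulations for large banks and should be read as assumptions. The sketch shows how the three test ratings roll up into an overall rating, including the lending-test floor just described.

# Point values per test rating (large-bank matrix; assumed values).
LENDING_POINTS = {"outstanding": 12, "high satisfactory": 9,
                  "low satisfactory": 6, "needs to improve": 3,
                  "substantial noncompliance": 0}
OTHER_POINTS = {"outstanding": 6, "high satisfactory": 4,
                "low satisfactory": 3, "needs to improve": 1,
                "substantial noncompliance": 0}

def composite_rating(lending, investment, service):
    """Translate the three test ratings into an overall CRA rating."""
    points = (LENDING_POINTS[lending] + OTHER_POINTS[investment]
              + OTHER_POINTS[service])
    if points >= 20:
        overall = "outstanding"
    elif points >= 11:
        overall = "satisfactory"
    elif points >= 5:
        overall = "needs to improve"
    else:
        overall = "substantial noncompliance"
    # No institution can be rated satisfactory or better overall without
    # scoring at least "low satisfactory" on the lending test.
    if (overall in ("outstanding", "satisfactory")
            and LENDING_POINTS[lending] < LENDING_POINTS["low satisfactory"]):
        overall = "needs to improve"
    return overall

# Strong lending carries weak investment and service to a pass ...
assert composite_rating("outstanding", "needs to improve",
                        "needs to improve") == "satisfactory"
# ... but strong investment and service cannot rescue weak lending.
assert composite_rating("needs to improve", "outstanding",
                        "outstanding") == "needs to improve"

Note that the first combination passes with 14 points, while the second, despite earning 15 points, is capped below satisfactory; this is what the lending test’s preponderance means in practice.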

The CRA’s Flawed Foundations

The Community Reinvestment Act was one of several measures — the others being the Fair Housing Act (FHA, 1968), Equal Credit Opportunity Act (ECOA, 1974), and Home Mortgage Disclosure Act (HMDA, 1975) — aimed at reducing credit discrimination against poor and minority communities and otherwise improving those communities’ access to financial services.32

The policymakers who supported the CRA in 1977 worried about financial institutions engaging in “capital export.”33 This refers to the practice of lending deposits outside of the communities where those deposits are collected (and where the depositors themselves typically reside). Proponents of the CRA argued that depository institutions, most of which have enjoyed federal deposit insurance since 1933, had an obligation to lend within the communities from which they received their deposits.34 Sen. William Proxmire (D-WI), then chairman of the Senate Banking Committee and the CRA’s sponsor, argued that:

A public charter conveys numerous economic benefits and in return it is legitimate for public policy and regulatory practice to require some public purpose. … The authority to operate new deposit facilities is given away, free, to successful applicants even though the [sic] authority conveys a substantial economic benefit to the applicant. Those who invest in new deposit facilities receive a semiexclusive franchise. … The Government limits entry [that] would unduly jeopardize existing financial institutions. The Government also restricts competition and the cost of money to the bank by limiting the rate of interest payable on savings deposits and prohibiting any interest on demand deposits. The Government provides deposit insurance through the FDIC [and] ready access to low cost credit through the Federal Reserve Banks or the Federal Home Loan Banks. … The regulators have … conferred substantial economic benefits on private institutions without extracting any meaningful quid pro quo for the public.35 [Emphases added.]

Federal and state authorities limited entry into the U.S. banking system, argued Proxmire, so it was only fair to require banks to lend in the communities from which they were able to extract rents — profits in excess of what the banks could earn in a competitive market — courtesy of government regulation.

The idea that bank deposits should remain in the areas where the depositors live is a long-standing one in American banking. Small agricultural interests were early supporters of this type of localism and were happy to support granting bankers local monopoly charters in exchange for the bankers’ commitment to lend to them in both good times and bad.36 Localism also informed the persistence of regulatory restrictions on intrastate and interstate branching well into the 1980s.37 Indeed, the prevalence of unit banking — that is, a statutory prohibition on operating more than one bank office — was deeply rooted in popular culture for much of American history. For example, the 1946 movie It’s a Wonderful Life — “one of the most beloved in American cinema”38 — presents small-town thrift banker George Bailey (Jimmy Stewart) as the community’s bulwark against evil big business.39

Yet economically, the practice of confining savings to the localities where they are collected is very costly. A useful function of banks is to pool depositor savings and to deploy those funds as loans. Pooling facilitates beneficial diversification: acting on their own, individuals can only commit funds to one or a handful of projects, exposing themselves to the specific risks of those borrowers each time they do so. Banks, on the other hand, can allocate funds among hundreds of thousands of different lending opportunities. Diversification therefore reduces portfolio risk for a given level of returns and raises returns for a given level of risk, letting depositors earn more safely than they could on their own.40 Furthermore, trade in credit — that is, borrowing and lending funds — is like other forms of trade in that it is mutually beneficial, enabling the depositor to earn a satisfactory rate of return and enabling the borrower to secure capital for consumption and investment.41 Just as restricting trade in goods is harmful to the welfare of consumers and producers, restricting trade in credit (for instance, by requiring the borrower to reside in the same location as the depositor) can only reduce profitable opportunities for credit extension.
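A back-of-the-envelope calculation illustrates the point. Assuming, for simplicity, independent and identically risky loans (real-world loan losses are correlated, so actual diversification gains are smaller), the standard deviation of a pooled portfolio’s return falls with the square root of the number of loans while the expected return is unchanged; the return and risk figures below are hypothetical.

import math

expected_return = 0.06  # hypothetical per-loan expected return
loan_stdev = 0.20       # hypothetical per-loan standard deviation

# With N equally sized, independent loans, portfolio risk shrinks by 1/sqrt(N).
for n_loans in (1, 100, 10_000):
    portfolio_stdev = loan_stdev / math.sqrt(n_loans)
    print(f"{n_loans:>6} loans: expected return {expected_return:.1%}, "
          f"std. dev. {portfolio_stdev:.2%}")
# Prints 20.00%, 2.00%, and 0.20% respectively: a depositor whose funds
# back 10,000 loans bears a tiny fraction of the risk of one who
# finances a single borrower directly.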

Moreover, bank branching enables banks to diversify their loan portfolios across assets and places, which in turn makes bank failure less likely.42 Banks with multiple branches can offset losses in some hard-hit areas with earnings from relatively less affected areas. For example, when the Great Depression began, California had the most developed bank branch network in the United States.43 Branch banking in that state not only made the banks that operated branches more stable; it also made the unit banks in competition with branch banks more stable than unit banks in places without branch banks, suggesting that competitive pressure provided a healthy check on inefficient unit bank practices.44 Another example is the Canadian banking system — characterized by a small number of large banks — which has exhibited a great deal of stability over 150 years, without detriment to depositor rates of return.45

Although restrictions against branching were the main impediment to bank loan diversification at the time of the CRA’s passage, the CRA further undermined geographic diversification, which helps to mitigate risk, by implicitly requiring that a share of deposits be lent out where those deposits are collected.46 Because only certain types of lending — mainly mortgages and small-business loans — and investment count for CRA credit, the CRA also reduces a bank’s opportunities for asset diversification. Furthermore, and contrary to the philosophy that informed the CRA, the deposit-taking activities of banks benefit communities quite apart from any related lending operations. Deposit facilities give their holders access to the benefits of loan diversification. They also contribute to depositors’ credit histories, facilitating future use of other credit products. Finally, deposit accounts give depositors a valuable reward for their funds, in the form of low-cost banking and payments, and, for some demand deposits, regular interest payments.

At the 1977 Senate hearings, Proxmire argued that the CRA would ultimately assist bank diversification by “alleviating fears that a more liberal branching policy would be inimical to community welfare,” thereby making the removal of branching restrictions more politically palatable.47 However, his suggestion overlooked the fact that, if branching were in fact permitted, the CRA would limit both its attractiveness to banks and banks’ ability to take full advantage of it.

It is difficult to picture a scenario, absent considerable information asymmetries, in which political direction of bank lending could improve upon the allocation resulting from market prices: if attractive projects in the local community can yield an adequate return for a given risk, there is no need for a political directive to mandate lending. If there are better opportunities elsewhere, such a mandate is harmful to depositor returns and banks’ safety and soundness. Proponents of the CRA in the 1970s and 1980s claimed that bank redlining warranted political intervention. As argued below, however, CRA regulations may be undermining LMI communities’ efforts toward financial inclusion in today’s much-changed banking environment.

The regulators charged with enforcing the CRA (the OCC, the FRB, and the FDIC) voiced some of these concerns in 1977. Then comptroller of the currency Robert Bloom warned that the CRA would harm credit institutions “established primarily to serve the needs of a particular segment of the United States population nationwide,” using as an example the case of an American Indian bank that aimed to offer banking services to that group on a nationwide basis.48 Fed Chairman Arthur Burns worried that mandating “standards for setting the proportion of total loans that an institution should allocate to local credit would necessarily be arbitrary.”49 FDIC Chairman Robert Barnett raised the concern that the CRA could “discourage financial institutions from making applications for offices in neighborhoods where funds are badly needed because of the reexamination that this would entail in [the] areas where they already have offices.” Barnett also worried about increased concentration of bank branches in affluent areas, and the duplicative reporting burden on institutions that were already subject to the Home Mortgage Disclosure Act.50

The Changing U.S. Banking Landscape

Two structural trends in U.S. banking since 1977 further strengthen the case for reconsidering the CRA: bank consolidation as a result of the removal of branching restrictions, and the growing market share of online (fintech) lenders.51 This section considers the merits of current CRA assessments in light of the rise of branch banking. A later section argues that the rise of fintech lending bolsters the case for repealing the CRA altogether.

Many of the anti-competitive restrictions that Proxmire cited to justify the CRA in 1977 have since been removed, improving the welfare of bank customers and weakening the case for the CRA’s implicit local lending mandates. The most important of these policy changes has been the steady liberalization of bank branching, that is, the ability of a single bank to operate multiple offices within states and beyond their home state. In 1970, only 13 states allowed banks to operate branches, and no state allowed out-of-state banks to operate branches within its borders.52 From the 1970s onward, however, a growing number of states authorized in-state and out-of-state branching, so that by 1990, all but five states allowed intrastate branching, and the same number (although not the same states) permitted interstate branching.53 This process of steady liberalization culminated in 1994 with the passage of the Riegle-Neal Act, which removed federal restrictions on branching.54

Branching deregulation ushered in rapid bank consolidation, with the average annual number of bank mergers more than doubling between the 1960s and the 1990s.55 The number of FDIC-insured commercial banks peaked at 14,496 in 1984 (see Table 3). It stood at 10,453 by the passage of the Riegle-Neal Act, dropping to 7,279 on the eve of the financial crisis, and to 4,918 by the end of 2017. The number of branches, on the other hand, expanded from 42,731 in 1984 to 79,163 by 2017, only slightly lower than its 2009 peak of 83,130.56 This means that the number of bank offices (headquarters plus branches) is much higher today than at any time before the consolidation trend started — albeit below the number of bank offices in operation just before the financial crisis.

The CRA was passed during a period of extensive branching restrictions. At the time, there was a worry that without strict regulation, communities where locally headquartered banks did not lend would struggle to find a competing supplier. In addition, between 1933 and 1986 the Federal Reserve set an interest rate ceiling on bank savings deposits through Regulation Q.57 This regulation also banned interest on demand deposits until 2011. By restricting the interest that banks could offer to depositors, Regulation Q subsidized bank funding, creating rents for banks above the return they would earn in a competitive market. The weakened competition, both from Regulation Q’s subsidies and from branching restrictions, arguably strengthened the case for local lending mandates that forced banks to share some of their regulatory rents with customers.58 However, these rents were a product of interest rate controls and anti-competitive regulations, not market factors, so they could have been better addressed by repealing Regulation Q and liberalizing bank branching sooner. At any rate, the rationale for community reinvestment that Regulation Q provided disappeared with its repeal.59

Branching deregulation had several beneficial effects. First, it increased the efficiency of the banking sector by facilitating the expansion of the best-performing institutions and removing anti-competitive protections for the worst-performing ones. Greater competition in turn lowered both the share of bad loans on bank balance sheets and the average loan interest rate.60 Economic growth increased as states liberalized bank branching.61 Furthermore, thanks to branching liberalization, there are more banks serving any given individual community today than there were at the height of branching restrictions.62 The increased banking options now available to consumers (regardless of income level) have made deposit rates more competitive, increased loan options, and enabled consumers to benefit from large fee-free networks across the United States.63 One way to illustrate this expansion of choice and its effects is by examining the long-term increase in the average distance between small business borrowers and their lenders — from 100 miles in 1996 to 250 miles by 2016.64 When prospective borrowers have access to more distant lenders, the local bank’s willingness to lend can no longer determine whether borrowing will occur.

Yet the CRA remains in place, restraining further competition and growth by limiting where and to whom institutions can lend. Indeed, today’s CRA regulations do not just require banks to lend in the communities where they take deposits, they also ban branches deemed “primarily for the purpose of deposit production.”65 In other words, despite the fact that banks have had the ability to open branches outside their home state since the mid-1990s, regulators have the authority to close any branches they believe to be conducting insufficient local lending.66 While CRA enforcement authorities do not appear to have yet used this power,67 even before they acquired it there were reports of delayed and abandoned bank mergers because of pending CRA examinations and concerns that a bank’s low CRA rating might pose an obstacle to the merger.68

Contemporary Problems with the CRA

The CRA Encourages Banks to Make Riskier Loans

Although it is clear that the CRA places constraints on the way in which banks allocate credit, the act’s proponents have long argued that CRA loans are too small a share of total lending to constitute a prudential risk, and that there is no evidence that CRA loans are riskier or less profitable than other loans.69

The experience of the financial crisis suggests otherwise. In fact, pre-crisis testimony from community organization representatives explicitly pointed to the CRA as one cause of overly lenient underwriting standards. In a 2007 report, for example, the National Community Reinvestment Coalition touted “higher debt-to-equity ratios than … conventional loans,” “flexible underwriting standards,” “low or no down payment[s],” “commitments by secondary market institutions [notably, the government-sponsored enterprises (GSEs)] to purchase loans,” and “second review” of denied applications as consequences of the CRA.70 Raising loan-to-value ratios, relaxing borrower standards, and pushing secondary market institutions to buy more bank loans all make lending riskier.

The financial crisis cast a bright light upon the extent to which many households, particularly LMI ones, had taken on large housing debts.71 There is considerable evidence that the regulatory push to extend mortgage lending to LMI communities, which accelerated in the mid-1990s, and the accompanying promise that the GSEs (Freddie Mac and Fannie Mae) would buy those mortgages, drove that debt increase.72 The extent to which the CRA is responsible for unprofitable lending to LMI households remains a matter of debate, but the $4.5 trillion in CRA commitments between 1992 and 2007 tracks closely with the excess in affordable housing loans made by the GSEs (relative to their historical norm) during that same period.73

Even if the CRA was not the main contributor to bad mortgage credit growth in the run-up to the financial crisis, it may have enabled the proliferation of credit by giving aggressive lending practices the respectable cover of community reinvestment. Pre-crisis accounts of the CRA’s “success” support this hypothesis, as they focus on the growth of LMI lending and homeownership, rather than the CRA’s suitability for borrowers or its implications for bank safety and soundness.74 Even before the financial crisis, CRA supporters recognized the difficulty of attributing increases in low-income lending to the act, since both high rates of economic growth and other government policies — such as the loosening of GSE standards — could better explain the observed increase in lending to those communities.75

However, proving that the CRA was a success requires showing that it led to higher lending volumes without compromising the lenders’ safety and soundness. Pre-crisis evidence of the CRA’s impact, even when it suggested significant growth in LMI lending by depository institutions, failed to show that such credit was sound.76 The crisis and its aftermath, on the other hand, showed that mortgage lending on lenient terms could harm financial institutions and borrowers alike. It is not surprising that institutions subject to the CRA, especially those looking to grow and merge with others, would increase their LMI lending, since regulators take CRA ratings into account when approving bank expansions.77 Such lending may even have benefited banks and their managers in the short term. But was it good for borrowers, bank shareholders, taxpayers, and the economy in the long run?

Some of the evidence says no. A 2012 National Bureau of Economic Research (NBER) paper looking at CRA lending between 1999 and 2009 finds that banks significantly increased their lending around the time of CRA examinations, and that such loans were riskier. Specifically, lending volume increased by 5 percent and default rates increased by 15 percent in the quarters surrounding a bank’s examination.78 The increase in lending is particularly large and significant for banks with more than $50 billion in assets, which is consistent with the hypothesis that larger institutions, being more likely to expand and merge, will have a greater incentive to strive for high CRA ratings in hopes of having their mergers approved.79 The study’s authors also find that the increase in risky lending became more pronounced in the later years of the housing boom.80 Their finding agrees with the contention made by, among others, former Federal Reserve governor (and 1990s CRA reform architect) Lawrence Lindsey that, to avoid a CRA rating downgrade, before the crisis banks increasingly reached out to riskier borrowers as the demand from more creditworthy borrowers was satisfied.81

The NBER paper has been criticized for focusing on the quarters surrounding CRA examinations, thus failing to recognize that these examinations themselves evaluate lending in periods well before those dates.82 The authors counter, however, that depository institutions have an incentive to concentrate their CRA lending close to the exam so as to minimize recorded default rates, which might fall foul of the CRA’s requirement that lending be consistent with safety and soundness.83

In contrast, a 2013 Federal Reserve bulletin found that LMI loan delinquency rates in banks’ CRA assessment areas were lower than those outside their assessment areas, suggesting — according to the authors — that the impact of the CRA on financial fragility, if any, was comparatively minor.84 However, the study cited only one year of evidence; furthermore, it showed that credit scores of LMI borrowers within the surveyed banks’ assessment areas were higher, and those borrowers much less likely to be subprime, than in the case of LMI borrowers outside CRA assessment areas.85 More creditworthy borrowers are less likely to default. Their preponderance among the LMI borrower cohorts of banks’ assessment areas suggests that banks might be “skimming the top”: lending to the most creditworthy borrowers in LMI areas to fulfill their CRA requirements while also minimizing risk.86 Such behavior may satisfy regulators, but it contradicts the assertion that CRA loans serve the marginal borrowers and communities that the statute ostensibly targets.

Furthermore, evidence that CRA-motivated lending was less risky than other types of lending to LMI borrowers does not prove that the CRA was beneficial, or even neutral, for bank balance sheets and the health of the wider banking system. Indeed, as recently as 2006, regulators issued draft rules to exempt CRA-related equity investments — such as providing capital and employment for community development purposes — from higher Basel II capital charges.87 Pro-CRA activists encouraged this move, which made it more attractive to make CRA investments at the expense of bank safety and soundness.88

Yet another problem with citing increases in LMI lending as evidence for the economic gains associated with the CRA is that the opportunity costs of CRA-induced lending may exceed the benefits. Gross growth rates of LMI loans ignore opportunity costs. Consider the scenario in Table 4, where a bank with $10 million worth of available funds faces a choice of four projects to finance.

In the absence of the CRA, and assuming for simplicity that all prospective borrowers face a similar interest rate, the bank would pick the projects with the highest likelihood of repayment; that is, Projects A, B, and C. Under the CRA, however, if the bank believes that its loan to Project B will not suffice to get the bank a high CRA rating, it may choose Project D over Project C because the loan applicant in D (with income below 80 percent of the area median) qualifies for LMI status, whereas the applicant in C does not.
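Because Table 4 itself is not reproduced in this text, the repayment probabilities in the sketch below are purely hypothetical; the sketch captures only the structure of the distortion, in which a binding LMI target displaces the safer applicant C with the riskier applicant D.

# Hypothetical projects: (name, repayment probability, borrower is LMI).
# Loan sizes, interest rates, and recoveries are held equal, so expected
# return ranks projects by repayment probability alone.
projects = [("A", 0.95, False), ("B", 0.90, True),
            ("C", 0.85, False), ("D", 0.80, True)]
slots = 3        # the bank can fund three of the four projects
lmi_target = 2   # assumed binding CRA requirement

# Unconstrained: fund the three projects most likely to repay.
by_quality = sorted(projects, key=lambda p: p[1], reverse=True)
unconstrained = by_quality[:slots]

# CRA-constrained: reserve slots for LMI applicants first, then fill
# the remainder with the best of what is left.
lmi_picks = [p for p in by_quality if p[2]][:lmi_target]
others = [p for p in by_quality if p not in lmi_picks]
constrained = lmi_picks + others[:slots - len(lmi_picks)]

print(sorted(p[0] for p in unconstrained))  # ['A', 'B', 'C']
print(sorted(p[0] for p in constrained))    # ['A', 'B', 'D']

In this toy example, the average repayment probability of the funded projects falls from 0.90 to roughly 0.88, a cost borne by the bank’s shareholders and by the rejected applicant C.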

While the philosophy of the CRA implies that lending to applicant D over applicant C has positive benefits beyond its return to the bank, it is important to note that rejecting applicant C comes with costs: first, to the bank’s shareholders, who will receive a lower expected return on capital; second, to applicant C, who, while not low-income by the regulatory definition, is not much better off than applicant D and still places below the median income of the bank’s assessment area. In fact, a 2000 Fed survey showed that 44 percent of respondent banks found their CRA mortgage loans to be less profitable than their other mortgage loans.89 Furthermore, 52 percent of respondents indicated that CRA-related mortgage loans were costlier to originate, on a per dollar basis, than non-CRA loans.90 These results suggest that, for many institutions, CRA lending involves higher costs than non-CRA lending — and these costs are passed on to other borrowers and shareholders as well.

The 2007-2009 financial crisis illustrated the harm that a single-minded drive to increase mortgage lending could do to vulnerable communities. It was a surfeit of politically induced housing credit, rather than a scarcity of it, that left households badly exposed when the crisis hit.91 Yet the CRA continues to assess depository institutions primarily on their lending to LMI areas, despite evidence that such lending is riskier and costlier to underwrite.

Compliance with the CRA Is Unnecessarily Burdensome

There are four levels of CRA performance: “outstanding,” “satisfactory,” “needs to improve,” and “substantial noncompliance.”92 Between 2006 and 2014, no more than 3.5 percent of depository institutions subject to the CRA received an overall rating below satisfactory in any year. More than 90 percent of institutions received a satisfactory rating in 2014.93 These statistics have caused some analysts to conclude that compliance with the CRA is not a burden on depository institutions.94 Their assumption is that, if the CRA were onerous, more institutions would fail their CRA exams.

In fact, things are not so simple. An institution’s CRA ratings tell us only that it passed the evaluation; they say nothing about the resources it dedicated toward doing so. Just as the low number of bank failures prior to the 2007-2009 financial crisis did not imply the absence of financial fragility, one cannot conclude that a high pass rate means the CRA is not burdensome.95 The resources a bank must dedicate to earn a satisfactory or outstanding evaluation can be substantial, imposing direct and indirect costs on banks, shareholders, and consumers. Indeed, the CRA is responsible for 7.2 percent of community banks’ compliance costs, according to a Federal Reserve Bank of St. Louis survey.96 That is despite the fact that more than 80 percent of community banks, that is, those with less than $10 billion in assets, are subject to the less burdensome small or intermediate small CRA assessment protocols. As compliance costs represent between 5 and 10 percent of community banks’ noninterest expenses overall, the CRA can make a perceptible dent in bank operating margins, particularly for smaller banks, which have lately seen higher compliance costs and lower rates of return.97

The CRA Fails to Promote Financial Inclusion

At the time of the CRA’s passage, there was a concern that certain depository institutions would systematically refuse to lend to minority communities, even when doing so would not mean taking on undue credit risk.98 Investigative reporting, notably by the Atlanta Journal-Constitution, continued to expose this practice of redlining in the years immediately after the CRA went into effect.99 However, 42 years later, the barriers to financial inclusion for low-income and minority communities are different. The CRA not only fails to address those barriers; it may contribute to the difficulty of overcoming them.

According to the FDIC, as of 2017 there were 8.4 million U.S. households (6.5 percent of households) without a bank account. Another 24.2 million have only limited access to banking services and must instead resort to alternative — usually costlier — providers.100 Unbanked rates are much higher for minorities: 16.9 percent of black households and 14 percent of Hispanic ones are unbanked, compared to 3 percent of white ones. Additionally, more than half of black and Hispanic households with incomes below $30,000 report no mainstream source of credit.101

Two commonly cited reasons for lacking a bank account are not having enough money to deposit and account fees being too high.102 Regulatory compliance costs are a principal driver of both account fees and minimum deposit requirements to avoid those (and other) fees. As mentioned earlier, CRA loans are costlier to originate than other loans, and CRA compliance costs account for 7.2 percent of all community bank compliance costs.103 While this is lower than the share of bank costs related to the Bank Secrecy and Truth in Lending Acts, it is still significant, especially considering that overall bank compliance costs have increased in recent years.104 Thus, while it might appear that the CRA’s low-income lending mandates promote financial inclusion among lower-income borrowers, its indirect impact on account charges likely reduces access to deposit and credit services among the very populations the CRA is meant to serve.

The decline of small banks (despite a steady rise in the number of bank offices), further bank consolidation since the financial crisis, and rising compliance costs have all contributed to the phenomenon of so-called banking deserts. These are census tracts with no bank branches within a 10-mile radius of their centers.105 As of 2016, 3.7 million Americans lived in banking deserts, while another 3.9 million lived in areas that may soon lose their last bank office.106 Banking deserts are mostly rural and therefore do not account for a large share of the unbanked population.107 Nevertheless, some states with a high population share living in banking deserts, such as Arizona, Nevada, and New Mexico, also have above-average unbanked rates (Table 5).
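The banking-desert definition is mechanical and easy to compute from branch location data. The sketch below assumes tract centers and branch locations given as latitude-longitude pairs; the coordinates are arbitrary.

from math import asin, cos, radians, sin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * asin(sqrt(a))  # mean Earth radius of about 3,959 miles

def is_banking_desert(tract_center, branches, radius_miles=10):
    """A tract is a banking desert if no branch lies within 10 miles
    of its center, per the definition used in the text."""
    return all(miles_between(*tract_center, *branch) > radius_miles
               for branch in branches)

# Example: the nearest branch is roughly 25 miles away, so the tract
# qualifies as a banking desert.
print(is_banking_desert((35.0, -106.0), [(35.0, -105.55)]))  # True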

Median household incomes in banking deserts are lower than they are in nondeserts. Even for potential banking deserts, median household income sits 10 to 20 percent below the nationwide median.108 The population of banking deserts is therefore not much different in its socioeconomic characteristics from the groups that the CRA aims to help.

Growing regulatory compliance expenses contribute significantly to the rising cost of operating bank branches, increasing the likelihood of branch closures and contributing to the spread of banking deserts.109 CRA branching restrictions worsen this problem by discouraging the establishment of bank branches and ATMs in sparsely populated areas. Without the implicit lending requirements of the CRA, depository institutions might be more willing to take deposits in, and to serve, low-density geographies. CRA regulations also discourage banks from expanding to areas with few lending opportunities by making it costlier to operate a branch, thus reducing banks’ incentive to open new branches or maintain old ones in marginally profitable locations.

The CRA Has Not Resolved “Rational Redlining”

Even some critics believe that the CRA may be justified if informational asymmetries specific to LMI communities lead banks to ration credit more tightly there than among other populations — what one author calls “rational redlining.”110 For example, credit rationing can occur if a community has witnessed only a limited number of local property transactions, generating insufficient data on home prices and complicating banks’ capacity to make accurate loan appraisals.111 Uncertain appraisals, in turn, raise the down payments demanded by banks, further dampening loan volumes.112

However, even if rational redlining is a problem in some present-day LMI communities, CRA regulations can be of only limited help. The CRA’s assessment policy links lending obligations to deposit-taking. But that cannot resolve rational redlining: if banks take deposits within a community, they will have access to information about its credit quality, economic conditions, and property values. On the other hand, banks lacking such information are unlikely to operate or take deposits in that community precisely because they face uncertainty regarding available lending opportunities. CRA regulations target institutions that already have operations in local communities and have information about local market conditions. A better way to facilitate greater credit extension is to increase competition by attracting new lenders into communities, thereby helping to correct any information failures. As argued below, the CRA works against these efforts.

The CRA Is a Harmful Industrial Policy

One way that the CRA encourages bank compliance is by directing regulators to take a bank’s CRA rating into account when evaluating its application for a deposit facility — that is, whenever that bank wants to set up a branch or merge with another bank.113 On the one hand, such use of CRA ratings encourages regulators and activist groups to oppose the applications of banks that they perceive as having underperformed in their lending to LMI communities.114 Indeed, groups such as the National Community Reinvestment Coalition have explicitly linked periodic waves of bank mergers to increases in those banks’ commitments to lend, suggesting that the merger would not have taken place without the banks’ lending promises.115 On the other hand, a bank’s positive CRA record can lead regulators to overlook other concerns related to its activities, such as the impact a large bank merger could have on local credit spreads — the difference between the interest banks charge borrowers and what they pay to depositors. Because the credit spread is a proxy measure for competition within a local banking market, it is often a more economically significant indicator of a merger’s impact on consumer welfare than the merging banks’ CRA ratings.116

The CRA has thus become a tool of industrial policy, rewarding institutions for meeting political goals and threatening to punish those perceived to have fallen short. This facet of the CRA raises three important concerns. First, it may reduce efficiency by blocking consolidation that would lower bank operating costs and increase loan diversification, which, as discussed above, promotes safety and soundness. There has been a steady trend of bank consolidation since passage of the Riegle-Neal Act, but many small banks remain: for example, as of the third quarter of 2018, the FDIC reported 1,335 supervised institutions with assets below $100 million, with an average return on equity 3 percentage points below that for larger banks.117 Both figures suggest that some gains from economies of scale remain to be realized in U.S. banking. Foreclosing such efficiency-enhancing mergers would harm depositor returns and undermine bank safety and soundness.

Second, the CRA may reduce competition if a bank merger gives the resulting institution sufficient market power to raise prices. The United States has a long history of monopolistic and oligopolistic local banking markets. Competition only started to become the norm from the late 1980s onward, and the evidence suggests it has had positive effects on economic growth and consumer well-being.118 The CRA, by making it costlier to establish branches and expand operations, can have a deleterious impact on local bank competition.119 This possibility is particularly worrying in the post-crisis U.S. banking landscape, which is characterized by very few new banks120 and strong restrictions on the number of new charters issued by the FDIC.121

Finally, CRA regulations weaken the incentive for banks to guard against unprofitable lending, if banks perceive the benefits from easier consolidation to outweigh the losses incurred from CRA loans.122 Between 1992 and 2007, cumulative CRA lending commitments increased 500-fold, suggesting that banks were willing to spend heavily to please their regulators once branching liberalization increased merger and expansion opportunities.123 As discussed earlier, the evidence also suggests that loans timed to coincide with banks’ CRA evaluations are riskier than other loans.124

Credit volumes are an imperfect proxy, and certainly not a substitute, for the welfare of communities and households. In 1977, the CRA focused on LMI lending because there was evidence of widespread redlining, abetted by nationwide restrictions on bank branching. Four decades later, the CRA, as currently enforced, raises many concerns regarding the effectiveness of its LMI lending mandates, its compliance cost to banks, and its impact on prudential standards. It also fails to address the contemporary issues facing LMI communities, such as the high rate of unbanked households and the growth of banking deserts.

Better Ways to Promote Community Development

If the goal of the CRA is to raise the real incomes of LMI communities, then a more diverse array of policies offers greater promise for achieving it. These alternative policies would also have fewer adverse consequences for bank safety and soundness than the CRA’s implicit lending mandate. They include liberalizing zoning laws to lower the cost of housing, reducing low-income tax burdens, curbing occupational licensing to facilitate employment and entrepreneurship, lowering tariffs on food and clothing imports, and relaxing overly strict childcare regulations.125 The high cost of living in many urban areas hurts LMI communities in particular, but that is a problem that neither banks nor financial regulation can readily solve.

Of course, improving LMI households’ access to credit can also improve the well-being of those households. But there are better ways to facilitate LMI access to credit than by imposing CRA mandates. The most effective alternative is to ease entry into the lending business.

Facilitate Lender Entry

Regulators should make entry into local lending markets easier by issuing more charters and reducing regulatory barriers for nondepository institutions. The rate of new bank creation has slowed dramatically, from an average of more than 100 banks per year between 1990 and 2008 to a total of just 13 in the eight years between 2010 and 2018.126 While low interest spreads and heavier post-crisis regulatory burdens account for some of the decline, the FDIC also toughened its capital and supervisory regime for new banks in 2009, discouraging newcomers’ entry into lending markets.127 Since then, the stabilization of the financial system and expansion of the economy, together with new leadership at the FDIC, have created an opportunity to ease new charter policy for the benefit of depositors and borrowers.128

In the meantime, the growth of online lending has further reduced the loan market share of CRA-subject depository institutions.129 The volume of CRA lending has thus become less representative of overall credit conditions in LMI communities. Online lenders have devised ways to allocate credit profitably and competitively without an established relationship with prospective borrowers. Indeed, recent evidence suggests that online lenders can allocate credit more efficiently — with higher loan volumes at lower interest rates — than depository institutions.130 Online lending has therefore reduced the potential for asymmetric information to lead to rationing in local credit markets.131 Other research suggests that online lenders tend to serve communities with a small number of banks and bank branches, which increases competition and credit availability in areas to which banks may not previously have fully catered.132

The OCC’s proposed special-purpose national bank charter for fintech firms promises to make nationwide operations by nonbank lenders easier and less expensive (currently, the cost of state-by-state licensing and examinations can reach up to $30 million).133 Comptroller Joseph Otting has previously estimated that as many as 30 to 40 online lenders could apply for a fintech charter.134 Unfortunately, state-level legal challenges to the charter have led to policy uncertainty, discouraging firms from taking up the OCC’s offer for the time being.135

Branching liberalization and the advent of online lending have allowed for freer local bank entry, substantially reducing the likelihood of persistently low lending rates in LMI communities. For example, as Table 6 shows, recent Home Mortgage Disclosure Act data reveal that on average, 26.2 percent of mortgages originated by the largest nonbank lenders (including fintech) are issued to LMI borrowers. Among those same lenders, 23.9 percent of all mortgage loans are issued to minorities. By comparison, LMI borrowers and minorities account for 20 percent and 22.2 percent of mortgages from the largest banks, respectively.136 (Together, the top 25 banks and nonbanks — including mortgage companies and credit unions — account for 33.6 percent of all loan originations.137) In short, market developments are already solving the primary issues that the CRA has spent the past 42 years trying to address.

Additionally, racial desegregation of many inner-city neighborhoods, itself a welcome development, has weakened the link between geography and CRA-targeted populations. For example, CRA lending in the historically black Philadelphia neighborhood of Point Breeze now seems to be reaching mostly newer (and better-off) white residents.138 Because CRA regulations evaluate a census tract’s LMI status by comparing its median income with the median income of the metropolitan area, loans to better-off borrowers in LMI tracts still count for CRA assessment purposes.139 Urban desegregation, perversely, has undermined the CRA’s effectiveness in promoting lending to vulnerable communities.

Let Fair Lending Laws Help

The objectives of the CRA remain vague and ill-defined. Regulators should clarify these objectives, both among themselves and to eligible institutions and community organizations. Is the goal of the CRA to fight lending discrimination, to increase lending in LMI communities, to raise living standards in LMI communities, or to achieve other public-interest goals?

If the CRA is supposed to fight discrimination, then the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA) are better tools: they specifically address the disparate treatment of prospective borrowers according to race, gender, age, marital status, or other protected characteristics, and they prohibit lending discrimination in the very mortgage and small-business lending markets that the CRA targets.140 There are concerns that the Consumer Financial Protection Bureau has been overbroad in its interpretation of the ECOA in recent enforcement actions.141 Yet, unlike the CRA, the FHA and the ECOA focus on preventing the unfair treatment of individual vulnerable borrowers. Also unlike the CRA, they explicitly ban discriminatory practices and instruct financial regulators to prosecute violations.142 These are more efficient means of achieving public-policy goals than the CRA’s implicit requirement that banks lend in specific locations or risk having future expansion or merger applications rejected.

Increasing lending to poor communities is not a sound policy goal on its own, as it can encourage unprofitable loans that end up harming borrowers and bank balance sheets. There was a time when banks could profitably ration credit and exclude vulnerable populations due to branching restrictions and interest-rate caps. But the liberalization of bank branching in the 1980s and 1990s and the growth of nonbank lenders have increased competition in local banking markets and given consumers a more diverse set of credit options. Today, ensuring that public policies do not drive credit to borrowers who can ill afford it is as important as enabling financial institutions to serve all communities.

How Should the CRA Change if It Remains in Place?

There is reason to believe that the CRA is outdated and ill-suited to the current needs of LMI communities. Given how radically banking and credit markets in the United States have changed since 1977, Congress should strongly consider repealing the act. If the CRA remains in force, however, it would be a mistake to expand its mandate to cover nonbanks, such as fintech lenders and credit unions. Short of repealing the act, Congress and regulators should consider more efficient ways for depository institutions to discharge their CRA duties, such as by establishing a system of tradable lending obligations.

Allow Fintech Firms to Remain Exempt

The growth of online lending has led proponents of the CRA to call for the act’s extension to “branchless” fintech lenders.143 Such an extension would not be possible under the present CRA evaluation framework, which defines eligible assessment areas as those where institutions have offices, branches, or ATMs.144 Moreover, the rationale for mandating community reinvestment by banks — that they enjoy government deposit insurance — fails to apply to fintech and other nonbank lenders.145

The proposed extension would also have negative practical effects. Nonbank fintech lenders have gained a substantial foothold in mortgage lending over the last decade, owing to their technological advantage as well as to new regulations placed on banks.146 Indeed, critics of the CRA — as well as some regulators — have warned of the act’s potentially adverse impact on depository institutions’ ability to compete with nonbank lenders.147 The answer to these criticisms, however, is not to subject fintech lenders to the same CRA regulations: that would discourage their participation in marginal lending markets, which are precisely the markets that fintech lenders are more likely than traditional lenders to serve. Their withdrawal would have a disproportionately adverse impact on credit conditions and welfare in those communities.148

In 1977, politicians justified the CRA by claiming that the government was underwriting bank credit risk and granting economic privileges to banks through restricted charters and federal deposit insurance.149 That argument does not apply to fintech lenders, who neither hold charters nor enjoy the benefits of a taxpayer-guaranteed public deposit insurance scheme.150 Extending the CRA to nondepository institutions such as fintech firms would therefore not create a level playing field. Rather, it would broaden the scope of a statute whose policy efficacy is already in doubt to institutions for which it was never intended.

Allow Credit Unions to Remain Exempt

Credit unions, which are exempt under the CRA’s current provisions, have recently become the target of similar calls for the act’s expansion. A bill introduced by Sen. Elizabeth Warren (D-MA) in September 2018 would have made credit unions, as well as nonbank lenders, subject to the CRA,151 although Warren subsequently revised her bill to remove credit unions from the set of institutions covered.152 The American Bankers Association, in a recent public filing with the OCC, likewise called for applying CRA regulations to credit unions.153

Subjecting credit unions to CRA regulations would be counterproductive. In order to meet the conditions of the Federal Credit Union Act (FCUA), credit unions are already subject to restrictions on their activity that make them fundamentally different, for the CRA’s purposes, from banks.154 The FCUA restricts credit union membership to groups that share a “common bond of occupation or association,” and to “persons or organizations within a well-defined community, neighborhood, or rural district.”155 These common-bond provisions are at once redundant and incompatible with the CRA. The two acts are similar in that both aim to ensure that lending institutions serve their constituents. Yet the FCUA’s provisions would make enforcing the CRA among credit unions impossible: whereas CRA compliance relates to a bank’s lending activities within a given geographic region, the common bond that credit union members share under the FCUA may be professional, social, or demographic rather than geographic. Thus, credit unions are an example of the type of institution that Comptroller Bloom, during the 1977 hearings, feared the CRA would undermine.

Additionally, there is evidence that credit unions already serve CRA-targeted populations. Since the financial crisis, the share of mortgages originated by credit unions has increased steadily, rising from 2.6 percent in 2007 to 8.7 percent as of mid-2018.156 Recent HMDA data also show that credit unions originate a larger share of their mortgage loans to LMI borrowers than small banks do: 13.4 percent versus 12.5 percent.157 Several factors could be behind these findings. For one, credit unions securitize a smaller portion of their loans than other mortgage originators, which may make them more sensitive to portfolio risk and lead them to spend more resources screening for creditworthy LMI borrowers.158 Indeed, credit unions reject a larger share of mortgage loan applicants than do other institutions, which is consistent with the hypothesis of tighter screening owing to increased risk sensitivity.159 Moreover, perhaps the increase in mortgage lending regulation has affected small banks more than credit unions, or perhaps banks are more vulnerable to nonbank lender competition than credit unions are. Finally, it could be that the FCUA’s common-bond provisions facilitate risk management by giving credit unions information about the credit quality of their borrowers that other institutions, whose customers need not share similar characteristics, cannot easily observe.

Credit unions appear to be achieving the CRA’s policy goals without being subject to its regulations. Applying the CRA to credit unions would impose substantial new compliance costs that are both unnecessary and incompatible with the nature of credit unions themselves. If policymakers are concerned about the changing business model of credit unions — particularly larger ones — the appropriate route to address such concerns is to revise the FCUA.160

A Quantitative Score Has Clear Advantages — but Also Problems

The Treasury’s April 2018 memorandum on improving the CRA recommended “an approach to … CRA that incorporates less subjective evaluation techniques.”161 The memorandum pointed out that relevant performance indicators in CRA assessments, “such as ‘excellent,’ ‘substantial,’ and ‘extensive,’ are undefined.”162 Other analysts have raised similar concerns about the use of “innovativeness” and “complexity” in the CRA investment test.163 The OCC has subsequently suggested the use of a metric-based framework for CRA performance assessments.164 While the details of a quantitative approach remain unclear, it would likely involve assigning CRA ratings based on the share of CRA-eligible loans, investments, and services in bank deposits, assets, or capital.165

There are clear advantages to a metric-based approach. It would make assessments less arbitrary and provide greater certainty to institutions regarding the expectations of regulators. Quantitative assessment would also make it easier to compare performance between institutions and time periods. In these ways, a metric-based approach could reduce the administrative and compliance costs of the CRA.

Yet a metric-based approach also raises new concerns. Banks have warned that a quantitative method might not account for differences in business context across assessment areas.166 Furthermore, unless regulators linked quantitative scores to qualitative judgments, a metric-based approach would in practice resemble a quota system for bank lending, investment, and services. Quotas, however, contradict the spirit of the CRA, which — in the words of Senator Proxmire — should not involve “costly subsidies, or mandatory quotas, or a bureaucratic credit allocation scheme.”167

The advantage of a metric-based approach is that it would make it easier for banks to understand how best to demonstrate their LMI lending to regulators and to estimate their performance in advance of an evaluation. But if regulators want to move toward quantitative forms of CRA assessment, there are more efficient ways to do so than a rigid quota scheme. Instead of fixed quotas, regulators should quantify the aggregate amount of CRA lending they expect in each assessment area and allow the most productive institutions to bid to fulfill it.

Make CRA Obligations Tradable

If the CRA remains in place, there is a better way to encourage banks to improve the quality of the lending and other financial services they provide to LMI communities: create a market for tradable CRA obligations.168 Under this system, the regulator would define the specific lending, investment, and services obligations among banks within a given assessment area. Obligations could be allocated in various ways, but for consistency with present CRA practice — which ties lending obligations to deposit-taking — it might be easiest to determine them according to an institution’s local deposit-market share. Lenders, including nondepository institutions such as fintech firms and community development financial institutions, would be able to bid for the obligation to fulfill CRA lending in exchange for a fee from banks.
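For illustration only, the sketch below shows how such a market might clear in the simplest case. Everything in it is a hypothetical assumption rather than part of any actual proposal: the bank names, deposit shares, lending target, and bid fees are invented, and a real mechanism would need richer rules for partial fulfillment, multiple assessment areas, and bid caps.

# A minimal, hypothetical sketch of a tradable CRA obligation market.
# All names and figures below are illustrative assumptions.

def allocate_obligations(target_lending, deposit_shares):
    """Split an assessment area's lending target among banks in
    proportion to their local deposit-market shares."""
    return {bank: target_lending * share
            for bank, share in deposit_shares.items()}

def clear_market(obligations, bids):
    """Match each bank's obligation with the lowest-fee lender willing
    to fulfill it; `bids` maps lender -> fee per dollar of obligation."""
    assignments = {}
    for bank, amount in obligations.items():
        lender, fee_rate = min(bids.items(), key=lambda kv: kv[1])
        assignments[bank] = (lender, amount, fee_rate * amount)
    return assignments

# A $10 million lending target in one assessment area.
shares = {"Bank A": 0.5, "Bank B": 0.3, "Bank C": 0.2}
obligations = allocate_obligations(10_000_000, shares)

# Outside lenders, such as a fintech firm, bid fees per dollar.
bids = {"Fintech X": 0.010, "CDFI Y": 0.015}

for bank, (lender, amount, fee) in clear_market(obligations, bids).items():
    print(f"{bank}: ${amount:,.0f} fulfilled by {lender} for ${fee:,.0f}")

The fee that clears this toy market plays the role of the price discussed next: as a community's creditworthy demand is exhausted, lenders bid higher fees, and the rising price signals that further mandated lending is becoming costly.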

A system of tradable CRA obligations would have several advantages over the current CRA enforcement regime. First, it would require regulators to quantify the lending, investment, and services needs of the various LMI communities, thus introducing rigor into the assessment process and making explicit the community obligations of banks. As a result of trading in CRA obligations, a price would emerge to reflect the cost of fulfilling the act’s requirements. Importantly, the price of individual CRA obligations would vary according to the difficulty of profitably fulfilling them. For example, since it becomes more difficult to find creditworthy borrowers the more a community’s credit needs are satisfied, the price of a CRA lending obligation would rise as local CRA lending increased, serving as a useful signal of excessive lending in a particular community.

Second, a system of tradable obligations would encourage specialization and competition among lenders while ensuring that CRA obligations continued to be met.169 The CRA as currently enforced deters specialization by requiring banks to lend roughly proportionately wherever they take deposits.170 Under a tradable obligation regime, lenders from outside the assessment area, including those not subject to the CRA, would have an incentive to participate in the market if they could lend efficiently to local LMI communities. Given the role that fintech lenders play in providing credit to lower-income communities, for instance, their participation in such a trading scheme could increase the efficiency of CRA lending.171 As competition increased, the cost — that is, the market price for a representative obligation — of complying with the CRA would decline.

Third, tradable obligations would give CRA-subject banks increased opportunities for portfolio diversification. Because of the capital export rationale underpinning the 1977 act, the CRA currently forces banks to restrict some of their lending to the communities where they operate branches, even if they would like to lend elsewhere. For small banks in particular, the CRA’s local bias can impair geographic diversification. A system of tradable obligations, on the other hand, would enable depository institutions to lend in the locations best suited to their expertise and overall loan portfolio, while compensating other institutions for fulfilling CRA obligations on their behalf.

Fourth, a system of tradable obligations would reduce assessment uncertainty for depository institutions. Instead of grappling with a mounting list of ill-defined objectives, banks would discharge their obligations either by lending and investing directly, or by paying more efficient competitors to do so on their behalf. This would reduce compliance costs for CRA-eligible firms and reduce evaluation costs for the regulator. Meanwhile, communities would get what they need — or at least, what regulators think they need.

To be sure, quantifying CRA commitments based on the needs of individual communities would confront regulators with an important challenge. The difficulty of ascertaining the level of unmet, yet profitable, credit demand is precisely why developed financial systems largely rely on markets — not regulators — to make determinations about credit allocation.172 The current CRA forecloses this market-based discovery. If regulators are required to set individual CRA obligations by region, however, they will likely have to do so in conjunction with banks and community groups in order to enlist their local market knowledge. This form of decisionmaking is still imperfect, as it would leave such obligations, and therefore regulatory compliance, vulnerable to interest-group pressures. But that is already the case under the current CRA’s uncertain and bureaucratic assessment regime. Moving to a system of tradable obligations would deliver the benefits listed above while revealing the opportunity costs of the CRA — in terms of both forgone loans and overall safety and soundness.

Conclusion

The lending landscape in the United States has changed substantially since the 1977 enactment of the CRA. Written for what was then a competitive environment shaped by branching restrictions, the act took no account of the possibility that technological innovation would expand the opportunities for financial services provision. Today, the CRA is ill-suited to address the problems of unequal access to banking and credit as they currently affect low- and moderate-income borrowers.

In today’s landscape of widespread branching and diverse lending sources, the CRA has become a law in search of a public policy role. Congress should consider whether the benefits of preserving it justify the costs, or whether the act’s original goals can be (and already are) more effectively fulfilled through other channels. Fair lending laws can better prevent financial exclusion. Supply-side policies outside of financial regulation stand a greater chance of improving living standards in LMI communities. Perhaps the time of the CRA has simply passed.

However, if the CRA remains in place, policy­makers should take steps to make compliance less arbitrary and costly for banks. Implementing a system of tradable obligations that can be fulfilled by the most efficient lender at a market-determined rate combines the benefits of a clearly defined, quantitative approach with the flexibility and choice that America’s highly diverse credit market demands. Such a system would increase the efficiency of CRA enforcement and finally recognize that U.S. retail credit markets are much changed, and in many ways much improved, from the landscape that prevailed 42 years ago.

Notes

This paper is an expanded version of the author’s public filing with the Office of the Comptroller of the Currency. See Diego Zuluaga, “Reforming the Community Reinvestment Act Regulatory Framework,” Docket ID OCC-2018-0008, November 28, 2019.

1 “Community Credit Needs: Hearings on S. 406, Before the Senate Committee on Banking, Housing, and Urban Affairs,” 95th Cong. (January 24, 1977) (statement of Robert Bloom).

2 Community Reinvestment Act of 1977, 12 U.S.C. § 2901 (1977).

3 Raymond H. Brescia, “Part of the Disease or Part of the Cure: The Financial Crisis and the Community Reinvestment Act,” University of South Carolina Law Review 60 (2009): 627-28.

4 Jonathan R. Macey and Geoffrey P. Miller, “The Community Reinvestment Act: An Economic Analysis,” Virginia Law Review 79, no. 2 (March 1993): 292.

5 Bill Dedman, “The Color of Money,” Atlanta Journal-Constitution, May 1-4, 1988.

6 Redlining is “the practice of denying services, either directly or through selectively raising prices, to residents of certain geographies.” See Department of the Treasury, “Memorandum for the Office of the Comptroller of the Currency, the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation: Community Reinvestment Act — Findings and Recommendations,” April 3, 2018.

7 Richard Marsico, “Democratizing Capital: The History, Law, and Reform of the Community Reinvestment Act,” New York Law School Review 49 (2004-2005): 723.

8 Sumit Agarwal, Efraim Benmelech, Nittai Bergman, and Amit Seru, “Did the Community Reinvestment Act (CRA) Lead to Risky Lending?,” Kreisman Working Papers Series in Housing Law and Policy no. 8, October 2012, pp. 14-19.

9 Charles W. Calomiris and Stephen H. Haber, in Fragile by Design: The Political Origins of Banking Crises and Scarce Credit (Princeton: Princeton University Press, 2014), pp. 208-11, 216-26, document the ways in which the CRA influenced regulators’ decisions on bank mergers.

10 Michael Klausner, “A Tradable Obligation Approach to the Community Reinvestment Act,” in Revisiting the CRA: Perspectives on the Future of the Community Reinvestment Act (Boston/San Francisco: Federal Reserve Banks of Boston and San Francisco, September 2009).

11 Office of the Comptroller of the Currency, “Fact Sheet: Community Reinvestment Act,” March 2014.

12 12 U.S.C. § 2901.

13 Department of the Treasury, “Memorandum,” April 3, 2018, p. 28.

14 Macey and Miller, “The Community Reinvestment Act,” pp. 322-23.

15 However, financial institutions face uncertainty about individual LMI loans’ eligibility for CRA credit. See ABA Banking Journal, “Bonus Podcast: Key Points on CRA Modernization,” American Bankers Association, November 14, 2018, podcast audio, 3:45, https://bankingjournal.aba.com/2018/11/bonus-podcast-key-points-on-cra-modernization/.

16 12 C.F.R. 25.41 (1995, amended 2004).

17 Ben Horowitz, “Defining ‘Low- and Moderate-Income’ and ‘Assessment Area,’” in Community Dividend (Minneapolis: Federal Reserve Bank of Minneapolis, March 8, 2018).

18 Brescia, “Part of the Disease or Part of the Cure,” p. 634.

19 Darryl E. Getter, “The Effectiveness of the Community Reinvestment Act,” Congressional Research Service, January 7, 2015, p. 5.

20 Federal Financial Institutions Examination Council (FFIEC), “Explanation of the Community Reinvestment Act Asset-Size Threshold Change.”

21 Community Reinvestment Act of 1977, 12 U.S.C. § 2908 (1977); and William C. Apgar and Mark Duda, “The Twenty-Fifth Anniversary of the Community Reinvestment Act: Past Accomplishments and Future Regulatory Challenges,” Federal Reserve Bank of New York Economic Policy Review (June 2003): 174.

22 12 C.F.R. 25.22.

23 12 C.F.R. 25.23. The regulatory definition of community development includes affordable housing, community services aimed at LMI individuals, local economic development activities, and activities that revitalize or stabilize distressed areas. See 12 C.F.R. 25.12.

24 12 C.F.R. 25.24.

25 12 C.F.R. 25.23.

26 12 C.F.R. 25.26.

27 12 C.F.R. 25 Appendix A.

28 12 C.F.R. 25 Appendix A.

29 Department of the Treasury, “Memorandum,” pp. 9-10.

30 FFIEC, “Community Reinvestment Act: Interagency Questions and Answers Regarding Community Reinvestment,” 66 Fed. Reg. 36639, 36640 (July 12, 2001).

31 Apgar and Duda, “The Twenty-Fifth Anniversary of the CRA,” p. 174.

32 Brescia, “Part of the Disease or Part of the Cure,” p. 628.

33 Brescia, “Part of the Disease or Part of the Cure,” p. 630.

34 “Community Credit Needs: Hearings on S. 406,” Senate Committee on Banking, Housing, and Urban Affairs, 95th Cong. (January 24, 1977) (statement of William Proxmire).

35 Proxmire, “Community Credit Needs,” pp. 9-10.

36 Charles A. Calomiris, U.S. Bank Deregulation in Historical Perspective (New York: Cambridge University Press, 2000), pp. 61-62.

37 Calomiris and Haber, Fragile by Design, pp. 183-95.

38“It’s a Wonderful Life,” Wikimedia Foundation, last modified March 3, 2019, https://en.wikipedia.org/w/index.php?title=It%27s_a_Wonderful_Life&oldid=886040649.

39 Macey and Miller, “The Community Reinvestment Act,” pp. 303-11.

40 Ross Levine, “Finance and Growth: Theory and Evidence,” in Handbook of Economic Growth, vol. 1A, eds. Philippe Aghion and Steven Durlauf (Amsterdam: North Holland, 2005).

41 Macey and Miller, “The Community Reinvestment Act,” pp. 307-10.

42 Calomiris, U.S. Bank Deregulation in Historical Perspective, pp. 22-28.

43 Mark Carlson and Kris James Mitchener, “Branch Banking as a Device for Discipline: Competition and Bank Survivorship during the Great Depression,” Journal of Political Economy 117, no. 2 (April 2009): 169.

44 Carlson and Mitchener, “Branch Banking as a Device for Discipline,” pp. 201-03.

45 Michael D. Bordo, Hugh Rockoff, and Angela Redish, “The U.S. Banking System from a Northern Exposure: Stability versus Efficiency,” The Journal of Economic History 54, no. 2 (June 1994): 325-41.

46 Marsico, in “Democratizing Capital,” pp. 724-25, argued that CRA regulators should assess compliance by comparing a bank’s share of loans to low-income communities with its competitors’ shares. Such a reform would significantly increase the role of regulation in credit allocation.

47 Proxmire, “Community Credit Needs,” p. 11.

48 Proxmire, “Community Credit Needs,” p. 13.

49 Proxmire, “Community Credit Needs,” p. 14.

50 Proxmire, “Community Credit Needs,” pp. 15-16.

51 Regulators have explicitly cited these two trends as reasons to review CRA enforcement. See Office of the Comptroller of the Currency, “Reforming the Community Reinvestment Act Regulatory Framework,” September 5, 2018, p. 45054.

52 Jith Jayaratne and Philip E. Strahan, “The Benefits of Branching Deregulation,” Regulation 22, no. 1 (Spring 1999): 10.

53 Jayaratne and Strahan, “The Benefits of Branching Deregulation,” p. 10. The states that did not allow intrastate branching as of 1990 were Arkansas, Colorado, Iowa, Minnesota, and New Mexico. The states that still forbade interstate branching as of that year were Hawaii, Iowa, Kansas, Montana, and North Dakota.

54 Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, H.R. 3841, 103rd Cong. (1993-1994).

55 Margaret Z. Clarke, “Geographic Deregulation of Banking and Economic Growth,” Journal of Money, Credit and Banking 36, no. 5 (October 2004): 931-32.

56 Federal Deposit Insurance Corporation, “Number of Institutions, Branches and Total Offices,” Historical Bank Data, 2019, https://www.fdic.gov/bank/statistical/. The number of FDIC-supervised commercial banks had dropped to 4,774 as of September 30, 2018. See Federal Deposit Insurance Corporation, “FDIC Statistics at a Glance,” September 2018, https://www.fdic.gov/bank/statistical/stats/2018sep/industry.pdf.

57 Paul Calem, “The New Bank Deposit Markets: Goodbye to Regulation Q,” Federal Reserve Bank of Philadelphia Business Review (November/December 1985), p. 20.

58 R. Alton Gilbert, “Requiem for Regulation Q: What It Did and Why It Passed Away,” Federal Reserve Bank of St. Louis, February 1986.

59 76 Fed. Reg. 42015.

60 Jayaratne and Strahan, “The Benefits of Branching Deregulation,” p. 11.

61 Clarke, in “Geographic Deregulation,” pp. 938-40, reports statistically significant increases in income growth of 1.2 percent as a result of branching deregulation, using the size of banks’ geographic market as a proxy.

62 Esteban Rossi-Hansberg, Pierre-Daniel Sarte, and Nicholas Trachter report a decreased local market concentration in the finance, insurance, and real estate sector that has accompanied the increase in concentration at the national level. See Esteban Rossi-Hansberg, Pierre-Daniel Sarte, and Nicholas Trachter, “Diverging Trends in National and Local Concentration,” NBER Working Paper no. 25066, National Bureau of Economic Research, Cambridge, Massachusetts, September 2018.

63 Katherine Ho and Joy Ishii, “Location and Competition in Retail Banking,” International Journal of Industrial Organization 29, no. 5 (September 2011): 537-46; Astrid A. Dick, “Nationwide Branching and Its Impact on Market Structure, Quality, and Bank Performance,” Journal of Business 79, no. 2 (March 2006): 591.

64 João Granja, Christian Leuz, and Raghuram Rajan, “Going the Extra Mile: Distant Lending and Credit Cycles,” NBER Working Paper no. 25196, National Bureau of Economic Research, Cambridge, Massachusetts, October 2018. Other scholars have noted a similar increase in the distance between borrower and lender in mortgage markets. See James Charles Smith, “The Structural Causes of Mortgage Fraud,” Syracuse Law Review 60 (2010): 473-503.

65 12 C.F.R. 25.61 (1997). This provision was added to the CRA with passage of the 1994 Riegle-Neal Act (H.R. 3841), which removed restrictions on interstate bank branching.

66 12 C.F.R. 25.65 (1997).

67 Author’s private correspondence with OCC officials.

68 Macey and Miller, “An Economic Analysis,” pp. 322-24.

69 See, for instance, Raymond H. Brescia, “The Community Reinvestment Act: Guilty, but Not as Charged,” St. John’s Law Review 88, no. 1 (Spring 2014): 2-3; Michael S. Barr, “Credit Where It Counts: The Community Reinvestment Act and Its Critics,” New York University Law Review 80 (May 2005): 516; and Neil Bhutta and Daniel Ringo, “Assessing the Community Reinvestment Act’s Role in the Financial Crisis,” FEDS Notes, Board of Governors of the Federal Reserve System, May 26, 2015.

70 National Community Reinvestment Coalition (NCRC), “CRA Commitments,” September 2007, pp. 21-22.

71 Edward J. Pinto, “Government Housing Policies in the Lead-Up to the Financial Crisis: A Forensic Study,” American Enterprise Institute discussion draft, February 5, 2011, pp. 5-6, http://www.aei.org/wp-content/uploads/2010/10/Pinto-Government-Housing-Policies-in-the-Lead-up-to-the-Financial-Crisis-Word-2003-2.5.11.pdf.

72 Calomiris and Haber, Fragile by Design, pp. 231-246; and Pinto, “Government Housing Policies,” p. 15.

73 Pinto, “Government Housing Policies,” p. 14.

74 Barr, “Credit Where It Counts,” pp. 566-67.

75 Barr, “Credit Where It Counts,” pp. 568, 74.

76 Barr, “Credit Where It Counts,” pp. 560-80. Barr gives a comprehensive review of econometric evidence on the effects of the CRA. Even the studies that find the CRA increased lending do not address the question of the loans’ impact on bank soundness.

77 Community Reinvestment Act of 1977, 12 U.S.C. § 2903 (1977). See also NCRC, “CRA Commitments,” September 2007, p. 5, which recounts how CRA commitments by financial institutions ebbed and flowed with the waves of bank mergers in the late 1990s and 2000s.

78 Agarwal et al., “Did the CRA Lead to Risky Lending?,” p. 3.

79 Agarwal et al., “Did the CRA Lead to Risky Lending?,” p. 17.

80 Agarwal et al., “Did the CRA Lead to Risky Lending?,” p. 22.

81 Lawrence B. Lindsey, “The CRA as a Means to Provide Public Goods,” in Revisiting the CRA: Perspectives on the Future of the Community Reinvestment Act (Boston/San Francisco: Federal Reserve Banks of Boston and San Francisco, September 2009), p. 164.

82 Bhutta and Ringo, “Assessing the CRA’s Role in the Financial Crisis.”

83 Agarwal et al., “Did the CRA Lead to Risky Lending?,” p. 15.

84 Neil Bhutta and Glenn B. Canner, “Mortgage Market Conditions and Borrower Outcomes: Evidence from the 2012 HMDA Data and Matched HMDA-Credit Record Data,” Federal Reserve Bulletin 4, no. 99 (November 2013): 42.

85 Bhutta and Canner, “Mortgage Market Conditions and Borrower Outcomes,” p. 34.

86 Anecdotal evidence that banks in CRA-eligible communities prefer to lend to newer, wealthier residents is consistent with the hypothesis that banks are “skimming the top.” See Aaron Glantz and Emmanuel Martinez, “Gentrification Became Low-Income Lending Law’s Unintended Consequence,” RevealNews.org, February 16, 2018, https://www.revealnews.org/article/gentrification-became-low-income-lending-laws-unintended-consequence/.

87 Office of the Comptroller of the Currency, Board of Governors of the Federal Reserve System, and Federal Deposit Insurance Corporation, “Risk-Based Capital Standards: Advanced Capital Adequacy Framework and Market Risk; Proposed Rules and Notices,” 71 Fed. Reg. 55895, no. 185 (September 25, 2006). The Basel Committee on Banking Supervision is an international body that promulgates standards for the prudential regulation of banks. There is no economic reason why prudential standards should be laxer for CRA-related equity investments by banks than for their other investments.

88 “Poverty, Public Housing and the CRA: Have Housing and Community Investment Incentives Helped Public Housing Families Achieve the American Dream?,” Subcommittee on Federalism and the Census of the Committee on Government Reform, U.S. House of Representatives (June 20, 2006) (statement of Judith A. Kennedy), p. 58.

89“The Performance and Profitability of CRA-Related Lending,” report by the Board of Governors of the Federal Reserve System, submitted to the Congress pursuant to section 713 of the Gramm-Leach-Bliley Act of 1999, July 17, 2000, p. 45.

90“The Performance and Profitability of CRA-Related Lending,” p. 51.

91 Pinto, “Government Housing Policies in the Lead-Up to the Financial Crisis,” p. 26ff.

92 FFIEC, “CRA: Interagency Questions and Answers,” p. 36639.

93 Getter, “The Effectiveness of the CRA,” p. 9.

94 Kenneth H. Thomas, “Dear Regulators: Don’t Take CRA’s Revamp Too Far,” American Banker, editorial, October 30, 2018.

95 For a year-by-year summary of bank failures since 2001, see FDIC, “Bank Failures in Brief,” May 31, 2019, https://www.fdic.gov/bank/historical/bank/.

96 Federal Reserve Bank of St. Louis, “Compliance Costs, Economies of Scale and Compliance Performance: Evidence from a Survey of Community Banks,” April 2018, p. 5.

97 Federal Reserve Bank of St. Louis, “Compliance Costs, Economies of Scale and Compliance Performance,” p. 9. For the comparably poor performance of small banks, see, for example, FDIC, “Quarterly Banking Profile: Third Quarter 2018,” p. 7. FDIC-supervised institutions with less than $100 million in assets have an average return on equity of 8.28 percent, compared with 11 to 13 percent for larger banks.

98 Raymond H. Brescia, “The Community Reinvestment Act: Guilty, but Not as Charged,” St. John’s Law Review 88, no. 1 (Spring 2014): 5-6.

99 Dedman, “The Color of Money.”

100 FDIC, “National Survey of Unbanked and Underbanked Households, 2017,” October 2018, pp. 17, 19, 24.

101 FDIC, “National Survey of Unbanked and Underbanked Households,” p. 11. A household is considered to have used mainstream credit if it used a credit card; a personal loan or line of credit from a bank; a store credit card; an auto loan; a student loan; a mortgage, home equity loan, or home equity line of credit (HELOC); or other personal loans or lines of credit from a company other than a bank in the past 12 months. The FDIC’s definition of mainstream credit does not include alternative financial services (AFS), such as money orders, check cashing, international remittances, payday loans, refund anticipation loans, rent-to-own services, pawn shop loans, and auto title loans (see p. 39).

102 FDIC, “National Survey of Unbanked and Underbanked Households,” p. 4.

103 Federal Reserve Bank of St. Louis, “Compliance Costs, Economies of Scale and Compliance Performance,” p. 5.

104 Federal Reserve Bank of St. Louis, “Compliance Costs, Economies of Scale and Compliance Performance,” p. 13.

105 Drew Dahl and Michelle Franke, “ ‘Banking Deserts’ Become a Concern as Branches Dry Up,” Federal Reserve Bank of St. Louis, Regional Economist, Second Quarter 2017, pp. 20-21.

106 Michelle Franke, “Who Would Be Affected by More Banking Deserts?,” Federal Reserve Bank of St. Louis, On the Economy (blog), July 17, 2017.

107 Donald P. Morgan, Maxim Pinkovskiy, and Davy Perlman, “The ‘Banking Desert’ Mirage,” Federal Reserve Bank of New York, Liberty Street Economics (blog), January 10, 2018.

108 Franke, “Who Would Be Affected by More Banking Deserts?”

109 This is both because higher fixed compliance costs induce consolidation and higher regulatory costs raise the required return on a bank branch. See Julie Stackhouse, “Why Are Banks Shuttering Branches?,” Federal Reserve Bank of St. Louis, On the Economy (blog), February 26, 2018.

110 Michael Klausner, “Market Failure and Community Investment: A Market-Oriented Alternative to the Community Reinvestment Act,” University of Pennsylvania Law Review 143 (1995): 1565-68.

111 Barr, “Credit Where It Counts,” p. 516.

112 William W. Liang and Leonard I. Nakamura, “A Model of Redlining,” Journal of Urban Economics 33, no. 2 (March 1993): 223-34.

113 12 U.S.C. § 2903.

114 Lindsey, “The CRA as a Means to Provide Public Goods,” p. 160.

115 NCRC, “CRA Commitments,” p. 6.

116 Calomiris and Haber, in Fragile by Design, pp. 216-17, cite the Fleet Financial-BankBoston merger of 1999 as an example of a time when good CRA performance caused the Fed to approve a merger despite concerns about its competitive effect.

117 FDIC, “Quarterly Banking Profile: Third Quarter 2018,” FDIC Quarterly 12, no. 4 (2018): 7.

118 Jayaratne and Strahan, “The Benefits of Branching Deregulation,” p. 14.

119 Lawrence J. White, “The Community Reinvestment Act: Good Goals, Flawed Concept,” December 18, 2008, p. 5.

120 American Bankers Association, “ABA Data Bank: Economic Recovery Leaving De Novo Banks Behind,” ABA Banking Journal (website), September 28, 2018.

121 FDIC, “Enhanced Supervisory Procedures for Newly Insured FDIC-Supervised Depository Institutions,” FIL-50-2009, August 28, 2009.

122 Macey and Miller, “The Community Reinvestment Act,” p. 323.

123 Pinto, “Government Housing Policies in the Lead-Up to the Financial Crisis,” p. 15.

124 Agarwal et al., “Did the CRA Lead to Risky Lending?,” p. 3.

125 For a comprehensive discussion, see Ryan Bourne, “Government and the Cost of Living: Income-Based vs. Cost-Based Approaches to Alleviating Poverty,” Cato Institute Policy Analysis no. 847, September 2018.

126 American Bankers Association, “ABA Data Bank.”

127 FDIC, “Enhanced Supervisory Procedures for Newly Insured FDIC-Supervised Depository Institutions.”

128 FDIC Chairman Jelena McWilliams recently indicated interest in easing de novo bank entry. See Back to Basics, Federal Reserve Bank of Chicago 13th Annual Community Bankers Symposium, Chicago (November 16, 2018) (remarks of Jelena McWilliams).

129 Apgar and Duda, “The Twenty-Fifth Anniversary of the CRA,” p. 180.

130 Julapa Jagtiani and Catharine Lemieux, “The Roles of Alternative Data and Machine Learning in Fintech Lending: Evidence from the LendingClub Consumer Platform,” Federal Reserve Bank of Philadelphia Working Paper 18-15, April 2018, pp. 12-13.

131 Klausner, “Market Failure and Community Investment.” Klausner cites the canonical credit-rationing model in Joseph E. Stiglitz and Andrew Weiss, “Credit Rationing in Markets with Imperfect Information,” American Economic Review 71, no. 3 (June 1981): 393-410.

132 Julapa Jagtiani and Catharine Lemieux, “Do Fintech Lenders Penetrate Areas That Are Underserved by Traditional Banks?,” Federal Reserve Bank of Philadelphia Working Paper 18-13, March 2018, p. 12.

133 U.S. Government Accountability Office, “Financial Technology: Additional Steps by Regulators Could Better Protect Consumers and Aid Regulatory Oversight,” Report to Congressional Requesters, March 2018, p. 45.

134 Lalita Clozel (@laliczl), “OCC’s Otting,” Twitter post, February 7, 2019, 12:35 p.m., https://twitter.com/laliczl/status/1093563942050455553.

135 Nutter, McClennen & Fish LLP, “Fintech in Brief: OCC Fintech Charter Continues to Face Legal Challenges,” January 30, 2019.

136 Author’s calculations based on Bureau of Consumer Financial Protection, “Data Point: 2017 Mortgage Market Activity and Trends,” May 2018, pp. 70-72.

137 Bureau of Consumer Financial Protection, “Data Point: 2017 Mortgage Market Activity and Trends,” May 2018, p. 64.

138 Glantz and Martinez, “Gentrification Became Low-Income Lending Law’s Unintended Consequence,” RevealNews.org, February 16, 2018.

139 Horowitz, “Defining ‘Low- and Moderate-Income’ and ‘Assessment Area.’”

140 15 U.S.C. § 1691.

141 Daniel Press, “The CFPB and the Equal Credit Opportunity Act,” Competitive Enterprise Institute, On Point (blog), May 15, 2018.

142 15 U.S.C. § 1691c.

143 Kenneth H. Thomas, “Why Fintechs Should Be Held to CRA Standards,” American Banker, editorial, August 24, 2018.

144 12 C.F.R. 25.41.

145 Macey and Miller, “The Community Reinvestment Act,” p. 313.

146 Greg Buchak, Gregor Matvos, Tomasz Piskorski, and Amit Seru, “Fintech, Regulatory Arbitrage, and the Rise of Shadow Banks,” NBER Working Paper no. 23288, National Bureau of Economic Research, Cambridge, Massachusetts, September 2018. The authors find that 60 percent of the growth of “shadow banks” is due to regulation, whereas 30 percent is due to technology.

147 Macey and Miller, “The Community Reinvestment Act,” pp. 312-13; Bloom, “Community Credit Needs,” pp. 15-16.

148 Jagtiani and Lemieux, “Do Fintech Lenders Penetrate Areas That Are Underserved by Traditional Banks?,” p. 10.

149 Proxmire, “Community Credit Needs,” pp. 9-10.

150 Additionally, as discussed earlier, deposit-taking institutions no longer operate local monopolies or oligopolies, because of the removal of branching restrictions.

151 American Housing and Economic Mobility Act of 2018, S. 3503, 115th Cong. (2018).

152 American Housing and Economic Mobility Act of 2019, H.R. 1737, 116th Cong. (1st Sess. 2019).

153 Krista Shonk, “Reforming the Community Reinvestment Act Regulatory Framework,” American Bankers Association comment letter to the Comptroller of the Currency, November 15, 2018, p. 33, https://www.aba.com/Advocacy/commentletters/Documents/cl-CRA20181115.pdf.

154 Federal Credit Union Act of 1934, 12 U.S.C. §§ 1752-1775 (1934).

155 Federal Credit Union Act of 1934, 12 U.S.C. § 1759 (1934).

156 Mortgage Bankers Association (MBA) and Credit Union National Association (CUNA) data. The author is grateful to Mike Schenk of CUNA for sharing these data.

157 James DiSalvo and Ryan Johnston, “Credit Unions’ Expanding Footprint,” Banking Trends (Philadelphia: Federal Reserve Bank of Philadelphia, First Quarter 2017), p. 20.

158 Securitization rates are around 35 percent for credit unions and 70 percent for all mortgage originators. See CUNA data (note 156) and Urban Institute, “Housing Finance at a Glance: A Monthly Chartbook,” research report, June 2018.

159 DiSalvo and Johnston, “Credit Unions’ Expanding Footprint,” pp. 19-20.

160 Aaron Klein, “Banklike Credit Unions Should Follow Bank Rules,” American Banker, editorial, June 25, 2018.

161 Department of the Treasury, “Memorandum for the OCC,” p. 11.

162 Department of the Treasury, “Memorandum for the OCC,” p. 9.

163 Getter, “The Effectiveness of the CRA,” p. 8.

164 Office of the Comptroller of the Currency, “Reforming the Community Reinvestment Act Regulatory Framework,” Advance Notice of Proposed Rulemaking, 83 Fed. Reg. 172 (September 5, 2018): 45053.

165 Office of the Comptroller of the Currency, “Reforming the Community Reinvestment Act Regulatory Framework”; Shonk, “Reforming the CRA Regulatory Framework,” pp. 13-14.

166 Shonk, “Reforming the CRA Regulatory Framework,” pp. 11-12.

167 Proxmire, “Community Credit Needs,” p. 9.

168 See Klausner, “Market Failure and Community Investment,” and Klausner, “A Tradable Obligation Approach,” for the original proposals that inform the approach outlined in this section.

169 Klausner, “Market Failure and Community Investment,” pp. 1586-88.

170 Klausner, “Market Failure and Community Investment,” pp. 1575-76.

171 Jagtiani and Lemieux, “Do Fintech Lenders Penetrate Areas That Are Underserved by Traditional Banks?,” p. 10.

172 F. A. Hayek, “The Use of Knowledge in Society,” American Economic Review 35, no. 4 (September 1945): 519-30.

Diego Zuluaga is a policy analyst at the Cato Institute’s Center for Monetary and Financial Alternatives.

Debunking Protectionist Myths: Free Trade, the Developing World, and Prosperity

Arvind Panagariya

More than 170 years ago, Frédéric Bastiat noted in his masterly work Economic Sophisms that the “opposition to free trade rests upon errors, or, if you prefer, upon half-truths.”1 Ever since Adam Smith successfully replaced mercantilist orthodoxy with free trade doctrine in his celebrated book The Wealth of Nations, free trade critics have repeatedly challenged the doctrine, offering half-truths to bolster their case. In each instance, free trade advocates have successfully exposed the falsehood of arguments made by critics. Although free trade has gained increasing acceptance among policymakers over time, challenges to it have remained omnipresent.

The latest of these challenges has manifested itself in increased tariffs on steel and aluminum in the United States and on a number of selected products in India. At the heart of these tariff hikes has been the belief that through targeted protection and industrial policy, governments can produce outcomes that are superior to those that free trade and competition would produce.2 Intellectual inspiration for this belief in recent decades has come from writings of a group of influential scholars who have interpreted the experiences of the highly successful East Asian “tiger” economies — Hong Kong, Singapore, South Korea, and Taiwan — during the early decades following the Second World War and of China during more recent decades as being the result of selective protection and industrial targeting.

Systematic evidence, however, demonstrates that free trade rather than selective protection and industrial policy must be credited with propelling these economies to miracle-level growth. Just as Bastiat observed, the case made by free trade critics in favor of industrial policy and selective protection is based on half-truths. Contrary to the assertions by these critics, a logical case for infant industry protection does not exist. Moreover, compelling empirical evidence linking trade openness causally to higher per capita incomes is now available.

A Quick Historical Perspective

In the immediate aftermath of the Second World War, there was consensus among economists and policymakers that economic recovery in industrial countries required progressive opening of trade among them. Simultaneously, it was agreed that newly independent developing countries needed protection so that they could industrialize by substituting domestic output for imported manufactures. The former idea led to the signing of the General Agreement on Tariffs and Trade (GATT), which became the vehicle for progressive liberalization of trade among industrial countries. The latter idea led to the grant of special and differential treatment to developing countries within the GATT framework. During the early decades following the Second World War, these countries got full freedom to protect their industries.

The idea that import substitution industrialization (ISI) was the right policy for the newly independent developing countries had its origins in the assumption that their comparative advantage lay in primary products and that exports of these products could not serve as the engine of growth. The reason was that both income and price elasticities of demand for these products were low. Low income elasticity meant that over time, rising incomes in industrial countries would shift global demand away from these products and thus shift the terms of trade against them. Low price elasticity meant that any efforts by developing countries themselves to expand exports through increased investment or enhanced productivity would lead to a sharp decline in primary product prices, resulting in reduced export revenues.
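The price-elasticity half of this argument can be put formally; the following is a standard textbook illustration rather than part of the original postwar debate. Suppose world demand for a primary product is $Q = AP^{-\varepsilon}$, so that export revenue is $R = PQ$. Then

$$R = PQ = A^{1/\varepsilon}\,Q^{1 - 1/\varepsilon}, \qquad \frac{dR}{dQ}\cdot\frac{Q}{R} = 1 - \frac{1}{\varepsilon}.$$

When demand is price-inelastic ($\varepsilon < 1$), the elasticity of revenue with respect to quantity is negative: each expansion of export volume depresses prices enough to lower total export revenue, which is precisely the trap that worried postwar policymakers.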

These logically correct arguments led economists and policymakers to the conclusion that faster growth required industrialization, and hence, protection. The “infant industry” argument was then invoked to impart intellectual legitimacy to protection for industry. Thus, in the immediate aftermath of the Second World War, virtually all developing economies wound up embracing import substitution. Only Hong Kong, which the British had owned and maintained as a free port, remained a free trading entity.

Interestingly, however, by the early 1960s, Singapore, Taiwan, and South Korea broke away from this consensus. Having completed the substitution of domestic output for imports of labor-intensive products, they were faced with choosing between extending import substitution to more capital-intensive products or expanding further into labor-intensive products by switching to export expansion. Recognizing the small size of the domestic market, they opted for the latter strategy and became progressively outward-oriented. The strategy proved an unqualified success. In three decades, these economies achieved increases in per capita incomes and reductions in poverty that Western industrial economies had taken more than a century to achieve.

The success of these economies exposed a key flaw in the model on which the original consensus in favor of ISI was based. By conceptualizing the economy as consisting of only two sectors — primary products and industry — the model ended up arguing that ISI offered the only road to industrialization. What the experience of the East Asian tigers revealed was that the industrial sector was not a monolith but a collection of many products, of which some were labor-intensive and others capital-intensive. It was therefore possible for labor-abundant developing countries to industrialize by specializing in and exporting labor-intensive industrial products while importing capital-intensive ones.3

During the 1970s and early 1980s, comparative studies of the East Asian tiger economies and of economies that remained wedded to ISI, such as India, Mexico, and Egypt, led to a complete turnaround in the conventional wisdom. Economists such as Bela Balassa, Jagdish Bhagwati, Anne Krueger, and Ian Little concluded that openness to trade was as desirable for developing countries as for developed ones. In the following years, these economists emerged as influential proponents of industrialization and development through outward-oriented policies.

Influenced by this new conventional wisdom, and also under pressure from U.S. president Ronald Reagan, who firmly believed in the efficacy of open markets, the International Monetary Fund and the World Bank went on to aggressively promote trade liberalization in developing countries during the 1980s. Predictably, the turnaround in academic opinion in favor of free trade and its embrace by influential international financial institutions produced a reaction from pro-protection forces. This reaction found expression in what has been called a revisionist interpretation of the experiences of the East Asian tiger economies. Political scientists Alice Amsden and Robert Wade argued that the success of South Korea and Taiwan, respectively, was the result of cleverly designed industrial policies and selective protection.4 Economists Dani Rodrik and Ha-Joon Chang later voiced their agreement with Amsden and Wade.5

Although pro-free trade economists such as Bhagwati and Little have challenged some of the arguments of revisionists, a systematic response to the latter and a full-scale defense of free trade as the engine of growth and poverty alleviation in developing countries has been lacking. This is the task I undertake in my book Free Trade and Prosperity.6 In the following, I offer some flavor of the book by exposing a number of myths spread by revisionists. The first myth relates to the superiority of the ISI approach to development, taking developing countries as a whole. The remaining myths relate to the experiences of fast-growing developing economies, most notably those of South Korea and Taiwan.

Myth 1: The Years 1960-1973 Represent the Golden Age of Growth in Developing Countries

Writing in 1999, Rodrik argued that taken together, developing countries grew the fastest during 1960-1973 when they followed inward-looking, import-substitution industrialization policies.7 Later, Chang forcefully echoed this argument in his 2007 book.8 But there are three serious problems with the thesis.

First, factually, developing countries as a group did not grow the fastest during 1960-1973. As Table 1 makes amply clear, developing countries have grown the fastest during the decades following 1990. This was the period during which these countries came to genuinely embrace and own liberal policies instead of being forced into accepting them by international financial institutions in return for access to financial resources. At the time Rodrik wrote, he may have lacked these data, but by 2007, when Chang published his book, the available evidence was unambiguous.

Second, had Rodrik gone into individual-country details, he would have found that even during 1960-1973, the fastest-growing economies were those that had embraced outward-oriented policies. I have already mentioned the four East Asian tiger economies, which grew at rates ranging from 8 to 10 percent during this period. But even Brazil, a much larger country that saw its growth accelerate during this period, had brought down its tariffs and devalued its currency multiple times to correct for its overvaluation.

Finally, the OECD countries had grown significantly faster during 1960-1973 than during the post-1990 decades. As such, developing-country growth during the earlier period received a boost from OECD growth. A similar pull-up effect was missing from the post-1990 period. Instead, the impetus for growth in developing countries during this period came from their own economic policies, including trade liberalization.

Myth 2: Industrial Policy, Including Selective Protection, Was behind the Success of East Asian Tiger Economies

This is the key claim made by free trade critics, and it has given an edge to continued advocacy of protection by many. But consider the experience of South Korea. As Table 2 shows, the country grew at an annual average rate of 9.1 percent during the decade 1963-1973, compared with 4.2 percent during 1954-1962 and 6.9 percent during 1974-1982. The years 1954-1962 are identified with import substitution, while the years 1963-1973 saw South Korea adopt an export-oriented strategy. This latter decade was characterized by policies that were sectorally neutral. Selective industry promotion was limited to cement, fertilizer, and petroleum refining in the early 1960s and to steel and petrochemicals in the late 1960s and early 1970s. Calculations by Larry Westphal show that when the economy-wide implications of all interventions are considered, the policy regime exhibited a slight bias in favor of exports relative to what would have prevailed under free trade.9 Among other things, neutrality gave rise to growth of sectors no one had predicted: wigs and human hair exports, entirely absent until 1963, came to account for 10.1 percent of Korean exports by 1970.

When critics claim success for industrial targeting, they entirely eschew discussion of the crucial decade of 1963-1973. Instead, they focus on the following decade, in which Korea did engage in a heavy and chemical industry (HCI) drive. But the growth rate during 1974-1982 actually fell to 6.9 percent. Moreover, toward the end of this period, the economy faced serious macroeconomic instability, culminating in the abandonment of the HCI drive and the restoration of a neutral policy regime. That in turn returned the country to 8.7 percent growth during 1983-1995. Chang has claimed that the policy was nevertheless successful because industries promoted under the HCI drive eventually became profitable. But this amounts to a post hoc fallacy. After a decade of rapid growth and near double-digit annual increases in real wages, South Korea had been becoming more and more labor-scarce and capital-abundant. Therefore, the capital-intensive sectors promoted under HCI would have emerged even absent the HCI drive. What the HCI drive did was to advance that process by a few years. To sustain his claim, Chang must demonstrate that the benefits of advancing the process exceeded its costs.10

Myth 3: Export Expansion Cannot Be Credited with Catalyzing Growth Because It Followed, Rather than Led, the Acceleration in GDP Growth

Rodrik has argued that the expansion of exports in Korea and Taiwan actually followed the acceleration in growth and therefore could not have catalyzed it. There are two counterarguments. First, even if the catalyst to growth was domestic in nature, it is highly unlikely that these countries could have sustained 8 to 10 percent growth for several decades without a massive expansion of exports. For example, in South Korea, exports expanded from just 5 percent of GDP in 1965 to more than 20 percent by 1972, and imports rose from 10 percent to more than 25 percent of GDP over the same period. By the time South Korea seriously got down to targeted promotion of HCI, it was already a highly open economy.

Second, and far more important, Rodrik is wrong to claim that exports were not a catalyst to growth. His error lies in the failure to disaggregate total exports into their components. The shift in GDP growth to more than 8 percent in 1963, from less than 5 percent in the prior years, had been preceded by a gradual policy shift toward reducing anti-export and pro-import-substitution bias. The first major step in this direction, in 1959, eliminated the tariffs exporters paid on inputs contained in their exports. In the early 1960s, exporters also got exemption from indirect taxes. By the late 1950s, the exchange rate had become considerably overvalued. Devaluation of the domestic currency from 65 won per dollar to 100 won per dollar in January 1961 and to 130 won per dollar in February 1962 brought it closer to the market rate. The government also worked toward removing infrastructure-related barriers to trade, especially at ports.

These measures produced a salutary effect on exports of manufactures during the early 1960s. Between 1961 and 1964, they grew at an average annual rate of 87.9 percent, higher than in any subsequent four-year period. Over the same period, the share of manufactures in total exports rose from 21.9 percent to 62.3 percent. Aggregate export figures mask this major structural shift. Moreover, because primary product exports performed poorly during the early 1960s, total exports also give the misleading impression that exports were unimportant to the shift in the growth rate beginning in 1963. This point applies equally to Taiwan.

Myth 4: Exports Were Too Tiny to Have Been the Engine of Growth

Rodrik has also argued that in the first half of the 1960s, exports as a proportion of GDP were too small to serve as the engine of growth in South Korea and Taiwan. Although plausible on the surface, this argument, too, fails to withstand close scrutiny, for two reasons. First, even if export sales were small in relation to GDP, the total sales of exportable products were not. The latter include domestic sales of export products. When the profitability of exports rises and sales of export products are diverted from domestic to foreign markets, domestic prices of those products rise, making domestic sales profitable as well. Therefore, the pull effect of export incentives works not just on exports but on domestic sales of export products as well. Reinforcing this factor is the ability of efficient export firms to exploit scale economies. The vastness of export markets enables these firms to expand rapidly and lower production costs, which in turn enables them to expand domestic sales.

Second, as Bhagwati has pointed out, improved export incentives such as duty-free entry of inputs used in exports, exemption from indirect taxes, and elimination of overvaluation of the exchange rate enhance the profitability not just of existing export products but also of potential export products.11 Sufficiently large export incentives may turn many nontraded but tradable products — and even imported products — into export products. For example, wigs and human hair were entirely absent from South Korea’s export basket until 1963, but by 1970 they accounted for 10.1 percent of its total exports. Similarly, Taiwan exported no electrical machinery and appliances until 1959. They made their debut in 1960 and came to account for 12.3 percent of Taiwan’s vastly expanded total exports by 1970. Clothing and footwear had expanded from 0.8 to 2.6 percent of total exports during the import-substitution phase from 1952 to 1960, but their share shot up to 16.8 percent by 1970.

Myth 5: Success of Taiwan and South Korea Is Proof that Interventions Helped, Rather than Hurt, Growth

In his book on Taiwan, Wade offers a catalog of government interventions, big and small, without a coherent explanation of how they added up to the growth miracle, or of whether these interventions would have produced the miracle absent the policies identified as important by advocates of the outward-oriented strategy. He makes repeated references to the government acting strategically in specific contexts, but without articulating a “strategic action” model of economic development that he could recommend to other countries. The bottom line he offers is this:

The fact of big leadership or big followership does not mean that government intervention has been effective in promoting economic growth; it only means that government intervention cannot be dismissed as having made a negligible difference to outcomes. But the balance of presumption must be that government industrial policies, including sectoral ones, helped more than they hindered. To argue otherwise is to suggest that economic performance would have been still more exceptional with less intervention, which is simply less plausible than the converse.12

This statement throws into sharp relief how low a standard revisionists set for proving their own thesis compared with what they demand from free trade advocates.13 More important, they shy away from asking critical questions that may lead to answers they would not like. This is the point Little made when he responded to Wade’s claim in these terms: “Since the less interventionist Hong Kong, Singapore, and Taiwan grew faster than Korea, it is unclear why Wade thinks it simply less plausible that less intervention would have been better, given also the widespread failure of government industrial policies elsewhere. I find it simply more plausible that Korea grew fast despite its industrial policies, than because of them.”14

Echoes of Wade’s argument can also be heard in the arguments made by Rodrik and Chang to explain the more recent successes of China and India. Like Wade, Rodrik argues that because numerous government interventions remain present in China, its experience does not support the case for trade liberalization. Chang goes a step further, arguing that China and India succeeded because they refused to wear a free trade straitjacket. But liberalization in the early 1980s had already placed China on a 10 percent growth trajectory. If the protection and interventions that still remained were behind this success, further liberalization should have hurt growth. Instead, it was precisely through sustained liberalization, culminating in its entry into the World Trade Organization in 2001, that China sustained its high growth. Likewise, it took the dismantling of a large number of interventions for India to finally see its economy grow at an 8 percent rate beginning in 2003. Subsequently, as India suspended import liberalization after 2007 and returned to more interventionist policies from 2009 to 2014, its growth suffered.

Conclusion

History forcefully demonstrates the power of openness to trade. Between 1960 and 1990, the East Asian tiger economies achieved gains in per capita income that had taken Western industrial economies a century. Their growth also led to the elimination of abject poverty without significant redistributive social programs. Between 1980 and 2010, China achieved the same success for its much larger population by shedding its Mao Zedong-era autarkic policies and giving greater play to markets. Today, India is poised to achieve something similar for its equally large population, provided it does not descend back into its failed illiberal external and internal policies. Lessons from the experiences of these countries apply equally to the developed world. The United States, in particular, must weigh the harmful long-term consequences of its recent turn to protectionism. It should not forget that in the medium to long term, a tax on imports is a tax on exports even when partner countries do not retaliate. When partner countries retaliate, the damage compounds.
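
The equivalence of import and export taxes invoked here is the Lerner symmetry theorem. The bulletin does not spell it out, but a minimal sketch, assuming a two-good economy with balanced trade and given world prices $p_x^*$ and $p_m^*$, runs as follows. Only the relative domestic price of importables to exportables matters for production and consumption decisions. An import tariff at rate $t$ yields

$$\frac{p_m}{p_x} = \frac{(1+t)\,p_m^*}{p_x^*},$$

while an export tax at the same rate yields

$$\frac{p_m}{p_x} = \frac{p_m^*}{p_x^*/(1+t)} = \frac{(1+t)\,p_m^*}{p_x^*}.$$

The two interventions produce the same relative price and hence the same real allocation: taxing imports penalizes exports to the same degree, with no retaliation required.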

Notes

1 Frédéric Bastiat, Economic Sophisms, trans. and ed. Arthur Goddard (Irvington-on-Hudson: Foundation for Economic Education, 1996), p. 3, http://www.econlib.org/library/Bastiat/basSoph1.html. Italics are as in the original.

2 Protection has also seen an upsurge in the form of a trade war between the United States and China. The initial intent of the United States behind the tariffs against China was to press the latter into opening its markets wider. But lately it seems to have shifted to taking the view that the tariffs are helping local industry grow. As such, even the trade war with China has come to have a protectionist angle to it. The shift in thinking on the part of the United States is also reflected in its desire to expand the scope of protection to auto imports.

3 Anne Krueger, “Trade Policy and Economic Development: How We Learn,” American Economic Review 87, no. 1 (1997): 1-22.

4 Alice Amsden, Asia’s Next Giant: South Korea and Late Industrialization (New York: Oxford University Press, 1989); and Robert Wade, Governing the Market: Economic Theory and the Role of the Government in East Asian Industrialization (Princeton: Princeton University Press, 2004).

5 Dani Rodrik, “Getting Interventions Right: How South Korea and Taiwan Grew Rich,” Economic Policy 20 (1995): 55-107; Ha-Joon Chang, Bad Samaritans: Rich Nations, Poor Policies and the Threat to the Developing World (London: Random House Business Books, 2007); and Larry E. Westphal, “Industrial Policy in an Export-Propelled Economy: Lessons from South Korea’s Experience,” Journal of Economic Perspectives 4, no. 3 (1990): 41-59. Economist Larry Westphal and his coauthors were among the earliest to study the success of South Korea, but they took a more nuanced view of its experience. Westphal partially credited industrial policy for the success of South Korea but also saw openness as being critical to it. He also felt that effective use of industrial policy requires very able and effective leadership, which is usually lacking in most developing countries.

6 Arvind Panagariya, Free Trade and Prosperity: How Openness Helps the Developing Countries Grow Richer and Combat Poverty (New York: Oxford University Press, 2019).

7 Dani Rodrik, The New Global Economy and Developing Countries: Making Openness Work (Washington: Overseas Development Council, 1999).

8 Chang, Bad Samaritans.

9 Westphal, “Industrial Policy in an Export-Propelled Economy,” Table 1.

10 Economists David Dollar and Kenneth Sokoloff provide more direct evidence of relatively poor performance of the highly capital-intensive sectors supported by the HCI. They note, “It is interesting that, of the industries supported by the HCI program, it is the very capital-intensive ones that exhibit poor TFP [total factor productivity] growth, while those of medium and light intensity generally show high TFP growth” (p. 322). David Dollar and Kenneth Sokoloff, “Patterns of Productivity Growth in South Korean Manufacturing Industries, 1963-1979,” Journal of Development Economics 33 (1990): 309-327.

11 Jagdish Bhagwati, “The ‘Miracle’ That Did Happen,” in Erik Thorbecke and Henry Wan, eds., Taiwan’s Development Experience: Lessons on Roles of Government and Market (Boston: Kluwer Academic Publishers, 1999), pp. 21-39.

12 Wade, Governing the Market, pp. 305-6. At the beginning of this quote, Wade uses the term “big leadership” to describe a situation in which the government leads private entrepreneurs through initiatives that significantly alter their investment and production patterns. Analogously, he uses the term “followership” to capture a situation in which the government follows the lead of private entrepreneurs in designing its interventions.

13 When evaluating the thesis advanced by Ian Little and others, revisionists, including Wade, demand that they demonstrate that their policy package offers a sufficient explanation of the Taiwanese miracle and not merely a positive contribution to it on balance. But for his own thesis, Wade wants to get away with simply demonstrating that the government industrial policies “helped more than they hindered” the process of development. In a similar vein, in relating trade to higher per capita income, free trade critics demand a causal connection between the two according to the highest standards of econometrics. Yet they have not even made an attempt to show that high protection is positively correlated with high per capita incomes, let alone tried to establish causation between the two variables.

14 I. M. D. Little, “Trade and Industrialization Revisited,” Pakistan Development Review 33, no. 4 (1994): 365.

Arvind Panagariya is a professor of economics and the director of the Deepak and Neera Raj Center on Indian Economic Policies in the School of International and Public Affairs at Columbia University. He is the author of Free Trade and Prosperity: How Openness Helps Developing Countries Grow Richer and Combat Poverty (Oxford University Press, 2019), from which this bulletin draws.

Challenging the Social Media Moral Panic: Preserving Free Expression under Hypertransparency

Milton Mueller

Social media are now widely criticized after enjoying a long period of public approbation. The kinds of human activities that are coordinated through social media, good as well as bad, have always existed. However, these activities were not visible or accessible to the whole of society. As conversation, socialization, and commerce are aggregated into large-scale, public commercial platforms, they become highly visible to the public and generate storable, searchable records. Social media make human interactions hypertransparent and displace responsibility for bad acts from the perpetrators to the platform that makes them visible.

This hypertransparency is fostering a moral panic around social media. Internet platforms, like earlier new media technologies such as TV and radio, now stand accused of a stunning array of evils: addiction, fostering terrorism and extremism, facilitating ethnic cleansing, and even the destruction of democracy. The social-psychological dynamics of hypertransparency lend themselves to the conclusion that social media cause the problems they reveal and that society would be improved by regulating the intermediaries that facilitate unwanted activities.

This moral panic should give way to calmer reflection. There needs to be a clear articulation of the tremendous value of social media platforms, based on their ability to match seekers and providers of information in huge quantities. We should also recognize that calls for government-induced content moderation will make these platforms battlegrounds for a perpetual, intensifying conflict over who gets to silence whom. Finally, we need a renewed affirmation of Section 230 of the Communications Decency Act, part of the 1996 Telecommunications Act, which shields internet intermediaries from liability for users’ speech. Contrary to Facebook’s call for government-supervised content regulation, we need to keep platforms, not the state, responsible for finding the optimal balance between content moderation, freedom of expression, and economic value. The alternative of greater government regulation would absolve social media companies of market responsibility for their decisions and would probably lead them to exclude and suppress even more legal speech than they do now. It is the moral panic and the proposals for regulation that threaten freedom and democracy.

Introduction

In a few short years, social media platforms have gone from being shiny new paragons of the internet’s virtue to globally despised scourges. Once credited with fostering a global civil society and bringing down tyrannical governments, they are now blamed for an incredible assortment of social ills. In addition to legitimate concerns about data breaches and privacy, other ills — hate speech, addiction, mob violence, and the destruction of democracy itself — are all being laid at the doorstep of social media platforms.

Why are social media blamed for these ills? The human activities that are coordinated through social media, including negative things such as bullying, gossiping, rioting, and illicit liaisons, have always existed. In the past, these interactions were not as visible or accessible to society as a whole. As these activities are aggregated into large-scale, public commercial platforms, however, they become highly visible to the public and generate storable, searchable records. In other words, social media make human interactions hypertransparent.1

This new hypertransparency of social interaction has powerful effects on the dialogue about regulation of communications. It lends itself to the idea that social media cause the problems they reveal and that society can be altered or engineered by meddling with the intermediaries who facilitate the targeted activities. Hypertransparency generates what I call the fallacy of displaced control. Society responds to aberrant behavior revealed through social media by demanding regulation of the intermediaries instead of identifying and punishing the individuals responsible for the bad acts. There is a tendency to go after the public manifestation of the problem on the internet rather than to punish the undesired behavior itself. At its worst, this focus on the platform rather than the actor promotes the dangerous idea that government should regulate generic technological capabilities rather than bad behavior.

Concerns about foreign interference and behavioral advertising brought a slowly simmering social media backlash to a boil after the 2016 election. As this reaction enters its third year, it is time to step back and offer some critical perspective and an assessment of where free expression fits into this picture. As hypertransparency brings to public attention disturbing, and sometimes offensive, content, a moral panic has ensued — one that could lead to damaging regulation and government oversight of private judgment and expression. Perhaps policy changes are warranted, but the regulations being fostered by the current social climate are unlikely to serve our deepest public values.

Moral Panic

The assault on social media constitutes a textbook case of moral panic. Moral panics are defined by sociologists as “the outbreak of moral concern over a supposed threat from an agent of corruption that is out of proportion to its actual danger or potential harm.”2 While the problems noted may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm.” In a moral panic, sociologist Stanley Cohen says, “the untypical is made typical.”3 The exaggerations build upon themselves, amplifying the fears in a positive feedback loop. Purveyors of the panic distort factual evidence or even fabricate it to justify (over)reactions to the perceived threat. One of the most destructive aspects of moral panics is that they frequently direct outrage at a single easily identified target when the real problems have more complex roots. A sober review of the claims currently being advanced about social media finds that they tick off all these boxes.

Fake News!

Social media platforms are accused of generating a cacophony of opinions and information that is degrading public discourse. A quote from a respected media scholar summarizes the oft-repeated view that social media platforms have an intrinsically negative impact on our information environment:

An always-on, real-time information tsunami creates the perfect environment for the spread of falsehoods, conspiracy theories, rumors, and “leaks.” Unsubstantiated claims and narratives go viral while fact checking efforts struggle to keep up. Members of the public, including researchers and investigative journalists, may not have the expertise, tools, or time to verify claims. By the time they do, the falsehoods may have already embedded themselves in the collective consciousness. Meanwhile, fresh scandals or outlandish claims are continuously raining down on users, mixing fact with fiction.4

In this view, the serpent of social media has driven us out of an Eden of rationality and moderation. In response, one might ask: in human history, what public medium has not mixed fact with fiction, has not created new opportunities to spread falsehoods, or has not created new challenges for the verification of fact? Similar accusations were leveled against the printing press, the daily newspaper, radio, and television; the claim that social media are degrading public discourse exaggerates both the uniqueness and the scope of the threat.

Addiction and Extremism

A variant on this theme links the ad-driven business model of social media platforms to an inherently pathological distortion of the information environment: as one pundit wrote, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”5 A facile blend of pop psychology and pop economics equates social media engagement to a dopamine shot for the user and increasing ad revenue for the platform. The way to prolong and promote such engagement, we are told, is to steer the user to increasingly extreme content. Any foray into the land of YouTube videos is a one-way ticket to beheadings, Alex Jones, flat-earthism, school-shooting denial, Pepe the Frog, and radical vegans. No more kittens, dog tricks, or baby pictures: for some unspecified reason, those nice things are no longer what the platform delivers.

In the quote below, an academic folds all the classical themes of media moral panics — addiction, threats to public health, and a lack of confidence in the agency of common people — into a single indictment of YouTube’s algorithmic recommendations:

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation. In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal.6

Another social media critic echoed similar claims:

Every pixel on every screen of every Internet app has been tuned to influence users’ behavior. Not every user can be influenced all the time, but nearly all users can be influenced some of the time. In the most extreme cases, users develop behavioral addictions that can lower their quality of life and that of family members, co-workers and close friends.7 

If one investigates the “science” behind these claims, however, one finds little to differentiate social media addiction from earlier panics about internet addiction, television addiction, video game addiction, and the like. The evidence for the algorithmic slide toward media fat, salt, and sugar traces back to one man, Jonathan Albright of Columbia University’s Tow Center, and it is very difficult to find any published, peer-reviewed academic research by Albright on the subject. All one can find is a blog post on Medium describing “the network of YouTube videos users are exposed to after searching for ‘crisis actor’ following the Parkland event.”8 In other words, the blog post reports the results of one search on one selected search phrase; there is no description of a methodology, nor is there any systematic conceptualization or argumentation about the causal linkage between YouTube’s business model and the elevation of extreme and conspiratorial content. Yet Albright’s claims echoed through the New York Times and dozens of other online media outlets.

The psychological claims also seem to suffer from a moral panic bias. According to Courtney Seiter, a psychologist cited by some of the critics, the oxytocin and dopamine levels generated by social media use produce a positive “hormonal spike equivalent to [what] some people [get] on their wedding day.” She goes on to say that “all the goodwill that comes with oxytocin — lowered stress levels, feelings of love, trust, empathy, generosity — comes with social media, too … between dopamine and oxytocin, social networking not only comes with a lot of great feelings, it’s also really hard to stop wanting more of it.”9 The methodological rigor and experimental evidence behind these claims seem thin, but even so, wasn’t social media supposed to be a tinderbox for hate speech? Somehow, citations of Seiter in attacks on social media have left the trust, empathy, and generosity out of the picture.

The panic about elevating conspiratorial and marginalized content is especially fascinating. We are told, in terms reminiscent of the censorship rationalizations of authoritarian governments, that social media empower the fringes and thus threaten social stability. Yet for decades, mass media were accused of appealing to mainstream taste and marginalizing anything outside of it. Indeed, in the 1970s, progressives tried to force cable operators to include marginalized voices in their channel lineups through public access channels. Nowadays, apparently, the media system is dangerous because it does precisely the opposite.

But the overstatement of this claim should be evident. Major advertisers come down hard on the social platforms very quickly when their pitches are associated with crazies, haters, and blowhards, leading to algorithmic adjustments that suppress marginal voices. Users’ ability to “report” offensive content is another important form of feedback. But this has proven to cut both ways: lots of interesting but racy or challenging content gets suppressed. Some governments have learned how to game organized content moderation to yank messages exposing their evil deeds. (See the discussion of Facebook and Myanmar in the next section.) In the ultramoderated world that many of the social media critics seem to be advocating, important minority-viewpoint content is as likely to be targeted as terrorist propaganda and personal harassment.

Murder, hate speech, and ethnic cleansing. Another key exhibit in the case against social media pins the responsibility for ethnic cleansing in Myanmar, and for similar incitement tragedies in the developing world, on Facebook. In this case, as in most of the other concerns, there is substance to the claim, but its use and framing in the public discourse seem both biased and exaggerated. In Myanmar, the Facebook platform seems to have been systematically utilized as part of a state-sponsored campaign to target the Rohingya Muslim minority.10 The government and its allies incited hatred against the Rohingya while censoring activists and journalists who documented state violence, by reporting their work as offensive content or as violating community standards. At the same time, the government-sponsored misinformation and propaganda against the Rohingya managed to avoid the scrutiny applied to the expression of human rights activists. Social media critics also charged that the Facebook News Feed’s tendency to promote already popular content allowed posts inciting violence against the minority to go viral. As a result, Facebook is blamed for the tragedies in Myanmar. I have encountered people in the legal profession who would like to bring a human rights lawsuit against Facebook.11 If any criticism can be leveled at Facebook’s handling of genocidal propaganda in Myanmar, it is that its moderation process is too deferential to governments. That, however, militates against greater state regulation, not for it.

But these claims show just how displaced the moral panic is. Why is so much attention focused on Facebook and not on the crimes of a state actor? Yes, Myanmar military officers used Facebook (and other media) as part of an anti-Rohingya propaganda campaign. But if the Burmese generals had used telephones or text messages to spread their poison, would critics blame those service providers or technologies? How about roads, which were undoubtedly used by the military to oppress the Rohingya? In fact, violent conflict between Rohingya Muslims and Myanmar’s majority population goes back to 1948, when the country achieved independence from the British and the new government denied citizenship to the Rohingya. A nationalist military coup in 1962 targeted them as a threat to the new government’s concept of national identity; the army closed Rohingya social and political organizations, expropriated Rohingya businesses, and detained dissenters. It went on to regularly kill, torture, and rape Rohingya people.

Facebook disabled the accounts of the military propagandists once it understood the consequences of their misuse, although this happened much more slowly than critics would have liked. What’s remarkable about the discussion of Facebook, however, is the way attention and responsibility for the oppression have been diverted away from a military dictatorship engaged in a state-sponsored campaign of ethnic cleansing, propaganda, and terror and toward a private foreign social media platform. In some cases, the discussion seems to imply that the absence of Facebook from Myanmar would improve, or even resolve, a conflict that has been going on for 70 years. It is worth remembering that Facebook’s status as an external platform not under the control of the local government was the only thing that made it possible to intervene at all. Interestingly, the New York Times article that broke this story notes that pro-democracy officials in Myanmar say Facebook was essential to the democratic transition that brought them into office in 2015.12 This claim is as important (and as unverified and possibly untestable) as the claim that Facebook is responsible for ethnic cleansing. But it hasn’t gotten any play lately.

Reviving the Russian menace. Russian-sponsored social media activity during the 2016 election provides yet another example of the moral panic around social media and the avalanche of bitter exaggeration that goes with it. Indeed, the 2016 election marks the undisputed turning point in public attitudes toward social media. For many Americans, the election of Donald Trump came as a shocking and unpleasant surprise. In searching for an explanation of what initially seemed inexplicable, however, commentators massively inflated the nexus between the election results, Russian influence operations, and social media. It has become too convenient to overlook Trump’s complete capture of the Republican Party and his ability to capitalize on nationalistic and hateful themes that conservative Republicans had been cultivating for decades. The focus on social media continues to divert our attention from the well-understood negatives of Hillary Clinton as well as the documented impact of James Comey’s decision to reopen the FBI investigation of Clinton’s emails at a critical period in the presidential campaign. It overlooks, too, the strength of the Bernie Sanders challenge and the way the Clinton-controlled Democratic National Committee alienated his supporters. It also tends to downplay the linkages between Trump’s campaign staff, advisers, and Russia that had nothing to do with social media influence.

How much more comforting it was to focus on a foreign power and its use of social media than to face up to the realities of a politically polarized America and the way politicians and their crews peddle influence to a variety of foreign states and interests.13 As this displacement of blame developed, references to Russian information operations uniformly became references to Russian interference in the elections.14 Interference is a strong word — it makes it seem as if leaks of real emails and a disinformation campaign of Twitter bots and Facebook accounts were the equivalent of stuffing ballot boxes, erasing votes, hacking election machines, or forcibly blocking people from the polls. As references to foreign election interference became deeply embedded in the public discourse, the threat could be further inflated to one of national security. And so suddenly, the regulation of political speech got on the agenda of Congress, and millions of liberals and progressives became born-again Cold Warriors, all too willing to embrace nationalistic controls on information flows.

In April 2016, hackers employed by the Russian government compromised several servers belonging to the Democratic National Committee and exfiltrated a trove of internal communications, which were then published under the “Guccifer 2.0” alias and through WikiLeaks.15 The emails leaked by the Russians were not made up by the Russians; they were real. What if they had been leaked by a 21st-century Daniel Ellsberg instead of the Russians? Would that also be considered election interference? Disclosures of compromising information (e.g., Trump’s Access Hollywood tape) have a long history in American politics. Is that election interference? How much of the cut-and-thrust of an open society’s media system, and how many whistleblowers, are we willing to muzzle in this moral panic?

The death of democracy. Some critics go so far as to claim that democracy itself is threatened by the existence of open social media platforms. “[Facebook] has swallowed up the free press, become an unstoppable private spying operation and undermined democracy. Is it too late to stop it?” asks the subtitle of one typical article.16 This critique is as common as it is inchoate. In its worst and most simple-minded form, the mere ability of foreign governments to put messages on social media platforms is taken as proof that the entire country is being controlled by them. These messages are attributed enormous power, as if they are the only ones anyone sees; as if foreign governments don’t routinely buy newspaper ads, hire Washington lobbyists, or fund nonprofits and university programs. Worse still, those of this mindset equate messages with weapons in ceaseless “information warfare.” It is claimed that social media are being, or have been, “weaponized” — a transitive verb popularized after being applied to the 9/11 attackers’ use of civilian aircraft to murder thousands of people.17 Users of this term show not the slightest embarrassment at the overstatement implicit in the comparison.

Cybersecurity writer Thomas Rid made the astounding assertion that the most “open and liberal social media platform” (Twitter) is “a threat to open and liberal democracy” precisely because it is open and liberal, thus implying that free expression is a national security threat.18 In a Time Magazine cover story, a former Facebook executive complained that Facebook has “aggravated the flaws in our democracy while leaving citizens ever less capable of thinking for themselves.”19 The nature of this threat is never scientifically documented in terms of its actual effect on voting patterns or political institutions. The only evidence offered is simple counts of Russian trolls and bots and their impressions — numbers that look unimpressive compared to the spread of a single Donald Trump tweet. What we don’t often hear is that social media are the most important source of news for only 14 percent of the population. Research by two economists concluded that “… social media have become an important but not dominant source of political news and information. Television remains more important by a large margin.” They also conclude that there is no statistically significant correlation between social media use and the tendency to draw ideologically aligned conclusions from exposure to news.20

The most disturbing element of the “threat to democracy” argument is the way it militarizes public discourse. The view of social media as information warfare seems to go hand-in-hand with the contradictory idea that imposing more regulation by the nation-state will “disarm” information and parry this threat to democracy. In advancing what they think of as sophisticated claims that social media are being weaponized, the joke is on our putative cybersecurity experts: it is Russian and Chinese doctrine that the free flow of information across borders is a subversive force that challenges their national sovereignty. This doctrine, articulated in a code of conduct by the Shanghai Cooperation Organization, was designed to rationalize national blocking and filtering of internet content.21 By equating the influence that occurs via exchanges of ideas, information, and propaganda with war and violence, these pundits pose a more salient danger to democracy and free speech than any social media platform.

Any one of these accusations — the destruction of public discourse, responsibility for ethnic cleansing and hate speech, abetting a Russian national security threat, and the destruction of democracy — would be serious enough. Their combination in a regularly repeated catechism constitutes a moral panic. Moral panics should inspire caution because they produce policy reactions that overshoot the mark. A fearful public can be stampeded into legal or regulatory measures that serve a hidden agenda. Targeted actors can be scapegoated and their rights and interests discounted. Freedom-enhancing policies and proportionate responses to problems never emerge from moral panics.

Media Panics in the Past

One antidote to moral panic is historical perspective. Media studies professor Kirsten Drotner wrote, “[E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms … In some cases, debate of a new medium brings about — indeed changes into — heated, emotional reactions … what may be defined as a media panic.”22 We need to understand that we are in the midst of one of these renegotiations of the norms of public discourse and that the process has tipped over into media panic — one that demonizes social media generically.

We can all agree that literacy is a good thing. In the 17th and 18th centuries, however, some people considered literacy’s spread subversive or corrupting. The expansion of literacy from a tiny elite to the general population scared a lot of conservatives. It meant not only that more people could read the Bible, but also that they could read radical liberal tracts such as Thomas Paine’s Rights of Man. Those who feared wider literacy believed that it generated conflict and disruption. In fact, it already had. The disintermediation of authority over the interpretation of the written word, by the printing press and by wider literacy, created centrifugal forces. Protestants had split with Catholics, and later, different Protestant sects formed around different interpretations of scripture. In the same period, the upper classes and religious authorities also complained about sensationalistic broadsheets and printed ballads that appealed to the “baser instincts” of the public. Commercial media that responded to what the people wanted were not perceived kindly by those who thought they knew best. Yet are these observations an argument for keeping people illiterate? If not, then what, exactly, do these concerns militate for? A controlled, censored press? A press licensed in “the public interest”? Who in those days would have been made the arbiter of the public interest? The Pope? Absolutist kings?

Radio broadcasting was an important revolution in mass media technology. It seems to have escaped the intense, concentrated panic we are seeing around contemporary social media, but in the United States, where broadcasting had relatively free and commercial origins, those in power felt threatened by its potential to evolve into an independent medium. Thomas Hazlett has documented the way the 1927 Federal Radio Act and the regulatory commission it created (later to become the Federal Communications Commission) nationalized the airwaves in order to keep the new medium licensed and under the thumb of Congress.23 Numerous scholarly accounts have shown how the public-interest licensing regime erected after the federal takeover of the airwaves led to a systematic exclusion of diverse voices, from socialists to African Americans to labor unions.24

There is another relevant parallel between radio and social media. Totalitarian dictatorships, particularly Nazi Germany, employed radio broadcasting extensively in the 1930s. Those uses, some of which sparked the birth of modern communications effects research, were much scarier than the uses of social media by today’s dictatorships and illiberal democracies. But oddly, our current panic tends to promote and support precisely the types of regulation and control favored by those very same modern dictatorships and illiberal democracies: centralized content moderation and blocking by the state and holding social media platforms responsible for the postings of their users.

Comic books generated a media panic in the 1940s and ’50s.25 A critic of American commercial culture, the psychiatrist Fredric Wertham, believed that comic books encouraged juvenile delinquency and subverted the morality of children for the sake of profit. The presence of weirdness, violence, horror, and sexually tinged images led to charges that the comics were dangerous, addictive, and catered to baser instincts. A comic-book scare ensued, complete with a flood of newspaper stories, congressional hearings, and a transformation of the comic book industry. The comic-book scare seems to have pioneered the three themes that characterize so much public discourse around new media in the 20th century: anti-commercialism, protecting children, and addiction. All are echoed in the current fight over social media. The same themes sounded in the policy battles over television. Television’s status as a cause of violence was debated and researched endlessly. Its pollution of public discourse, the way it “cultivated” inaccurate and harmful stereotypes, and its addictive qualities were constant sources of discussion.26 Again, the similarity to current debates about social media is apparent.

In examining historical cases, it becomes apparent that the retailers and instigators of media panics generally pose the biggest threat to free expression and democracy. For at their root, attacks on new media, past and present, are expressions of fear: fear of empowering diverse and dissonant voices, fear among elites of losing hegemony over public discourse, and a lack of confidence in the ability of ordinary people to control their “baser instincts” or make sense of competing claims. The more sophisticated variants of these critiques are rationalizations of paternalism and authoritarianism. In the social media panic, we have both conservative and liberal elites recoiling from the prospect of a public sphere over which they have lost control, and both are preparing the way for regulatory mechanisms that can tame diversity, homogenize output, and maintain their established place in society.

What’s Broken?

A recent exchange on Twitter exposed the policy vacuity of those leading the social media moral panic. Kara Swisher, a well-known tech journalist with more than a million followers, tweeted to Jack Dorsey, the CEO of Twitter:

Overall here is my mood and I think a lot of people when it comes to fixing what is broke about social media and tech: Why aren’t you moving faster? Why aren’t you moving faster? Why aren’t you moving faster?27

Swisher’s impatient demand for fast action seemed to assume that the solutions to social media’s ills were obvious. I tweeted in reply, asking what “fix” she wanted to implement so quickly. There was no answer.

Here is the diagnosis I would offer. What is “broken” about social media is exactly the same thing that makes it useful, attractive, and commercially successful: it is incredibly effective at facilitating discoveries and exchanges of information among interested parties at unprecedented scale. As a direct result of that, there are more informational interactions than ever before and more mutual exchanges between people. This human activity, in all its glory, gore, and squalor, generates storable, searchable records, and its users leave attributable tracks everywhere. As noted before, the emerging new world of social media is marked by hypertransparency.

From the standpoint of free expression and free markets there is nothing inherently broken about this; on the contrary, most of the critics are unhappy precisely because the model is working: it is unleashing all kinds of expression and exchanges, and making tons of money at it to boot. But two distinct sociopolitical pathologies are generated by this. The first is that, by exposing all kinds of deplorable uses and users, it tends to funnel outrage at these manifestations of social deviance toward the platform providers. A man discovers pedophiles commenting on YouTube videos of children and is sputtering with rage at … YouTube.28 The second pathology is the idea that the objectionable behaviors can be engineered out of existence or that society as a whole can be engineered into a state of virtue by encouraging intermediaries to adopt stricter surveillance and regulation. Instead of trying to stop or control the objectionable behavior, we strive to control the communications intermediary that was used by the bad actor. Instead of eliminating the crime, we propose to deputize the intermediary to recognize symbols of the crime and erase them from view. It’s as though we assume that life is a screen, and if we remove unwanted things from our screens by controlling internet intermediaries, then we have solved life’s problems. (And even as we do this, we hypocritically complain about China and its alleged development of an all-embracing social credit system based on online interactions.)

The reaction against social media is thus based on a false premise and a false promise. The false premise is that the creators of tools that enable public interaction at scale are primarily responsible for the existence of the behaviors and messages so revealed. The false promise is that by pushing the platform providers to block content, eliminate accounts, or otherwise attack manifestations of social problems on their platforms, we are solving or reducing those problems. Combining these misapprehensions, we’ve tried to curb “new” problems by hiding them from public view.

The major platforms have contributed to this pathology by taking on ever-more-extensive content-moderation duties. Because of the intense political pressure they are under, the dominant platforms are rapidly accepting the idea that they have overarching social responsibilities to shape user morals and public discourse in politically acceptable ways. Inevitably, given the scale of social media interactions, this means increasingly automated or algorithmic forms of regulation, with all of their rigidities, stupidities, and errors. But it also means massive investments in labor-intensive manual forms of moderation.29

The policy debate on this topic is complicated by the fact that internet intermediaries cannot really avoid taking on some optional content regulation responsibilities beyond complying with various laws. Their status as multisided markets that match providers and seekers of information requires it.30 Recommendations based on machine learning guide users through the vast, otherwise intractable amount of material available. These filters vastly improve the value of a platform to a user, but they also indirectly shape what people see, read, and hear. They can also, as part of their attempts to attract users and enhance the platforms’ value to advertisers, discourage or suppress messages and forms of behavior that make their platforms unpleasant or harmful places. This form of content moderation is outside the scope of the First Amendment’s legal protections because it is executed by a private actor and falls within the scope of editorial discretion.

What’s the Fix?

Section 230 of the Communications Decency Act squared this circle by immunizing information service providers who did nothing to restrict or censor the communications of the parties using their platforms (the classical “neutral conduit” or common-carrier concept), while also immunizing information service providers who assumed some editorial responsibilities (e.g., to restrict pornography and other forms of undesirable content). Intermediaries who did nothing were (supposed to be) immunized in ways that promoted freedom of expression and diversity online; intermediaries who were more active in managing user-generated content were immunized to enhance their ability to delete or otherwise monitor “bad” content without being classified as publishers and thus assuming responsibility for the content they did not restrict.31

It is clear that this legal balancing act, which worked so well to make the modern social media platform successful, is breaking down. Section 230 is a victim of its own success. Platforms have become big and successful in part because of their Section 230 freedoms, but as a result they are subject to political and normative pressures that confer upon them de facto responsibility for what their users read, see, and do. The threat of government intervention is either lurking in the background or being realized in certain jurisdictions. Fueled by hypertransparency, political and normative pressures are making the pure, neutral, nondiscriminatory platform a thing of the past.

The most common proposals for fixing social media platforms all seem to ask the platforms to engage in more content moderation and to ferret out unacceptable forms of expression or behavior. The political demand for more-aggressive content moderation comes primarily from a wide variety of groups seeking to suppress specific kinds of content that they find objectionable. Those who want less control or more toleration suffer from the concentrated benefits/diffuse costs problem familiar from the economic analysis of special interest groups: toleration benefits everyone a little, and its presence is barely noticeable until it is lost; suppression, on the other hand, offers powerful and immediate satisfaction to a few highly motivated actors.32

At best, reformers propose to rationalize content moderation in ways designed to make its standards clearer, their application more consistent, and an appeals process possible.33 Yet this is unlikely to work unless platforms develop the backbone to assert their right to set the criteria, stick to them, and stop constantly adjusting them in response to the vagaries of daily political pressure. At worst, advocates of more content moderation are motivated by a belief that greater content control will reflect their own personal values and priorities. But since calls for tougher or more extensive content moderation come from all ideological and cultural directions, this expectation is unrealistic. It will only lead to a distributed form of the heckler’s veto and a complete absence of predictable, relatively objective standards. It is not uncommon for outrage at social media to lead in contradictory directions. A reporter for The Guardian, for example, is outraged that Facebook has an ad-targeting category for “vaccine controversies” and flogs the company for allowing anti-vaccination advocates to form closed groups that reinforce members’ resistance to mainstream medical care.34 However, there is no way for Facebook to intervene without profiling its users as members of a specific political movement deemed to be wrong, and then suppressing their communications and their ability to associate based on that data. So, at the same time Facebook is widely attacked for privacy violations, it is also being asked to leverage its private user data to flag political and social beliefs deemed aberrant and to suppress users’ ability to associate, connect with advertisers, or communicate among themselves. In this combination of surveillance and suppression, what could possibly go wrong?

What stance should advocates of both free expression and free markets take with respect to social media?

First, there needs to be a clearer articulation of the tremendous value of platforms based on their ability to match seekers and providers of information. There also needs to be explicit advocacy for greater tolerance of the jarring diversity revealed by these processes. True liberals need to make it clear that social media platforms cannot be expected to bear the main responsibility for sheltering us from ideas, people, messages, and cultures that we consider wrong or that offend us. Most of the responsibility for what we see and what we avoid should lie with us. If we are outraged by seeing things we don’t like in online communities composed of billions of people, we need to stop misdirecting that outrage at the platforms that happen to expose us to them. Likewise, if the exposed behavior is illegal, we need to focus on identifying the perpetrators and holding them accountable. As a corollary of this attitudinal change, we also need to show that the hypertransparency fostered by social media can have great social value. As a simple example, research has shown that the much-maligned rise of platforms matching female sex workers with clients is statistically correlated with a decrease in violence against women — precisely because such platforms took sex work off the street and made transactions more visible and controllable.35

Second, free-expression supporters need to actively challenge those who want content moderation to go further. We need to expose the fact that they are using social media as a means of reforming and reshaping society, wielding it like a hammer against norms and values they want eradicated from the world. These viewpoints are leading us down an authoritarian blind alley. They may very well succeed in suppressing and crippling the freedom of digital media, but they will not, and cannot, succeed in improving society. Instead, they will make social media platforms battlegrounds for a perpetual, intensifying conflict over who gets to silence whom. This is already abundantly clear from the cries of discrimination and bias, coming from both the left and the right, as the platforms ratchet up content moderation that is often experienced as arbitrary.

Finally, we need to mount a renewed and reinvigorated defense of Section 230. The case for Section 230 is simple: no alternative promises to be intrinsically better than what we have now, and most alternatives are likely to be worse. The exaggerations generated by the moral panic have obscured the simple fact that moderating content on a global platform with billions of users is an extraordinarily difficult and demanding task. Users, not platforms, are the source of the messages, videos, and images that people find objectionable, so calls for regulation ignore the fact that regulations would govern not a single supplier but millions, and maybe billions, of users. The task of flagging user-generated content, reviewing it, and deciding what to do about it is difficult and expensive, and it is best left to the platforms.

However, regulation seems to be coming. Facebook CEO Mark Zuckerberg has published a blog post calling for regulating the internet, and the UK government has released a white paper, “Online Harms,” that proposes the imposition of systematic liability for user-generated content on all internet intermediaries (including hosting companies and internet service providers).36

At best, a system of content regulation influenced by government is going to look very much like what is happening now. Government-mandated standards for content moderation would inevitably put most of the responsibility for censorship on the platforms themselves. Even in China, with its army of censors, the operationalization of censorship relies heavily on the platform operators. In the tsunami of content unleashed by social media, prior restraint by the state is not really an option. Germany has already gone down this road with the 2017 Netzwerkdurchsetzungsgesetz, or Network Enforcement Act (popularly known as NetzDG or the Facebook Act), a law aimed at combating agitation, hate speech, and fake news in social networks.

The NetzDG law immediately resulted in the suppression of various forms of politically controversial online speech. Joachim Steinhöfel, a German lawyer concerned by Facebook’s essentially jurisprudential role under NetzDG, created a “wall of shame” documenting legal content suppressed under the law.37 Ironically, German right-wing nationalists who suffered takedowns under the new law turned it to their advantage by using it to suppress critical or demeaning comments about themselves. “Germany’s attempt to regulate speech online has seemingly amplified the voices it was trying to diminish,” claims an article in The Atlantic.38 As a result of one right-wing politician’s petition, Facebook must now ensure that individuals in Germany cannot use a VPN to access illegal content. And yet a report by an anti-hate-speech group that supports the law argues that it has been ineffective: “There have been no fines imposed on companies and little change in overall takedown rates.”39

Abandoning intermediary immunities would make the platforms even more conservative and more prone to disable accounts or take down content than they are now. In terms of costs and legal risks, it will make sense for them to err on the safe side. When intermediaries are given legal responsibility, conflicts about arbitrariness and false positives don’t go away; they intensify. In authoritarian countries, platforms will merely be indirect implementers of national censorship standards and laws.

On the other hand, U.S. politicians face a unique and interesting dilemma. If they think they can capitalize on social media’s travails with calls for regulation, they must understand that governmental involvement in content regulation would have to conform to the First Amendment. This would mean that all kinds of content that many users don’t want to see, ranging from hate speech to various levels of nudity, could no longer be restricted, because such content is not strictly illegal. Any government intervention that took down postings or deleted accounts could be litigated under a First Amendment standard. Ironically, then, a governmental takeover of content regulation responsibilities in the United States would have to be far more liberal than the status quo. Avoiding this outcome was precisely why Section 230 was passed in the first place.

From a pure free-expression standpoint, a First Amendment approach would be a good thing. But from a free-association and free-market standpoint, it would not. Such a policy would force all social media users to be exposed to things they do not want to be exposed to. It would undermine the economic value of platforms by crippling their ability to manage their matching algorithms, shape their environment, and optimize the tradeoffs of a multisided market. Given the current hue and cry about all the bad things people are seeing and doing on social media, a legally driven, permissive First Amendment standard does not seem like it would make anyone happy.

Advocates of expressive freedom, therefore, need to reassert the importance of Section 230. Platforms, not the state, should be responsible for finding the optimal balance between content moderation, freedom of expression, and the economic value of platforms. The alternative of greater government regulation would absolve the platforms of market responsibility for their decisions. It would eliminate competition among platforms for appropriate moderation standards and practices and would probably lead them to exclude and suppress even more legal speech than they do now.

Conclusion

Content regulation is only the most prominent of the issues faced by social media platforms today; they are also implicated in privacy and competition-policy controversies. But social media content regulation has been the exclusive focus of this analysis. Hypertransparency and the demand for content control it creates are the key drivers of the new media moral panic. The panic is feeding upon itself, creating conditions for policy reactions that overlook or openly challenge values regarding free expression and free enterprise. While there is a lot to dislike about Facebook and other social media platforms, it’s time we realized that a great deal of that negative reaction stems from an information society contemplating manifestations of itself. It is not an exaggeration to say that we are blaming the mirror for what we see in it.

Section 230 is still surprisingly relevant to this dilemma. As a policy, Section 230 was not a form of infant industry protection that we can dispense with now, nor was it a product of a utopian inebriation with the potential of the internet. It was a very clever way of distributing responsibility for content governance in social media. If we stick with this arrangement, learn more tolerance, and take more responsibility for what we see and do on social media, we can respond to the problems while retaining the benefits.

Notes

1 Milton L. Mueller, “Hyper-transparency and Social Control: Social Media as Magnets for Regulation,” Telecommunications Policy 39, no. 9 (2015): 804-10.

2 Erich Goode and Nachman Ben-Yehuda, “Grounding and Defending the Sociology of Moral Panic,” chap. 2 in Moral Panic and the Politics of Anxiety, ed. Sean Patrick Hier (Abingdon: Routledge, 2011).

3 Stanley Cohen, Folk Devils and Moral Panics (Abingdon: Routledge, 2011).

4 Ronald J. Deibert, “The Road to Digital Unfreedom: Three Painful Truths about Social Media,” Journal of Democracy 30, no. 1 (2019): 25-39.

5 Zeynep Tufekci, “YouTube, the Great Radicalizer,” New York Times, March 10, 2018.

6 Tufekci, “YouTube, the Great Radicalizer.”

7 Roger McNamee, “I Mentored Mark Zuckerberg. I Loved Facebook. But I Can’t Stay Silent about What’s Happening,” Time Magazine, January 17, 2019.

8 Jonathan Albright, “Untrue-Tube: Monetizing Misery and Disinformation,” Medium, February 25, 2018.

9 Courtney Seiter, “The Psychology of Social Media: Why We Like, Comment, and Share Online,” Buffer, August 20, 2017.

10 Paul Mozur, “A Genocide Incited on Facebook, With Posts from Myanmar’s Military,” New York Times, October 15, 2018.

11 Ingrid Burrington, “Could Facebook Be Tried for Human-Rights Abuses?,” The Atlantic, December 20, 2017.

12 Burrington, “Could Facebook Be Tried for Human-Rights Abuses?”

13 For a discussion of Michael Flynn’s lobbying campaign for the Turkish government and Paul Manafort’s business in Ukraine and Russia, see Rebecca Kheel, “Turkey and Michael Flynn: Five Things to Know,” The Hill, December 17, 2018; and Franklin Foer, “Paul Manafort, American Hustler,” The Atlantic, March 2018.

14 See, for example, “Minority Views to the Majority-produced ‘Report on Russian Active Measures, March 22, 2018’” of the Democratic representatives from the United States House Permanent Select Committee on Intelligence (USHPSCI), March 26, 2018.

15 Indictment at 11, U.S. v. Viktor Borisovich Netyksho et al., Case 1:18-cr-00032-DLF (D.D.C. filed Feb. 16, 2018).

16 Matt Taibbi, “Can We Be Saved from Facebook?,” Rolling Stone, April 3, 2018.

17 Peter W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (New York: Houghton Mifflin Harcourt, 2018).

18 Thomas Rid, “Why Twitter Is the Best Social Media Platform for Disinformation,” Motherboard, November 1, 2017.

19 McNamee, “I Mentored Mark Zuckerberg. I Loved Facebook. But I Can’t Stay Silent about What’s Happening.”

20 Hunt Allcott and Matthew Gentzkow, “Social Media and Fake News in the 2016 Election,” Journal of Economic Perspectives 31, no. 2 (2017): 211-36.

21 Sarah McKune, “An Analysis of the International Code of Conduct for Information Security,” CitizenLab, September 28, 2015.

22 Kirsten Drotner, “Dangerous Media? Panic Discourses and Dilemmas of Modernity,” Paedagogica Historica 35, no. 3 (1999): 593-619.

23 Thomas W. Hazlett, “The Rationality of US Regulation of the Broadcast Spectrum,” Journal of Law and Economics 33, no. 1 (1990): 133-75.

24 Robert McChesney, Telecommunications, Mass Media and Democracy: The Battle for Control of U.S. Broadcasting, 1928-1935 (New York: Oxford, 1995).

25 Fredric Wertham, Seduction of the Innocent (New York: Rinehart, 1954); and David Hajdu, The Ten-cent Plague: The Great Comic-book Scare and How It Changed America (New York: Picador, 2009), https://us.macmillan.com/books/9780312428235.

26 “Like drug dealers on the corner, [TV broadcasters] control the life of the neighborhood, the home and, increasingly, the lives of children in their custody,” claimed a former FCC commissioner. Newton N. Minow and Craig L. LaMay, Abandoned in the Wasteland (New York: Hill and Wang, 1996), http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/abandonedinthewasteland.htm.

27 Kara Swisher (@karaswisher), “Overall here is my mood and I think a lot of people when it comes to fixing what is broke about social media and tech: Why aren’t you moving faster? Why aren’t you moving faster? Why aren’t you moving faster?” Twitter post, February 12, 2019, 2:03 p.m., https://twitter.com/karaswisher/status/1095443416148787202.

28 Matt Watson, “Youtube Is Facilitating the Sexual Exploitation of Children, and It’s Being Monetized,” YouTube video, 20:47, “MattsWhatItIs,” February 27, 2019, https://www.youtube.com/watch?v=O13G5A5w5P0.

29 Casey Newton, “The Trauma Floor: The Secret Lives of Facebook Moderators in America,” The Verge, February 25, 2019.

30 Geoff Parker, Marshall van Alstyne, and Sangeet Choudhary, Platform Revolution (New York: W. W. Norton, 2016).

31 The Court in Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), said Sec. 230 was passed to “remove the disincentives to self-regulation created by the Stratton Oakmont decision.” In Stratton Oakmont, Inc. v. Prodigy Services Co., (N.Y. Sup. Ct. 1995), a bulletin-board provider was held responsible for defamatory remarks by one of its customers because it made efforts to edit some of the posted content.

32 Robert D. Tollison, “Rent Seeking: A Survey,” Kyklos 35, no. 4 (1982): 575-602.

33 See, for example, the “Santa Clara Principles on Transparency and Accountability in Content Moderation,” May 8, 2018, https://santaclaraprinciples.org/.

34 Julia Carrie Wong, “Revealed: Facebook Enables Ads to Target Users Interested in ‘Vaccine Controversies’,” The Guardian (London), February 15, 2019.

35 See Scott Cunningham, Gregory DeAngelo, and John Tripp, “Craigslist’s Effect on Violence against Women,” http://scunning.com/craigslist110.pdf (2017). See also Emily Witt, “After the Closure of Backpage, Increasingly Vulnerable Sex Workers Are Demanding Their Rights,” New Yorker, June 8, 2018.

36 Mark Zuckerberg, “Four Ideas to Regulate the Internet,” March 30, 2019; and UK Home Office, Department for Digital, Culture, Media & Sport, Online Harms White Paper, The Rt Hon. Sajid Javid MP, The Rt Hon. Jeremy Wright MP, April 8, 2019.

37 Joachim Nikolaus Steinhöfel, “Blocks & Hate Speech—Insane Censorship & Arbitrariness from FB,” Facebook Block - Wall of Shame, https://facebook-sperre.steinhoefel.de/.

38 Linda Kinstler, “Germany’s Attempt to Fix Facebook Is Backfiring,” The Atlantic, May 18, 2018.

39 William Echikson and Olivia Knodt, “Germany’s NetzDG: A Key Test for Combatting Online Hate,” CEPS Research Report no. 2018/09, November 2018.

Milton Mueller is Professor at the Georgia Institute of Technology’s School of Public Policy and director of the Internet Governance Project.

Bailouts, Capital, or CoCos: Can Contingent Convertible Bonds Help Banks Cope with Financial Stress?

Robert A. Eisenbeis

Since the 2008 financial crisis, banking regulators’ capital enhancement efforts have focused on permitting systemically important financial institutions to issue alternative forms of debt and quasi-debt instruments as a means of meeting their Basel III primary capital (Tier 1) and secondary capital (Tier 2) requirements. Among these alternatives are so-called contingent convertible capital securities (CoCos). Financial institutions are able to issue CoCos to investors as bonds with the stipulation that they will convert into equity if the institution fails to meet a given capital ratio.

This policy analysis evaluates two types of CoCos—“write-down” and “going-concern” CoCos—on the basis of the different metrics and mechanisms each uses to absorb losses. It shows that so far, few (if any) of the CoCos that institutions have used to satisfy their countries’ capital requirements—many of which were issued prior to robust research on how to structure CoCos effectively—have met the standards necessary for them to achieve their intended purposes. Most of the CoCos issued to date have been write-down CoCos, which rely on backward-looking accounting measures to evaluate an institution’s creditworthiness and use risk-based capital standards as conversion triggers. Rarer are going-concern CoCos, whose market-based conversion triggers discourage owners and bank managers from taking on increased leverage and more risk.

This policy analysis draws lessons from recent European experiences with both write-down and going-concern CoCos and concludes that, given their deficiencies, neither includes the design elements necessary to help financial institutions meet Basel III Tier 1 or Tier 2 capital standards. As a result, U.S. regulators should continue to approach CoCos with skepticism and caution. One alternative to CoCos they might consider is a modified version of the regulatory “off-ramp” provision of the 2017 Financial CHOICE Act, which holds the potential to increase bank capital while providing significant regulatory relief.

Introduction

The 2008 financial crisis revealed fundamental flaws in the way regulators handled financial distress in large, complex financial institutions that often led to delays in containing and correcting those problems. In several cases, the ultimate result was that the government injected massive amounts of taxpayer monies into troubled institutions, forced them to merge with somewhat stronger institutions (also with the help of taxpayer support), or placed them into conservatorship.

A major factor behind regulators’ slow response to these institutions’ distress was that they took a flawed approach to measuring the capital adequacy of financial institutions. Rather than examine the current market value of an institution’s equity, many regulators relied on backward-looking book values of equity, which delayed their recognition of a firm’s true financial condition. For example, a 2009 study found that each of the five largest banks that either failed or was merged during the financial crisis had reported Tier 1 Basel regulatory capital ratios in excess of 12 percent—considerably more than the regulatory minimum of 8 percent—one quarter before failure.1

Some observers argue that such problems could be avoided by having the Basel requirements refer to market rather than book values of capital.2 In times of financial distress, however—in which the market value of equity can plunge rapidly—such a policy change could compel firms that were already suffering from market capital shortages to meet the new Basel requirements through asset fire sales. These sales could accelerate the decline of multiple firms’ asset values and increase the severity of their capital losses. Furthermore, rewriting the Basel capital requirements so that they refer only to market values would not address any of the process or procedural issues related to measuring hard-to-value and nontraded assets. Indeed, predicating the Basel requirements on market values would simply trade one set of measurement problems for another.

In the aftermath of the 2008 financial crisis, both U.S. and international regulators tried to address some of these regulatory issues and limit problems associated with systemic risk through a combination of increased regulation and legislation.3 In particular, they imposed higher minimum book-capital requirements and applied stiffer regulations to certain systemically important financial institutions (SIFIs). According to conventional measures, these efforts have substantially increased SIFIs’ capital ratios. However, many of the weaknesses of the previous regulatory and supervisory regime, including its flawed approach to measuring capital adequacy, have remained unaddressed. Consequently, many experts believe that the new regulatory regime, with its higher capital requirements, will not suffice to rule out future bailouts.4

As a further means for bolstering bank capital, more recent reform proposals would allow alternative forms of debt and quasi-debt instruments to count toward financial institutions’ primary capital (Tier 1) and secondary capital (Tier 2) requirements. Most prominent among these have been proposals that would encourage SIFIs to issue contingent convertible capital securities (CoCos) and use them to meet some part of their regulatory capital requirements.5 European regulators in particular have embraced CoCos and have incorporated certain types into their Basel III capital standards. In contrast, U.S. regulators have been reluctant to allow CoCos to qualify toward their regulatory capital standards.

This policy analysis examines how effective CoCos have been at resolving the challenges they were designed to address and whether an alternative policy measure would better fulfill the same ends. First, it describes the most common types of CoCos that financial institutions have issued since the 2008 crisis. Next, it evaluates the argument that CoCos can play an important role in promoting financial stability, especially by ensuring that SIFIs remain adequately capitalized during times of financial distress. After reviewing three of Europe’s most significant postcrisis experiences with CoCos, it finds that historically they have failed to perform as regulators had intended. As a result, this analysis concludes by proposing a simple alternative to CoCos—one capable of accomplishing their objectives while avoiding their shortcomings.

What Are CoCos?

Sold to investors as bonds, CoCos convert into equity if the issuing financial institution’s Tier 1 capital ratio (the ratio of its equity and reserve-based capital to its risk-weighted assets) drops below a certain threshold. They are therefore considered hybrid securities and are designed to provide an additional source of equity capital for banks and SIFIs to use in times of financial distress.6 As this policy analysis shows, CoCos can also enhance market discipline and address problems associated with the “too-big-to-fail” paradigm, in which regulators, believing that SIFIs cannot be allowed to collapse, feel compelled to inject massive amounts of taxpayer funds into struggling institutions.
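
To make the trigger mechanics concrete, here is a minimal sketch in Python. The function name, threshold, and balance-sheet figures are ours and purely hypothetical, not drawn from any regulatory text:

```python
# Hypothetical illustration of a CoCo capital-ratio trigger.
# All names and numbers are illustrative, not from any actual regulation.

def tier1_ratio(equity_capital: float, risk_weighted_assets: float) -> float:
    """Tier 1 ratio: equity and reserve-based capital over risk-weighted assets."""
    return equity_capital / risk_weighted_assets

TRIGGER = 0.07   # a "high" trigger of 7 percent, typical of going-concern CoCos

equity = 6.0     # billions, hypothetical
rwa = 100.0      # billions of risk-weighted assets, hypothetical

ratio = tier1_ratio(equity, rwa)
if ratio < TRIGGER:
    print(f"Tier 1 ratio {ratio:.1%} below {TRIGGER:.0%}: conversion triggered")
else:
    print(f"Tier 1 ratio {ratio:.1%} at or above {TRIGGER:.0%}: no conversion")
```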

There are three basic forms of CoCos. The first, commonly called “going-concern” or “bail-in” CoCos, converts existing debt into common equity when a specific conversion event, or “trigger,” occurs.7 Going-concern CoCos do not result in the injection of new funds into a troubled institution. Instead, they simply convert existing debt instruments on a firm’s balance sheet into common equity, thereby increasing its capital, facilitating deleveraging, and restoring capital adequacy.

The second form, “write-down” or “bail-out” CoCos, consists of debt instruments whose values are written down when a trigger is breached.8 Write-down CoCos simultaneously reduce a firm’s assets and its liabilities, thereby facilitating the firm’s reorganization at the point of nonviability. The resulting write-down can be permanent, temporary, partial, or total. Because write-down CoCos reward stockholders at the expense of the CoCo investors, they reverse the traditional rules of seniority, placing shareholders’ interests ahead of those of a particular set of debt holders.
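
The balance-sheet arithmetic of these two forms can be illustrated with a toy example. This is a simplified sketch under our own assumptions; all figures are hypothetical, and taxes, conversion prices, and partial write-downs are ignored:

```python
# Toy balance sheets illustrating the two main CoCo forms.
# All figures are hypothetical (billions).

def convert_going_concern(debt: float, equity: float, coco_face: float):
    """Going-concern CoCo: outstanding CoCo debt becomes common equity.
    No new cash enters the firm; leverage falls because debt shrinks."""
    return debt - coco_face, equity + coco_face

def absorb_loss_with_writedown(assets: float, debt: float, equity: float,
                               loss: float, coco_face: float):
    """Write-down CoCo: an asset loss is absorbed by writing off CoCo debt
    (up to its face value) rather than equity, so assets and liabilities
    shrink together while shareholders are spared, reversing the usual
    rules of seniority."""
    absorbed = min(loss, coco_face)
    return assets - loss, debt - absorbed, equity - (loss - absorbed)

debt, equity, coco = 92.0, 8.0, 3.0
print(convert_going_concern(debt, equity, coco))            # (89.0, 11.0)
print(absorb_loss_with_writedown(100.0, debt, equity, 2.0, coco))
# (98.0, 90.0, 8.0): the loss falls on CoCo holders, not shareholders
```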

The third form of CoCos, capital access bonds, resemble option securities in that they permit firms to issue new equity to their bondholders on prenegotiated terms when a triggering event occurs. Capital access bonds commit the investors who hold these bonds to injecting new funds into a troubled institution. Because such securities have not been widely issued, however, they will not be discussed here.

Problems That CoCos Attempt to Address

CoCos aim to resolve four systemic issues that the 2008 financial crisis revealed within the global financial system. First, the crisis showed that highly levered firms have an incentive to take on greater risk, which can distort the pricing of that risk. Second, it showed that a firm’s shareholders are often reluctant to issue new equity when that firm is experiencing financial distress. Although new equity capital helps protect a bank’s creditors, it harms existing shareholders (and helps debt holders) by diluting the value of the bank’s equity. Third, the crisis showed that although troubled institutions can reduce their leverage by selling assets, doing so can lead to fire-sale losses (in which institutions sell off assets at dramatically reduced prices, pushing their overall value down to dangerously low levels). Cumulative fire sales can also result in a “death spiral,” in which depreciating asset holdings lead to more and more fire sales. If several institutions experience distress at once, asset prices can plummet across the board, afflicting an entire sector of the financial system. This was certainly the case in 2008, when multiple death spirals led the asset-backed securities markets to experience particularly severe contagion effects.

Fourth, the crisis showed that regulatory delays in addressing financial distress tend to contribute to the perception that some institutions are “too big to fail”—that is, that regulators or political leaders must provide outsized protection to certain financial institutions rather than suffer the economic consequences of allowing them to fail.

Many believe that CoCos, if properly designed, could help solve all four of these problems. Indeed, the twofold attraction of CoCos is that they could help financial firms meet regulatory capital requirements and automatically absorb losses in times of financial distress—independent of government intervention. In addition, many European countries grant CoCos more favorable tax treatment than they do common equity. If CoCos were to receive a similar tax treatment in the United States, they would likely be more appealing to American issuers.9 CoCos can also be attractive to investors because they typically have higher interest rates than ordinary bank debt securities and because their returns correlate more strongly with returns on other debt securities than with returns on equity.10

Proponents also argue that properly designed CoCos would induce stockholders and managers to operate a company responsibly in hopes of avoiding a conversion altogether. Furthermore, in the case of going-concern CoCos, any conversion that does occur results in an automatic recapitalization, which helps the institution absorb any losses on its own. Not only does automatic recapitalization reduce the likelihood of taxpayers’ needing to sponsor a government-issued bailout, it also lowers any further costs that arise due to regulatory inaction or poor oversight. In short—and in theory—automatic recapitalization increases a firm’s access to its own sources of funding and liquidity, allowing it to continue operating independently.11

The validity of the above argument hinges on two central questions. The first is whether the threat of a bond-equity conversion would in fact incentivize management and shareholders to take the steps necessary to avoid triggering one. The second is whether CoCos would sufficiently restore an institution’s capital adequacy in the event of a conversion. In short, effective CoCos should help provide financial institutions with a sufficient cushion following a conversion, thereby assuring both market participants and regulators of those institutions’ ability to continue in business even after experiencing a significant loss of capital. The objectives of the conversion itself should be to minimize the probability of contagion and mitigate the risk of spillover effects across other institutions.

Satisfying the above objectives requires addressing three principal categories of CoCo design: the choice and structure of a conversion trigger, the capital ratio necessary to trigger a conversion, and the equity value that a CoCo bond should assume upon conversion. Each of these issues is considered in the following sections.

CoCo Design

Several factors are involved in structuring an optimal conversion trigger. The first relates to whether the bond-equity conversion trigger should be automatic or discretionary (and, if it is discretionary, whether such discretion should be exercised by management or by regulators). The second is whether the trigger’s threshold should refer to an accounting-based or a market-based measure of a firm’s financial condition. The third is whether the capital ratio necessary to trigger a conversion should be relatively high (when the institution remains solvent) or low (when the institution is close to failure).12 The final considerations relate to the value that CoCo bonds should take upon conversion and the amount of capital that an institution should receive post-conversion. Each of these features can critically affect the extent to which the CoCos achieve their intended effects.

Automatic vs. Discretionary Triggers

Many of the CoCos issued to date have had discretionary triggers, leaving the decision of whether or not to trigger a conversion to either financial regulators or the financial institution’s management. Both parties appreciate having the discretion to initiate a conversion because it gives them the flexibility to determine when, and under what circumstances, to recapitalize or close the institution.

However, granting both regulators and management the discretion to trigger a conversion carries unique risks.13 In the first place, it can introduce unnecessary uncertainty into the process. As history has demonstrated (most recently through the experiences of the Great Recession), regulatory discretion can lead to costly delays in addressing financial distress, especially in the absence of appropriate regulatory oversight.14 Additionally, relying on regulatory discretion undermines the information content of a firm’s asset prices, which are meant to inform such discretion.15 Finally, regulatory discretion can fail if regulators have insufficient information about an institution’s financial health, if they fail to supervise or examine an institution properly, or if they encounter political pressures to save an institution. There is also the risk that regulators will become unduly concerned about any potential contagion effects that might result from their “permitting” a large financial institution to fail.16

Similar issues are involved in allowing bank management the discretion to trigger a conversion.17 Management can face incentives to delay a conversion out of fear that conversion would come at the expense of their jobs or their institution’s investment capital. There is also the risk that management would delay conversion because of the possibility that it would dilute the equity of existing shareholders and board members. Management may even choose to avoid conversion entirely, gambling instead on the chance of a government-issued, taxpayer-funded bailout.18

Accounting-Based vs. Market-Based Triggers

Because of the weaknesses of discretionary triggers, CoCo proponents have typically favored automatic (nondiscretionary) triggers determined according to either accounting-based or market-based values of a firm’s equity. These proponents argue that because contingent capital would convert to common equity, it is the only security that is junior to common equity.19

Those who favor nondiscretionary triggers also tend to prefer ones that rely on market-based, rather than accounting-based, measures of capital, since the latter are lagging indicators of a firm’s financial condition.20 Accounting-based measures are also easier for a firm’s management to manipulate, and their retroactive nature can exacerbate regulatory delays in addressing any severe financial problems, especially ones likely to cause contagion effects.21 In contrast, market-based measures are more readily available, timely, and forward-looking than accounting-based measures. They are also harder to manipulate, and their integrity is less likely to be compromised, because they are not subject to regulatory or managerial discretion.22

There are, however, problems with relying on either accounting-based or market-based measures of equity. Market-based triggers can cause unnecessary conversions, whereas accounting-based triggers can fail to cause necessary ones.23 Interestingly, almost all of today’s regulator-approved CoCos have had accounting-based triggers, which mainly refer to risk-based capital ratios.

Despite the supposed benefits of market-based triggers, there are circumstances in which conflicts between equity holders and CoCo investors make it uncertain whether, or under what terms, a conversion will occur.24 Such conflicts are mainly due to the fact that equity holders, unlike investors, tend to prefer either a delayed conversion or no conversion at all. Moreover, equity holders have an incentive to manipulate the firm’s stock price upward to avoid conversion. In contrast, CoCo investors have an incentive to prefer earlier conversions and may have an incentive to manipulate a firm’s stock price downward. These conflicting interests can also complicate decisions related to the post-conversion value of a CoCo’s equity, making the final outcome of a conversion uncertain.25

To address the conflict of interest between CoCo investors and equity holders, economists Charles W. Calomiris and Richard J. Herring propose a conversion policy based on a 90-day average of the ratio between the market value of a firm’s equity and the sum of that market value and the book value of the firm’s debt. They claim this policy would clarify the price that CoCo equity should assume upon conversion.26 However, this hybrid metric, which averages a forward-looking indicator (the market value of a firm’s equity) with a backward-looking one (the book value of its debt) over a 90-day period, is highly complex. If anything, their proposal seems to substitute one set of problems for another—or two others, since their metric is vulnerable to the weaknesses of both lagging accounting-based measures and volatile market-based ones.
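
Under our reading of their rule, the Calomiris-Herring metric can be sketched in a few lines. The inputs below are hypothetical, and the downward price drift in the example is illustrative only:

```python
# Sketch of the Calomiris-Herring quasi-market trigger: a 90-day moving
# average of MV(equity) / (MV(equity) + BV(debt)). Inputs are hypothetical.
from statistics import mean

def ch_trigger_metric(market_caps, book_debt: float) -> float:
    """90-day average of the market-value equity ratio."""
    window = market_caps[-90:]                       # last 90 trading days
    daily = [mv / (mv + book_debt) for mv in window]
    return mean(daily)

# Example: market cap drifting downward over 90 hypothetical trading days.
caps = [10.0 - 0.05 * t for t in range(90)]          # billions, hypothetical
metric = ch_trigger_metric(caps, book_debt=150.0)
print(f"90-day average equity ratio: {metric:.2%}")
```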

Other CoCo design proposals are similarly complex. For example, the Squam Lake Group suggests implementing a dual-trigger CoCo, which would require regulators to determine both that a financial system was experiencing a systemic crisis and that an individual firm had violated its debt covenant (for instance, by allowing its risk-based capital to drop below a predetermined threshold) prior to conversion.27 Yet this proposal lacks sufficient details necessary for its implementation. It also suffers from the same flaws as Calomiris and Herring’s proposal, in that it includes all the weaknesses of both discretionary triggers and accounting-based triggers.28

Trigger Level

The ratio of the conversion trigger threshold to a firm’s existing or expected capital levels can critically influence shareholders’ and managers’ incentives to ward off a conversion. It can also affect the likelihood of a firm retaining (or restoring) its viability post-conversion. In general, high trigger levels (where conversion occurs after a firm has lost a relatively small amount of capital) tend to encourage more responsible banking practices and increase the chances that a firm will be able to recapitalize itself in the event of a conversion. In particular, capital trigger levels at or above 7 percent are a typical feature of going-concern CoCos, which primarily aim to prevent firms from ever reaching the point of insolvency. A high trigger should motivate management to raise more capital before the firm experiences significant problems or loses access to financial markets.

The threat of a triggering event should also cause management and shareholders to curtail leverage and reduce risk taking.29 High-trigger CoCos make the threat of share value dilution more imminent, encouraging shareholders and management alike to reduce risk, deleverage, and raise more capital.30 As the proximity of a firm’s capital ratio to the trigger threshold increases, the market value of its equity decreases, and a post-conversion wealth transfer—from management to CoCo bondholders—becomes more likely.

High triggers can also minimize the chance that shareholders will create an abrupt drop in share value by switching from a low-risk, low-probability-of-default portfolio to a high-risk, high-probability-of-default portfolio. However, if a debt-induced downward spiral in equity value does occur when CoCos are on the balance sheet, then shareholders can either reduce leverage and increase capital as intended or opt to declare bankruptcy before conversion.31

In contrast, CoCos with relatively low capital ratio triggers (for example, 5 percent) are better suited for facilitating an orderly, private-sector failure resolution process in the event that an institution approaches the point of nonviability. Low triggers provide management and stockholders with relatively weak incentives to control risk, and some researchers have raised concerns about runs and negative spillovers if a firm’s capital ratio nears an extremely low threshold, which can signal bankruptcy.32

Conversion Price

A CoCo’s conversion price and the number of shares converted can also influence both shareholders’ and managers’ incentives. In general, the outcome of a conversion depends on whether that conversion affects a predetermined number of shares, based on an ex ante fixed price; an ex post number of shares, based on their market price at the time of conversion; or some combination of the two.33

Ex post pricing, which sets the value of a CoCo’s equity at its market price upon conversion, can significantly dilute stock value for existing shareholders, since a CoCo’s ex post market price is often much lower than its original purchase price. This dilution can have the positive effect of incentivizing shareholders and management to avoid unnecessary risk taking in the interest of reducing the likelihood of a conversion.

In contrast, CoCos with an ex ante fixed price (one set prior to conversion) clearly limit stock dilution and thus significantly weaken shareholders’ incentives to avoid a triggering event. Indeed, with a fixed conversion price it is virtually impossible to avoid at least some wealth transfer from CoCo holders to existing shareholders.34 In the case of write-down CoCos, however, no such wealth transfer occurs, because in most cases the subsequent write-down wipes out any preexisting equity claims. As a result, some researchers suggest that write-down CoCos are better at mitigating management and shareholder risk taking because they provide greater incentives to reduce leverage.35 This factor could explain the preponderance of write-down CoCos relative to going-concern CoCos among institutions today.
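
The dilution arithmetic behind these incentives is easy to sketch. The share counts and prices below are hypothetical; real conversion terms typically also cap issuance and handle fractional shares:

```python
# How the conversion price drives dilution. All figures hypothetical.

def dilution(existing_shares: float, coco_face: float, conv_price: float) -> float:
    """Fraction of the post-conversion firm handed to CoCo holders."""
    new_shares = coco_face / conv_price
    return new_shares / (existing_shares + new_shares)

existing = 100e6   # shares outstanding
face = 500e6       # face value of CoCos converting, in dollars

# Ex ante fixed price set near the issue-date share price: mild dilution.
print(f"fixed at $10: {dilution(existing, face, 10.0):.0%} to CoCo holders")

# Ex post market price after distress has crushed the stock: heavy dilution,
# which is what deters shareholders from courting a trigger event.
print(f"market at $2: {dilution(existing, face, 2.0):.0%} to CoCo holders")
```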

Summary of Design Issues

When it comes to CoCo design, if the goal is to minimize the likelihood of a trigger event while motivating management and stockholders to reduce both leverage and risk taking, the evolving literature tends to favor going-concern CoCos with high, fixed, market-based triggers and a sharply diluting conversion rule at the then-market price. If the intent is instead for CoCos to facilitate a modified bankruptcy at the point of an institution’s nonviability, write-down CoCos remain a plausible option. Still, the literature suggests that although neither design is foolproof, write-down CoCos provide minimal incentives to reduce the likelihood of conversion and are clearly the less desirable alternative.

Finally, the extensive research on CoCo triggers and related structures remains largely silent on the exact administrative mechanism that should initiate the conversion of a CoCo bond into equity. Who monitors the CoCos: investors or regulators? How should the trigger mechanism be enforced? By the courts or through some other legal action? The literature on discretionary CoCos addresses these issues somewhat. Yet even in these cases, the design process is rife with uncertainty and open to conflicts of interest. This situation further complicates the problems inherent in pricing CoCos with different structures and features.

CoCos in Practice

To examine how well CoCos have fulfilled their objectives in recent practice, we should first review the quantity, category, and origin of the CoCos that have been issued over the past several years. Although regulators in many nations allow CoCos to satisfy some part of banks’ regulatory capital requirements, the $521 billion of CoCos employed for this purpose thus far remains quite small compared to the approximately $5.3 trillion of bank equity capital worldwide.36 Figure 1 shows the dollar value of outstanding CoCos of institutions headquartered in various countries from 2009 through 2015.

There are two straightforward explanations for why most CoCos issued so far have originated outside the United States. First, European and Asian-Pacific regulators have proactively incorporated CoCos into their Basel II, Basel III, and home-country capital standards, whereas U.S. regulators have yet to embrace CoCos in the same way. Second, and probably more important, European tax authorities have determined that interest expenses on CoCos are tax-deductible (as interest payments on other bank-issued debt instruments are), whereas the U.S. Internal Revenue Service has not yet ruled the same.37 According to a 2013 survey, 64 percent of CoCos originated in countries where their interest payments were tax-deductible, while about 20 percent originated in countries where their interest payments were not. (The survey did not determine the tax status of the remaining 16 percent.)38

Figure 2 shows the estimated breakdown of CoCos issued by type as of 2016. Note that fewer than one-third of all CoCos issued during that period were going-concern CoCos that would convert debt into equity. The remaining 70 percent provided for some form of write-down of the CoCo debt as a loss-absorbing mechanism.

The high proportion of write-down CoCos relative to going-concern ones may seem odd, considering that the latter are more effective at curtailing management and shareholder risk-taking. This predominance can be largely explained, however, by the fact that European and Asian-Pacific regulators generally had incorporated write-down CoCos into their Basel II and Basel III country capital standards before researchers had revealed either the weaknesses of those standards or the incentive problems associated with write-down CoCos.

In Europe, Basel III capital standards explicitly provide for CoCos to count toward Tier 1 and Tier 2 capital, but only in limited amounts and only when those CoCos possess certain characteristics.39 The Basel III capital requirements are now extremely complex and continue to evolve: the number of required ratios has grown from 2 under the Basel II capital rules to more than 12 under Basel III. Like Basel II, the Basel III standards are based on risk-weighted assets, with total capital requirements ranging from 8.5 percent to 16.5 percent of total risk-weighted assets. (The 8.5 percent minimum applied until 2016 and rises to 10.5 percent in 2019.)40

Figure 3 categorizes the CoCos issued between 2009 and 2015 according to their trigger types. The majority rely on either low book-value triggers or triggers that initiate conversions only when an institution nears the point of nonviability, when it is extremely unlikely that the institution will be able to restore its own capital following conversion.

Roughly 90 percent of all the CoCos issued between 2011 and 2016 used the Common Equity Tier 1 capital ratio (common equity plus retained earnings, relative to risk-weighted assets) as their trigger. Another 8 percent used the broader Tier 1 capital ratio (which adds preferred stock to the numerator) and a high trigger of at least 7 percent. The remaining 2 percent used the total risk-based capital ratio (the ratio of Tier 1 plus Tier 2 capital to risk-weighted assets).41
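
In code, these three trigger metrics differ only in which capital aggregates enter the numerator. The following sketch uses hypothetical inputs; actual Basel III calculations involve deductions and adjustments omitted here:

```python
# The three ratio families used as CoCo triggers. Hypothetical inputs;
# real Basel III calculations apply many deductions omitted here.
common_equity = 5.0    # billions
retained_earn = 2.0
preferred     = 1.0    # additional Tier 1 instruments
tier2_capital = 1.5
rwa           = 100.0  # risk-weighted assets

cet1  = (common_equity + retained_earn) / rwa                    # ~90% of CoCos
tier1 = (common_equity + retained_earn + preferred) / rwa        # ~8% of CoCos
total = (common_equity + retained_earn + preferred + tier2_capital) / rwa  # ~2%

print(f"CET1 {cet1:.1%} | Tier 1 {tier1:.1%} | Total {total:.1%}")
```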

Recent European Experiences with CoCos

Problems at three major European banks, two of which had outstanding CoCos at the time of distress, can provide clues as to how effective these securities are at achieving their intended purposes. These problems began in early 2016 with Germany’s Deutsche Bank and culminated the following summer with the resolution of Spain’s Banco Popular Español and the Italian government’s rescue of Monte dei Paschi di Siena in June.

Deutsche Bank. Concerns about the health of Deutsche Bank first surfaced in early January 2016 after the bank reported a loss for 2015. That loss heightened investors’ fears that the bank might not be able to make coupon payments on its outstanding CoCos, which were write-down securities with a low trigger (a Tier 1 capital ratio of 5.125 percent). As European capital standards required, payments on these CoCos occurred at the issuer’s discretion, missed coupon payments were noncumulative, and regulators retained the authority to trigger a write-down at any time.42

Deutsche Bank’s reported loss set off an abrupt decline in the prices of its CoCo bonds in early 2016. Evidence suggests that this decline may have provoked a contagion effect, with other major European banks experiencing a similar decline in the value of their CoCo bonds at the same time.43 Prices also became increasingly volatile as uncertainty about Deutsche Bank’s financial condition persisted, compounded by fears that the bank would miss a coupon payment. After a brief upward spike, prices fell sharply again in September 2016, partly in response to the U.S. Department of Justice’s decision to fine Deutsche Bank $14 billion for its role in the fraudulent sales of risky mortgage-backed securities in the years leading up to the financial crisis. (That fine was subsequently reduced to $7.2 billion.) In late December 2016, regulators issued an opinion (one largely favorable to Deutsche Bank) that attempted to clarify when a bank could make coupon payments after failing to meet capital adequacy standards; even so, the value of Deutsche Bank’s CoCos remained depressed well into the following year.

The prices of the bank’s credit default swaps on both its senior and junior debt followed a similar pattern, as did the prices of other banks’ CoCos. Deutsche Bank CEO John Cryan went public on September 30, 2016, to defend the bank’s condition, and several analysts likewise confirmed that the bank was liquid and basically sound.44

Deutsche Bank’s recent experience illustrates the asset death spiral associated with poorly structured CoCos and the uncertainties they can create. It also validates several academics’ concerns that the discretionary aspect of write-down CoCos could increase the possibility of missed coupon payments and heighten confusion regarding the degree of regulatory intervention necessary to trigger a partial or total write-down.45

Banco Popular Español. The European Central Bank’s decision in June 2017 to declare Banco Popular Español (or Banco Popular for short) a failing or likely to fail institution prompted Europe’s recently established Single Resolution Board to merge it with Banco Santander, which purchased all the shares of Banco Popular for a total price of €1.46 Banco Popular’s troubles were a result of its having accumulated bad mortgage debt prior to the 2008 crisis. Making matters worse, the bank’s management easily papered over its losses, taking advantage of the fact that Spanish banking authorities had encouraged their practice of relying on book-value accounting and had a long-standing habit of putting off recognizing any new losses.

Banco Popular had foundered for several years after the 2008 financial crisis. In May 2016, it announced that it would have to raise €2.5 billion in additional funding to bolster its sagging capital, but this value was actually far lower than the €6.7 billion that JPMorgan Chase estimated the bank would need to support both its stock and CoCo bond prices.47 In just two days—from May 27 to May 28, 2016—the bank’s stock fell 32 percent. In April 2017, Banco Popular itself made downward adjustments to its 2016 financial statement, while Moody’s downgraded its senior unsecured debt, triggering another sharp drop in the value of both its shares and its bonds.

Like Deutsche Bank’s, Banco Popular’s CoCo bonds qualified as Additional Tier 1 (AT1) capital. Unlike Deutsche Bank’s, however, they were high-trigger (7 percent), going-concern CoCos, set to convert into equity at a price not lower than €1.549 per share. The 7 percent trigger was based on the Common Equity Tier 1 risk-based capital ratio. Like all CoCos that qualify for AT1 capital status, Banco Popular’s contract included a cancellation clause, and missed coupon payments were noncumulative.

On June 6, 2017, however—before any conversion took place—the European Central Bank (ECB) declared Banco Popular in danger of failing. The next day, Europe’s Single Resolution Board merged the bank into Banco Santander, cutting short any conversion of the bank’s CoCos into equity. The move wiped out the bank’s subordinated debt, along with the claims of its CoCo investors and equity holders. As in Deutsche Bank’s case, the CoCo bond and equity prices exhibited a death spiral, with short sellers taking positions against those assets and triggering further sales of stock.48 In Banco Popular’s case, therefore, going-concern CoCos did not function as envisioned, insofar as they failed to induce management and shareholders to reduce risk, curtail leverage, and raise more capital.

Monte dei Paschi di Siena. In December 2016, the Italian government and the ECB put forward a “precautionary recapitalization” of Monte dei Paschi di Siena (or Monte), Italy’s oldest bank. Like many other Italian banks, Monte had been suffering since the 2008 financial crisis, plagued by significant asset quality problems, difficulties in passing recent regulatory stress tests, and a capital shortage. Indeed, more than 35 percent of Monte’s gross loans were nonperforming for the first three quarters of 2016.49

Although the bank had no outstanding CoCo bonds, it did have subordinated debt. As part of its recapitalization effort, the Italian government provided Monte €6.6 billion in support and wrote down its investors’ claims by €2.2 billion. The government’s support program followed on the heels of an announcement it had made the week prior, in which it voiced its intent to provide a €20 billion support package to the entire Italian banking system.

Monte’s condition was undeniably deteriorating. It needed capital and had failed in its attempts to raise more funds and find a merger partner. Like Banco Popular, its asset quality problems had reached the point at which it began experiencing a drop in liquidity: during the first nine months of 2016, it lost €20 billion in deposits, and between November and December, deposits fell from 7.6 percent to 4.8 percent of its total assets. In the final month of that year, Monte lost an additional €2 billion in deposits.50

The Italian government’s official rationale for using taxpayer funds to support Monte was that without such support, investors would take an even bigger hit under the European Union’s new Bank Recovery and Resolution Directive, which specifies the order in which creditors must bear losses but permits precautionary state support only for banks that remain solvent.51 Part of the reasoning behind this unusual arrangement was the EU’s concern about potential contagion effects on other Italian banks, since they had asset quality problems as well.

Monte’s experiences demonstrate that, at least in Europe, regulators are still inclined to protect financial institutions with policies that treat them as though they are “too big to fail.” Given the emergency provisions in place across the EU (like the ones that the ECB and the Italian government implemented in the cases above), the prospect of government intervention and supervisory discretion will likely continue to undermine the role that CoCo bonds are meant to play in institutions’ capital structures. This outcome, in turn, could weaken the discipline that CoCo bonds are supposed to provide.

Given the deficiencies inherent in CoCo design and implementation, U.S. regulators should continue to regard these securities with skepticism and caution. Other alternatives—ones not subject to the shortcomings discussed above—could better satisfy the objectives that many have tried—and failed—to accomplish through CoCos.

An Amended Financial CHOICE Act: A Better Way?

One of the principal attractions of most CoCo proposals is that their favorable tax treatment (which, at least outside the United States, they share with other forms of debt) makes them a lower-cost alternative to equity.52 Yet given their myriad vulnerabilities and complexities, there may be a better way to achieve CoCos’ intended effects—one that is not driven by tax provisions.

U.S. regulators should look instead at revising Title VI of the Financial CHOICE Act of 2017, which the House of Representatives passed that June. As written, the act provides regulatory relief from some of the Dodd-Frank Act’s most burdensome regulations in addition to certain Basel III regulations.53 Specifically, it exempts institutions with a capital ratio of at least 10 percent (defined as Tier 1 equity capital divided by total on- and off-balance sheet assets) from:

(1) any federal law or regulation addressing capital or liquidity requirements or standards; (2) any federal law, rule, or regulation that allows a federal financial agency to object to a capital distribution; (3) specified considerations as to whether the banking organization poses a risk to the stability of the financial system of the United States; and (4) other specified federal laws, rules, and regulations.54
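
As a rough sketch, the act’s eligibility test reduces to a single leverage-style ratio. The inputs below are hypothetical, and the statute’s definition of total assets, which includes off-balance-sheet exposures, is folded into two stylized numbers here:

```python
# Sketch of the CHOICE Act off-ramp test: Tier 1 equity capital divided by
# total on- and off-balance-sheet assets, compared to a 10 percent floor.
# Inputs are hypothetical (billions).

def offramp_eligible(tier1_equity: float,
                     on_balance_assets: float,
                     off_balance_assets: float,
                     threshold: float = 0.10) -> bool:
    leverage_ratio = tier1_equity / (on_balance_assets + off_balance_assets)
    return leverage_ratio >= threshold

print(offramp_eligible(12.0, 100.0, 15.0))   # 12 / 115 = 10.4% -> True
```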

The act’s regulatory off-ramp provision measures capital according to book value, making it vulnerable to some of the same criticisms that apply to CoCos with accounting-based triggers. Yet with suitable modifications, the off-ramp provision could provide a genuine alternative to costly prudential regulations.55 An amended off-ramp provision should differ from the House version in the following ways:

  • It should measure a consolidated financial institution’s equity by its market value, not by its book value. A market-based equity-to-asset ratio would capture the risk signals that accounting-based measurements can mask, such as fluctuations in monetary policy, shifts in stock or commodity prices, and other standard market risk measures.56
  • The current act’s 10 percent capital ratio exemption threshold is too low. As of 2018, 10.5 percent is the current Basel III minimum capital ratio. Institutions should not be afforded relief just because they meet this minimum.
  • Instead of the act’s present all-or-nothing approach to regulatory relief, a modified off-ramp provision should allow for incremental regulatory relief depending on a bank’s capital level. As an example, banks with capital ratios between 15 percent and 18 percent could qualify for marginal regulatory exemptions. Those with capital ratios between 18 percent and 20 percent could also be exempt from stress tests and related liquidity requirements.
  • The plan should also impose increasingly strict regulatory requirements whenever a firm’s capital ratio declines below the regulatory relief tranches.
  • The plan should require regulators to demonstrate how they plan to monitor risk taking among institutions that are transitioning between off-ramp capital thresholds. Regulators, not institutions, should bear the cost of these monitoring and enforcement measures.

The proposed tiered regulatory relief program responds to the critical question: When do we want intervention to occur? The answer is: long before a severe problem presents itself or an institution comes close to bankruptcy. As to the form that intervention should take, the above proposal assumes that intervention can (and often should) begin as freedom from intervention, and it contains several complementary attributes related to that assumption.
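
A minimal sketch of the tiered schedule just described, using the illustrative tranche boundaries from the list above (the relief labels, and the treatment of ratios above 20 percent, are ours and purely hypothetical):

```python
# Tiered regulatory relief keyed to a market-based capital ratio.
# Tranche boundaries follow the illustrative numbers in the text;
# the labels are hypothetical.

def relief_tier(market_capital_ratio: float) -> str:
    if market_capital_ratio >= 0.18:
        return "exempt from stress tests and related liquidity requirements"
    if market_capital_ratio >= 0.15:
        return "marginal regulatory exemptions"
    if market_capital_ratio >= 0.105:   # Basel III minimum as of 2019
        return "no relief: full prudential regulation applies"
    return "increasingly strict requirements and regulatory intervention"

for ratio in (0.20, 0.16, 0.12, 0.08):
    print(f"{ratio:.0%}: {relief_tier(ratio)}")
```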

First, the plan creates a positive financial incentive for institutions to hold higher capital ratios. Higher capital ratios allow institutions of various sizes to make more informed choices about how to balance the cost of their regulatory burden against the cost of their capital holdings. Smaller institutions, for example, may choose to hold higher levels of capital to avoid large compliance costs. (Estimates show that regulatory costs for small banks with less than $100 million in assets amount to 9.8 percent of their noninterest operating expenses, whereas those costs drop to 5.5 percent for banks with assets between $1 billion and $10 billion.)57

Second, the plan would subject well-capitalized institutions to less burdensome regulations than undercapitalized institutions. Doing so would provide institutions that were better off with a competitive advantage in the marketplace, since their exemptions would signal that they were low-risk and highly capitalized. Healthy, less regulated, well-capitalized, and viable institutions should, all else being equal, have a lower cost of funding overall. Additionally, it is hard to imagine that management and boards of directors would want massive increases in regulation owing to an erosion of their capital. The opportunity for regulatory relief would provide management with ample incentive to act in ways that would protect their institution’s regulatory independence.

Third, the above changes would make regulators responsible for monitoring an institution’s capital position and determining whether to continue granting regulatory relief if that position declined. Regulators would also need to justify those decisions publicly. In addition, because firms with fewer regulations would have higher levels of capital, regulators would be better able to react to warning signals and reimpose prudential regulations long before those institutions reached critically low capital levels. Under the current regulatory environment, the government must wait until banks demonstrate a critical need for capital before intervening. Often, these late-stage interventions are far more burdensome, not to mention costlier, than earlier ones.

Fourth, tying an institution’s capital ratio to regulatory costs would make it easier for the public to monitor regulatory oversight, which would in turn improve regulatory accountability. If an institution’s capital ratio fell below the regulatory relief tranches and regulators failed to act, their lack of response would be public and transparent.

Fifth, the plan would modify the procedure for reexempting formerly eligible institutions whose capital had fallen below the necessary off-ramp threshold. Section 601 of the Financial CHOICE Act requires that an institution qualifying for the off-ramp provision maintain a quarterly capital ratio of 10 percent. The responsible federal regulator has the discretion to declare an institution in noncompliance, giving it a year to recomply or lose its exemption status. As written, however, this requirement is too lenient, gives too much discretion to the regulator, and provides institutions with too much time to comply before losing their off-ramp status. To correct these weaknesses, the act should require that a market-based trigger determine an institution’s compliance and that institutions whose capital declines below a certain threshold lose their exemptions immediately. The act should also implement a one-year waiting period before allowing an institution to re-petition for off-ramp relief.
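
The proposed recompliance mechanics amount to a small state machine, sketched below under our own assumptions (the class name, dates, and the 10 percent floor are illustrative only):

```python
# Sketch of the amended recompliance rule: exemption is lost immediately
# when a market-based capital ratio breaches the floor, and the institution
# must wait one year before re-petitioning. Names and dates are hypothetical.
from datetime import date, timedelta

class OffRampStatus:
    def __init__(self):
        self.exempt = True
        self.may_repetition_after = None

    def observe(self, market_ratio: float, today: date,
                floor: float = 0.10) -> None:
        if self.exempt and market_ratio < floor:
            self.exempt = False                      # immediate, no grace year
            self.may_repetition_after = today + timedelta(days=365)

status = OffRampStatus()
status.observe(0.09, date(2019, 6, 1))
print(status.exempt, status.may_repetition_after)    # False 2020-05-31
```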

Although the above modifications would tighten Title VI’s current provisions, they would still provide institutions with enough flexibility to make choices that would help them avoid costly regulations. As a result, the best way to view this program is as a refined effort to incentivize lower risk taking on behalf of financial institutions and to prompt corrective action and early intervention from regulators, in line with the aims of the Federal Deposit Insurance Corporation Improvement Act. If an institution falls below the capital requirements necessary for regulatory relief, the first step should be to revoke its relief status. Furthermore, by setting those capital requirements above the Basel III minimum, the act would allow regulatory intervention to occur long before an institution became insolvent. Whereas CoCos historically have functioned more as an instrument for reinjecting capital into an institution at or past the point of failure, this proposal focuses on preventing an institution from ever reaching that stage.

Finally, the costs of structuring and implementing this modified off-ramp program would fall principally upon the regulators, not upon the financial institutions, whose main expenses would include only those related to assessing the tradeoffs between regulatory compliance and the loss of capital. At present, institutions have had to make these determinations by devoting a significant number of their staff members solely to regulatory compliance. Smaller institutions have struggled to meet these burdens, regardless of their actual capital or compliance levels, and many have been forced to merge into larger institutions as a result. This modified proposal would grant these and other firms the level of regulatory protection they need and—with prudent conduct—the level of regulatory relief they deserve.

Conclusion

A 2012 report by the Financial Stability Oversight Council succinctly summarized the benefits and problems of using CoCos as a means of enhancing financial stability.58 CoCos can help a struggling financial institution raise additional equity, absorb losses, and remain liquid. They can also encourage its managers to raise capital, and they can facilitate an orderly and timely resolution of any failure. But CoCos also have their drawbacks. Their complexity can make them difficult to price and create uncertainty as to whether conversion will actually occur soon enough to absorb losses. Once initiated, a conversion might also trigger a run on the issuing institution by raising doubts about its health, as the experiences of Deutsche Bank and Banco Popular suggest. In other cases, uncertainty surrounding the likelihood of a conversion has become a source of contagion and systemic risk.59

The current regulatory approach to incorporating CoCos into capital requirements, which predated much of the recent work on how CoCos should be structured, does not meet the standards of optimal design that would enable them to function effectively. Most of the CoCos issued thus far have been of the write-down form and are based on backward-looking accounting measures with triggers geared to risk-based capital standards. Few are going-concern CoCos with market-based triggers, which would discourage owners and managers from taking on increased leverage and risk.

Given Europe’s experiences with CoCos, and considering the conceptual difficulties involved in designing CoCos that would avoid similar problems in the future, U.S. regulators should continue to approach CoCos with skepticism and caution. An alternative worth considering is a modified version of the regulatory off-ramp proposal contained in the Financial CHOICE Act, which would provide greater relief from burdensome regulations as an institution’s capital increases.

Notes

1. Andrew Kuritzkes and Hal Scott, “Markets Are the Best Judge of Bank Capital,” Financial Times, September 23, 2009. For the required Basel II Tier 1 minimum standard, see “Basel II,” Investopedia, https://www.investopedia.com/terms/b/baselii.asp.

2. Shadow Financial Regulatory Committee, “Reforming Bank Capital Regulation,” Shadow Statement no. 160, March 2, 2000.

3. In the United States, Congress concluded that regulators had perpetuated the “too big to fail” paradigm and responded by passing the Dodd-Frank Act in 2010.

4. Neel Kashkari, “New Bailouts Prove ‘Too-Big-to-Fail’ Is Alive and Well,” Wall Street Journal, July 9, 2017.

5. Mark Flannery was one of the first to propose such an instrument. Mark J. Flannery, “No Pain, No Gain? Effecting Market Discipline via ‘Reverse Convertible Debentures,’” in Capital Adequacy Beyond Basel: Banking, Securities, and Insurance, ed. Hal S. Scott (Oxford: Oxford University Press, 2005), pp. 171–95. For a more recent article, see Mark J. Flannery, “Stabilizing Large Financial Institutions with Contingent Capital Certificates,” Quarterly Journal of Finance 6, no. 2 (2016): 1–26.

6. George M. von Furstenberg, Contingent Convertibles (CoCos): A Potent Instrument for Financial Reform (Singapore: World Scientific Publishing, 2014).

7. Patrick Bolton and Frédéric Samama, “Capital Access Bonds: Contingent Capital with an Option to Convert,” Economic Policy 27, no. 70 (2012): 275–317. For a comprehensive discussion of the case for CoCos and the key criteria they must satisfy, see George M. von Furstenberg, “Contingent Capital to Strengthen the Private Safety Net for Financial Institutions: Cocos to the Rescue?” Bundesbank Series 2 Discussion Paper no. 2011,01 (2011); and Financial Stability Oversight Council, “Report to Congress on Study of Contingent Capital Requirement for Certain Nonbank Financial Companies and Bank Holding Companies,” Washington, July 2012.

8. Stan Maes and Wim Schoutens, “Contingent Capital: An In-Depth Discussion,” Economic Notes by Banca Monte dei Paschi di Siena SpA 41, no. 1–2 (2012): 59–79; John C. Coffee, “Bail-Ins Versus Bail-Outs: Using Contingent Capital to Mitigate Systemic Risk,” Columbia Law and Economics Working Paper no. 380, October 2010; and Mark J. Flannery, “Contingent Capital Instruments for Large Financial Institutions: A Review of the Literature,” Annual Review of Financial Economics 6, no. 1 (2014): 225–40.

9. Stefan Avdjiev, Anastasia Kartasheva, and Bilyana Bogdanova, “CoCos: A Primer,” BIS Quarterly Review (September 2013): 43–56.

10. In Contingent Convertibles (CoCos), von Furstenberg argues that for CoCos to be attractive capital market investments, they must be able to help meet regulatory capital requirements, be rated investment grade, and have tax-deductible interest payments.

11. Researchers have devoted significant attention to this issue. See, for example, Boris Albul, Dwight M. Jaffee, and Alexei Tchistyi, “Contingent Convertible Bonds and Capital Structure Decisions,” SSRN Electronic Journal, January 2015; Bolton and Samama, “Capital Access Bonds”; Charles W. Calomiris and Richard J. Herring, “How to Design a Contingent Convertible Debt Requirement That Helps Solve Our Too-Big-to-Fail Problem,” Journal of Applied Corporate Finance 25, no. 2 (2013): 39–62; Christopher L. Culp, “Contingent Capital vs. Contingent Reverse Convertibles for Banks and Insurance Companies,” Journal of Applied Corporate Finance 21, no. 4 (2009): 17–27; Flannery, “No Pain, No Gain?”; Flannery, “Stabilizing Large Financial Institutions”; George Pennacchi, Theo Vermaelen, and Christian C. P. Wolff, “Contingent Capital: The Case for COERCs,” Journal of Financial and Quantitative Analysis 49, no. 3 (2014): 541–74; Suresh Sundaresan and Zhenyu Wang, “On the Design of Contingent Capital with a Market Trigger,” Journal of Finance 70, no. 2 (2015): 881–920; and von Furstenberg, Contingent Convertibles (CoCos).

12. Christoph Henkel and Wulf A. Kaal, “Contingent Capital in European Union Bank Restructuring,” Northwestern Journal of International Law and Business 32, no. 2 (2012): 191–262. Henkel and Kaal propose distinct trigger types that are either transaction based, automatic, statute based, or regulation based.

13. Julie Dickson, “Too-Big-to-Fail and Embedded Contingent Capital,” remarks at the Financial Services Invitational Forum, Cambridge, Ontario, May 6, 2010, http://www.osfi-bsif.gc.ca/Eng/Docs/jdlh20100506.pdf.

14. Calomiris and Herring, “How to Design a Contingent Convertible Debt Requirement.”

15. Sundaresan and Wang, “On the Design of Contingent Capital.”

16. Sundaresan and Wang, “On the Design of Contingent Capital.”

17. Bolton and Samama, “Capital Access Bonds.”

18. Lehman Brothers’ management engaged in similar gambling prior to the firm’s failure.

19. Stefan Avdjiev et al., “The Real Consequences of CoCo Issuance: A First Comprehensive Analysis,” Vox CEPR Policy Portal, December 22, 2017.

20. In “How to Design a Contingent Convertible Debt Requirement,” Calomiris and Herring note that regulatory capital requirements employ a mixture of book-value and fair-value measures of capital when determining compliance.

21. For a detailed discussion of the various kinds of manipulation that can be involved with different CoCo structures, see Robert L. McDonald, “Contingent Capital with a Dual Price Trigger,” Journal of Financial Stability 9, no. 2 (2013): 230–41. For a description of recent CoCos issued by Lloyds, Rabobank, and Credit Suisse, see Michalis Ioannides and Frank S. Skinner, “Contingent Capital Securities: Problems and Solutions,” in Derivative Securities Pricing and Modelling, ed. Jonathan Batten and Niclas Wagner (Castle Hill, Australia: Emerald Press, 2011).

22. See Sundaresan and Wang, “On the Design of Contingent Capital.” The loss-of-information argument is a variant of Goodhart’s law, which says, in paraphrased form, that when a measure becomes a target, it ceases to be a good measure. Charles Goodhart, “Problems of Monetary Management: The U.K. Experience,” in Papers in Monetary Economics (Sydney: Reserve Bank of Australia, 1975). See also Urs W. Birchler and Matteo Facchinetti, “Self-Destroying Prophecies? The Endogeneity Pitfall in Using Market Signals for Prompt Corrective Action,” Working Paper, Swiss National Bank, 2007.

23. A similar argument can be found in Philip Bond, Itay Goldstein, and Edward Simpson Prescott, “Market-Based Corrective Actions,” Review of Financial Studies 23, no. 2 (2010): 781–820.

24. Sundaresan and Wang, “On the Design of Contingent Capital,” argue that for a unique equilibrium price of the bank’s stock to exist, there can be no transfer of value between initial shareholders and CoCo investors either prior to or at the time of conversion. However, George Pennacchi and Alexei Tchistyi, “On Equilibrium When Contingent Capital Has a Market Trigger: A Correction to Sundaresan and Wang,” Journal of Finance 74, no. 3 (2019): 1559–76, point out an error in Sundaresan and Wang’s analysis, indicating that the wealth-transfer restriction need only apply at the time of conversion. See also Natalya Martynova and Enrico C. Perotti, “Convertible Bonds and Bank Risk-Taking,” De Nederlandsche Bank Working Paper no. 480, August 2015, for a similar argument about conflicting incentives.

25. Martynova and Perotti, “Convertible Bonds.”

26. Calomiris and Herring, “How to Design a Contingent Convertible Debt Requirement.”

27. Kenneth R. French et al., The Squam Lake Report: Fixing the Financial System (Princeton: Princeton University Press, 2010).

28. In “Contingent Capital with a Dual Price Trigger,” McDonald also argues for a dual-trigger approach, but unlike the approach taken in The Squam Lake Report, his proposal uses both a market-based stock price and a broad-based financial firm market index.

29. Ceyla Pazarbasioglu et al., “Contingent Capital: Economic Rationale and Design Features,” staff discussion note, International Monetary Fund, January 25, 2011, https://www.imf.org/external/pubs/ft/sdn/2011/sdn1101.pdf.

30. Coffee, “Bail-Ins Versus Bail-Outs.”

31. Nan Chen et al., “Contingent Capital, Tail Risk, and Debt-Induced Collapse,” Review of Financial Studies 30, no. 11 (2017): 3722–58, https://doi.org/10.1093/rfs/hhx067. Indeed, the Swiss regulatory authority in 2010 proposed that a dual CoCo structure for capital with high-trigger securities (7 percent) should serve as a buffer to Tier 1 capital and that additional low-trigger securities (5 percent) should serve as loss-absorbing capital in the event of distress.

32. Financial Stability Oversight Council, “Report to Congress.”

33. Avdjiev, Kartasheva, and Bogdanova, “CoCos: A Primer.”

34. Martynova and Perotti, “Convertible Bonds.”

35. Martynova and Perotti, “Convertible Bonds.”

36. Stefan Avdjiev et al., “CoCo Issuance and Bank Fragility,” Bank for International Settlements Working Paper no. 678, November 2017. It is estimated that worldwide banking assets are about $27 trillion and capital is about $5.3 trillion. See http://stats.bis.org/statx/srs/table/b1.

37. To put this tax issue in perspective, U.S. banks had a tax rate of 35 percent and in 2016 paid about $303 billion in dividends. If all dividends were tax-deductible, the total loss to the Treasury would have been about $32 billion, which is less than the estimated cost of banking regulation and substantially greater than the anticipated loss in revenue if payments on CoCos were to be deemed tax-deductible. Federal Deposit Insurance Corporation, “Commercial Banks: Historical Statistics on Banking,” https://www5.fdic.gov/hsob/HSOBRpt.asp. Admati et al. argue that the cost of capital versus debt is overestimated. Anat R. Admati et al., “Fallacies, Irrelevant Facts, and Myths in the Discussion of Capital Regulation: Why Bank Equity Is Not Expensive,” Stanford University Business School Working Paper no. 2065, October 22, 2013. In “CoCos: A Primer,” Avdjiev, Kartasheva, and Bogdanova suggest that as of 2016, about 64 percent of CoCos in circulation had been issued in countries that regarded interest on CoCos as tax-deductible, while about 20 percent had been issued in countries where CoCo interest payments were not tax-deductible. They did not determine the tax status of the remainder.

38. Avdjiev, Kartasheva, and Bogdanova, “CoCos: A Primer.”

39. Bank for International Settlements, “Basel III: A Global Regulatory Framework for More Resilient Banks and Banking Systems,” June 2011.

40. The Shadow Financial Regulatory Committee has long argued against relying on risk-weighted assets as a measurement criterion. See, for example, Shadow Financial Regulatory Committee, “Alternatives to the Proposed Risk-Based Capital Standards,” Statement no. 323, February 13, 2013. For a detailed discussion of the Shadow Committee’s view, see Robert A. Eisenbeis, “The Shadow Financial Regulatory Committee’s Views on Systemic and Payments System Risks,” paper presented at the 91st Annual Conference of the Western Economic Association, Portland, Oregon, June 2016. The same remarks also appear in Robert A. Eisenbeis, “The Shadow Financial Regulatory Committee’s Views on Systemic and Payments System Risks,” in Innovative Federal Reserve Policies during the Great Financial Crisis, ed. George G. Kaufman, Douglas D. Evanoff, and A. G. Malliaris (Singapore: World Scientific Publishing, 2019), pp. 285–302.

41. Maryka Daubricourt, “Contingent Capital Instruments: Pricing Behaviour,” ESCP Europe Applied Research Paper no. 6, October 2016.

42. See “Additional Tier 1 (AT1) RegS May 2014,” Deutsche Bank, https://www.db.com/ir/en/at1-regs-may-2014.htm.

43. “Europe’s CoCos Provide a Lesson on Uncertainty,” Office of Financial Research Working Paper no. 17–02, April 2017.

44. William Canny and Donal Griffin, “Deutsche Bank CEO John Cryan Defends Bank as Some Clients Pare Exposure,” LiveMint, September 30, 2016.

45. William R. Cline, “Systemic Implications of Problems at a Major European Bank,” Peterson Institute for International Economics Policy Brief no. 16–19, October 2016.

46. Mark Russell, “The Resolution of Banco Popular,” UK Finance, June 22, 2017.

47. Don Quijones, “The Banking Crisis in Spain Is Back,” Wolf Street, May 28, 2016.

48. “FAQ: EU’s Differing Treatment of Ailing Banks,” Moody’s, June 21, 2017, https://www.moodys.com/research/Banks-Europe-FAQ-EUs-differing-treatment-of-ailing-banks—PBC_1077213.

49. Jim Edwards, “Italy’s Banks Might Need €52 Billion Bailout,” Business Insider, November 29, 2016.

50. James David Spellman, “Italy Shores Up Failing Bank: A Template for Rescuing Europe’s Other Weak Banks?,” European Institute, January 2, 2017.

51. See “Directive 2014/59/EU of the European Parliament and of the Council of 15 May 2014 Establishing a Framework for the Recovery and Resolution of Credit Institutions and Investment Firms,” Official Journal of the European Union, December 6, 2014; and European Commission, “EU Bank Recovery and Resolution Directive (BRRD): Frequently Asked Questions,” news release, April 15, 2014.

52. Depending on their features, however, certain CoCos may be harder to price and tend to be somewhat more expensive than other forms of debt.

53. The “off-ramp” concept is outlined in Title VI of the Financial CHOICE Act of 2017 and was endorsed in a modified form by the Financial Economists Roundtable that same year.

54. Financial CHOICE Act of 2017, H.R. 10, 115th Cong., 2017.

55. Financial Economists Roundtable, “Statement on Bank Capital as a Substitute for Prudential Regulation,” September 20, 2017, http://www.financialeconomistsroundtable.com.

56. The Shadow Financial Regulatory Committee has criticized the use of such measures on many occasions, and the 2008 financial crisis proved how deficient such measures were in reflecting an institution’s soundness. See Kuritzkes and Scott, “Markets Are the Best Judge of Bank Capital”; and Financial Economists Roundtable, “Bank Capital as a Substitute for Prudential Regulation.”

57. See Drew Dahl et al., “Compliance Costs, Economies of Scale and Compliance Performance: Evidence from a Survey of Community Banks,” Federal Reserve Bank of St. Louis, April 2018, Chart 3, https://www.communitybanking.org/~/media/files/compliance%20costs%20economies%20of%20scale%20and%20compliance%20performance.pdf.

58. Financial Stability Oversight Council, “Report to Congress.”

59. Robert A. Eisenbeis, “The Fed and Structural Reforms to Reduce Interconnectedness and Promote Financial Stability,” in Public Policy & Financial Economics: Essays in Honor of Professor George G. Kaufman for His Lifelong Contributions to the Profession, ed. Douglas D. Evanoff, A. G. Malliaris, and George G. Kaufman (Singapore: World Scientific Publishing, 2018), pp. 117–46.

Robert A. Eisenbeis is vice chairman of Cumberland Advisors and retired executive vice president of the Federal Reserve Bank of Atlanta.

Do Immigrants Import Terrorism?

Andrew C. Forrester, Benjamin Powell, Alex Nowrasteh, & Michelangelo Landgrave

The relationship between immigration and terrorism is an important public policy concern. Using bilateral migration data for 174 countries from 1995 to 2015, we estimate the relationship between levels of immigration and terrorism using an instrumental variables (IV) strategy based on the initial distribution of immigrants in destination countries. We specifically investigate rates of immigration from Muslim-majority Middle Eastern and North African (MENA) countries and from countries engaged in conflicts. We find no relationship between stocks of immigrants and terrorism, whether measured by the number of attacks or victims, in destination countries.

Andrew Forrester is a research associate at the Cato Institute. Benjamin Powell is a professor of economics in the Rawls College of Business and director of the Free Market Institute at Texas Tech University. Alex Nowrasteh is an immigration policy analyst at the Cato Institute. Michelangelo Landgrave is a doctoral student in political science at the University of California, Riverside.

Taxing Wealth and Capital Income

Chris Edwards

Taxing the wealthy is a hot issue among Democratic candidates for president. Sen. Elizabeth Warren (D-MA) is proposing an annual wealth tax on the richest households, while other candidates are proposing higher taxes on incomes, estates, capital gains, and corporations.

Calls for tax increases are animated by claims about the fairness of income and wealth distributions in the economy. Warren wants to address “runaway wealth concentration,” while Sen. Bernie Sanders (I-VT) says that the wealthy are not “paying their fair share of taxes.”1

The proposed tax increases run counter to the international trend of declining tax rates on capital income and wealth. The number of European countries with a Warren-style wealth tax has fallen from 12 in 1990 to just 3 today.

The Europeans found that imposing punitive taxes on the wealthy was counterproductive. Wealth taxes encouraged avoidance, evasion, and capital flight. In most countries, wealth taxes raised little revenue and became riddled with exemptions.

This study discusses why targeting wealth for higher taxation is misguided. Wealth is simply accumulated savings that economies need for investment. The fortunes of the richest Americans mainly consist of active business assets that generate jobs and income. Increasing taxes on wealth would not help workers, but instead would undermine productivity and wage growth.

Basic economic theory suggests that taxes on capital should be low, and that conclusion is strengthened by the realities of today’s global economy. Furthermore, wealth taxes are even more distortionary than current federal taxes on capital income.

Nonetheless, taxing capital in a fair and efficient manner is a challenge. This study argues that the best approach would be a consumption-based tax system. Such a system would tax capital income but in a simpler way that does not stifle investment and economic growth.

Introduction

The federal tax system will collect $3.5 trillion in 2019. Some federal taxes are imposed on labor, such as payroll taxes. Some taxes are imposed on capital, such as the corporate income tax and the capital gains tax. Some taxes are a hybrid imposed on both capital and labor, such as the individual income tax.

Taxes on capital may be imposed on the stock of capital, such as estate taxes and wealth taxes, or on the income flow from capital, such as taxes on interest and corporate profits. The words capital, wealth, and savings are similar in meaning. This report uses “taxes on capital” to refer to taxes on both the stock of capital or wealth and the income flow from capital.

Federal taxation is uneven. With respect to capital income, it exempts some items from tax, such as interest on municipal bonds, but taxes other items heavily, such as corporate equity. Most experts would agree that uneven taxation across sources of income is inefficient because it distorts investment flows in the economy.

There is less agreement about the uneven taxation of individuals at different income levels. Some experts and policymakers favor a flatter or more proportional tax system, which would apply equal tax rates to individuals across the board.2 Others favor a more progressive system that levies higher tax rates on people at the top.

Elizabeth Warren wants to address “runaway wealth concentration” with her wealth tax proposal.3 She says that people with great fortunes should “put a little bit back in the kiddy [sic]” and “pay a fair share.”4 Sanders, who is proposing higher taxes on estates, incomes, corporations, and capital gains, demands that “the wealthy and large corporations start paying their fair share of taxes.”5 Numerous other Democratic candidates for president support higher taxes, particularly on capital, including taxes on estates, corporations, capital gains, and financial transactions.6

The federal tax system is already highly progressive. When considering all federal taxes—income, payroll, estate, and excise—Congressional Budget Office data show that the average effective tax rate for the top 1 percent of households is 33 percent, while the rate for the middle 60 percent of households is 15 percent, and the rate for the bottom 20 percent of households is less than 2 percent.7 The top 1 percent pays 25 percent of all federal taxes.8

The Organisation for Economic Co-operation and Development (OECD) examined the distributions of household taxes within member countries. Household taxes include individual income taxes and employee payroll taxes. It found that “taxation is most progressively distributed in the United States” of the 24 nations it studied.9 The OECD study was published in 2008, but the findings likely still hold because our tax system has become even more progressive since then.10

There is no agreement that progressive taxation is fairer than proportional taxation, but even if there were, it is clear that our tax system is already highly skewed before any further tax increases. And even if progressive taxation made sense, rather than adding a wealth tax or raising tax rates, a better approach would be to end current breaks for the wealthy that distort the economy, such as the income tax exemption for municipal bond interest.11

This report addresses wealth taxes and broader issues of taxing capital. Taxes on capital are usually aimed at the rich, but they often harm lower- and middle-income workers who may own no capital at all. Taxes on capital also induce extensive avoidance, especially in today’s global economy.

A better way to tax capital is with a consumption-based tax system. Such a system would not distort saving and investment, thus generating higher productivity and wage growth over the long run.

Wealth Tax Basics

The major federal taxes—income and payroll taxes—are taxes on flows of income. By contrast, wealth taxes are imposed on stocks of assets owned by individuals and businesses. The United States currently imposes a number of different wealth taxes. One is the federal estate tax, which is imposed at death on net wealth above an exemption amount.

Local property taxes are also wealth taxes. They are paid by owners of residential, commercial, and industrial real property. U.S. property taxes are relatively high. As a share of gross domestic product (GDP), we have the fourth-highest property tax revenues among 36 major industrial countries.12

Elizabeth Warren has proposed an annual federal tax on a broad measure of individual wealth including real property, personal property, and financial assets. Wealth taxes are imposed on net wealth—assets less debt. Warren’s proposal would impose a tax of 2 percent on net wealth above $50 million and 3 percent on net wealth above $1 billion.13

Much of the advocacy for a wealth tax includes complaints about inherited wealth. In championing Warren’s tax, for example, New York Times columnist Paul Krugman claimed, “we seem to be heading toward a society dominated by vast, often inherited fortunes.”14 Yet a wealth tax would hit both self-made wealth and inherited wealth, and the latter is a small and declining share of the largest fortunes. Just 15 percent or so of the net wealth of the richest 1 percent of Americans is inherited.15

A wealth tax would be imposed on stocks of assets, but it would be similar to an added layer of income tax. Suppose a person received a pretax return of 6 percent on corporate equities. An annual wealth tax of 2 percent would effectively reduce that return to 4 percent, which would be like a 33 percent income tax—and that would be on top of the current federal individual income tax, which has a top rate of 37 percent.

However, wealth taxes differ from taxes on capital income because the tax amount is not related to the actual return. The effect is to impose lower effective tax rates on higher-yielding assets, and vice versa. If equities produced returns of 8 percent, a 2 percent wealth tax would be like a 25 percent income tax. But if equities produced returns of 4 percent, the wealth tax would be like a 50 percent income tax. People with the lowest returns would get hit with the highest tax rates, and even people losing money would have to pay the wealth tax.
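
The equivalence can be computed directly: the income-tax-equivalent rate of an annual wealth tax is the wealth tax rate divided by the asset’s pretax return. A minimal sketch in Python, using the figures from the example above:

    # Income-tax-equivalent rate of an annual wealth tax: the wealth
    # tax rate divided by the asset's pretax rate of return.
    def income_tax_equivalent(wealth_tax_rate: float, pretax_return: float) -> float:
        return wealth_tax_rate / pretax_return

    for r in (0.08, 0.06, 0.04):
        print(f"{r:.0%} return -> {income_tax_equivalent(0.02, r):.0%} effective rate")
    # 8% return -> 25% effective rate
    # 6% return -> 33% effective rate
    # 4% return -> 50% effective rate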

Another dissimilarity between wealth taxes and taxes on capital income is that the former often impose tax on items excluded under income taxes. Some household assets, such as owner-occupied housing, artwork, and jewelry, do not produce cash income flows and thus are not taxed under the income tax, but these items may be taxed under wealth taxes.

Would a federal wealth tax be constitutional? The U.S. Constitution allows Congress to impose direct taxes if they are apportioned among the states. The Sixteenth Amendment allowed the government to impose an income tax without apportionment. A wealth tax would seem to be a direct tax that would need apportionment, which would perhaps block its imposition.

However, there may be wiggle room for a wealth tax to pass legal muster. Rather than taxing wealth directly, supporters could add a provision to the current income tax code to tax an assumed fixed annual return from a measure of household wealth. The economic effect would be similar, but such a new wealth tax would look like an income tax.
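
A rough sketch of how such a provision might operate, assuming the 4 percent deemed return used by the Netherlands (discussed below) and the current 37 percent top income tax rate; the figures are illustrative only:

    # Sketch of the workaround described above: tax an assumed fixed
    # annual return on measured wealth under the income tax. The 4
    # percent deemed return mirrors the Dutch approach noted below;
    # the 37 percent rate is the current top federal income tax rate.
    DEEMED_RETURN = 0.04
    TOP_INCOME_TAX_RATE = 0.37

    def deemed_return_tax(net_wealth: float) -> float:
        return net_wealth * DEEMED_RETURN * TOP_INCOME_TAX_RATE

    # On $100 million of net wealth, the tax is about $1.48 million a
    # year, economically similar to a 1.48 percent annual wealth tax.
    print(f"{deemed_return_tax(100e6):,.0f}")  # 1,480,000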

Whether or not the Supreme Court would find a wealth tax to be constitutional, such a tax would be a bad idea for economic and practical reasons, as the following sections discuss.

Wealth Taxes in Europe

Numerous European countries used to impose annual wealth taxes, but they have been mainly scrapped in recent decades. The number of European countries with annual wealth taxes has fallen from 12 in 1990 to just 3 today. Ireland imposed and then repealed its wealth tax in the 1970s. Table 1 shows countries that have had wealth taxes.16

Countries repealed their wealth taxes for a combination of reasons: they raised little revenue, created high administrative costs, and induced an outflow of wealthy individuals and their money. Also, many policymakers have recognized that high taxes on capital damage economic growth. Here are some country notes:

  • Austria abolished its wealth tax in 1994 “mainly due to the high administrative costs that accrued in the data collection process and because of the economic burden the wealth tax meant to Austrian enterprises.”17
  • Denmark cut its wealth tax rate in 1989 and repealed the tax altogether in 1997.18
  • Finland abolished its wealth tax in 2006, a reform “motivated by the fact that the tax had an unfair impact on enterprises and provided many possibilities to evade,” noted a European Commission report.19
  • France abolished its wealth tax in 2017 after many news articles noted that wealthy entrepreneurs and celebrities were fleeing the country. The government estimated that “some 10,000 people with 35 billion euros worth of assets left in the past 15 years.”20 A related reform was the 2015 repeal of France’s “supertax” on high incomes of 75 percent, which also raised little money and encouraged high-earners to leave.21
  • Germany repealed its wealth tax in 1997 after a constitutional court struck it down due to inequities in the treatment of different asset types.22 The tax repeal appears to have had a positive effect on savings.23
  • Ireland imposed a wealth tax in 1975 due to concerns about wealth inequality. The tax was shot full of exemptions, raised little money, and the administrative costs were high.24 It was repealed in 1978.
  • The Netherlands abolished its wealth tax in 2001 and replaced it with an income tax on an assumed fixed return of 4 percent on financial assets. The new tax replaced prior personal taxes on capital income.
  • Norway retains a wealth tax, but it abolished its inheritance tax in 2014 because it was considered unfair, raised little revenue, and impeded the transfer of family businesses.25
  • Sweden repealed its wealth tax in 2007 as it became clear that it was driving business people—such as the founder of Ikea, Ingvar Kamprad—out of the country. An analysis by Swedish economists found that wealth tax revenues were declining as “people could with impunity evade the tax by taking appropriate measures.”26 Sweden has low property taxes and it abolished its inheritance tax in 2004.
  • Spain repealed its wealth tax in 2008 but brought it back during the financial crisis in 2011. The rates are established by regional governments. There has been a downward trend, with the current wealth tax in Madrid set at zero.27
  • Switzerland imposes wealth taxes at the canton level only. The country is unique because it raises substantial revenues from wealth taxes, partly due to low exemption levels. However, this tax burden on capital is offset by Switzerland’s moderate property and corporate income taxes, and its lack of taxation of individual capital gains. Wealth tax rates have been falling across the cantons in recent years.28

In a 2018 study on wealth taxes, the OECD examined the reasons for their repeal. Concerns about capital flight were important, as well as “concerns about their efficiency and administrative costs, in particular in comparison to the limited revenues they tend to generate,” which “have led to their repeal in many countries.”29

When European countries had annual wealth taxes in place, the statutory rates averaged about 1 percent on net wealth above various exemption amounts. The bases of the taxes also varied, as countries exempted different types of assets.30

European wealth taxes typically raised only about 0.2 percent of GDP in revenues.31 Given the little revenue raised, it is not surprising that they had “little effect on wealth distribution,” as one study noted.32 Today, Norway raises about 0.4 percent of GDP, Spain about 0.2 percent, and Switzerland about 1 percent.33

In the United States, the federal individual income tax raises 8 percent of GDP, so a wealth tax raising, say, 0.2 percent would raise just 1/40th as much. Even if one favors higher taxes on the wealthy, it would be simpler to eliminate a high-end loophole in the income tax—such as the tax exemption for municipal bond interest—than to impose a new wealth tax system.

Moreover, a U.S. wealth tax may not raise government revenues overall because it would suppress revenues from other tax sources. As discussed below, that appears to have been the experience in Sweden and France, and that is what computer modeling indicates would happen with proposed wealth taxes in Germany and the United States.34

Nonetheless, there has been renewed interest in wealth taxes since the 2014 book by economist Thomas Piketty, Capital in the Twenty-First Century.35 Piketty claimed that rising wealth inequality posed a major crisis for advanced economies. He proposed that countries impose annual wealth taxes with rates of 1 percent and higher above an exemption amount.

Many economists have found inaccuracies in Piketty’s data and pointed out that his theoretical claims are off-base.36 Nonetheless, his ideas have spurred pundits and politicians to champion wealth taxation. But the European lessons are clear, and the good news is that countries have not acted on Piketty’s bad advice to impose or reimpose these complex and harmful taxes.

Complex Administration

Proponents of an annual wealth tax may imagine a system that is simple, broad-based, easy to administer, and lucrative for the government. But wealth taxes did not work that way in practice in Europe. Wealth taxes were complex and costly to collect, and they induced substantial avoidance while raising little revenue.

One problem is valuing assets. A wealth tax may require taxpayers to report valuations, not just of financial securities and homes, but also of such items as household furnishings, artwork, jewelry, vehicles, boats, life insurance policies, pensions, family businesses, and farm assets.37 Many of these assets have no ready market valuation. Accounting for wealth held in trusts would also be difficult, and for people with nontraded ownership in family businesses, book and market valuations can differ substantially.38 Furthermore, valuations of assets change over time, so a large industry of accountants would be needed to prepare regular valuations for tax returns.

Tax law professor Miranda Perry Fleischer finds that an annual wealth tax would be “hobbled by valuation issues.”39 She discusses, for example, the unknown values of closely held businesses, especially those held jointly with multiple sorts of ownership rights. She notes that valuation disputes already bedevil estate tax returns, but wealth tax disputes would be even more contentious because they would come back year after year. And consider that while the IRS handled 12,700 estate tax returns in 2017, Elizabeth Warren’s proposed wealth tax would require annual filing by at least 75,000 taxpayers.40 In a recent survey of economists, 73 percent agreed and only 7 percent disagreed that Senator Warren’s wealth tax would be “much more difficult to enforce than existing federal taxes because of difficulties of valuation.”41

The difficulty of wealth valuation can be seen in an Internal Revenue Service study that compared valuations on estate tax returns to valuations of the same estates on the Forbes 400 list of wealthiest Americans.42 The study found that estate tax valuations were, on average, only 50 percent of the valuations on the Forbes list:

This research highlights the inherent difficulties of valuing assets which are not highly liquid. The portfolios of very wealthy individuals are made up of highly unique assets and often the value of assets, such as businesses, are very closely tied to the personality and skills of the owner. Determining a precise value for these assets can involve more art than science.43

The United Kingdom undertook a major examination of its tax system, known as the Mirrlees Review, in the late 2000s. It studied a possible UK wealth tax and concluded:

Levying a tax on the stock of wealth is not appealing. To limit avoidance and distortions to the way that wealth is held, as well as for reasons of fairness, the base for such a tax would have to be as comprehensive a measure of wealth as possible. But many forms of wealth are difficult or impractical to value, from personal effects and durable goods to future pension rights—not to mention “human capital.” These are very serious practical difficulties. And where attempts have been made to levy a tax on a measure of current wealth—in France, Greece, Norway, and Switzerland, for example—practical experience has not been encouraging.44

Another problem with wealth taxes would be tracking wealth held abroad. A wealth tax could be imposed on just domestic assets, but that would create a large incentive for the wealthy to hold their assets abroad. So, Congress would likely impose the tax on worldwide assets, yet that would create a large incentive for evasion. The Internal Revenue Service would be charged with the impossible task of auditing everything affected U.S. residents owned on a global basis and judging whether the valuations on all those foreign assets were fair.

Taxpayer liquidity would be another issue. Wealth tax payment would be difficult for people who mainly held assets that are illiquid and do not generate regular cashflows, such as homes, artwork, and ownership shares of some family businesses. The need to pay wealth taxes each year would force inefficient sales of assets to raise cash or require taxpayers to borrow money. The OECD found that liquidity issues have been a major problem with wealth taxes in Europe.45

In the 1970s, the British Labour Party campaigned on imposing an annual wealth tax, and it tried to follow through after being elected. However, party leaders eventually dropped the idea when they realized how complex the administration would be. The Chancellor of the Exchequer at the time, Denis Healey, said in his memoirs, “We had committed ourselves to a wealth tax; but in five years I found it impossible to draft one which would yield enough revenue to be worth the administrative cost and political hassle.”46

India enacted an annual wealth tax in 1957 and repealed it in 2015. Indian finance minister Arun Jaitley described the reasons for his government’s scrapping of the tax at an event in New York: “The practical experience has been it’s a high cost and a low yield tax.”47 The Indian wealth tax became riddled with exemptions, it was evaded, and it raised little revenue.48

An expert study for the Mirrlees Review concluded that the wealth tax in Europe “has been a particularly inefficient tax to collect,” and that for the UK it would be “costly to administer, might raise little revenue, and could operate unfairly and inefficiently.”49 An International Monetary Fund (IMF) study concluded that “taxing income from wealth, rather than taxing wealth itself, is more equitable and efficient.”50

For the United States, a wealth tax would not achieve the fairness that supporters are seeking. It would generate tax avoidance and lobbying by the wealthy for exemptions. In turn, that would increase public cynicism about the tax system. In countries that have had wealth taxes, the public has not perceived the actual operation to be fair. In its study, the OECD concluded, “A major concern with net wealth taxes is the ability of wealthier taxpayers to avoid or evade the tax. This has limited the potential of net wealth taxes to achieve their redistributive objectives and has contributed to perceptions of unfairness.”51

Economist Asa Hansson studied European wealth taxes and found that they often resulted in “poisoning general tax morale” because of the exemptions provided and the widespread avoidance.52 The OECD reports that “wealth taxes were unpopular in a number of countries, which contributed to their repeal.”53

Tax Avoidance and Capital Mobility

The flow of capital across international borders has soared since the 1980s.54 Corporations and individuals are increasingly moving their investments to countries with better growth opportunities and lower taxes. Most nations have responded by cutting their tax rates on capital to defend their tax bases and spur economic growth. The OECD notes that the “repeal of net wealth taxes can also be viewed as part of a more general trend towards lowering tax rates on top income earners and capital.”55

Since 1981, the average corporate tax rate across OECD countries fell from 47 percent to 24 percent, the average top personal income tax rate fell from 66 percent to 43 percent, and the average combined corporate-individual rate on dividends fell from 75 percent to 42 percent.56

Many countries have cut their capital gains taxes, as well as their withholding taxes on cross-border investment flows. Numerous countries have abolished their estate and inheritance taxes, including Austria, Canada, the Czech Republic, New Zealand, Norway, Portugal, and Sweden.57 The share of GDP raised by estate and inheritance taxes in the OECD fell from 1.1 percent in 1965 to 0.4 percent today.58

The OECD nations have recognized that wealth and capital income are responsive tax bases. High rates make the tax base shrink—both from domestic avoidance and from international mobility. Furthermore, individuals at the top end have more flexibility in their business and financial affairs than others, so they are particularly responsive to taxes.

Avoidance was common under European wealth taxes and was aided by governments that carved out exemptions.59 Farm and small business assets were often exempted over concerns about entrepreneurship. Pension assets were exempted over concerns about fairness. Artwork and antiques were exempted because of difficulties in valuation and worries about the break-up of collections. Forest lands were exempted for environmental reasons. Nonprofit organizations and intellectual property rights were often exempted. The French wealth tax exempted stocks of wine and brandy.60 Over time, taxpayers shifted their wealth into exempted assets and tax bases shrank.

The base of wealth taxes is net wealth, meaning assets less debt. The deductibility of debt encouraged people to borrow and then invest in the exempted assets and in assets that were hard for governments to find. People had the incentive to underreport assets and overreport debt. The OECD found there was “clear evidence of wealth tax avoidance and evasion” in Europe.61 An IMF article concluded, “The design of wealth taxes is notoriously prone to lobbying and the granting of exemptions that the wealthiest can exploit. Furthermore, the rich have proved adept at avoiding or evading taxes by placing their wealth abroad in low tax jurisdictions.”62

Wealth tax supporters imagine a simple, broad tax base. Thomas Piketty proposed that wealth taxes cover “all types of assets … no exceptions.”63 Senator Warren and the economists who designed her wealth tax plan say it would cover all assets above the exemption amounts.64 But actual wealth taxes have not worked that way.

Ireland’s experience in the 1970s is classic. The nation imposed a wealth tax in 1975 in response to concerns about wealth inequality, as described in a government White Paper at the time.65 But the government’s broad-based ideal for the tax was undone even as it was being implemented:

Pressure from influential lobby groups had debased and undermined the basic structure proposed in the White Paper. Pressure had come from agricultural interest groups; chambers of commerce; the accountancy profession, and the tourism lobby. The undermined wealth tax eventually enacted was therefore incapable of achieving the stated objectives of horizontal and vertical equity. The inevitably low yield then provided an apparent justification for its eventual abolition.66

The Irish wealth tax exempted homes, farm assets, pensions, art, jewelry, and other items. The tax raised little money and the “administration and compliance costs were very high relative to the yield.”67 It was abolished in 1978. The Irish were quick learners about the folly of wealth taxes.

The Swedish wealth tax experience was similar, as described in a study by economists Magnus Henrekson and Gunnar Du Rietz:

The numerous forms of relief and exemptions introduced over the years not only lowered wealth tax revenue, they also increased the distortive effects of the wealth tax. Most important among these effects were capital outflow and an unsustainable valuation and growth of asset classes exempted from wealth taxation. These asset holdings were often financed by borrowing, which in turn resulted in increased financial fragility.68

Henrekson and Du Rietz describe how avoidance undermined the tax: “First, one should note that despite high statutory tax rates and rapidly increasing wealth levels, especially following financial market deregulation in the 1980s, wealth tax revenue remained low. This is in itself a strong indication that people could with impunity evade the tax by taking appropriate measures.”69 Sweden repealed its wealth tax in 2007.

The current Spanish wealth tax has similar problems.70 Avoidance is fairly easy because many assets have been exempted, including small business assets, some shareholdings, life insurance policies, pension plans, and certain art and antiques. The Spanish wealth tax rate is high (up to 3.45 percent), but the tax only raises 0.2 percent of GDP in revenue.

A few statistical studies have measured the responsiveness of taxpayers to wealth taxes. A study by Katrine Jakobsen and coauthors examined responses to Denmark’s wealth tax, which was repealed in 1997. They found “sizable” responses to the tax with the effects being much larger at the top end of the wealth distribution.71 David Seim studied the Swedish wealth tax and found small responses from avoidance and evasion, but he did not study the shifting of assets abroad.72

A 2016 study by Marius Brülhart and coauthors examined behavioral responses to wealth taxes in Switzerland, where different tax rates are imposed by cantons. They found that “reported wealth holdings in Switzerland are very responsive to wealth taxation. We estimate that a 0.1 percentage-point rise in wealth taxation lowers reported wealth by 3.5 percent.”73 The estimates are large compared to the usual estimates of income tax responsiveness.

While this Swiss study ties the response to domestic avoidance, in other countries international capital mobility was a major issue. Henrekson and Du Rietz’s study on Sweden finds:

In 1989 all foreign exchange controls were lifted, making it difficult to prevent people from transferring wealth to tax havens, either illicitly or when taking residence in another country. Several studies found that a sizable share of large fortunes was being placed outside of Sweden in countries like Luxembourg and Switzerland. In those cases the government not only lost income from wealth taxation, but also tax revenue on capital gains, dividends and interest income. The Swedish Tax Authority (Skatteverket) reported that in the early 2000s the value of assets illicitly transferred offshore may have amounted to more than SEK [Swedish krona] 500 billion, and the accumulated assets of Swedish billionaires living abroad were at least as large. The magnitude of these outflows was a major motivation for the repeal of the wealth tax in 2007.74

As Henrekson and Du Rietz observe, the problem with capital outflows is that governments not only lose wealth tax revenues, but also lose other tax revenues that would have been generated by outgoing individuals and assets.

The French experience was similar to Sweden’s. The tax raised far less revenue than expected when it was introduced in the 1980s, noted law professor Gilbert Paul Verbit, and the “compliance costs of the wealth tax may be such that its principal beneficiaries are the tax advisors to those who must file.”75

Economist Éric Pichet calculated that domestic evasion reduced French wealth tax revenues by at least 28 percent, and that the tax induced a capital flight of about 200 billion euros between 1988 and 2007.76 He estimated that, while the French wealth tax raised 3.5 billion euros a year, the government lost money overall because other tax revenues shrank by about 7 billion euros a year. He concluded, “The fact that it costs more than it yields engenders a paradoxical situation in which all of France’s other taxpayers, including its least wealthy citizens, must bear the brunt of its overall tax burden.”77

How to Tax Capital

There are two basic things people do with their earnings: consume and save. Saving is abstaining from current consumption. Savings are channeled back into the economy and used to support investments by business enterprises. To grow, economies need pools of savings—that is, pools of capital or wealth.

Senator Warren and other policymakers are concerned that wealth is “concentrated.” But the wealth of the wealthy is mainly dispersed across the economy in productive business assets. Looking at the top 0.1 percent of the wealthiest Americans, 73 percent of their wealth is equity in private or public companies, while just 5 percent is the value of their homes.78

Looking just at billionaires, only 2 percent of their wealth is accounted for by their homes and personal assets, such as yachts, airplanes, cars, jewelry, and artwork.79 The great majority of their wealth is in productive business assets, which generate output for the broader economy.

Nonetheless, many policymakers and pundits believe that people with substantial wealth should be targets of heavy taxation. They think that raising taxes on people owning capital would lighten the burden on labor and that taxing wealth would benefit the nonwealthy.

However, imposing heavy taxes on wealth would reduce living standards for everyone because it would reduce the overall size of the economy. Under certain assumptions, a basic finding from economic theory is that everybody should want taxes on capital to be low or even zero—including wage earners, who have no capital income.80

Economist Greg Mankiw describes a simple economy with two groups: workers and capitalists.81 The capitalists save and earn capital income, while the workers earn wages and do not save. The workers are in the democratic majority and can set tax policy any way they want. Should they tax wages, capital income, or both? It turns out that—acting in their own interest—the workers should tax wages only, not capital income.

The reason is that the supply of capital is elastic or responsive to taxation, and so setting the tax rate to zero would generate increased saving and investment. In turn, that would create rising worker productivity and wages—worker efforts are more valuable when they have more and better machines to work with. In the long run, the after-tax wages of workers would be higher under this policy than under a policy of imposing taxes on capital.

This result assumes that the supply of capital is perfectly elastic or responsive. While that is not fully realistic, capital has become more responsive in today’s global economy. In another paper, Mankiw and coauthors noted that the zero capital tax prescription “is strengthened in the modern economy by the increasing globalization of capital markets, which can lead to highly elastic responses of capital flows to tax changes even in the short run.”82 They conclude that the “logic for low capital taxes is powerful: the supply of capital is highly elastic, capital taxes yield large distortions to intertemporal consumption plans and discourage saving, and capital accumulation is central to the aggregate output of the economy.”83

From an average worker’s point of view, it is beneficial for the wealthy to maximize their savings and reduce consumption. Capital and labor are complements in the economy—workers are more productive and better paid when they are supported by more capital generated by savers. The Council of Economic Advisers has summarized the empirical evidence in support of low taxes on capital.84

The basic idea goes back at least to Adam Smith, writing in The Wealth of Nations. He described how heavy taxes on mobile “stock” or capital would cause losses to workers:

Stock cultivates land; stock employs labour. A tax which tended to drive away stock from any particular country, would so far tend to dry up every source of revenue, both to the sovereign and to the society. Not only the profits of stock, but the rent of land and the wages of labour, would necessarily be more or less diminished by its removal.85

This insight on the importance of savings also underlies opposition to the federal estate tax, which is a wealth tax imposed at death. From a liberal perspective, law professor Edward McCaffery has long made the case for abolishing the estate tax, arguing, “The rich person who passes on wealth is doing good things for society—continuing to work and save, keeping money in the capital stock.”86 McCaffery notes that an odd feature of the estate tax is that it is a “virtue tax,” or the opposite of a sin tax.87 Sin taxes discourage vices, but estate taxes and other wealth taxes discourage the virtuous behavior of saving.

Greg Mankiw has made similar points:

When a family saves for future generations, it provides resources to finance capital investments, like the start-up of new businesses and the expansion of old ones. Greater capital, in turn, affects the earnings of both existing capital and workers.

Because capital is subject to diminishing returns, an increase in its supply causes each unit of capital to earn less. And because increased capital raises labor productivity, workers enjoy higher wages. In other words, by saving rather than spending, those who leave an estate to their heirs induce an unintended redistribution of income from other owners of capital toward workers.

The bottom line is that inherited wealth is not an economic threat. Those who have earned extraordinary incomes naturally want to share their good fortune with their descendants. Those of us not lucky enough to be born into one of these families benefit as well, as their accumulation of capital raises our productivity, wages and living standards.88

All of this raises what appears to be a policy dilemma. How can we have a tax system that does not penalize beneficial wealth accumulation but also distributes the tax burden equitably? How do we ensure that the rich pay a fair share of taxes while not discouraging saving?

The answer is consumption-based taxation. Consumption-based taxes can be taxes on transactions, such as retail sales taxes and value-added taxes. Or they can be taxes assessed on individuals and businesses, such as the “flat tax” designed by economists Robert Hall and Alvin Rabushka and the “X-Tax” designed by economist David Bradford.89

Both income and consumption-based taxes tax income from labor and capital. But unlike income taxes, consumption-based taxes exempt the “normal” return to capital, which removes the bias against saving and investment. The normal return is usually thought of as the yield on a riskless investment, which represents the time value of money.

Both income and consumption-based taxes tax the “above-normal” returns to capital. Those include the returns, or profits, attributable to market power, innovations, windfalls, and various rents available to certain businesses and investors.90 Economist Glenn Hubbard notes that wealthier households receive a larger portion of their capital income from these items, so consumption-based systems can be quite progressive.91
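
A stylized illustration of the difference: an income-type base reaches the full return to capital, while a consumption-type base exempts the normal (riskless) component and reaches only above-normal returns. The returns and riskless rate below are illustrative assumptions:

    # Stylized comparison of tax bases on $100 of capital. An income
    # tax reaches the full return; a consumption-based tax exempts the
    # normal (riskless) return and taxes only above-normal returns.
    def income_tax_base(total_return: float, wealth: float) -> float:
        return total_return * wealth

    def consumption_type_base(total_return: float, normal_return: float,
                              wealth: float) -> float:
        return (total_return - normal_return) * wealth

    # Illustrative figures: 8% total return, 3% riskless rate.
    print(f"{income_tax_base(0.08, 100.0):.2f}")              # 8.00
    print(f"{consumption_type_base(0.08, 0.03, 100.0):.2f}")  # 5.00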

Bradford agrees that “sources of great wealth,” such as monopolies and highly profitable technology firms, are taxed under both income and consumption-based systems.92 However, by exempting the normal returns, the latter system is more conducive to growth. Bradford also long argued that consumption-based tax systems allow for much simpler administration and compliance.93

Consumption-based systems are also better at equalizing taxes on capital across activities and industries, and they capture some activities that escape taxation under the income tax. As one example, the “buy-borrow-die” strategy in real estate investment can allow individuals to go years without paying income tax if they borrow against appreciating properties to fund their consumption.94 That is the sort of loophole that angers the public about wealthy people, and it would be closed under a consumption-based system.

Theoretical models suggest that consumption-based taxes are superior to income taxes on both efficiency and distributional grounds.95 The key is that income taxes distort both work effort and savings, whereas consumption-based taxes distort only work effort. Consumption-based taxes are more efficient because a given amount of revenue can be raised with fewer distortions than under income taxation. As for distribution, a consumption-based tax can be designed to match the progressivity of an income tax while collecting revenue with fewer distortions.

Tax law professors Joseph Bankman and David Weisbach conclude that “everyone is equally well off or better off under a properly designed consumption tax,” as compared to an income tax.96 They note that consumption-based taxes would tax the “idle rich,” which is often the motivation for taxes on the wealthy.97

Economists Kevin Hassett and Alan Auerbach agree that consumption taxes would target wealth, noting that “consumption taxes reduce the value of wealth, just as wealth taxes do” and “if the disproportionate political power of the wealthy is the concern, a consumption tax is potentially a more powerful tool.”98

Wealth taxes are an inefficient method for taxing the rich because they treat returns to capital in precisely the opposite way from consumption-based taxes. Wealth taxes tax the normal returns to saving while exempting some above-normal returns, which distorts decisions to save and invest.99 In its report on wealth taxes, the OECD pointed to this problem: “The taxation of normal returns is likely to distort the timing of consumption and ultimately the decision to save, as the normal return is what compensates for delays in consumption.”100

Auerbach and Hassett come to similar conclusions:

a consumption tax differs from a capital income tax in its treatment of capital income only by its exemption of the safe rate of return on investment. Thus, consumption taxes hit wealth without interfering with the incentive to save associated with the intertemporal terms of trade. Wealth taxes, on the other hand, effectively tax the safe rate of return on investment because they do not depend on actual rates of return, thereby incurring the intertemporal distortion but forgoing tax on other components of the rate of return.101
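
Their point can be restated in simple arithmetic (a hypothetical illustration, not a calculation from the sources cited). An annual wealth tax at rate τw is equivalent, with respect to the safe return r, to a capital income tax at rate τw/r:

\[ \tau_{\text{equivalent}} = \frac{\tau_w}{r}, \qquad \text{e.g.,} \quad \frac{0.02}{0.04} = 50\% \]

A 2 percent wealth tax is thus a 50 percent tax on a 4 percent safe return, and a 100 percent tax in a year when the asset earns only 2 percent, since the levy is owed regardless of the return actually realized. An above-normal return, by contrast, adds nothing to that year’s wealth tax bill, which is precisely the reverse of the consumption-tax treatment.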

Bill Gates roughly captured the idea of consumption-based taxation when he said: “Think about the three wealthy people I described earlier: One investing in companies, one in philanthropy, and one in a lavish lifestyle. There’s nothing wrong with the last guy, but I think he should pay more taxes than the others.”102 A better framing would be to say that the last guy, who spends lavishly, is favored under income and wealth taxes, while the first guy, who saves, is penalized. Consumption-based taxation would fix that problem by taxing income and wealth only if consumed.

Because wealth taxes suppress savings and investment, they undermine economic growth. A 2010 study by Asa Hansson examined the relationship between wealth taxes and economic growth across 20 OECD countries from 1980 to 1999. She found “fairly robust support for the popular contention that wealth taxes dampen economic growth,” although the magnitude of the measured effect was modest.103

The Tax Foundation simulated an annual net wealth tax of 1 percent above $1.3 million and 2 percent above $6.5 million.104 It estimated that such a tax would reduce the U.S. capital stock in the long run by 13 percent, which in turn would reduce GDP by 4.9 percent and wages by 4.2 percent. The government would raise about $20 billion a year from the tax, but in the long run GDP would fall by hundreds of billions of dollars a year.
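
For a sense of scale, consider the annual liability of a hypothetical household with $10 million in net wealth under that schedule, assuming the two rates apply marginally to the wealth within each bracket:

\[ 0.01 \times (\$6.5\text{M} - \$1.3\text{M}) + 0.02 \times (\$10\text{M} - \$6.5\text{M}) = \$52{,}000 + \$70{,}000 = \$122{,}000 \]

Because the tax recurs every year, it compounds: if that portfolio earned a 4 percent nominal return ($400,000), the levy would claim roughly 30 percent of the household’s capital income annually, before any income taxes.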

Germany’s Ifo Institute recently simulated a wealth tax for that nation.105 The study assumed a tax rate of 0.8 percent on individual net wealth above 1 million euros. Such a wealth tax would reduce employment by 2 percent and GDP by 5 percent in the long run. The government would raise about 15 billion euros a year from the tax, but because growth would be undermined, it would lose 46 billion euros in other revenues, resulting in a net revenue loss of 31 billion euros. The study concluded, “the burden of the wealth tax is practically borne by every citizen, even if the wealth tax is designed to target only the wealthiest individuals in society.”106

Conclusions

Nations around the world have cut taxes on capital in recent decades, and most nations that had annual wealth taxes have repealed them. Recent U.S. proposals to increase taxes on wealth and capital income run counter to the lessons learned about efficient taxation in the global economy.

The Europeans discovered that imposing punitive taxes on the wealthy undermined economic growth. They found that wealth taxes encouraged tax avoidance and generated capital flight. European wealth taxes raised little money and became riddled with exemptions.

Wealth is accumulated savings, which is needed for investment. The fortunes of the richest Americans are mainly socially beneficial business assets that create jobs and income, not private consumption assets. Raising taxes on wealth would boomerang against average workers by undermining their productivity and wage growth.

Senator Warren says that she wants rich people to “pay a fair share, so the next kid has a chance to build something great and the kid after that and the kid after that.”107 But encouraging the wealthy to invest in new and expanding businesses is what creates opportunities for those young people, not redistributing more income through the tax code.

Creating a fair and efficient method of taxing capital is a challenge, but experts widely agree that wealth taxes are an inefficient way to do so. Rather than sin taxes, wealth taxes are virtue taxes that penalize the wealthy for being frugal and for reinvesting their earnings.

Rather than imposing a wealth tax or raising tax rates on capital income, policymakers should rethink the overall federal approach to taxing capital. A better way is through consumption-based taxation, which would tax wealth but in a simpler way that does not stifle savings, investment, and growth.

Notes

The author thanks David Kemp and David Titus for research help and David Burton for outside review.

1. Office of Sen. Elizabeth Warren, “Senator Warren Unveils Proposal to Tax Wealth of Ultra-Rich Americans,” press release, January 24, 2019. And see Naomi Lim, “Bernie Sanders: ‘Damn Right I Will Raise Taxes on the Rich,’” Washington Examiner, February 25, 2019.

2. Chris Edwards, “Options for Tax Reform,” Cato Institute Policy Analysis no. 536, February 24, 2005.

3. Warren, “Proposal to Tax Wealth of Ultra-Rich Americans.”

4. Tim Hains, “Elizabeth Warren: ‘Just Wrong’ to Call Me a Socialist, ‘But Markets Have to Have Rules,’” RealClearPolitics, March 10, 2019.

5. Lim, “Bernie Sanders: ‘Damn Right I Will Raise Taxes on the Rich.’”

6. Rocky Mengle, “2020 Election: Tax Plans for All 24 Democratic Presidential Candidates,” Kiplinger, June 28, 2019.

7. Average effective tax rates are total taxes paid divided by the Congressional Budget Office’s measure of income. Congressional Budget Office, “The Distribution of Household Income, 2016,” July 2019. Data are for 2016.

8. Congressional Budget Office, “The Distribution of Household Income, 2016.”

9. Organisation for Economic Co-operation and Development, “Growing Unequal? Income Distribution and Poverty in OECD Countries,” 2008, p. 104. And see Chris Edwards, “U.S. Tax Code Too Progressive,” Cato at Liberty, November 2, 2017.

10. Congressional Budget Office data show that the top 1 percent has increased its share of overall federal taxes since 2008. See Congressional Budget Office, “The Distribution of Household Income, 2016.”

11. About 90 percent of the municipal bond exemption goes to the top income quintile. See Harvey Galper, Kim Rueben, Richard Auxier, and Amanda Eng, “Municipal Debt: What Does It Buy and Who Benefits?,” National Tax Journal 67, no. 4 (December 2014): 901–24.

12. Organisation for Economic Co-operation and Development, “Revenue Statistics,” https://stats.oecd.org/index.aspx?datasetcode=rev.

13. Elizabeth Warren, “Ultra-Millionaire Tax,” https://elizabethwarren.com/ultra-millionaire-tax.

14. Paul Krugman, “Elizabeth Warren Does Teddy Roosevelt,” New York Times, January 28, 2019.

15. Edward N. Wolff and Maury Gittleman, “Inheritances and the Distributions of Wealth or Whatever Happened to the Great Inheritance Boom?,” Bureau of Labor Statistics Working Paper no. 445, January 2011, Table 8. The share fell from 23 percent in 1989 to 15 percent in 2007.

16. Organisation for Economic Co-operation and Development, “The Role and Design of Net Wealth Taxes in the OECD,” 2018, p. 76. The OECD study does not have the year introduced for Iceland, so I took the first year that wealth tax revenue shows up in OECD’s tax revenue database. See also Alexander Krenek and Margit Schratzenstaller, “A European Net Wealth Tax,” Austrian Institute of Economic Research, April 15, 2018, Table 1. This study shows earlier enactment years than the OECD for a number of the countries. Also note that some countries have taxes that cover a portion of wealth. For example, Belgium imposes an annual charge on financial securities and Italy imposes a tax on real estate and financial assets held abroad.

17. Marcus Drometer et al., “Wealth and Inheritance Taxation: An Overview and Country Comparison,” Ifo DICE Report 16, no. 2 (June 2018): 49.

18. Katrine Jakobsen, Kristian Jakobsen, Henrik Kleven, and Gabriel Zucman, “Wealth Taxation and Accumulation: Theory and Evidence from Denmark,” National Bureau of Economic Research Working Paper no. 24371, March 2018.

19. European Commission, “Cross-Country Review of Taxes on Wealth and Transfers of Wealth,” October 2014, p. 42. The report was completed by Ernst and Young.

20. Michel Rose, “Macron Fights ‘President of the Rich’ Tag after Ending Wealth Tax,” Reuters, October 3, 2017. And see Harriet Agnew, “French Government Opens Door to Wealth Tax Concession,” Financial Times, December 5, 2018. The wealth tax was replaced by a tax on high-end real estate.

21. “France in 14bn-euro Tax Black Hole,” BBC News, May 28, 2014. And see Anne Penketh, “France Forced to Drop 75% Supertax after Meagre Returns,” The Guardian, December 31, 2014.

22. PricewaterhouseCoopers, “Will Germany Revive Its Net Wealth Tax?,” European Tax News Alert, June 5, 2012.

23. Alena Bachleitner, “Abolishing the Wealth Tax: A Case Study of Germany,” Austrian Institute of Economic Research, WIFO Working Paper no. 545, 2017.

24. Thomas A. McDonnell, “Wealth Tax: Options for Its Implementation in the Republic of Ireland,” Nevin Economic Research Institute, September 2013, p. 23.

25. Drometer et al., “Wealth and Inheritance Taxation.”

26. Magnus Henrekson and Gunnar Du Rietz, “The Rise and Fall of Swedish Wealth Taxation,” Nordic Tax Journal 1, no. 1 (2014): 31.

27. Raymundo Larrain Nesbitt, “Andalusia to Abolish Inheritance Tax in 2019,” Spanish Property Insight, January 29, 2019. And see Carlos Gabarro, “Spain’s Wealth Tax and 10 Legitimate Ways to Reduce It,” Tax Notes International, April 2, 2018. Madrid provides a 100 percent credit against the tax.

28. Marius Brülhart, Jonathan Gruber, Matthias Krapf, and Kurt Schmidheiny, “Taxing Wealth: Evidence from Switzerland,” National Bureau of Economic Research Working Paper no. 22376, June 2016.

29. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” pp. 16–17.

30. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 88.

31. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 20.

32. Robin Boadway, Emma Chamberlain, and Carl Emmerson, “Taxation of Wealth and Wealth Transfers,” in Dimensions of Tax Design (Oxford: Oxford University Press, September 2010), p. 787. This is volume 1 of the Mirrlees Review.

33. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 18.

34. Michael Schuyler, “The Impact of Piketty’s Wealth Tax on the Poor, the Rich, and the Middle Class,” Tax Foundation, October 2014. And see Marcus Drometer et al., “Wealth and Inheritance Taxation: An Overview and Country Comparison,” Ifo DICE Report 16, no. 2 (June 2018).

35. Thomas Piketty, Capital in the Twenty-First Century (Cambridge: Belknap Press, 2014).

36. Jean-Philippe Delsol, Nicolas Lecaussin, and Emmanuel Martin, ed., Anti-Piketty: Capital for the 21st Century (Washington: Cato Institute, 2017); Chris Giles and Ferdinando Giugliano, “Thomas Piketty’s Exhaustive Inequality Data Turn Out to Be Flawed,” Financial Times, May 23, 2014; Richard Sutch, “The One Percent across Two Centuries: A Replication of Thomas Piketty’s Data on the Concentration of Wealth in the United States,” Social Science History 41 (Winter 2017): 587–613; Alan J. Auerbach and Kevin Hassett, “Capital Taxation in the 21st Century,” National Bureau of Economic Research Working Paper no. 20871, January 2015; and Matthew Rognlie, “Deciphering the Fall and Rise in the Net Capital Share: Accumulation or Scarcity,” Brookings Papers on Economic Activity, Spring 2015.

37. Warren and her wealth tax advisers claim that all assets above the exemption amount would be included. Emmanuel Saez and Gabriel Zucman, University of California, Berkeley, letter to Sen. Elizabeth Warren, January 18, 2019.

38. Rebecca S. Rudnick and Richard K. Gordon, “Taxation of Wealth,” in Tax Law Design and Drafting: Volume 1, ed. Victor Thuronyi (Washington: International Monetary Fund, 1996), p. 13.

39. Miranda Perry Fleischer, “Not So Fast: The Hidden Difficulties of Taxing Wealth,” San Diego Legal Studies Paper no. 16-213, March 14, 2016, p. 2.

40. Saez and Zucman, letter to Senator Warren.

41. Chicago Booth, IGM Forum, Economic Experts Panel, April 9, 2019, http://www.igmchicago.org/surveys/wealth-taxes.

42. Brian Raub, Barry Johnson, and Joseph Newcomb, Internal Revenue Service, “A Comparison of Wealth Estimates for America’s Wealthiest Decedents Using Tax Data and Data from the Forbes 400,” presented at National Tax Association 103rd Conference on Taxation, November 20, 2010.

43. Raub, Johnson, and Newcomb, “A Comparison of Wealth Estimates,” p. 134.

44. James Mirrlees et al., “Taxes on Wealth Transfers,” in Tax by Design (Oxford: Oxford University Press, September 2011), p. 347. This is the final report of the Mirrlees Review.

45. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 64.

46. Quoted in Robin Boadway, “Taxation of Wealth and Wealth Transfers,” p. 782.

47. Matt Phillips, “Forget Inequality, India Is Scrapping Its Wealth Tax,” Quartz, March 4, 2015.

48. Rajalakshmi Nirmal, “Why Jaitley Decided to Scrap Wealth Tax,” Hindu Business Line, March 8, 2015.

49. Boadway, “Taxation of Wealth and Wealth Transfers,” pp. 741, 781.

50. International Monetary Fund, “IMF Fiscal Monitor: Tackling Inequality,” October 2017, p. 37.

51. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 90.

52. Asa Hansson, “Is the Wealth Tax Harmful to Economic Growth?,” World Tax Journal 2, no. 1 (January 2010): 24.

53. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 93.

54. Chris Edwards and Daniel J. Mitchell, Global Tax Revolution (Washington: Cato Institute, 2008).

55. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 17.

56. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 17.

57. Drometer et al., “Wealth and Inheritance Taxation.”

58. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 23.

59. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 82. And see Rudnick and Gordon, “Taxation of Wealth.”

60. Gilbert Paul Verbit, “France Tries a Wealth Tax,” University of Pennsylvania Journal of International Law 12, no. 2 (Summer 1991): 181–217.

61. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 68.

62. James Brumby and Michael Keen, “Game-Changers and Whistle-Blowers: Taxing Wealth,” International Monetary Fund (blog), February 13, 2018.

63. Piketty, Capital in the Twenty-First Century, p. 517.

64. Warren, “Ultra-Millionaire Tax.” And see Saez and Zucman, letter to Senator Warren.

65. As discussed in McDonnell, “Wealth Tax.”

66. McDonnell, “Wealth Tax,” p. 23.

67. McDonnell, “Wealth Tax,” p. 25. McDonnell relies for his description of the 1970s Irish tax on a detailed 1985 study by Cedric Sandford and Oliver Morrissey. McDonnell himself is in favor of a new wealth tax in Ireland, but he wants a well-designed one this time around.

68. Henrekson and Du Rietz, “The Rise and Fall of Swedish Wealth Taxation,” p. 30.

69. Henrekson and Du Rietz, “The Rise and Fall of Swedish Wealth Taxation,” p. 30.

70. Gabarro, “Spain’s Wealth Tax and 10 Legitimate Ways to Reduce It.”

71. Jakobsen et al., “Wealth Taxation and Accumulation.”

72. David Seim, “Behavioral Responses to Wealth Taxes: Evidence from Sweden,” American Economic Journal: Economic Policy 9, no. 4 (2017): 395–421.

73. Brülhart et al., “Taxing Wealth: Evidence from Switzerland,” p. 4.

74. Henrekson and Du Rietz, “The Rise and Fall of Swedish Wealth Taxation,” p. 30.

75. Verbit, “France Tries a Wealth Tax,” p. 217. And see p. 193.

76. Éric Pichet, “The Economic Consequences of the French Wealth Tax,” La Revue de Droit Fiscal 14 (April 2007): 15.

77. Pichet, “The Economic Consequences of the French Wealth Tax,” p. 25.

78. Matthew Smith, Owen Zidar, and Eric Zwick, “Top Wealth in the United States: New Estimates and Implications for Taxing the Rich,” July 19, 2019, p. 46, http://ericzwick.com/wealth/wealth.pdf. See “preferred estimate.” And see Edward Wolff, “Household Wealth Trends in the United States, 1962 to 2016: Has Middle Class Wealth Recovered?,” National Bureau of Economic Research Working Paper no. 24085, November 2017, Table 6.

79. “The Wealth-X Billionaire Census 2019,” Wealth-X, May 9, 2019, p. 16.

80. I am referring to the results of optimal tax theory developed in a series of papers in the 1970s and 1980s. Economist Greg Mankiw asks, “What is the intuition for a zero optimal capital tax?” He answers, “uniform taxation is optimal absent any differences in elasticities or cross-elasticities of supply or demand… . Taxing capital income is undesirable because it means taxing future consumption more heavily than current consumption, which violates the presumption for uniformity.” N. Gregory Mankiw, “Commentary,” in Inequality and Tax Policy, ed. Kevin A. Hassett and R. Glenn Hubbard (Washington: American Enterprise Institute, 2001), p. 188. See also N. Gregory Mankiw, Matthew Weinzierl, and Danny Yagan, “Optimal Taxation in Theory and Practice,” National Bureau of Economic Research Working Paper no. 15071, June 2009; and George R. Zodrow, “Should Capital Income be Subject to Consumption-Based Taxation?,” James A. Baker III Institute for Public Policy, April 2006. For a contrary view, see Peter Diamond and Emmanuel Saez, “The Case for Progressive Tax: From Basic Research to Policy Recommendations,” Journal of Economic Perspectives 25, no. 4 (Fall 2011): 165–90. Optimal tax models assume that redistribution is a good thing and it is to be traded off with the damage caused by higher tax rates within a “social welfare function.” The utilitarian premise is dubious, but that is the starting point of these models.

81. Mankiw, “Commentary,” p. 189.

82. Mankiw, Weinzierl, and Yagan, “Optimal Taxation in Theory and Practice,” p. 21. And see Zodrow, “Should Capital Income be Subject to Consumption-Based Taxation?”

83. Mankiw, Weinzierl, and Yagan, “Optimal Taxation in Theory and Practice,” p. 11.

84. In particular, the Council of Economic Advisers examined taxes on corporate income. It noted that “reductions in the corporate tax rate incentivize corporations to pursue additional capital investments as their cost declines. Complementarities between labor and capital then imply that the demand for labor rises under capital deepening and labor becomes more productive. Standard economic theory implies that the result of more productive and more sought-after labor is an increase in the price of labor, or worker wages.” Council of Economic Advisers, “Corporate Tax Reform and Wages: Theory and Evidence,” October 2017. And see Council of Economic Advisers, “The Growth Effects of Corporate Tax Reform and Implications for Wages,” October 2017.

85. Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (Chicago: University of Chicago Press, 1976), book 5, chapter 2, p. 376.

86. Edward J. McCaffery, “Grave Robbers: The Moral Case against the Death Tax,” Cato Institute Policy Analysis no. 353, October 4, 1999, p. 14. And see Edward J. McCaffery, “The Uneasy Case for Wealth Transfer Taxation,” Yale Law Journal 104, no. 2 (1994): 283–365.

87. Edward J. McCaffery, “The Political Liberal Case Against the Estate Tax,” Philosophy & Public Affairs 23, no. 4 (Autumn 1994): 296.

88. N. Gregory Mankiw, “How Inherited Wealth Helps the Economy,” New York Times, June 21, 2014.

89. For the flat tax, see Robert E. Hall and Alvin Rabushka, The Flat Tax (Stanford: Hoover Institution, 1995). For the X-Tax, see David F. Bradford, Taxation, Wealth, and Saving (Cambridge: MIT Press, 2000), p. 67. And see David R. Burton, “Four Conservative Tax Plans with Equivalent Economic Results,” Heritage Foundation, Backgrounder no. 2978, December 15, 2014.

90. Glenn Hubbard discusses the four parts of capital income: returns to waiting, returns to risk taking, returns to market power, and luck. Income and consumption taxes treat these sources of income the same except for the first item. See R. Glenn Hubbard, “Would a Consumption Tax Favor the Rich?,” in Toward Fundamental Tax Reform, ed. Kevin A. Hassett and Alan J. Auerbach (Washington: American Enterprise Institute, 2005). Tax economists refer to normal and above-normal returns fairly loosely, but this OECD paper discusses some of the complexities. Hayley Reynolds and Tom Neubig, “Distinguishing Between ‘Normal’ and ‘Excess’ Returns for Tax Policy,” Organisation for Economic Co-operation and Development, Taxation Working Papers no. 28, 2016.

91. Hubbard says the claim that “consumption tax reform is a sop to the rich is almost certainly unfair, especially if a progressive consumption tax like that proposed by Bradford” were being considered. R. Glenn Hubbard, “Would a Consumption Tax Favor the Rich?,” p. 91. David Bradford similarly discusses why it is a misconception that consumption-based taxation is regressive. See David Bradford, Blueprints for Basic Tax Reform (Arlington: Tax Analysts, 1984), p. 122.

92. Bradford, Taxation, Wealth, and Saving, p. 334. And see David F. Bradford, “Fundamental Issues in Consumption Taxation,” American Enterprise Institute, 1996, pp. 10, 16.

93. Policymakers may add narrow breaks and unneeded complexity to both income and consumption-based taxes. However, the basic accounting under consumption-based taxes is simpler and they would do away with complex aspects of income taxation including depreciation, inventory accounting, and capital gains. See Chris Edwards, “Simplifying Federal Taxes: The Advantages of Consumption-Based Taxation,” Cato Institute Policy Analysis no. 416, October 17, 2001. David Bradford discussed how consumption-based taxation would be simpler than income taxation in many essays. For example, see Bradford, Taxation, Wealth, and Saving.

94. This is Edward McCaffery’s phrase. It is true that one needs to build wealth first before the strategy works. You buy and hold an asset, such as land, that appreciates while providing little current income flow, then you borrow against the appreciating asset for your personal consumption, and when you die your asset gets a step-up in basis. Edward J. McCaffery, “Taxing Wealth Seriously,” University of Southern California Legal Studies Working Paper, 2016.

95. Joseph Bankman and David A. Weisbach, “The Superiority of an Ideal Consumption Tax Over an Ideal Income Tax,” Stanford Law Review 58 (2006): 1413–56. And see Zodrow, “Should Capital Income be Subject to Consumption-Based Taxation?”

96. Bankman and Weisbach, “The Superiority of an Ideal Consumption Tax,” 1413–56. The authors note a “properly designed consumption tax is Pareto superior to an income tax.” Bankman and Weisbach are expanding on an idea explored in a 1976 study by Anthony Atkinson and Joseph Stiglitz.

97. Bankman and Weisbach, “The Superiority of an Ideal Consumption Tax,” 1413–56.

98. Auerbach and Hassett, “Capital Taxation in the 21st Century,” p. 20.

99. A wealth tax would tax normal returns and “any foreseeable above-normal returns associated with tradable assets,” but would exempt other types of above-normal returns that may not be capitalized in asset prices. See S. Cnossen and A. L. Bovenberg, “Fundamental Tax Reform in the Netherlands,” METEOR research memorandum no. 024, Maastricht University, January 1, 2000, p. 4.

100. OECD, “The Role and Design of Net Wealth Taxes in the OECD,” p. 59.

101. Auerbach and Hassett, “Capital Taxation in the 21st Century,” p. 19.

102. Quoted in Jon Hartley, “Why Economists Disagree with Piketty’s ‘r – g’ Hypothesis on Wealth Inequality,” Forbes, October 17, 2014.

103. Asa Hansson, “Is the Wealth Tax Harmful to Economic Growth?,” World Tax Journal 2, no. 1 (January 2010): 19–34.

104. Michael Schuyler, “The Impact of Piketty’s Wealth Tax on the Poor, the Rich, and the Middle Class,” Tax Foundation, October 2014.

105. Clemens Fuest et al., “The Economic Effects of a Wealth Tax in Germany,” Ifo DICE Report 16, no. 2 (June 2018).

106. Fuest et al., “The Economic Effects of a Wealth Tax in Germany,” p. 26.

107. Hains, “Elizabeth Warren: ‘Just Wrong’ to Call Me a Socialist.”

Chris Edwards is the director of tax policy studies and editor of DownsizingGovernment.org at the Cato Institute.

Overcoming Inertia: Why It’s Time to End the War in Afghanistan

John Glaser and John Mueller

The war in Afghanistan has become America’s longest war not because U.S. security interests necessitate it, nor because the battlefield realities are insurmountable, but because of inertia. Policymakers have shied away from hard truths, fallen victim to specious cognitive biases, and allowed the mission to continue without clear intentions or realistic objectives.

Although the American people are substantially insulated from the sacrifices incurred by this distant war, the reality is that the United States can’t win against the Taliban at a remotely acceptable cost. Almost two decades in, the insurgency is as strong as ever, and the U.S.-backed Kabul regime is weak and mired in corruption. And while official assessments of the conflict have long acknowledged it as a stalemate, top military leaders have consistently misled the public and advised elected civilians to devote greater resources to achieve victory.

In refusing to end the war, policymakers have succumbed to the sunk cost fallacy, believing that redoubling efforts would make good on spent resources and ensure that costs already borne were not expended in vain. They have also entertained the spurious notion that withdrawing from a lost war would harm America’s credibility. But the most pervasive myth that has prevented policymakers from ending the war is that a victorious Taliban would provide a haven for transnational terrorist groups to launch attacks against the United States. Not only does this exaggerate the terrorism threat, but it ignores the Taliban’s evident lack of interest in once again making Afghanistan a home base for international jihadists.

There has been progress on negotiations, and a full political settlement built around a cease-fire and a withdrawal of U.S. military forces from Afghanistan is within reach—but only if policymakers are willing to make significant concessions to the Taliban and to dispense with erroneous rationales for continuing the fight.

Introduction

In 2010, President Barack Obama told an interviewer:

It is very easy to imagine a situation in which, in the absence of a clear strategy, we ended up staying in Afghanistan for another five years, another eight years, another 10 years. And we would do it not with clear intentions but rather just out of an inertia. Or an unwillingness to ask tough questions.1

In the subsequent decade, the war in Afghanistan has been notable principally for the inertia Obama was concerned about. NATO allies have gradually faded away as the long war has become ever longer, leaving the United States to determine the end of the conflict on its own.

America, too, could leave or pull back. In previous conflicts, when the strategic rationale had gone stale or when the costs exceeded the expected benefits, the United States withdrew its forces from active hostilities, as it did in Somalia in 1993 and in Lebanon in 1983. And although the Vietnam War went on far too long, the United States withdrew from that stalemate in 1973. In each of these cases, withdrawal was the wise choice. But the United States seems to think it owns the war in Afghanistan. Consequently, the “tough questions” that are either ignored or answered with knee-jerk, unexamined responses include: Why are we still there, and should we still be there?

In the words of Lord Salisbury: “Nothing is more fatal to a wise strategy than clinging to the carcasses of dead policies.”2 This paper first sets out a group of propositions relating to some of the kinds of tough questions Obama suggested should be asked. In the process, it assesses why the United States has continued for so long to pursue its dead policies in Afghanistan. With these considerations in mind, it then lays out a plausible negotiation strategy for ending the war and for withdrawing American troops. The strategy is based in part on the experience with the U.S. war in Vietnam.

10 Policy Propositions

In the quest for Obama’s clear strategy, we offer 10 propositions to scrutinize the justifications for the war and to clarify the stakes.

1. The United States Can’t Win against the Taliban at a Remotely Acceptable Cost

The proposition that the United States can’t win in Afghanistan has long been appreciated, even at official levels. Six months before President Trump announced in August 2017 that he would send additional troops to Afghanistan, Gen. John Nicholson, then commander of U.S. forces in Afghanistan, testified before the Senate Armed Services Committee that “the current security situation in Afghanistan [is] a stalemate.”3 Five months later, Laurel Miller, who was acting special representative for Afghanistan and Pakistan until June 2017, said in an interview, “I don’t think there is any serious analyst of the situation in Afghanistan who believes that the war is winnable.”4 A year after that, Lisa Curtis, deputy assistant to the president and senior director for South and Central Asia at the National Security Council, told an audience at the U.S. Institute of Peace that “no one believes that there is a military solution to this conflict.”5

The Trump administration’s policy response of increasing troop levels in Afghanistan and leaving the strategy essentially unchanged fits a pattern going back to the George W. Bush administration. In his January 2008 State of the Union address, President Bush announced a troop surge in Afghanistan, sending an additional 3,200 marines, along with tens of billions of additional taxpayer dollars, to “fight the terrorists and train the Afghan Army and police.”6 Progress proved elusive, however, and later that year a classified National Intelligence Estimate assessed the situation in Afghanistan to be “bleak,” noting that “the Afghan government has failed to consistently deliver services in rural areas,” that the Taliban and other insurgent groups were beginning to fill the void, and that “the Taliban have effectively manipulated the grievances of disgruntled, disenfranchised tribes.” It further maintained that even if the Afghan army and police could be trained into an effective force of several hundred thousand, that improbable development would still be “insufficient if Pakistan remains a safe haven for insurgents.”7

In 2009, the Obama administration produced yet another comprehensive internal review of the war that, according to Obama’s deputy national security adviser Ben Rhodes, concluded the counterinsurgency strategy in Afghanistan “couldn’t succeed.”8 By 2010, briefers were pointing out to top generals that no counterinsurgency on record had succeeded when the insurgents had access to a deep cross-border sanctuary. They did add, however, that one could hope the situation in Afghanistan would prove to be an exception.9

But Obama had campaigned on recommitting to the war in Afghanistan. Citing the need to “keep the pressure on al Qaeda” and for “a military strategy that will break the Taliban’s momentum and increase Afghanistan’s capacity,” he increased troop levels by nearly 70,000,10 reaching a total of about 100,000 by 2011. In 2016, Obama warned that “the security situation in Afghanistan remains precarious” while acknowledging that the “Taliban remains a threat” and had even “gained ground in some cases.”11 He then passed the buck to Trump, leaving roughly 8,400 American troops in Afghanistan without a clear mission or a resolution to the conflict.

A 2019 report from the Special Inspector General for Afghanistan Reconstruction continues a theme that goes back to the earliest years of the war. Despite 18 years of trying to quell the Taliban insurgency and to build an independent and competent Afghan government, army, and police force, “Afghanistan remains one of the world’s poorest and most dangerous countries,” with the security forces still “not able to protect the population from insurgents in large parts of the country.”12

History demonstrates that indigenous armed groups tend to be more committed to their country than foreign military occupiers. Afghans in particular have a long history of resisting intruders, ousting the British twice in the 19th century and once in the 20th century and pushing out the Soviets at the end of the 1980s. Afghanistan’s landlocked geography, mountainous terrain, and porous borders complicate attempts at military domination from the outside while giving an advantage to guerrilla insurgents.

Military occupations fail far more often than they succeed even when active armed resistance is absent.13 Foreign-imposed regime change succeeds only in the rarest circumstances and often only in the short term; over the long term, it is more likely to lay the groundwork for future civil war than to stabilize or democratize, particularly so in an underdeveloped tribal society such as Afghanistan.14

Graeme Smith, a Canadian journalist who was stationed in Afghanistan, suggests that the counterinsurgency theory applied there has been, to put it mildly, “flawed.” The essential notion was that American soldiers, not knowing either the culture or the language and on a one-year tour of duty, “could walk into the world’s most conservative villages, make friends, hunt their enemies, and build a better society.” But “none of that,” he concludes, “proved successful.”15 Instead, the Taliban was finding that the notion of attacking foreign invaders regularly rallied tribesmen to their cause.16

Even in the early years, the war scarcely went smoothly, and things got much worse.17 In the wake of the successful 2001 U.S. invasion, an international coalition and anti-Taliban Afghan groups established a new government, many Afghans returned to their tortured country, and many countries sent aid and assistance. The coalition managed to provide a fair amount of security, particularly in the capital, Kabul, but much of the country continued to be run by, or plagued by, entrepreneurial warlords who were following traditional modes of conduct.

The Bush administration worked closely with bands of warlords and strongmen that opposed the Taliban but were notorious among the Afghan people as violent and corrupt thugs. With continued U.S. backing after the fall of the Taliban, this group eventually came to populate the new Afghan government. It should be little wonder, then, that the Kabul regime fell short of the functioning democratic state envisioned. Washington also erroneously conflated the Taliban and al Qaeda while refusing, sometimes over the wishes of its clients in Kabul, to allow moderate or defected members of the Taliban to join the government.18

Forced by the invasion into exile in Pakistan, the Taliban gradually regrouped, and by 2006 it had reignited a civil war in Afghanistan. The group soon controlled substantial areas in the south that were mostly inhabited by ethnic Pashtuns. Its operators were essentially free to come and go from base areas in the Pashtun section of neighboring Pakistan. The long, remote international border simply can’t be closed.19 Moreover, Pakistan was inevitably drawn into the fight. The United States has provided Pakistan with more than $34 billion in economic and security assistance since 2002.20 However, most Pakistanis—74 percent in 2012—view the United States as an enemy.21

Over the years, corruption has increased in Afghanistan. On one corruption index, the country ranked 172nd out of 180 countries.22 The current vice president, Abdul Rashid Dostum, has been “accused, along with nine of his top security officials and bodyguards, of kidnapping, torturing and raping a political rival, Ahmad Ishchi, who was then in his early 60s.” Although seven of Dostum’s bodyguards have been sentenced to years in prison for the crime, Dostum and his top aides have escaped prosecution.23 A government study in 2012 estimated that of the almost $100 billion in reconstruction aid that had been doled out by then, 85 percent had been siphoned off (including by American contractors) before it could reach its intended recipients.24 In 2010, “Afghan soldiers died of starvation at the National Military Hospital because pervasive bribery left the facility stripped of supplies.”25

There also have been major training failures. After seven years of buildup, some 200,000 Afghans were under arms, but only one battalion of 1,000 was deemed capable of carrying out operations independently.26 And by 2016, top American commanders were noting that, after a decade and a half of training by the United States at enormous cost, the Afghan army was still not ready, in part because it lacked effective leaders. To set things right, they said, would require the United States to keep working at it for, variously, several more years, decades, or generations.27

The Taliban now holds more territory than at any point since 2001, and the regime in Kabul ranks as one of the worst in the world on corruption and respect for human rights.28 The Department of Defense estimated Afghanistan’s security funding requirement to be about $6.5 billion for fiscal year 2019, of which the Afghan government pledged to cover only $500 million. According to Sen. Jack Reed (D-RI), ranking member of the Senate Armed Services Committee, the Afghan security forces “would disintegrate” without U.S. economic and military backing.29

As it’s gained and held land, particularly in the south of the country, the Taliban has set about trying to prove, with considerable success, that it can govern with more effectiveness and less corruption than the U.S.-supported entity in Kabul.30

It is common to see the cause or initial impetus of the Afghanistan fiasco in the early decision of the Bush administration to divert the focus of policy from Afghanistan to Iraq. But, as analysts Michael Mandelbaum and Steve Coll suggest, the notion of successfully using social engineering in Afghanistan was flawed from the start.31 In particular, it seems likely that the Taliban revival would have happened and proceeded apace whether Americans were there in greater numbers or not: the development was essentially unstoppable.

In Vietnam, the United States had not been able to break the will of the communists even though it delivered horrific punishment that, by any reasonable historical standard, should have overwhelmed enemy resistance.32 In contrast, in Afghanistan, the Taliban only needs to maintain a comparatively low level of violence. They can hit and run, retire to Pakistan for refreshment, and then come back to inflict more damage. If they can’t be cut off, they can likely continue the effort forever, or until the hated foreign invader gets sufficiently tired of the contest and goes away—whichever comes first. As in Vietnam, the key issue is one of patience and will. The Taliban has nowhere else to go; the Americans do.

The American military failure in Afghanistan is hardly unique. Indeed, for all the very considerable expense, the military has won no wars since World War II—especially if victory is defined as achieving an objective at an acceptable cost—except against enemy forces that essentially didn’t exist. The American military triumphed in comic opera wars over tiny forces in Grenada and over scarcely organized thuggish ones in Panama and Kosovo. And the Iraqis hardly presented much of a challenge in the 1991 Persian Gulf War. More recently, there has been a successful war against the Islamic State in Iraq and Syria (ISIS) insurgent group, an opponent that proved to be spectacularly self-destructive.33 However, the principal American contribution has been in air support; others have done the heavy lifting. There are also a few wars in which it could probably be said that the United States was ahead at the end of the first, second, or third quarter—Korea, Vietnam, Somalia, Afghanistan, Iraq, and Libya. But the outcomes of these—as seen in Afghanistan in full measure—were certainly less than stellar: exhausted stalemate, effective defeat, hasty withdrawal, and extended misery.

2. The U.S. Military Must Provide Honest Assessments of the War

The war has persisted despite the telltale signs of mission failure in part because of the culture in the Department of Defense and how it interacts with politics at the national level. In their public portrayal of the war, military leaders have rather persistently depicted a rosier picture than the facts warranted.34 In 2014, Gen. John Campbell told National Public Radio that the good news of progress in Afghanistan “sometimes [doesn’t] make the media,” that “the Afghan security forces [are] really stepping up their game,” and that he was “excited about the future here.”35 Such optimistic pronouncements from the military are common: it was in 2013 that Gen. Joseph Dunford talked about “the inevitability of our success.” In 2011, Gen. David Petraeus said that American forces had “reversed the momentum of the Taliban.” In 2010, Gen. Stanley McChrystal predicted that “success is still achievable.”36 In 2008, Gen. David McKiernan insisted that “we are not losing in Afghanistan.”37

Overly optimistic portrayals are partly a result of institutional habits and a view about civil-military relations that calls for focusing on tactical and operational facts on the ground while leaving broader strategic and political assessments of the war to elected leaders. Some military leaders publicly misrepresented the course of the war to avoid the hit to troop morale they expected would result from more honest and critical presentations.38 Others felt strongly that negotiations with the Taliban should only occur from a “position of strength,” which they believed was always just around the corner.39 But sometimes the deception was more flagrant: media reports revealed in 2011 that commanders tasked with briefing congressional delegations in Afghanistan deliberately misled members of Congress about the progress of the war.40

After his second deployment to Afghanistan, Army Lt. Col. Daniel L. Davis (now retired) spoke out publicly against this kind of distortion. He wrote two reports, one classified and one unclassified, and briefed members of Congress on his conclusions.41 “Senior ranking U.S. military leaders have so distorted the truth when communicating with the U.S. Congress and American people in regards to conditions on the ground in Afghanistan that the truth has become unrecognizable,” he wrote, adding that “if the public had access to these classified reports they would see the dramatic gulf between what is often said in public by our senior leaders and what is actually true behind the scenes.”42

Elected officials are often deferential to military leaders and national security advisers. This is partly because of the superior subject-area expertise of military and national security professionals, but also because going against such advice can be politically costly.

When Obama entered office in 2009, the senior military leadership strongly favored a troop surge in Afghanistan. According to Vali Nasr, at the time a senior adviser on Afghanistan and Pakistan at the Department of State, the White House was “ever afraid that the young Democratic President would be seen as ‘soft’” if he went against the military’s recommendations.43 Rhodes, Obama’s deputy national security adviser, says that the administration’s Afghanistan policy review was “shaped by leaks from the military designed to box Obama into sending more troops into Afghanistan.”44 One member of Obama’s National Security Council, a colonel who was also an Iraq war veteran, told the president that, if he were to “defy [his] military chain,” the top brass might resign in protest.45 “No Democratic president can go against military advice, especially if he asked for it,” advised Leon Panetta, then CIA director.46 Obama’s secretary of defense, Robert Gates, described the troop surge recommendations as “the classic Henry Kissinger model … You have three options, two of which are ridiculous, so you accept the one in the middle.”47 Obama expressed frustration at this. In the end, advisers presented him with four options, two of which were indistinguishable. “So what’s my option?” Obama asked. “You have essentially given me one option.”48 He complained to journalist Bob Woodward that the military was “really cooking the thing in the direction that they wanted … They are not going to give me a choice.”49

Trump faced similar pressure to recommit to the war in Afghanistan. The advice Trump received from his military and national security advisers was overwhelmingly supportive of continuing the mission—and of adding another 4,000 troops. According to Woodward’s account, Trump did push back at first, exploding:

You guys have created this situation. It’s been a disaster. You’re the architects of this mess in Afghanistan. You created these problems. You’re smart guys, but I have to tell you, you’re part of the problem. And you haven’t been able to fix it, and you’re making it worse.

Moreover, he added, “I want to get out, and you’re telling me the answer is to get deeper in.”50 But in the end, Trump succumbed to the military’s request.

The U.S. military has a strong parochial interest in avoiding the perception that the war in Afghanistan has been lost and therefore in ensuring it receives additional resources to continue fighting in it. But the problem extends beyond the Department of Defense. The professional foreign policy class in Washington, concentrated in the various national security agencies of the executive branch, is subject to a powerful bias in favor of action over inaction and troop surges over withdrawal.51 As a result, the advice presidents receive from this expert community tends to reflect these biases. But that expert consensus seems to exist only in the White House’s Situation Room and is frequently at odds with official assessments of the war, with the views of many specialists in academia, and with the perspective of the general public.52

3. A Taliban Victory Would Not Present a Serious Terrorism Threat to the United States

By far the most common justification for remaining in Afghanistan is the safe-haven myth: the fear that if the Taliban took over the country, they would let al Qaeda reestablish a presence there, leaving the terrorist organization to once again plot attacks on the United States. That is, it is effectively contended that although 9/11 was substantially plotted in Hamburg, Germany, just about the only reason further attacks haven’t taken place is that al Qaeda needs a bigger territorial base of operations, and that that base must be in Afghanistan.53

Ambassador Richard Holbrooke, who worked on Afghanistan policy under Obama as special envoy to South Asia, explained in 2009 that “the fundamental difference between Afghanistan and Vietnam is 9/11. The Vietcong and the North Vietnamese never posed a threat to the United States homeland. The [perpetrators] of 9/11 who were in that area still do and are still planning. That is why we’re in the region with troops.” If the Taliban returned to control in Afghanistan, Holbrooke maintained that “without any shadow of a doubt, al Qaeda would move back into Afghanistan, set up a larger presence, recruit more people and pursue its objectives against the United States even more aggressively.” That, he insisted, is “the only justification for what we’re doing.”54

Virtually all promoters of the war stress this notion. Obama applied it in 2009.55 And, in 2017, Petraeus, a retired general who had commanded American forces in Afghanistan, ardently contended, in an article written with the Brookings Institution’s Michael O’Hanlon, that:

America’s leaders should not lose sight of why the U.S. went to, and has stayed in, Afghanistan: It is in our national interest to ensure that country is not once again a sanctuary for transnational extremists, as it was when the 9/11 attacks were planned there. We have been accomplishing that mission since the intervention began in October 2001. Although al-Qaeda in Afghanistan and Pakistan is diminished, it could rebound if given the opportunity. Islamic State could expand its newfound Afghan foothold as well.56

Trump reflected that thinking when he authorized an increase of troops to Afghanistan in 2017. His “original instinct,” he said, was “to pull out,” but, as noted earlier, he had been persuaded by the military (whose record on predicting events in Afghanistan has been rather miserable) to believe that “the consequences of a rapid exit are both predictable and unacceptable.” Noting that “the worst terrorist attack in our history was planned and directed from Afghanistan because that country was ruled by a government that gave comfort and shelter to terrorists,” Trump was sure that “a hasty withdrawal would create a vacuum that terrorists … would instantly fill, just as happened before September 11th.”57 On one occasion when Trump expressed skepticism about the need to deploy additional forces, his then secretary of defense, James Mattis, reportedly told him, “Unfortunately, sir, you have no choice,” adding that it was imperative in order “to prevent a bomb from going off in Times Square.”58 When Trump was subsequently asked, “Can you explain why 17 years later we’re still there?” he replied: “We’re there because virtually every expert that I have and speak to say [sic] if we don’t go there, they’re going to be fighting over here. And I’ve heard it over and over again.”59

This key justification for staying in Afghanistan—indeed, the only one, according to Holbrooke—has gone almost entirely unexamined. It fails in several ways.60

First, it is unlikely that a triumphal Taliban would invite al Qaeda back because its relationship with the terrorist group has been strained from the start. In 1996, Osama bin Laden, an exile from Saudi Arabia and Sudan, showed up in Afghanistan with his entourage. As Lawrence Wright makes clear in his prizewinning book The Looming Tower, the relationship between the Taliban and al Qaeda was often very uncomfortable. Although quite willing to extend hospitality to their well-heeled visitor, the Taliban insisted on guarantees that bin Laden refrain from issuing incendiary messages and from engaging in terrorist activities while in the country. Bin Laden repeatedly agreed but also frequently broke his pledge.61

At times, the Taliban had their troublesome “guest” under house arrest, and veteran correspondent Arnaud de Borchgrave said he was “stunned by the hostility” that Mullah Mohammad Omar, the top Taliban leader, expressed for bin Laden during an interview.62 A senior Taliban official recalls that bin Laden was “a pain in the backside.”63 As Vahid Brown, of the Combating Terrorism Center at the U.S. Military Academy at West Point, New York, puts it, relations were “deeply contentious, and threatened by mutual distrust and divergent ambitions.”64 Meanwhile, Riyadh tried for years to get the Saudi renegade extradited, and it appears to have been close to success in 1998. However, the deal fell through after the Americans bombed Afghanistan in response to two al Qaeda attacks on a pair of U.S. embassies in Africa in August 1998.65

Bin Laden’s 9/11 ploy not only shattered the agreement but also brought armed destruction on his hosts.66 The last thing the Taliban would want, should it take over Afghanistan, is an active terrorist group continually drawing fire from the outside. As Richard Barrett, the United Nations’ former Taliban and al Qaeda monitor, put it in 2009, if the Taliban regain power, “they don’t want al Qaeda hanging around.”67 Moreover, unlike al Qaeda, the Taliban has a very localized perspective. They have never been interested in conducting international terrorism. They are primarily concerned with governing Afghanistan as they see fit, free from outside interference.

The main Taliban fighters in Afghanistan are quick to point out that they are running their own war, and it seems clear that al Qaeda plays only a limited role in their efforts. “No foreign fighter can serve as a Taliban commander,” insisted one Taliban leader in 2007.68 And, in 2010, the American commander of U.S. detention centers in Afghanistan said that fewer than 6 percent of his prisoners came from outside the country and that most were from Pakistan: “This is a very local fight,” he observed.69 Panetta, then the CIA director, estimated in 2010 that there were “maybe 60 to 100, maybe less” al Qaeda operatives in Afghanistan.70

An extensive 2008 study of the Taliban operation in Afghanistan included al Qaeda as part of the coalition but mentioned it only very occasionally when discussing the details of the insurgency.71 And there have long been reports that the main Taliban leaders are very hostile to the foreign militants and have explicitly distanced themselves from al Qaeda.72 As for the Islamic State’s branch in Afghanistan, the Taliban has actively fought it on the battlefield almost without interruption for years, making a Taliban-sponsored safe haven for that group unlikely.73

Second, it is not at all clear that al Qaeda would want to return to ravaged, impoverished, insecure, and factionalized Afghanistan even if it were invited. It would have to uproot itself from Pakistan, where it has been operating for more than a decade, and reestablish itself in new, unfamiliar territory. It’s difficult to see how an Afghan haven would be safer than the one al Qaeda occupies now. In fact, Douglas Saunders of Canada’s Globe and Mail reports that most allied commanders in Afghanistan whom he talked with think it “very unlikely” that al Qaeda would establish a base there even if the Taliban were to take over.74

Third, if al Qaeda were to return, the United States would still be able to bomb and raid in response to a clear and present threat to U.S. security. Indeed, it might well be in a better position to do so in Afghanistan than in Pakistan. American efforts to go after al Qaeda in Pakistan are hampered by concerns about the sensitivities of the Pakistanis and by the fact that Pakistan can retaliate by cutting off or cramping logistics lines. The constraints on taking potential future military action in an Afghanistan controlled by the Taliban are much less formidable. Also, American planners and forces would know the turf better, as they have been occupying the country for nearly two decades. Thus, al Qaeda would be unlikely to find much sanctuary in Afghanistan.

And fourth, the safe-haven argument is based on the ill-founded assumption that the presence of al Qaeda leaders in Taliban-controlled Afghanistan in the lead-up to 9/11 was essential for the success of the attacks. In fact, it seems to have had little, if any, operational utility. Al Qaeda operatives planned and coordinated the 9/11 attacks not just in Afghanistan but also in Germany, Malaysia, and the United States. Technological innovation and increasingly widespread access to the internet have only made instant communication across borders, oceans, and time zones easier in the ensuing years. A territorial haven in remote, landlocked Afghanistan wouldn’t be much help to jihadists plotting to attack the West. Terrorist groups seek inconspicuousness, to have no return address against which their enemies can retaliate.75

The notion that terrorists need a lot of space and privacy to hatch plots of substantial magnitude in the West has been repeatedly undermined by tragic terrorist episodes in Madrid in 2004, London in 2005, Paris in 2015, and Brussels and Istanbul in 2016. None of the attackers in those incidents operated from a safe haven, nor were their plans coordinated by a group within a safe haven. Al Qaeda Central has not really done all that much since it got horribly lucky on 9/11, and the patent inadequacies and incompetence of the group would scarcely be erased by its uprooting itself and moving to new foreign turf.76 Its problems do not stem from failing to have enough territory in which to operate or plan.

4. Defeat in Afghanistan Would Not Necessarily Destabilize the Region

Some commentators argue that a U.S. withdrawal would result in regional destabilization. One justification for continuing the war, in particular, is that a Taliban takeover in Afghanistan would somehow destabilize Pakistan, perhaps leading to terrorists or other militants seizing its atomic arsenal.

Actually, though, Pakistan has essentially been harboring the Taliban and generally enjoys good relations with it—and did before 9/11.77 Therefore, a Taliban takeover that brought stability—in the sense of freedom from civil war—to Afghanistan might just as well serve to help stabilize Pakistan.

Other regional players, including Iran, India, China, and Russia, would likely adjust their policies toward Afghanistan following a U.S. withdrawal, in some cases in ways that could benefit American interests. Moscow has recently cultivated a diplomatic relationship with the Taliban, and this seems calculated to irritate Washington, to expedite negotiations predicated on U.S. withdrawal, and perhaps to hedge against more radical jihadist groups at loggerheads with the Taliban.78 China worries that Islamic militant groups in Afghanistan could pose problems in its restive western province of Xinjiang, and it also plans to incorporate Afghanistan into its Belt and Road Initiative, meaning it has a strong preference for a functioning, stable Afghan government. Beijing has proven perfectly capable of managing its alliance with Pakistan while cooperating with Moscow on security issues in the broader Central Asian region.

A recent report published by the Canadian Security Intelligence Service contends that “most of Afghanistan’s neighbours want to prevent the US from maintaining a long-term military foothold in their backyard” and that there is “some level of regional agreement about the need to prevent the spread of instability” with multiple countries “seeking to facilitate peace negotiations, in part to curb the escalating violence on their doorstep and secure a stake in an eventual political settlement.”79 This suggests a confluence of interests among many regional powers and the United States—an opportunity policymakers in Washington should seize upon.

Whatever happens following a U.S. withdrawal, the regional players are likely to increase their investment of energy and resources in Afghanistan in ways that address their somewhat overlapping (albeit occasionally conflicting) interests. In short, Afghanistan would become someone else’s problem.80 If that problem were to worsen over time or cause substantial instability beyond Afghanistan’s borders, the country’s neighbors would surely suffer the consequences, and they would deal with them long before the United States must. Widespread regional destabilization is a rather low-probability consequence of withdrawal.

5. Efforts to Reduce Opium Production Are Futile and Needlessly Complicate the Mission

The heroin trade accounts for an estimated 60 percent of Taliban revenue, roughly $200 million annually.81 Up to 85 percent of the world’s opium is produced in Afghanistan, and drug traffickers cooperate with the Taliban, providing the group with weapons and cash in exchange for protection of trade routes. In addition to fueling the insurgency, Afghanistan’s opium exports contribute to a slew of problems around the world, such as empowering international drug gangs and increasing rates of addiction.82 The Kabul government is the other major beneficiary of the opium trade, and many corrupt Afghan officials have become quite wealthy by helping administer it. “In the district of Garmsir, poppy cultivation not only is tolerated, but is a source of money that the local government depends on,” the New York Times reported in 2016. “Officials have imposed a tax on farmers practically identical to the one the Taliban use in places they control.”83

It is widely accepted that the insurgency cannot be defeated so long as the drug trade persists. The United States has spent years and more than $8 billion trying to quash this critical source of sustenance for the insurgency, with tactics including prohibition, crop eradication, and bombing buildings suspected of being heroin laboratories. However, the effort has failed. Opium production increased by a staggering 87 percent from 2016 to 2017, to 9,000 metric tons—“the most in Afghan history,” according to the Brookings Institution’s Vanda Felbab-Brown.84 In 2014, the special inspector general for Afghanistan reconstruction concluded that “by every conceivable metric, we’ve failed. Production and cultivation are up, interdiction and eradication are down, financial support to the insurgency is up, and addiction and abuse are at unprecedented levels in Afghanistan.”85

The Taliban relies on the heroin trade out of need, not out of preference or indifference. In a condition of peace, however, it would no longer feel that need. Indeed, in 2000, after about four years in power, the Taliban famously imposed an outright ban on all opium cultivation, which reduced the harvest by 94 percent.86 The results of that effort are instructive: because farmers in other countries responded to the continued demand, the street price of heroin in both Europe and the United States did not change.

Outside the context of the counterinsurgency campaign, the drug trade out of Afghanistan does not pose a direct threat to the United States. Trying to eradicate or control opium production throughout the war has been a failure, and seeking to do so following withdrawal would simply continue an exercise in futility.87

6. Efforts to Ensure Women’s Rights Are Unlikely to Work

Thanks in part to the deliberate efforts of the United States over the course of the war, Afghan women are better off than they were under Taliban rule. Women at least nominally have the right to vote and to equal treatment; they hold prestigious positions in education and law; they work in healthcare and as private-sector entrepreneurs. Women hold 63 of the 320 seats in Afghanistan’s parliament.88

Najia Nasim and Megan Corrado, executive director and director of advocacy at the nonprofit advocacy group Women for Afghan Women, criticized Trump’s “concession-filled diplomacy” as dismissive of the rights of Afghan women, who will suffer repression when the U.S. military is no longer there to support the Kabul government and to thwart the Taliban.89 Mariam Safi, director of the Organization for Policy Research and Development Studies in Kabul, and Muqaddesa Yourish, a commissioner on Afghanistan’s Independent Administrative Reform and Civil Service Commission, similarly warned that withdrawal “will jeopardize for Afghans the future of hard-won gains such as constitutional rights, freedoms of citizens and democratic institutions.”90

However, while Afghanistan has progressed on many normative metrics over the course of the nearly two-decade nation-building effort, those gains are quite limited. According to the United Nations, Afghanistan ranks 153rd out of 160 countries for gender equality.91 In a 2017 index, Afghanistan tied with Syria for the worst place in the world to be a woman.92 As the Canadian intelligence study notes, while “there was no freedom for women in Taliban Afghanistan,” that was also the case “at the end of 2018—after nearly 18 years of international engagement.” The study stresses that “the reality is that Afghanistan was and is a deeply conservative culture governed largely by ancient traditions that are also reflected in their interpretation of Islam and its edicts.”93

Any retreat on women’s rights following a U.S. withdrawal would be heart-rending and tragic. However, advancement, perhaps halting, is more likely to take place in a condition of peace than of war. And if the post-9/11 experience has demonstrated anything, it’s that wars to remake foreign societies into liberal democracies are generally ineffective. In any case, the suggestion that women’s rights are a vital objective in the U.S. mission in Afghanistan is hard to square with the countless other places where human rights and democracy are absent or substantially circumscribed. It is not clear why respect for human rights is vital to American interests in Afghanistan but not in Saudi Arabia, for example.

7. Costs Already Borne in Afghanistan Do Not Justify Additional Investments

Proponents of continuing the mission also maintain that the United States must fight the war until it achieves a clear victory because anything less would dishonor the steep costs in blood and money that America has already devoted to the mission. In other words, it is argued, sunk costs necessitate continued investment.

Trump exhibited this kind of thinking when he announced his troop surge: “Our nation must seek an honorable and enduring outcome worthy of the tremendous sacrifices that have been made, especially the sacrifices of lives.”94 Similarly, in critiquing Obama’s gradual drawdown of troops from Afghanistan in 2015, Sen. John McCain (R-AZ) emphasized sunk costs: “All of us want the war in Afghanistan to be over, but after 14 years of hard-fought gains, the decisions we make now will determine whether our progress will endure and our sacrifices will not have been in vain.”95 That same year, former general Petraeus and Michael O’Hanlon implored Obama to “protect our investment in Afghanistan,” noting that “the investment to date” has been “well over 2,000 American lives and nearly $1 trillion in expense.”96

Particularly in limited counterinsurgency wars, decisionmakers are often more sensitive to potential future losses than to equivalent gains. This can produce a greater willingness to take uncertain gambles to avoid total defeat. Loss aversion, as it is called, often manifests in the form of the sunk cost fallacy, in which actors seek to make good on spent resources by redoubling their commitment to ensure that the costs were not expended in vain. Successive last-ditch efforts across three administrations to flood Afghanistan with more troops and resources in the hope that greater effort would enable America to eke out a “win” are consistent with the presence of this fallacy. Unfortunately, this cognitive bias masquerades as a serious strategic argument, pushing people to double down and become entrapped in additional net losses.

A decision about where and whether to devote resources should be based on whether the investment will add future value, not on sunk costs. Rational policymakers should be quick to abandon expensive ventures that lack a decent chance of yielding better returns. They should also give greater weight to opportunity costs and thus be more open to exploring alternatives.

8. Policymakers Should Not Be Overly Concerned about “Salient Failures”

“Failure salience,” according to political scientists Dominic D. P. Johnson and Dominic Tierney, refers to the “tendency to remember and learn more from perceived negative outcomes than from perceived positive outcomes.”97 The Obama administration’s withdrawal from Iraq at the end of 2011 and the subsequent rise of ISIS became a salient failure frequently cited to discourage withdrawing from Afghanistan. In 2017, Trump was persuaded to stay the course as a result. “As we know, in 2011, America hastily and mistakenly withdrew from Iraq,” Trump said in his speech announcing the troop surge in Afghanistan. “The vacuum we created by leaving too soon gave safe haven for ISIS to spread, to grow, recruit, and launch attacks. We cannot repeat in Afghanistan the mistake our leaders made in Iraq.”98

A more incisive lesson to draw from the rise of ISIS is that prolonged military occupations tend to generate violent resistance movements. ISIS is an outgrowth of al Qaeda in Iraq (AQI), which emerged from the Sunni insurgency that rose up to fight occupying U.S. forces. Its leadership consists of veteran AQI insurgents and former Baathists from the Saddam Hussein regime. It never could have filled the vacuum left by the United States’ withdrawal without the initial spark provided by the invasion. Moreover, any “vacuum” was created far more by the staggeringly inept policies of Iraqi politicians and by the unwillingness of the Iraqi army (trained by the United States at a cost of $20 billion) to fight.99

Given the state of both U.S. and Iraqi politics, America’s withdrawal was inevitable, and the end of 2011 was as auspicious a time as any to do it. But negative experiences have a profound impact on the psyche. Drawing a causal connection between the American withdrawal and the emergence of a rapacious terrorist army prone to spectacular atrocities and harboring vast territorial ambitions may be a compelling argument for some against withdrawal from Afghanistan, but it rests on a misreading of a separate case with entirely different actors, dynamics, and context.

9. Concerns about Humiliation and about Preserving American Credibility in the Event of a Withdrawal Are Misguided

Fear of a loss of credibility or standing has been another major impediment to withdrawal. As Richard Haass, president of the Council on Foreign Relations, put it earlier this year, an abrupt exit “would cast further doubt on America’s willingness to sustain a leading role in the world.”100 The real cost of withdrawing from Afghanistan, according to Edward Luce, a columnist for the Financial Times, “is to the US’s global standing.”101 Bing West, a military historian and former Reagan administration official, contends that it would be “a disaster for the prestige, influence, and self-image of America if Kabul fell in a manner similar to Saigon in 1975.”102 Even negotiating with the Taliban to eventually bring American troops home, former U.S. ambassador Ryan Crocker claims, is tantamount to “negotiating the terms of our surrender.”103

These concerns are essentially baseless. To begin with, states tend to assess the credibility of other states’ security commitments based on perceived national interests in discrete situations rather than on extrapolations of policies in different regions and contexts.104 NATO countries will not interpret a U.S. withdrawal from Afghanistan as a signal that Washington is ready to relinquish its security commitment to Europe any more than they did when the United States abandoned Vietnam. As for “America’s willingness to sustain a leading role in the world,” polling data strongly suggests that fighting a lost war for almost 20 years is doing more to sap the public’s enthusiasm for overseas ventures than a timely withdrawal ever could.105 The same goes for so-called standing. The unending quagmire has arguably tarnished America’s international reputation, but it is not clear that this has harmed national security enough to justify a continued occupation amid a simmering civil war at a cost of tens of billions of dollars per year.

The worry Crocker expresses, that negotiating an end to the U.S. war in Afghanistan without a clear victory would be tantamount to a humiliating surrender, is common throughout history. Though the public defense of the Vietnam War emphasized liberating South Vietnam and preventing states from falling like dominoes to communism, by 1965 the then assistant secretary of defense for international security affairs, John McNaughton, had concluded that the initial security reasons that had gotten America into Vietnam had become “largely academic” and that the U.S. objective in Vietnam was now to “avoid humiliation.”106 The tragic parallel to today’s war in Afghanistan is hard to miss.

Concerns about credibility, prestige, and reputation can often drive states to adopt more aggressive and militarized approaches to foreign affairs.107 However, there is little evidence that a perceived loss of prestige from overseas failures such as Vietnam and Afghanistan has a tangible impact on the nation’s security, except to the extent that it incentivizes political leaders to persist in costly ventures. The University of Washington’s Jonathan Mercer calls prestige an “illusion” that has “neither strategic nor intrinsic value.”108

Nor does the United States need to continue the mission because of fears about domestic political blowback. The American public accepted the capture of Saigon by the North Vietnamese in 1975 with remarkable equanimity, in part because of the popularity of U.S. withdrawal and a rather sanguine view of the threat a Vietcong victory posed to their lives and livelihoods.109 Similarly, when the United States abruptly withdrew from Lebanon in 1983, and from Somalia in 1993, there seemed to be no lasting hit to America’s influence or self-image. Nor did that happen when armed intervention in Libya in 2011 led to a calamitous civil war. All four debacles generated few lasting political problems for the people who had presided over them.

10. The Most Compelling, and Perhaps Only, Reason to Stay in Afghanistan Is to Avoid a Humanitarian Catastrophe

The strongest argument for continuing the forever war in Afghanistan is primarily humanitarian: as happened after the fall of the communist regime, the country could descend into another catastrophic civil war. A low-intensity conflict followed the Soviet withdrawal in 1989, but after Soviet aid to its clients in Kabul dried up in December 1991, the regime collapsed, insurgents stormed the capital, and Afghanistan descended into a brutal conflict that eventually brought the Taliban to power in 1996. Combatants, disciplined when confronting the Soviet invaders, disintegrated into dozens of squabbling and corrupt warlord and bandit gangs, plundering the population they had once defended. According to Ahmed Rashid, they “abused the population at will, kidnapping young girls and boys for their sexual pleasure, robbing merchants in the bazaars and fighting and brawling in the streets.” They “seized homes and farms, threw out their occupants and handed them over to their supporters,” and they “sold off everything to Pakistani traders to make money, stripping down telephone wires and poles, cutting trees, selling off factories, machinery and even road rollers to scrap merchants.”110

A similar fate could befall Afghanistan following U.S. withdrawal. Of particular concern is that in recent years, a branch of ISIS called Islamic State-Khorasan (IS-K) has established a modest presence in Afghanistan. However, it has suffered repeated tactical failures, as both the Taliban and the United States have actively battled the group and disrupted its operations. IS-K has little to no support from the local population and has been further weakened by the rollback and defeat of the Islamic State’s “caliphate” in Iraq and Syria.111 Suggestions that the group would rise and ultimately pose a grave threat to the United States following a withdrawal of U.S. forces are dubious.

Other sources of fracturing following a withdrawal of American troops are certainly imaginable, but it should be remembered that even without Soviet troops, the regime the USSR set up in Kabul managed to survive for years as long as financial assistance was provided. Moreover, the risk of internal instability must be weighed against the costs and risks inherent in an indefinite war that seems to cause at least as many national security problems as it allegedly staves off.

In addition, the humanitarian argument for continuing the occupation in Afghanistan confuses the security mission with the expansive ambitions adopted after the invasion. Although the Bush administration was well known for a neoconservative orientation that emphasized democracy promotion through regime-change wars, it began (or sold) the wars in Afghanistan and Iraq on national security grounds and mostly adopted the normative missions about democracy and the rule of law later.112 Leaders engaging in limited wars without decisive victories sometimes respond to that ambiguity by expanding their objectives. As Betty Glad and Philipp Rosenberg explain, “Once a belligerent has invested significant nonrecoverable resources in its attempt to win its original goal, the nature of its goals is apt to change.”113

And although the humanitarian situation could deteriorate following a U.S. withdrawal, it is by no means adequate under the current American occupation. At present, an estimated 2 million children in Afghanistan suffer from acute malnutrition. The Taliban, as insurgents against the U.S. occupation, exact a very heavy humanitarian toll on the country and frequently kill and abuse the civilian population. And yet 2019 marked the first time since the United Nations began documenting civilian casualties that U.S. and Afghan government forces killed more Afghan civilians than the Taliban and other insurgent groups did.114

Ending the war through a negotiated settlement, therefore, offers a better safeguard against a humanitarian catastrophe than simply continuing the occupation. But it is not a risk-free solution.

Negotiating a Political Settlement and Withdrawing U.S. Forces

Over the years, there have been sporadic efforts to find a negotiated solution to the war in Afghanistan.115 In his 2017 speech announcing the addition of a few thousand American troops to the war in Afghanistan, Trump laid out what he said was a plan for victory. But he then defined “victory” as something more akin to stalemate: preventing the Taliban from taking over and then perhaps negotiating.116

And in fact, the Trump administration has been quietly pursuing direct talks with the Taliban, with promising, if halting, results so far. The times may be propitious. The Taliban has set up what seems to be a strong negotiating team led by Mullah Abdul Ghani Baradar, who has been described as skilled and pragmatic.117 He had sought a peace deal a decade ago but was arrested by the security establishment in Pakistan, which at the time opposed negotiations.118 The fact that he and others have now been released, due in part to pressure from the American negotiator, Zalmay Khalilzad, is taken to be a sign that Pakistan now favors negotiations, and the fact that he has been appointed lead negotiator suggests that the Taliban does as well.119

Perhaps somewhat paradoxically, lessons for a deal might be drawn from the January 1973 agreement between the United States and the Communist Vietnamese that ended U.S. involvement. The Taliban, while open to talks, wants only to negotiate with the United States, not with what it calls the “slave” regime in Kabul.120 That is a condition similar to the one in Vietnam, where the United States pushed ahead with the 1973 agreement largely without substantial participation by the South Vietnamese regime. But as Khalilzad seems to have already accepted, for talks to move forward, the United States must accept this condition and negotiate alone, at least at the start.

The Vietnam agreement contained several elements that might be applied to the present, essentially stalemated, situation in Afghanistan. In this stalemate, Afghan government forces are incapable of seizing, holding, and then coherently governing areas controlled by the Taliban, while the Taliban recognizes that a takeover of government strongholds, in particular the heavily populated capital area of Kabul, is likely to be extremely difficult.121

These elements would be built around establishing an initial cease-fire. Thus, for a time, there would be a rather formal partition between Taliban-held areas and Kabul-held areas. Partition has been the effective condition for some time—indeed, it is how the country has traditionally been organized. The two sides would compete over governance in their respective areas, but the war, a decades-long disaster for all involved, would be ended or at least substantially tempered.

Over time, the main Afghan forces might develop a degree of cooperation and coordination. A great deal has changed since the American invasion, and a wired-in generation has developed, particularly in Kabul. And at least some in the Taliban realize that a full return to the Islamic Emirate that existed there before the invasion is no longer possible.122

According to the Canadian report, Taliban interlocutors “rarely if ever” still insist on “a settlement that restores an emirate form of government.” The leadership is “increasingly willing to state that they can accept some form of elected republic—often noting, paradoxically, that the main problem with elections now is the corrupt and chaotic way in which the Afghan government has administered them.”123 In fact, even if the Taliban were to fully take over, some of the gains of the long American occupation might well be retained. The Taliban have indicated, for example, that they would agree to permit women’s education, which they previously denied.124

A withdrawal of American military forces from the country, as in Vietnam, would also have to be a primary part of any negotiated deal, although, as in Vietnam, the United States could continue to supply the current regime, using civilians and perhaps contractors to facilitate the process. There could also be an exchange of prisoners, including some Taliban members still held in Guantanamo.125

In addition, the United States might require a pledge from the Taliban that it will not allow its territory to be used by international terror groups. The Taliban reportedly has been willing to guarantee that it would not provide a safe haven for international terrorism, including al Qaeda, and over the years, as noted earlier, its leaders have generally maintained that their concerns are local, not international.126 Some in the Taliban have been more resistant to the U.S. demand that they explicitly repudiate al Qaeda. But al Qaeda is scarcely a threat anymore, and the American demand for a wholesale denunciation seems to be required simply for domestic political purposes. Washington could therefore drop this condition—it is essentially meaningless.

With an American military withdrawal, the Taliban would lose its chief recruiting and motivating device, and under a cease-fire, Afghans could set about trying to work out their own future. An agreement with the Taliban would not necessarily bring the end of all fighting because there are spinoff and independent insurgent elements throughout the country as well as independent areas controlled by warlords—though it is at least conceivable that some of these could be brought into the agreement. As noted, the Taliban for years has been fighting against ISIS militants in Afghanistan as well as against other fringe offshoots. That said, 95 percent of violent incidents in Afghanistan involve fighting between pro-Kabul forces and the Taliban, which suggests these other militant groups “are a negligible factor on the battlefield.”127 With an agreement, the Taliban would likely continue to oppose these groups to the degree necessary, and they might even be willing to accept assistance from the United States (and/or regional powers) to do so.

Such a settlement might prove to be temporary. That is what happened in Vietnam when, after an interval of two years, the communists launched an offensive and the U.S.-supplied South Vietnamese military and government folded in 55 days as the United States wrung its hands from afar and then promptly, and with remarkably little obvious regret, moved on to other concerns. Later, the United States and the communist regime in now-unified Vietnam reconciled, commiserating with each other over their mutual concern about China.

However, the nightmare scenario is not a Taliban takeover or a further splintering of the country, but a descent into widespread and murderous civil war. There are no guarantees, but working against this outcome is the bone-deep exhaustion of the Afghan population with civil war, as seen in the overwhelming popularity of a short cease-fire between Taliban and government forces in June 2018, during which people from all regions and walks of life implored combatants on all sides to stop the fighting.128 The Afghan people have endured 40 years of war and are desperate for relief.129

The fact is that to satisfy the pressing U.S. interest in ending the war in Afghanistan, policymakers will have to make difficult and politically sensitive concessions. But if the nightmare scenario can be avoided, none of those accommodations exceeds the costs of waging a perpetual, stalemated conflict in the country. The national security threats emanating from Afghanistan have been considerably exaggerated, and even the worst-case scenarios present only limited, manageable hazards to American interests that are not effectively mitigated by continuing the war or by stubbornly adhering to maximalist, and fanciful, definitions of victory.

Conclusion

The United States cannot win the war in Afghanistan on the terms stipulated by the three presidents who have waged it, at least not at an acceptable cost. Pretending that the Taliban can be defeated and that a constitutionally bounded, democratic, and competent Kabul-based government can be left in its place is unrealistic. A Taliban victory might occur after an American military withdrawal, but this does not present a serious security concern to the United States. In particular, the threat of a terrorist safe haven is minimal and based mostly on the myth that territorial harbors provide great utility in conducting transnational terrorist attacks. Moreover, fears of regional disintegration and destabilization are misplaced, as are concerns about a loss of credibility: there is good reason to expect stability to emerge following a negotiated withdrawal, and the war itself seems to inflict greater damage to America’s image than defeat likely would. Narrower elements of the mission, including quelling the opium trade and securing a lasting human rights regime, have largely proven futile over almost two decades of effort and are not objectives that the U.S. military, a tool for protecting the country from threats overseas, is well suited to addressing.

A negotiated settlement, with a formal cease-fire and a U.S. military withdrawal at the center of it, is the most reasonable and promising way of overcoming inertia and of avoiding the most undesirable outcomes.

Notes

1. Bob Woodward, Obama’s Wars (New York: Simon & Schuster, 2010), p. 376.

2. Quoted in Paul Kennedy, “A Time to Appease,” National Interest, June 28, 2010, https://nationalinterest.org/article/a-time-to-appease-3539.

3. Gen. John W. Nicholson, Statement for the Record on the Situation in Afghanistan before the Senate Committee on Armed Services, 115th Cong., 1st sess., February 9, 2017, p. 2, https://www.armed-services.senate.gov/imo/media/doc/Nicholson_02-09-17.pdf.

4. Susan B. Glasser, “Laurel Miller: The Full Transcript,” Politico Magazine, July 24, 2017, https://www.politico.com/magazine/story/2017/07/24/laurel-miller-the-full-transcript-215410.

5. Lisa Curtis, “The Long Search for Peace in Afghanistan: Top-Down and Bottom-Up Efforts,” opening remarks at the U.S. Institute of Peace panel discussion, Washington, June 7, 2018, https://www.usip.org/events/long-search-peace-afghanistan.

6. George W. Bush, “President Bush Delivers State of the Union Address,” speech, Washington, January 28, 2008, https://georgewbush-whitehouse.archives.gov/news/releases/2008/01/20080128-13.html.

7. Steve Coll, Directorate S: The C.I.A. and America’s Secret Wars in Afghanistan and Pakistan (New York: Penguin Press, 2018), p. 336.

8. Ben Rhodes, The World as It Is: A Memoir of the Obama White House (New York: Random House, 2018), p. 75.

9. Coll, Directorate S, p. 488.

10. Barack Obama, “The New Way Forward—The President’s Address,” speech at the U.S. Military Academy at West Point, NY, December 1, 2009.

11. Barack Obama, “Statement by the President on Afghanistan,” speech at the White House, Washington, July 6, 2016, https://obamawhitehouse.archives.gov/the-press-office/2016/07/06/statement-president-afghanistan.

12. Special Inspector General for Afghanistan Reconstruction, 2019 High-Risk List, 2019, p. 8.

13. David Edelstein, Occupational Hazards: Success and Failure in Military Occupation (Ithaca, NY: Cornell University Press, 2008).

14. Alexander B. Downes and Lindsey A. O’Rourke, “You Can’t Always Get What You Want: Why Foreign-Imposed Regime Change Seldom Improves Interstate Relations,” International Security 41, no. 2 (Fall 2016): 43-89; Goran Peic and Dan Reiter, “Foreign-Imposed Regime Change, State Power and Civil War Onset, 1920-2004,” British Journal of Political Science 41, no. 3 (July 2011): 453-57; and Alexander B. Downes and Jonathan Monten, “Forced to Be Free?: Why Foreign-Imposed Regime Change Rarely Leads to Democratization,” International Security 37, no. 4 (Spring 2013): 90-131.

15. Graeme Smith, The Dogs Are Eating Them Now: Our War in Afghanistan (Berkeley: Counterpoint, 2015), p. xvi.

16. Jack Fairweather, The Good War: Why We Couldn’t Win the War or the Peace in Afghanistan (New York: Basic Books, 2014), p. 246.

17. Coll, Directorate S.

18. Coll, Directorate S, pp. 140-41.

19. Coll, Directorate S, p. 103.

20. Direct Overt U.S. Aid Appropriations for and Military Reimbursements to Pakistan, FY 2002-FY 2018 (Washington, DC: Congressional Research Service, 2017).

21. Pakistani Public Opinion Ever More Critical of U.S.: 74% Call America an Enemy (Washington: Pew Research Center, 2012).

22. “Afghanistan,” Transparency International, https://www.transparency.org/country/AFG.

23. Rod Nordland and Najim Rahim, “Afghan Vice President Survives Attack on Convoy,” New York Times, March 31, 2019.

24. Fairweather, The Good War, pp. 237-38.

25. Coll, Directorate S, p. 496; for a somewhat wider discussion, see pp. 494-96.

26. Fairweather, The Good War, p. 305.

27. Greg Jaffe and Missy Ryan, “The U.S. Was Supposed to Leave Afghanistan by 2017. Now It Might Take Decades,” Washington Post, January 26, 2016.

28. Kara Fox, “Taliban Control of Afghanistan on the Rise, US Inspector Says,” CNN, November 8, 2018.

29. Sen. Jack Reed, Testimony in Review of the Defense Authorization Request for Fiscal Year 2020 and the Future Years Defense Program before the Senate Committee on Armed Services, 116th Cong., 1st sess., February 5, 2019, p. 20, https://www.armed-services.senate.gov/imo/media/doc/19-04_2-05-19.pdf.

30. Ashley Jackson, “The Taliban’s Fight for Hearts and Minds,” Foreign Policy (September/October 2018): 43-49; Ashley J. Tellis and Jeff Eggers, U.S. Policy in Afghanistan: Changing Strategies, Preserving Gains (Washington: Carnegie Endowment for International Peace, 2017), p. 6.

31. Michael Mandelbaum, Mission Failure: America and the World in the Post-Cold War Era (New York: Oxford University Press, 2016), pp. 168-69; Coll, Directorate S, p. 664.

32. John Mueller, “The Search for the ‘Breaking Point’ in Vietnam: The Statistics of a Deadly Quarrel,” International Studies Quarterly 24, no. 4 (December 1980): 497-519. Recent research suggests the bombing campaign did have some effect in some cases. However, this was not enough to cause communist forces to pull back. Lien-Hang T. Nguyen, Hanoi’s War: An International History of the War for Peace in Vietnam (Chapel Hill: University of North Carolina Press, 2012).

33. On the Gulf War, see John Mueller, “The Perfect Enemy: Assessing the Gulf War,” Security Studies 5, no. 1 (1995): 77-117. On ISIS, see John Mueller and Mark G. Stewart, “Misoverestimating ISIS: Comparisons with Al-Qaeda,” Perspectives on Terrorism 10, no. 4 (2016): 32-41. The remarkable capacity of ISIS to self-destruct is one reason that lessons from that conflict are unlikely to be applicable to the war in Afghanistan. The Taliban does not share that crucial proclivity. See John Mueller, “Redefining Winning in Afghanistan,” National Interest, September 5, 2017.

34. Bernard Brodie’s observation about World War I seems to apply as well to the Afghanistan situation. He argues that “the first casualty is not so much ‘truth’ as simple reason” and “to attempt to express reason is, under the circumstances, to risk the label of ‘defeatist,’ the penalties for which are always unpleasant and sometimes extreme. The military commanders who in adversity can feel and exude optimism are the ones who inspire confidence.” War and Politics (New York: Macmillan, 1973), p. 26.

35. Steve Inskeep and Greg Myre, “Afghanistan’s Way Forward: A Talk with Gen. John Campbell, Decoded,” NPR’s Morning Edition, November 11, 2014.

36. These quotes were compiled by Patricia Gossman, “Commentary: What U.S. Generals Get Wrong about Afghanistan,” Reuters, April 12, 2018.

37. John F. Burns, “General Says He’s Hopeful about Taliban War,” New York Times, October 12, 2008.

38. This is according to an off-the-record conversation between one of the authors and a senior Pentagon official.

39. Barnett R. Rubin, “Negotiations Are the Best Way to End the War in Afghanistan,” Foreign Affairs, March 1, 2019.

40. Michael Hastings, “Another Runaway General: Army Deploys Psy-Ops on U.S. Senators,” Rolling Stone, February 24, 2011.

41. Scott Shane, “In Afghan War, Officer Becomes a Whistle-Blower,” New York Times, February 5, 2012; Daniel L. Davis, “Truth, Lies and Afghanistan: How Military Leaders Have Let Us Down,” Armed Forces Journal, February 1, 2012.

42. Michael Hastings, “The Afghanistan Report the Pentagon Doesn’t Want You to Read,” Rolling Stone, February 10, 2012.

43. Vali Nasr, The Dispensable Nation: American Foreign Policy in Retreat (New York: Anchor Books, 2014), p. 36.

44. Rhodes, The World as It Is, p. 74.

45. Woodward, Obama’s Wars, pp. 319-20.

46. Woodward, Obama’s Wars, p. 247.

47. Woodward, Obama’s Wars, p. 103.

48. Woodward, Obama’s Wars, p. 278.

49. Woodward, Obama’s Wars, p. 280.

50. There was dissent as well from Attorney General Jeff Sessions, who had been on the Senate Armed Services Committee for years and had repeatedly heard that the United States was six to 18 months from turning Afghanistan around—time and time again, the same thing, always wrong. Bob Woodward, Fear: Trump in the White House (New York: Simon & Schuster, 2018), pp. 255-56.

51. See Emma Ashford, “Trump’s Syria Strikes Show What’s Wrong with U.S. Foreign Policy,” op-ed, New York Times, April 13, 2018.

52. On the biases in the foreign policy community more generally, see Stephen M. Walt, The Hell of Good Intentions: America’s Foreign Policy Elite and the Decline of U.S. Primacy (New York: Farrar, Straus, and Giroux, 2018), pp. 91-136.

53. It is worth keeping in mind that the 9/11 attack has proven to be a severe outlier. Neither before nor after that event, in war zones or outside them, has any terrorist attack inflicted even one-tenth as much total damage. See John Mueller and Mark G. Stewart, Chasing Ghosts: The Policing of Terrorism (New York: Oxford University Press, 2016), pp. 117-21.

54. Matthew Kaminski, “Holbrooke of South Asia: America’s Regional Envoy Says Pakistan’s Tribal Areas Are the Problem,” Wall Street Journal, April 11, 2009.

55. Fairweather, The Good War, p. 246.

56. David Petraeus and Michael O’Hanlon, “Getting an Edge in the Long Afghan Struggle,” Wall Street Journal, June 22, 2017.

57. “Full Text: Trump’s Speech on Afghanistan,” Politico, August 21, 2017, https://www.politico.com/story/2017/08/21/trump-afghanistan-speech-text-241882.

58. Greg Jaffe and Missy Ryan, “Trump’s Favorite General: Can Mattis Check an Impulsive President and Still Retain His Trust?,” Washington Post, February 7, 2018.

59. Aaron Blake, “President Trump’s Full Washington Post Interview Transcript, Annotated,” Washington Post, November 27, 2018.

60. See also Paul Pillar, “Who’s Afraid of a Terrorist Haven?,” Washington Post, September 16, 2009, http://www.washingtonpost.com/wp-dyn/content/article/2009/09/15/AR2009091502977.html; John Mueller, “The ‘Safe Haven’ Myth,” The Nation, October 21, 2009; Martha Crenshaw, “Assessing the Al-Qa`ida Threat to the United States,” CTC Sentinel 3, no. 1 (2010): 6-9; Micah Zenko and Amelia Mae Wolf, “The Myth of the Terrorist Safe Haven,” Foreign Policy, January 26, 2015.

61. Lawrence Wright, The Looming Tower: Al-Qaeda and the Road to 9/11 (New York: Knopf, 2006), pp. 230-31, 287-88; Jason Burke, Al-Qaeda: The True Story of Radical Islam (London: I.B. Taurus, 2003), pp. 150, 164-65; Vahid Brown, “The Façade of Allegiance: Bin Ladin’s Dubious Pledge to Mullah Omar,” CTC Sentinel 3, no. 1 (January 2010): 1-6.

62. Scott Atran, “Turning the Taliban against Al Qaeda,” New York Times, October 26, 2010.

63. Nic Robertson, “Afghan Taliban Spokesman: We Will Win the War,” CNN, May 5, 2009.

64. Brown, “The Façade of Allegiance,” pp. 1-6.

65. Jason Burke, Al-Qaeda: Casting a Shadow of Terror (New York: I. B. Tauris, 2003), pp. 167-68.

66. See also Crenshaw, “Assessing the Al-Qa`ida Threat,” p. 7.

67. Scott Shane, “A Dogged Taliban Chief Rebounds, Vexing U.S.,” New York Times, October 10, 2009.

68. Brian Glyn Williams, “Return of the Arabs: Al-Qa`ida’s Current Military Role in the Afghan Insurgency,” CTC Sentinel 1, no. 3 (February 2008): 22-25.

69. Craig Whitlock, “Facing Afghan Mistrust, al-Qaeda Fighters Take Limited Role in Insurgency,” Washington Post, August 23, 2010.

70. Daniel W. Drezner, “Why I’m Glad I’m Not a Counter-Terrorism Expert,” Foreign Policy, June 28, 2010.

71. Seth G. Jones, “The Rise of Afghanistan’s Insurgency: State Failure and Jihad,” International Security 32, no. 4 (Spring 2008): 7-40.

72. Brown, “The Façade of Allegiance,” p. 2.

73. Afghanistan: The Precarious Struggle for Stability (Ottawa: Canadian Security Intelligence Service, May 2019), p. 28. This report summarizes the views emerging from a January 2019 meeting of six experts from Canada, the United States, and Europe.

74. “To the Point,” Public Radio International, May 14, 2009.

75. See Zenko and Wolf, “The Myth of the Terrorist Safe Haven.”

76. On al Qaeda’s inadequacies, see Fawaz A. Gerges, The Rise and Fall of Al-Qaeda (New York: Oxford University Press, 2011); Mueller and Stewart, “Misoverestimating ISIS”; Mueller and Stewart, Chasing Ghosts, chap. 4. Al Qaeda’s remarkably limited record since 2001 suggests that Glenn Carle was right when he said in 2008: “The organization … has only a handful of individuals capable of planning, organizing and leading a terrorist operation … its capabilities are far inferior to its desires… . We must not take fright at the specter our leaders have exaggerated. In fact, we must see jihadists for the small, lethal, disjointed and miserable opponents that they are.” Glenn L. Carle, “Overstating Our Fears,” op-ed, Washington Post, July 13, 2008. Terrorism specialist Marc Sageman characterizes the threat terrorists present in the United States as “rather negligible.” Marc Sageman, Misunderstanding Terrorism (Philadelphia: University of Pennsylvania Press, 2017), p. 170; see also Marc Sageman, Turning to Political Violence: The Emergence of Terrorism (Philadelphia: University of Pennsylvania Press, 2017), p. 373.

77. Coll, Directorate S.

78. Precarious Struggle for Stability, p. 49.

79. Precarious Struggle for Stability, p. 47.

80. This analysis comes from Barry Posen, “It’s Time to Make Afghanistan Someone Else’s Problem,” The Atlantic, August 18, 2017.

81. Justin Rowlatt, “How the US Military’s Opium War in Afghanistan Was Lost,” BBC News, April 25, 2019.

82. Mujib Mashal, “Afghan Taliban Awash in Heroin Cash, a Troubling Turn for War,” New York Times, October 29, 2017.

83. Azam Ahmed, “Tasked with Combating Opium, Afghan Officials Profit from It,” New York Times, February 15, 2016.

84. Vanda Felbab-Brown, “Afghanistan’s Opium Production Is through the Roof—Why Washington Shouldn’t Overreact,” Brookings Institution, November 21, 2017.

85. Alfred W. McCoy, “How the Heroin Trade Explains the US-UK Failure in Afghanistan,” The Guardian, January 9, 2018.

86. McCoy, “US-UK Failure in Afghanistan.” See also Barnett R. Rubin, Afghanistan from the Cold War through the War on Terror (New York: Oxford University Press, 2013), p. 401; Coll, Directorate S, p. 60.

87. On this issue more generally, see Christopher J. Coyne and Abigail R. Hall, “Four Decades and Counting: The Continued Failure of the War on Drugs,” Cato Institute Policy Analysis no. 811, April 12, 2017.

88. Special Inspector General for Afghanistan Reconstruction, 2019 High-Risk List, pp. 41-42.

89. Najia Nasim and Megan Corrado, “Don’t Sacrifice Afghan Women’s Freedoms for a Flawed Peace Deal,” op-ed, The Hill, February 16, 2019.

90. Mariam Safi and Muqaddesa Yourish, “What Is Wrong with Afghanistan’s Peace Process,” op-ed, New York Times, February 20, 2019.

91. United Nations Development Programme, Human Development Indices and Indicators: 2018 Statistical Update.

92. Jeni Klugman, “This Chart Shows the Best and Worst Countries for Women in the World Today,” Washington Post, November 7, 2017.

93. Precarious Struggle for Stability, p. 65.

94. “Trump’s Speech on Afghanistan.”

95. Kristina Wong, “McCain: Obama Should Have Halted Afghan Withdrawal,” The Hill, October 15, 2015.

96. David Petraeus and Michael O’Hanlon, “The U.S. Needs to Keep Troops in Afghanistan,” Washington Post, July 7, 2015.

97. Dominic D. P. Johnson and Dominic Tierney, “Bad World: The Negativity Bias in International Politics,” International Security 43, no. 3 (Winter 2018/19): 112.

98.“Trump’s Speech on Afghanistan.”

99. Joel D. Rayburn and Frank K. Sobchak, eds., The U.S. Army in the Iraq War—Volume 2: Surge and Withdrawal 2007-2011 (Carlisle, PA: U.S. Army War College Press, 2019), pp. 569-611.

100. Richard N. Haass, “Agonizing over Afghanistan,” Project Syndicate, January 14, 2019.

101. Edward Luce, “Donald Trump Is Pulling a Vietnam in Afghanistan,” Financial Times, April 4, 2019.

102. Bing West, “Afghanistan Options: Leave, Increase, Stand Pat, or Cut Back?,” Strategika, February 26, 2018, https://www.hoover.org/research/afghanistan-options-leave-increase-stand-pat-or-cut-back.

103. Ryan Crocker, “I Was Ambassador to Afghanistan. This Deal Is a Surrender,” Washington Post, January 29, 2019.

104. Daryl G. Press, Calculating Credibility: How Leaders Assess Military Threats (Ithaca, NY: Cornell University Press, 2007). Also see Jonathan Mercer, Reputation and International Politics (Ithaca, NY: Cornell University Press, 1996); and Robert Jervis and Jack Snyder, eds., Dominoes and Bandwagons: Strategic Beliefs and Great Power Competition in the Eurasian Rimland (New York: Oxford University Press, 1991).

105. Dina Smeltz, Foreign Policy in the New Millennium: Results of the 2012 Chicago Council Survey of American Public Opinion and U.S. Foreign Policy (Chicago: Chicago Council on Global Affairs, 2012).

106. Cited in Mercer, Reputation and International Politics, p. 39.

107. See Richard Ned Lebow, Why Nations Fight (Cambridge: Cambridge University Press, 2010).

108. Jonathan Mercer, “The Illusion of International Prestige,” International Security 41, no. 4 (Spring 2017): 135.

109. For an extended discussion, see John Mueller, “Reflections on the Vietnam Protest Movement and on the Curious Calm at the War’s End,” in Peter Braestrup, ed., Vietnam as History (Lanham, MD: University Press of America, 1984), pp. 151-57.

110. Ahmed Rashid, Taliban: Militant Islam, Oil and Fundamentalism in Central Asia (New Haven, CT: Yale University Press, 2000), chaps. 1-2.

111. Precarious Struggle for Stability, pp. 25-31.

112. John Mueller, War and Ideas: Selected Essays (London: Routledge, 2011), chap. 7; Chaim Kaufmann, “Threat Inflation and the Failure of the Marketplace of Ideas: The Selling of the Iraq War,” International Security 29, no. 1 (Summer 2004): 5-48.

113. Betty Glad and Philipp Rosenberg, “Bargaining Under Fire: Limit Setting and Maintenance during the Korean War,” in Psychological Dimensions of War, ed. Betty Glad (Newbury Park, CA: Sage Publications, 1990), p. 195.

114. David Zucchino, “U.S. and Afghan Forces Killed More Civilians Than Taliban Did, Report Finds,” New York Times, April 24, 2019.

115. For example, see Coll, Directorate S, chap. 31.

116. “Trump’s Speech on Afghanistan.”

117. Shane, “Dogged Taliban Chief Rebounds.”

118. Shashank Bengali, Sultan Faizy, and Aoun Sahi, “What Might Peace with the Taliban in Afghanistan Look Like?,” Los Angeles Times, January 29, 2019, p. A3.

119. Precarious Struggle for Stability, p. 69; Bengali, Faizy, and Sahi, “What Might Peace,” p. A3.

120. Borhan Osman, “The U.S. Needs to Talk to the Taliban in Afghanistan,” New York Times, March 19, 2018; Tellis and Eggers, U.S. Policy in Afghanistan, p. 16; Precarious Struggle for Stability, p. 68.

121. Osman, “U.S. Needs to Talk to the Taliban.”

122. Osman, “U.S. Needs to Talk to the Taliban.”

123. Precarious Struggle for Stability, p. 19.

124. Frud Bezhan, “Afghan Taliban Open to Women’s Rights—But Only on Its Terms,” Radio Free Europe/Radio Liberty, February 6, 2019.

125. Coll, Directorate S, p. 572.

126. Rod Nordland and Mujib Mashal, “U.S. and Taliban Make Headway in Talks for Withdrawal from Afghanistan,” New York Times, January 24, 2019.

127. Precarious Struggle for Stability, p. 48.

128. Precarious Struggle for Stability, p. 21; Najim Rahim and Mujib Mashal, “As Afghan Cease-Fire Ends, Temporary Friends Hug, Then Return to War,” New York Times, June 17, 2018.

129. Erik Goepner, “War State, Trauma State: Why Afghanistan Remains Stuck in Conflict,” Cato Institute Policy Analysis no. 844, June 19, 2018.

Citation: Glaser, John, and John Mueller. “Overcoming Inertia: Why It’s Time to End the War in Afghanistan.” Policy Analysis no. 878, Cato Institute, Washington, DC, August 13, 2019. https://doi.org/10.36009/PA.878.

John Glaser is director of foreign policy studies at the Cato Institute. John Mueller is a political scientist at Ohio State University and a senior fellow at the Cato Institute.

Legal Immigration Will Resolve America’s Real Border Problems


David Bier

The U.S. government has allowed its asylum and border processing system to become overwhelmed. Central Americans are crossing illegally and often relying on asylum and other processing procedures at the border because these are virtually the only ways for them to enter the United States. After numerous failed attempts to deter the flow or restrict asylum, the most realistic and humane way to control the border is for Congress and the administration to channel future immigrants into an orderly legal structure for coming to the country.

Five reforms would make the asylum system manageable again and restore control over the border:

1. Humanitarian parole: Waive entry restrictions for Central Americans in the backlogged green card lines and with family legally in the United States.

2. Private refugee sponsorship: Allow U.S. residents and organizations to sponsor refugees from abroad as the State Department had planned in 2016.

3. Guest worker expansion: Expand the H-2A and H-2B seasonal worker programs to year-round jobs for Central Americans and waive the H-2B cap.

4. Legalization: Legalize illegal immigrants who have no serious criminal convictions and let them reunite with their spouses and children, eliminating the network for future illegal immigration.

5. Processing at ports: Remove the cap on asylum seekers at ports of entry, process 100 percent of their claims there, and release them with an employment authorization document contingent on their appearing in court.

These reforms will not stop all asylum seekers, but they will redirect enough of the flow into other legal channels to make the asylum process manageable again for U.S. authorities.

Introduction

“We were looking for the immigration office,” Elmer Danilo Díaz Hernández told a Border Patrol agent after he crossed the border illegally with his 13-year-old son in 2018.1 Like Hernández, 70 percent of immigrants who cross the U.S.-Mexico border this year will not seek to evade detection, according to the Department of Homeland Security.2 Instead, they will intentionally seek out Border Patrol, treating the enforcement agency as an “immigration office” — a part of the legal U.S. immigration system. In Texas, immigrants line up at the border fence’s gate, which — because the fence is on the U.S. side of the Rio Grande — the agents open to process those waiting to be “apprehended.”3

The vast majority of immigrants coming to the U.S.-Mexico border clearly want the opportunity to enter a legal process, and many of them are accessing the only legal process available to them: asylum and related procedures. While U.S. law permits the fast removal of any noncitizen stopped at the border without documents, this rule has two exceptions. The exceptions apply to asylum seekers and their minor children who demonstrate a credible fear of persecution in their home country and to unaccompanied children arriving without their parents.4 Moreover, the government has also released parents who arrive with minor children (i.e., families) without screening them for asylum when it has run out of space to hold them for asylum interviews.5

Figure 1 shows how the numbers of asylum seekers, unaccompanied children, and families apprehended by Border Patrol each year have increased since 2009. By the end of 2019, the United States will have received about a million asylum seekers and unaccompanied children at the border since 2012 — the vast majority of whom have come from the Northern Triangle of Central America (i.e., El Salvador, Honduras, and Guatemala).6 Another 900,000 families have crossed illegally during that time.

If the government concludes that an asylum claim is credible — which it does three-quarters of the time — it generally releases asylum seekers into the United States pending an immigration court hearing to determine the validity of the claim. As a matter of law, unaccompanied children also skip the credibility determination process and are released to sponsors, usually family members, in the United States. In 2019, families were often released even without requesting asylum because the government had nowhere to detain them pending an asylum interview or their removal.

Of the immigrants who claimed a credible fear of persecution to start the asylum process in 2014, just 5 percent had received asylum or other relief from deportation by the end of fiscal year (FY) 2017, while the government had deported 33 percent of them and had ordered the removal of another 14 percent who had not yet left.7 The rest — nearly half of all the asylum seekers — remained in removal proceedings three years after their initial contact with authorities. If current trends continue, about half of all asylum seekers will likely end up with a removal order that is never executed (i.e., they will become illegal immigrants). Unaccompanied children have had similar outcomes.

This new flow of immigrants to the border is a marginal improvement on traditional illegal immigration because it is less dangerous for the migrants and easier for Border Patrol to monitor.8 Despite these marginal benefits, the situation poses many of the same problems as traditional illegal immigration, in which migrants attempt to evade detection. It consumes significant law enforcement resources, immigrants often pay thousands of dollars to criminal organizations to smuggle them to the U.S.-Mexico border, and many are victimized along the way. America can do better.

The Solutions: More Legal Options to Immigrate

Central Americans choose to come to the United States because it is the safest, freest, and most prosperous country that they can reach. The fundamental cause of the border surge is that crossing the border is a far more effective method for Central Americans to enter the United States than using the rest of the U.S. immigration system. Figure 2 highlights the disconnect between the number of visas and the number of people arriving at the border from the Northern Triangle. In 2019, border apprehensions of Central Americans are on pace to outnumber permanent visas issued to Central Americans by more than 20 to one. For temporary work visas, the ratio is 78 to one.

Currently, the government releases most asylum seekers into the interior of the United States because it lacks detention space to hold all of them, making an asylum claim a viable method of entry even if the immigration courts ultimately deny the application. Asylum applicants also receive employment authorization if their application remains pending for 180 days. And the law requires the government to release unaccompanied children. Given the limited number of visas, Central Americans rationally calculate that they are more likely to gain access to the United States through the U.S.-Mexico border. The following five reforms take that reality into account and would channel future immigrants into legal pathways, incentivize compliance, and restore integrity to the immigration system.

Solution 1: Parole for Green Card Applicants and U.S. Family

The most pressing need is for the government to allow immigrants to reunite with their families in the United States; family ties are a powerful mechanism enabling migration, even if they are not its underlying cause.9 As early as 2014, over 90 percent of Central American child migrants had at least one family member in the United States.10 Also that year, the United Nations High Commissioner for Refugees found that 81 percent of Central American children left their home countries, in part, to reunite with family members in the United States or for economic or educational opportunity.11 Adult Central Americans also generally have family members in the United States.12 The problem is that these arrivals have no legal way to reunite with those family members.

Fortunately, the Department of Homeland Security (DHS) has the authority to “parole into the United States … for urgent humanitarian reasons … any alien applying for admission to the United States” who is not otherwise qualified to enter.13 In the immigration context, “parole” is a waiver of immigration restrictions, allowing an immigrant to enter even if they do not fit into a specific category in the law. Paroled migrants receive only legal status and employment authorization, are ineligible for means-tested federal welfare programs, and cannot receive legal permanent residence (or, later, U.S. citizenship) unless they separately qualify through other existing pathways.14 The administration should grant parole to immigrants still in the Northern Triangle if:

  1. they have a green card petition approved on their behalf but cannot receive a visa due to the quotas; or
  2. they have a spouse, parent, child, sibling, grandparent, or — in the case of children — an aunt or uncle with legal status in the United States.

The argument for humanitarian parole is very strong for the first group. Green card applicants have the right to come to the United States eventually, but green card limits — which Congress has not updated since 1990 — impose such exceptionally long wait times that most eligible Central Americans cannot immigrate through these pathways for many years.

Immediate family members of American citizens or green card holders currently face long wait times (Table 1). New applicants for green cards from the Northern Triangle will have to wait up to 65 years in some cases, incentivizing many of them to come illegally rather than apply for a visa that would not be issued until after they had died of old age (Table 1).15 Approximately 144,000 immigrants from the Northern Triangle would benefit from humanitarian parole.16 If these immigrants will be reuniting with their families in the United States either way, the government should ensure that when they do, it is before they die of old age.

With such long waits, many applicants will never receive their green cards, and many others will likely never bother to apply at all. DHS should grant parole to these immigrants, allowing them to relocate immediately to the United States, and it should keep the program open to future green card applicants. This type of ongoing parole program would incentivize more immigrants to apply through legal channels once they see that those channels are a viable and quick way to immigrate legally. This would have a dual benefit: it would decrease the cost of monitoring the border and save migrants from dangerous journeys, since many immigrants would skip the hazardous trek through Mexico if they could use humanitarian parole.

In addition to this program, DHS should initiate a broader parole program for family reunification for anyone with a close relative in the United States who holds any legal status — citizenship, legal permanent resident status, temporary protected status, parole, etc. DHS should define close relatives as spouses, children, parents, siblings, and grandparents of the immigrants, as well as aunts and uncles if the immigrant is a minor. For most of these relationships, either no immigration category exists or the existing pathway requires the U.S. family member to be a U.S. citizen or legal permanent resident, excluding sponsors with other legal statuses. As an example, one Central American woman who was seeking asylum and who reunited with her U.S. citizen teenage daughter in Denver after crossing the border was ineligible for sponsorship because her daughter was a minor.17

While it is unclear how many people would benefit from this provision, the approximately 1.3 million Salvadorans, Guatemalans, and Hondurans who had legal status in the United States already in 2016 give some indication (Table 2).18 About three-quarters of them — 937,000 — were U.S. citizens, and another 368,000 were legal noncitizens with various statuses.

DHS should prioritize reuniting Central Americans with legal U.S. family members. People who have U.S. family are far more likely to immigrate here than others, and humanitarian parole would incentivize them to use the legal system. It also would further reward people who have relied on the legal immigration options and so create another positive incentive to obey the law.

Solution 2: Private Refugee Sponsorship by U.S. Individuals and Entities

The second group that should receive immediate attention is immigrants who have been forced to flee their home countries due to well-founded fears of persecution. U.S. law labels these immigrants “refugees” if they receive processing overseas or “asylees” if they apply in the United States.19 Although most immigrants arriving at the border from Central America will not receive asylum, about one in six people at the border who assert a credible fear of persecution do receive asylum by proving their cases in immigration court.20

This fact shows that many Central Americans could qualify for the refugee program, which would enable them to apply outside the United States rather than at the border. The problem is that the U.S. refugee program in Central America is virtually nonexistent, and refugees cannot apply directly to it. By the end of FY 2019, the program will have admitted fewer than 3,000 refugees from the Northern Triangle since FY 2015, while people from the Northern Triangle will have made nearly 275,000 asylum claims at the border (Figure 3).21 Since 2015, there have been about 96 asylum claims by Central Americans at the border for each refugee admitted from Central America.

The most difficult part of starting a refugee program is identifying refugees for resettlement. Generally, the U.S. Refugee Admissions Program relies on referrals from the United Nations High Commissioner for Refugees (UNHCR). This process works most effectively when the refugee population is broad and easily identifiable based on direct government persecution. Generally, UNHCR refers refugees who it knows need resettlement from camps where they have lived for a protracted period and have little hope of returning home. In Central America, however, refugee claims are largely based on private violence that the government refuses to investigate, and there are no refugee camps.22 In those cases, having private actors like family members, nonprofit organizations, and churches identify refugees is superior to relying on UNHCR.

These private actors would submit requests to the U.S. Department of State to resettle refugees whom they have identified. If the private actors pay the costs of processing the application and bringing the refugees to the United States, the program would reduce the number of candidates without legitimate refugee claims and provide a funding mechanism to process applications quickly. If processing is exceptionally slow, immigrants may decide to head to the U.S.-Mexico border anyway, undermining the main purpose of the program. Although refugees traditionally have not paid normal administrative processing fees, rapid processing would be so important for expanded refugee resettlement from Central America that the government should adopt a fee structure anyway.

Private refugee sponsorship is not unprecedented. Canada has had a private sponsorship system for refugees since the late 1970s, and more than 275,000 refugees have used it.23 The United States also briefly had a private refugee sponsorship program for Cubans and Soviet Jews in the late 1980s and early 1990s.24 The Obama administration adopted a family refugee sponsorship program when it created the Central American Minor (CAM) program in 2014. CAM allowed U.S.-based parents with legal status to request resettlement on behalf of their children. Unfortunately, its narrow criteria allowed only a few thousand children to apply. In late 2016, the State Department announced plans to create a broader pilot program for private refugee sponsorship in 2017, which would have allowed private organizations and individuals to sponsor refugees without family ties.25 The Trump administration failed to implement the pilot program, and it canceled the CAM program.26

A private sponsorship program could begin almost immediately without needing to involve UNHCR. Sponsors would submit an affidavit of support that shows that they have the resources to fund the refugees’ initial resettlement and includes a pledge to support them if they are unable to support themselves in their first two years in the country. Moreover, the Department of State should make it possible for refugees to apply directly to the program, and U.S. private actors could decide which refugees they want to sponsor. Allowing refugees to enter a legal humanitarian immigration process in their home countries (if possible) or in a country to which they have fled would give them a reason to await adjudication of their application and ultimately sponsorship rather than immediately heading to the United States.

Solution 3: Expand Guest Worker Programs in Central America

Family reunification and persecution motivate a significant portion of Central American migration, but economic opportunity and employment remain the most important factors drawing Central Americans to the border. In one typical example, Honduran Héctor Romero told the New York Times in January 2019 that he would head north because “I have had only two days’ work a week for the past three months and that barely covers expenses.”27 In 2014, 87 percent of Central Americans apprehended in Mexico told Mexican officials that lack of employment, low wages, and poor working conditions were their primary motivations for leaving their home countries.28 A 2017 survey of Central Americans found that employment and wages dominated the reasons for relocating.29 Nearly all Guatemalans, 94 percent of Hondurans, and 66 percent of Salvadorans exclusively cited economic motivations.30

To address the economic drivers of migration, Congress needs to expand existing guest worker programs, or create new ones, for migrants from the Northern Triangle to channel them into legal employment. Worker programs have already effectively controlled illegal immigration from Mexico — the largest historical source of people crossing illegally into the United States.31 Despite the increase in apprehensions of Central Americans by Border Patrol, apprehensions overall have declined since the early 2000s as more Mexican immigrants have used the guest worker visa programs.

Figure 4 compares entries under low-skilled guest worker visa programs and apprehensions per Border Patrol agent from FY 1949 to FY 2018. Researchers use apprehensions as a proxy for illegal crossings, but because more agents can result in more apprehensions without more people crossing, it is important to control for the amount of border enforcement by looking at the number of apprehensions per Border Patrol agent. As Figure 4 shows, when illegal immigration first spiked in the early 1950s, Congress responded both with enforcement and with guest worker liberalization under the bracero program for Mexican seasonal agricultural workers. Border Patrol even walked many illegal immigrants to the border, handed them a work visa, and readmitted them legally.32
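
For readers who want to see the Figure 4 normalization concretely, here is a minimal sketch of the arithmetic, using hypothetical yearly totals rather than the figure’s actual underlying series:

```python
# A minimal sketch of the Figure 4 normalization: dividing total
# apprehensions by the number of Border Patrol agents to control for
# enforcement intensity. All yearly totals here are hypothetical
# placeholders, not the data underlying the paper's figure.
apprehensions = {2000: 1_640_000, 2010: 448_000, 2018: 397_000}  # hypothetical
agents = {2000: 9_200, 2010: 20_500, 2018: 19_400}               # hypothetical

for year in sorted(apprehensions):
    per_agent = apprehensions[year] / agents[year]
    print(f"FY {year}: {per_agent:,.1f} apprehensions per agent")
```

Computed this way, a rising agent count no longer inflates the crossing proxy: the hypothetical series above falls from roughly 178 apprehensions per agent to roughly 20, a change the raw totals alone would misstate.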

The bracero guest worker expansion resulted in a massive decline in illegal immigration. When Congress allowed the program to sunset in 1965, illegal immigration returned, beginning four decades of large-scale, uninterrupted crossings. At the time, Border Patrol agents fiercely opposed the elimination of the bracero program, correctly predicting that it would result in more illegal immigration than the agency could realistically expect to control. “We can’t do the impossible,” one Border Patrol official told Congress when asked if the agency could stop illegal immigration without the bracero program.33

Since the late 1990s, however, the number of worker admissions has risen almost continuously. The H-2A and H-2B visa system — for seasonal agricultural and nonagricultural positions, respectively — came into existence in 1987, but it took time for employers to learn how to navigate the regulations. As Figure 5 shows, nearly 99 percent of the increase in worker admissions has come from Mexico, while very few have come from the Northern Triangle countries of Central America.

Guest worker programs have effectively controlled illegal immigration from Mexico, and they could do the same for Central America. Work visas need not be issued to everyone who would otherwise come illegally, but their widely known availability creates an expectation that a person could receive a visa in the future if they wait. “Most of my friends go with visas or they don’t go at all,” one Mexican worker said in 2019. Although he had not yet received a visa, his prior experience working in the United States under the H-2A program gave him a reason to wait. He told the Washington Post that he wants to be “working in the United States — but only with a visa.”34 That explains why, from FY 1996 to FY 2019, every H-2 worker admitted from Mexico was associated with two fewer border apprehensions of Mexicans.35

Central Americans would cherish the same opportunity. In fact, because asylum seekers are eligible for employment authorization after their application has been pending for 180 days, the United States already has a de facto worker program for Central America, just one that requires the workers to travel to the border and ask for asylum.36 DHS issued more than 345,000 employment authorization documents to immigrants with pending asylum claims in FY 2018 (Figure 6).37 This means that in practice, the asylum program is already a larger worker program than the H-2 guest worker programs, which issued 280,000 visas total in FY 2018.38 FY 2018 was even a down year for asylum employment authorizations because of fewer asylum requests at the border in 2017. In FY 2017, more than 400,000 were issued, reflecting higher border flows the year before. It is likely that FY 2019 will exceed the FY 2017 record and that FY 2020 will far surpass it.

Nearly all H-2 visas have gone to Mexicans for several reasons. First, migrant workers cannot apply directly for H-2 visas. U.S. employers must recruit the workers and petition for visas on their behalf, and employers have no incentive to recruit in the Northern Triangle.39 Second, with about 130 million people, Mexico has a much bigger labor market in which to recruit. By comparison, the three Northern Triangle countries have just 33 million people combined, and the largest — Guatemala — has only 17 million. As long as U.S. wages remain much higher than Mexican wages, employers can always find enough willing workers in Mexico alone. Third, U.S. recruiters of foreign workers already operate in Mexico, so the marginal cost of recruiting additional workers there approaches zero. And Central Americans cannot simply go to Mexico to meet with U.S. recruiters there because U.S. law requires each worker to prove “a residence in a foreign country which he has no intention of abandoning.”40 A Central American who has already abandoned his country once would not meet this requirement.

Given this situation, Congress has three options for making work visas more readily available in Central America: it could design a new worker program, throw out several of the rules of the existing system, or give U.S. recruiters an incentive to set up in Central America. In light of the conservative nature of government, the last option seems the most politically realistic in the short term.

Congress should create incentives for U.S. recruiters of temporary workers to establish operations in the Northern Triangle. Here is one way in which to create such incentives:

  • For Guatemala, Congress should let U.S. employers hire H-2A agricultural workers even in nonseasonal or temporary positions.
  • For El Salvador, it should permit H-2B nonagricultural workers to enter and work above the visa cap of 66,000 but only in seasonal positions.
  • For Honduras, it should also waive the H-2B cap but only in nonseasonal positions.

Which country receives which carveout is less important than that Congress give recruiters a reason to operate in each country. The recruiters would also bear the burden of advertising the availability of these new visas, countering narratives from smugglers that the only way to reach the United States is through Mexico. No single reform is more important to solving the asylum crisis than making guest worker programs more available to Central Americans.

Solution 4: Legalization of Existing Illegal Immigrants and Asylum Seekers

None of the above proposed reforms would address two of the biggest problems for the asylum system — the immense backlog of immigration court cases and the number of asylum seekers already in the process, most of whom will not end up receiving any form of legal status. Congress should grant permanent legal status to the current illegal and asylum-seeking populations, which would clear immigration court backlogs, prevent asylum seekers from becoming illegal immigrants, and allow family members of the legalized immigrants to reunite with them legally.

The court backlog has led to the breakdown of the entire asylum and removal process. As of March 2019, the immigration courts had a backlog of about 870,000 cases.41 In 2019, the average case took 418 days for Guatemalans, 441 days for Hondurans, and 714 days for Salvadorans.42 These durations had all doubled or tripled since 2009, and in many courts, new cases in early 2019 were being scheduled for 2022.43 With waits of these lengths, new applicants must either be detained for periods that rival the punishments handed out to felons or be released. Either way, it is not a manageable situation.

Congress must hit the reset button on immigration. That process starts with establishing legal channels for future immigrants, but it needs to conclude with a recognition that the existing population of illegal immigrants and asylum seekers cannot be efficiently dealt with under the current process and is not going to be deported. Legalization of the current illegal population would clear the backlogs and restore order to the immigration courts.

Just as important, Congress should realize that the millions of illegal immigrants and asylum seekers already in the United States provide a network that facilitates the travel and entrance of new illegal immigrants and asylum seekers. Karla Gonzalez, for instance, came from Honduras with just her youngest child but eventually sent for her two older children in 2018.44 Legalizing these immigrants and providing them a legal way to reunite with their families is crucial to diverting the flow into legal channels and regulating the border. In 2016, there were already 1.7 million illegal immigrants from the Northern Triangle in the United States.45 Congress cannot create a legal immigration program to reunite immigrants with family members who are in the United States illegally, but it can and should allow for the reunification of families of newly legalized immigrants.

Of course, legalization could draw more people to the border on the erroneous belief that they might benefit from it or from a future legalization. This is why the government should pair legalization with an expansion of legal immigration along the lines proposed above. Ultimately, legalization is necessary to stop illegal immigration by making legal immigration possible again for the family members of legalized immigrants.

Solution 5: Process All Asylum Seekers at Ports of Entry

The four reforms proposed above will not eliminate asylum, and some people will still come to the border seeking a haven. The final component of reform should focus on processing asylum seekers in a way that minimizes the security and humanitarian challenges posed by the current system. No single goal should be more important to Customs and Border Protection (CBP) in this respect than processing 100 percent of asylum seekers at ports of entry. Unfortunately, the agency has created a perverse set of incentives that discourage legal entry and encourage illegal entry:

  • First, at U.S. prompting, Mexico is intercepting immigrants before they can reach U.S. ports. Mexican agents then direct them to get on a legal immigration list and wait until their name is called.46 By contrast, immigrants who attempt to cross illegally are largely free to do so.
  • Second, in April 2018, CBP instituted a cap of about 10,000 per month on processing undocumented migrants — including asylum seekers — at ports of entry. As a result, asylum seekers must wait for months, homeless, in dangerous Mexican cities.47 By contrast, Border Patrol immediately processes asylum seekers who cross illegally.48
  • Third, CBP detains 100 percent of asylum seekers at ports of entry for transfer to interior detention facilities.49 By contrast, Border Patrol has released tens of thousands of immigrant families apprehended between ports without transferring them for further processing in the interior.50
  • Fourth, CBP guarantees that 100 percent of asylum seekers at ports of entry receive credible fear interviews, which — if an asylum officer finds no credible fear — result in the immediate removal of about one in four asylum seekers.51 By contrast, Border Patrol is releasing families into the United States without setting up these interviews.52

These practices create perverse incentives for migrants to cross between ports of entry rather than wait for legal processing at the ports. The government should reverse these incentives in every case. It should remove the cap on asylum seekers at ports of entry and work with the Mexican government to direct asylum seekers to ports of entry. While CBP complains of a lack of resources to process undocumented immigrants at ports, the agency showed in October 2016 that it could process twice as many people at ports as its current monthly cap of 10,000.53 CBP’s complaints about resources refer to a lack of resources to process undocumented immigrants in the exact manner that it wants — with 100 percent detention, 100 percent transfer to ICE detention, and 100 percent asylum interviews — as was the case before 2014.

But CBP cannot process asylum seekers between ports of entry in that manner either. That is why Border Patrol is releasing families without interviews or detention. If the agency instead processed asylum seekers at ports with immediate release, it would incentivize people to follow the law and not cross illegally in dangerous and remote parts of the border. To incentivize asylum seekers to show up in court, DHS should issue them employment authorization that is contingent on their appearance in court.54 Currently, DHS grants employment authorization after 180 days. The department should also adopt other proven methods of obtaining compliance with court dates and removal orders, including community and electronic monitoring, legal orientation, and access to legal counsel.55

CBP at ports might need some assistance from Border Patrol agents to process everyone quickly, but the agency has been moving resources in the opposite direction: transferring 750 CBP port inspectors to Border Patrol to process illegal crossers.56 Indeed, despite claiming to have hit its self-imposed capacity in early 2018, the agency has not sped up processing at ports at all, while taking drastic steps to do so between ports. The fact is that CBP likely can already process all asylum seekers (i.e., collect fingerprints, conduct background checks, and issue charging documents) with the resources that it currently has at ports.

Until January 2017, CBP demonstrated that it could process asylum seekers in minutes, handling tens of thousands of Cubans applying under “wet foot, dry foot” — a policy that granted immediate release to Cubans seeking a haven on U.S. soil. As one Cuban explained in 2016, “They take your papers, ask you a series of questions, take your fingerprints, fill out some paperwork and then they say, ‘Welcome to the United States.’”57 Although the process for other asylum seekers would necessarily be somewhat different — including issuing them a charging document (i.e., a notice to appear in court) — the objection that quick processing for undocumented immigrants at ports is impossible given current resources is inaccurate.

Removing the cap on asylum seekers at ports would not stop the flow of asylum seekers — that would require other reforms (see above) — but it would ameliorate some negative consequences. Port processing would lessen the number of remote crossings and long detentions implicated in the deaths of several children in 2018.58 In 2019, a father and his daughter drowned crossing the Rio Grande after CBP turned them away at a port of entry.59 Additionally, U.S. law considers crossing the border illegally a misdemeanor, so processing asylum seekers at ports would remove the criminal consequences and allow federal prosecutors to focus on other crimes. Processing all asylum seekers at ports would dramatically improve both the security and humanitarian issues associated with the asylum crisis.

Conclusion

Legal immigration is a proven and effective mechanism to manage migration. The first three legal immigration reforms outlined in this paper deal with each component of the current migration flows: (1) a parole program for families seeking to reunite, (2) a private sponsorship program for refugees fleeing violence, and (3) a work visa program for workers seeking economic opportunity. The last two proposals address the immigrants already at the border or inside the United States seeking asylum: (4) legalizing the existing population of illegal immigrants and asylum seekers and (5) channeling future asylum seekers to ports of entry.

The reforms outlined in this paper would immediately relieve Border Patrol from having to spend so much of its time dealing with peaceful people seeking a better life in the United States. Awareness of the availability of legal options would create a virtuous cycle of people seeking them out and encouraging others to do so. The legal pathways would divert billions of dollars in smuggling fees away from cartels and criminal organizations and reduce the victimization of immigrants, including many children who pursue an unregulated and dangerous route to the U.S. border. The United States and immigrants seeking a haven here cannot afford another litany of failed efforts to address this humanitarian crisis.

Notes

1 Laura Meckler, Alicia A. Caldwell, and Dudley Althaus, “ ‘This Is Your Daughter? When Was She Born?’ U.S. Border Agents Test Migrants’ Claims of Family Ties,” Wall Street Journal, March 2, 2018.

2 Kevin McAleenan, acting secretary of the Department of Homeland Security, “Senate Judiciary Committee Hearing on Border Security,” C-SPAN, June 11, 2019, 1:52:57.

3 Nick Miroff and Karly Domb Sadof, “This Photo Shows Why a Border Wall Won’t Stop the Immigration Surge,” Washington Post, March 21, 2019.

4 8 U.S.C. § 1225; 8 U.S.C. § 1158; 8 U.S.C. § 1232.

5 Office of Inspector General, “Management Alert - DHS Needs to Address Dangerous Overcrowding among Single Adults at El Paso Del Norte Processing Center (Redacted),” Department of Homeland Security, OIG-19-46, May 30, 2019.

6 Asylum Abuse: Is It Overwhelming Our Borders?, Hearing before the Committee on the Judiciary, 113th Cong., 1st sess., December 12, 2013, p. 143; U.S. Citizenship and Immigration Services, Credible Fear Workload Summary, FY 2014-FY 2019; Customs and Border Protection, U.S. Border Patrol Southwest Border Apprehensions by Sector Fiscal Year 2019; Border Patrol, Sector Profiles, FY 2012-FY 2018, https://www.cbp.gov/newsroom/media-resources/stats?title=sector+profile; Lesley Sapp, Apprehensions by the U.S. Border Patrol: 2005-2010, Department of Homeland Security; Chad Haddal, Unaccompanied Alien Children: Policies and Issues, Congressional Research Service, January 15, 2009; U.S. Border Patrol, Unaccompanied Children (Age 0-17) Apprehensions: Fiscal Year 2008 through Fiscal Year 2012, https://object.cato.org/sites/cato.org/files/wp-content/uploads/uacs2008-2012.pdf.

7 Office of Immigration Statistics, 2014 Southwest Border Encounters: Three-Year Cohort Outcomes Analysis, Department of Homeland Security, August 2018.

8 David Bier, “Fences Made Crossings Deadlier — Asylum Made Them Much Less So,” Cato at Liberty, January 24, 2019.

9 Katharine M. Donato and Blake Sisk, “Children’s Migration to the United States from Mexico and Central America: Evidence from the Mexican and Latin American Migration Projects,” Journal on Migration and Human Security 3, no. 1 (2015): 58-79.

10 Elizabeth G. Kennedy, “ ‘No Place for Children’: Central America’s Youth Exodus,” Insight Crime, June 23, 2014; 49 percent of Salvadoran unaccompanied children, 47 percent of Hondurans, and 27 percent of Guatemalans had at least one parent in the United States, according to United Nations High Commissioner for Refugees, Children on the Run: Unaccompanied Children Leaving Central America and Mexico and the Need for International Protection, 2014, p. 63.

11 United Nations High Commissioner for Refugees, Children on the Run, p. 10.

12 Nick Miroff and Tim Meko, “A Snapshot of Where Migrants Go after Release into the United States,” Washington Post, April 12, 2019.

13 8 U.S.C. § 1182(d)(5)(A).

14 National Immigration Law Center, Overview of Immigrant Eligibility for Federal Programs, October 2011.

15 U.S. Department of State, Annual Report of the Visa Office 2018, Table VI.

16 U.S. Department of State, Annual Report of Immigrant Visa Applicants in the Family-sponsored and Employment-based Preferences Registered at the National Visa Center, November 1, 2018.

17 “Immigrant Mother Detained at ICE Facility Reunites with Daughter in Aurora,” ABC7, July 3, 2018.

18 “Unauthorized Immigrant Population Trends for States, Birth Countries and Regions,” Pew Research Center, November 27, 2018; D’Vera Cohn, Jeffrey Passel, and Ana Gonzalez-Barrera, Rise in U.S. Immigrants from El Salvador, Guatemala and Honduras Outpaces Growth from Elsewhere, Pew Research Center, December 7, 2017.

19 8 U.S.C. § 1157; 8 U.S.C. § 1158; 8 U.S.C. § 1101(a)(42).

20 Department of Justice, Executive Office for Immigration Review Adjudication Statistics: Asylum Decision and Filing Rates in Cases Originating with a Credible Fear Claim, April 12, 2019, https://www.justice.gov/eoir/page/file/1062976/download.

21 “Admissions and Arrivals,” U.S. Department of State, Refugee Processing Center, www.wrapsnet.org/admissions-and-arrivals.

22 Sofía Martínez, “Today’s Migrant Flow Is Different,” The Atlantic, June 26, 2018.

23 “Private Sponsorship of Refugees,” Canadian Council for Refugees.

24 David Bier, “What Ronald Reagan Can Teach Us about Refugee Resettlement,” Daily Caller, November 12, 2015.

25 “The Private Sector’s Role in Refugee Resettlement,” Private Sector Forum on Migration & Refugees, Concordia, New York, October 25, 2016, https://youtu.be/Qj89Ccvh8dk?t=1153.

26 Mica Rosenberg, “U.S. Ends Program for Central American Minors Fleeing Violence,” Reuters, August 16, 2017.

27 Jeff Ernst, Elisabeth Malkin, and Paulina Villegas, “A New Migrant Caravan Forms, and Old Battle Lines Harden,” New York Times, January 13, 2019.

28 Auditoría Superior de la Federación, Evaluación Número 1787-GB: Política Pública Migratoria, p. 51.

29 El Colegio de La Frontera Norte, Encuesta sobre Migración en la Frontera Sur de México: Informe Anual de Resultados 2017, https://www.colef.mx/emif/resultados/informes/2017/Emif%20Informe%20Anual%20SUR%202017%20(26_abril_2019).pdf, p. 31.

30 According to the International Organization for Migration, 91.1 percent of Guatemalans in the United States in 2016 immigrated there for economic reasons; International Organization for Migration, Encuesta sobre Migración Internacional de Personas Guatemaltecas y Remesas 2016, February 2017, https://onu.org.gt/wp-content/uploads/2017/02/Encuesta-sobre-MigraciOn-y-Remesas-Guatemala-2016.pdf, p. 42.

31 Jens Manuel Krogstad and Jeffrey S. Passel, “U.S. Border Apprehensions of Mexicans Fall to Historic Lows,” Pew Research Center, December 30, 2014.

32 Kitty Calavita, Inside the State: The Bracero Program, Immigration, and the INS (New Orleans: Quid Pro Books, 2010), p. 43; Ernesto Galarza, Merchants of Labor: The Mexican Bracero Story (Charlotte, NC: McNally and Loftin Publishers, 1964), pp. 66-67.

33 Calavita, Inside the State, p. 68.

34 Kevin Sieff, “Why Is Mexican Migration Slowing While Guatemalan and Honduran Migration Is Surging?,” Washington Post, April 29, 2019.

35 Department of Homeland Security, Yearbook of Immigration Statistics, 1996-2017, https://www.dhs.gov/immigration-statistics/yearbook; U.S. Department of State, Annual Reports of the Visa Office, 2018; U.S. Border Patrol, Total Illegal Alien Apprehensions by Fiscal Year, 2000-2018, https://www.cbp.gov/sites/default/files/assets/documents/2019-Mar/bp-total-apps-other-mexico-fy2000-fy2018.pdf; Immigration and Naturalization Service, Yearbooks of Immigration Statistics, 2000, https://www.dhs.gov/immigration-statistics/yearbook/2000.

36 8 CFR § 274a.12(c)(8); 8 CFR § 208.7; 8 U.S.C. § 1158(d)(2).

37 U.S. Citizenship and Immigration Services, Number of Approved Employment Authorization Documents, by Classification and Statutory Eligibility October 1, 2011-September 30, 2018, https://www.uscis.gov/sites/default/files/USCIS/Resources/Reports%20and%20Studies/Immigration%20Forms%20Data/BAHA/2._eads-by-statutory-eligibility_Formatted_4-10-19.pdf.

38 “Nonimmigrant Visa Statistics,” U.S. Department of State, https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics/nonimmigrant-visa-statistics.html.

39 “2019 Revision of World Population Prospects,” United Nations Population Division, https://population.un.org/wpp/.

40 8 U.S.C. § 1101(a)(15)(h)(ii).

41 “Immigration Court Backlog Tool,” TRAC Immigration, Syracuse University, 2019, https://trac.syr.edu/phptools/immigration/court_backlog/.

42 “Immigration Court Processing Time by Outcome,” TRAC Immigration, Syracuse University, 2019, https://trac.syr.edu/phptools/immigration/court_backlog/court_proctime_outcome.php.

43 Noah Lanard, “The Shutdown Is Forcing Immigrants to Wait Years for a Court Hearing,” Mother Jones, January 15, 2019.

44 Chris Kenning, “Migrant Caravan Isn’t an ‘Invasion’ to This Kentucky Mom, It’s Her Kids,” Louisville Courier Journal, December 17, 2018.

45 “Unauthorized Immigrant Population Trends,” Pew Research Center.

46 Al Otro Lado Inc. v. Nielsen, 327 F. Supp. 3d 1284 (S.D. Cal. Oct. 12, 2018); Human Rights First, Barred at the Border: Wait “Lists” Leave Asylum Seekers in Peril at Texas Ports of Entry, April 2019.

47 Elliot Spagat, Nomaan Merchant, and Patricio Espinoza, “For Thousands of Asylum Seekers, All They Can Do Is Wait,” Associated Press, May 9, 2019; Daniella Silva, “Trapped in Tijuana: Migrants Face a Long, Dangerous Wait to Claim Asylum,” NBC News, March 18, 2019.

48 Miroff and Sadof, “This Photo Shows Why.”

49 Human Rights First, Refugee Blockade: The Trump Administration’s Obstruction of Asylum Claims at the Border, December 2018.

50 Paul Ingram, “Tucson Border Patrol Bypassing ICE in Releasing Migrant Families,” Tucson Sentinel, April 2, 2019.

51 U.S. Citizenship and Immigration Services, Credible Fear Workload Report Summary FY2019 Total Caseload.

52 Ingram, “Tucson Border Patrol Bypassing ICE.”

53 “Southwest Border Migration FY2017,” Customs and Border Protection.

54 Shikha Dalmia, “The Cost-Free Way to End the Border Rush,” The Week, April 3, 2019.

55 Alex Nowrasteh, “Alternatives to Detention Are Cheaper Than Universal Detention,” Cato at Liberty, June 20, 2018; Ingrid Eagly and Steven Shafer, “A National Study of Access to Counsel in Immigration Court,” University of Pennsylvania Law Review 164, no. 1 (December 2015): 1-91; Human Rights First, Immigration Court Appearances Rates, February 2018.

56 “The Latest: US Will Reassign 750 Border Inspectors,” Associated Press, March 27, 2019.

57 Alan Gomez, “Wave of Cubans Finally Reach U.S. after Grueling Land Journey,” USA Today, January 31, 2016.

58 Nomaan Merchant, “Autopsy: Migrant Child Who Died in US Custody Had Infection,” Associated Press, March 29, 2019.

59 Amy Sherman and Miriam Valverde, “Fact-Checking Julián Castro’s Claim That Asylum ‘Metering’ Caused Drowning of Father, Daughter,” PolitiFact, June 27, 2019.

Citation

Bier, David J. “Legal Immigration Will Resolve America’s Real Border Problems.” Policy Analysis No. 879, Cato Institute, Washington, DC, August 20, 2019. https://doi.org/10.36009/PA.879.

David J. Bier is an immigration policy analyst at the Cato Institute’s Center for Global Liberty and Prosperity.

Homeschooling and Educational Freedom: Why School Choice Is Good for Homeschoolers

Kerry McDonald

Over the past 50 years, homeschooling has grown from a fringe practice to a widely accepted education model reflective of a diverse American population. Many parents choose homeschooling to avoid the constraints of the conventional classroom and to embrace education in a broader, often more pluralistic way. Increasingly, homeschooling is driving education innovation, as entrepreneurial parents and educators create hybrid learning models that redefine and expand the homeschooling paradigm.

According to the National Center for Education Statistics, the U.S. homeschooling population more than doubled between 1999 and 2012, from 850,000 to 1.8 million children, or 3.4 percent of the K-12 student population.1 Federal data show that the homeschooling population dipped slightly between 2012 and 2016, but state-level data reveal that some states with robust education choice programs saw rising numbers of homeschoolers during that time. Fluctuation in the homeschooling population is likely due to many factors, including regulatory changes that could make homeschooling either easier or more difficult for parents, but some homeschooling families may be taking advantage of school choice mechanisms, like education savings accounts (ESAs) and tax-credit scholarships. Even if they are not, an environment that supports educational freedom may encourage homeschooling growth.

This paper offers an overview of homeschooling trends and a glimpse at the current homeschooling population while arguing that educational freedom creates momentum for families to seek alternatives to conventional mass schooling. By expanding the definition of education and placing families in charge, education choice programs can empower parents, provide varied learning opportunities for young people, and stimulate education innovation and entrepreneurship. Despite legitimate fears of regulation, homeschoolers should generally support school choice proposals.

Modern Homeschooling

Compulsory-schooling laws spread throughout the United States in the late 19th and early 20th centuries, and their grip became more far-reaching. As mandatory schooling extended earlier into childhood and later into adolescence for more of a child’s day and year, the once widespread and accepted practice of homeschooling virtually disappeared. It reemerged in the early 1970s, when “hippies” of the countercultural left kept their children out of school and educated them at home or on back-to-the-land communes. While progressives may have launched the modern homeschooling movement, Christian conservatives expanded it. Seemingly disparate in their motivations, both groups rejected state-controlled, institutional schooling and sought a more personalized, child-centered approach to education. As education historian Milton Gaither wrote:

The progressive left had long harbored romantic ideals of child nature, born of Rousseau and come of age in the progressive education movement of the early twentieth century. Countercultural leftists inherited this outlook, and when they had children their instinct was to liberate the kids from what they took to be the deadening effects of institutionalization by keeping them at home. And the countercultural right, despite ostensibly conservative and biblical theological commitments, had basically the same view.2

During the 1980s and 1990s, the number of homeschoolers swelled, reaching 850,000 by 1999, the first year the Department of Education began tracking homeschooling data as part of its National Household Education Surveys Program. Today, while religious homeschoolers remain a significant demographic, fewer families are choosing homeschooling for overtly religious reasons. By 2012, “concern about the environment of other schools” exceeded religious motivations as the primary catalyst for homeschooling.3

Over the past decade, homeschooling families have become much more reflective of the general U.S. population. The long-held stereotype of homeschooling families as white, middle-class, and Christian is changing. Homeschooling has become a mainstream option for many families who are fed up with increasingly standardized mass schooling. According to the New York Times, “Once mainly concentrated among religious families as well as parents who wanted to release their children from the strictures of traditional classrooms, home schooling is now attracting parents who want to escape the testing and curriculums that have come along with the Common Core, new academic standards that have been adopted by more than 40 states.”4 Business Insider went so far as to say that “homeschooling could be the smartest way to teach kids in the 21st century.”5

Homeschoolers have become more urban (Figure 1), secular, and socioeconomically diverse, and more single parents and dual-working parents have taken to homeschooling. But perhaps the most significant recent shift in the homeschooling population is its growing racial and ethnic diversity, which now more closely reflects American society (Figure 2). Between 2007 and 2012, the percentage of black homeschoolers doubled to 8 percent of all homeschoolers, and the percentage of Hispanic homeschoolers continued to mirror the overall K-12 distribution of Hispanic children, at around one-quarter of all students.6

The dramatic rise in the number of black homeschoolers, in particular, may be a response to more black parents finding district school environments unsatisfactory. For instance, concerns about systemic racism, a culture of low expectations and poor academic outcomes for children of color, and a standardized curriculum that often ignores the history and culture of black people have catalyzed much of the rise in the black homeschooling movement. The Atlantic reported in 2018 that for some black homeschoolers, “seizing control of their children’s schooling is an act of affirmation—a means of liberating themselves from the systemic racism embedded in so many of today’s schools and continuing the campaign for educational independence launched by their ancestors more than a century ago.”7

A more personalized, family-centered approach to education motivates many homeschoolers, but a key trend is using the legal designation of homeschooling to drive education innovation. Private learning centers and microschools are increasingly establishing themselves as independent organizations, not government-licensed schools, that support families who are legally recognized as homeschoolers. This approach can accelerate experimentation and entrepreneurship by freeing enterprising educators from restrictive schooling regulations and state licensing and allowing families more flexibility. Many of these learning centers and microschools let students attend several times a week, in some cases full time, enabling working parents, single parents, and others to register as homeschoolers and take advantage of versatile education models that stretch beyond conventional schooling.

Where Homeschooling Is Growing

The homeschooling population has experienced an astonishing ascent over the past 20 years, but the latest federal data suggest that the rate of increase could be slowing, with homeschooling numbers leveling off. The Department of Education has historically tracked homeschooling through its National Household Education Survey, a randomized survey tool that in 2016 captured nationwide data on 14,075 school-age children, of whom 552 were homeschoolers. The total number of homeschoolers declined slightly from about 1.8 million students in 2012, or 3.4 percent of the overall K-12 school-age population, to approximately 1.7 million students in 2016, or about 3.3 percent of all students.8

Given the relatively small sample of homeschoolers and the aversion that some homeschooling families have toward government data collection, it is possible that this federal survey tool underestimates the overall homeschooling population. And while federal surveys show the homeschooling population holding steady or slightly declining, data from several states reveal notable growth in their homeschooling populations.

Many factors could be contributing to homeschooling expansion or decline in a given state, including satisfaction with local public school options, cost and availability of private schools, parents’ job opportunities and economic prospects, demographic changes in the overall school-age population, changes in regulations or restrictions on homeschooling families, and availability of resources and support for homeschooling. Some research also suggests that the prevalence of public school choice programs, like charter schools, could reduce homeschooling by offering more “free” education options to parents and that vouchers might push more homeschoolers into private schools.9

Certain states with robust private education choice programs, however, are seeing particularly high growth in homeschooling compared with overall public school enrollment. Florida, for example, is a leader in private education choice programs, offering an ESA, two tax-credit scholarship programs, and two voucher programs. The state has experienced a significant rise in homeschooling numbers over the past several years. The Florida homeschooling population grew 6.8 percent between the 2014-2015 and 2017-2018 school years, compared with only 2.7 percent growth in the state’s K-12 public school population during that same time.10

A similar story emerges in North Carolina, where the homeschooling population is expanding rapidly. Like Florida, North Carolina has favorable education choice policies, including an ESA and two voucher programs. Between 2014 and 2018, the homeschooling population grew 27 percent to over 127,000 students, while K-12 public school enrollment fell by 1.3 percent.11

Ohio offers five separate education voucher programs. There, the homeschooling population grew by over 13 percent to over 30,000 homeschoolers between 2014 and 2018, while the overall K-12 public school population fell by just under 1 percent.12 The trend continues in Wisconsin, which offers four statewide voucher programs as well as a K-12 private school tuition tax deduction. Wisconsin public schools saw their enrollment drop by 1.3 percent between 2014 and 2018, while the homeschooling population grew by 9 percent.13
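
These growth comparisons are simple percentage changes. As an illustration, the sketch below recomputes the North Carolina figures, with the homeschool counts rounded from the text and the public school counts hypothetical values chosen to imply the cited 1.3 percent decline:

```python
# Illustrative recomputation of the state growth comparisons above.
# Homeschool counts are rounded from the text; the public school
# counts are hypothetical values consistent with a 1.3 percent drop.
def pct_growth(start: float, end: float) -> float:
    """Percentage change from start to end."""
    return (end - start) / start * 100

nc_homeschool_2014, nc_homeschool_2018 = 100_000, 127_000  # approximate
nc_public_2014, nc_public_2018 = 1_440_000, 1_421_000      # hypothetical

print(f"NC homeschool growth, 2014-2018: {pct_growth(nc_homeschool_2014, nc_homeschool_2018):+.1f}%")
print(f"NC public K-12 growth, 2014-2018: {pct_growth(nc_public_2014, nc_public_2018):+.1f}%")
```

Run as written, the sketch prints +27.0 percent for homeschooling against -1.3 percent for public enrollment, matching the comparison in the text.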

The most recent federal data on homeschooling, 2012 to 2016, show that the number of homeschoolers declined by 4.7 percent nationwide, while K-12 public school enrollment increased 1.6 percent.14 Why are states like Florida, North Carolina, Wisconsin, and Ohio defying national homeschooling trends and dramatically outpacing K-12 public school enrollment? The availability of education choice programs in these states could offer some clues.

Homeschooling and Education Choice Programs

States with successful education choice programs could be encouraging more homeschooling in a variety of ways, both practical and personal. At the practical level, some education choice programs, like ESAs, provide funds that families can use to purchase classes, supplies, curricula, and other resources, in addition to tuition. ESAs let parents opt out of public schools and public charter schools and access some public school funds through a government-authorized savings account. Unlike vouchers, these funds can be used for an array of education-related expenses, not just school tuition. ESAs help to disentangle education from schooling, acknowledging the wide variety of ways young people can and do learn.

According to a 2018 report by EdChoice, a nonprofit organization founded by Nobel Prize-winning economist Milton Friedman and his wife, Rose, to support education choice efforts, Florida’s ESA program, known as the Gardiner Scholarship, has provided families of children with special needs access to education resources beyond schooling. Researchers Lindsey Burke and Jason Bedrick discovered that many of these ESA recipients were avoiding brick-and-mortar schooling altogether and using the ESA funds to fully customize their child’s learning. Other recipients used the money for a blend of schooling and supplemental resources, while still others used the ESA like a voucher to pay for private school tuition.15 According to Burke and Bedrick, it’s difficult to know for sure whether the Florida ESA families who customized their child’s education without schooling were registered homeschoolers, but it’s quite likely that if students weren’t attending a school, they were being homeschooled. Bedrick says some of the ESA families could have been registered with the Florida Virtual School, a leader in online K-12 learning, but he explains in an interview: “I expect that most of the students in that category would be registered as being home educated.”16 ESAs could be supporting more homeschooling families in customizing their child’s education.

Education choice programs could be encouraging more families to choose homeschooling by offering funding to those who want or need it. They also could be prompting more homeschool resource centers to form, such as BigFish Learning Center, a self-directed learning community in Dover, New Hampshire, where some attendees take advantage of the state’s tax-credit scholarship program to help defray enrollment expenses. New Hampshire’s tax-credit scholarship program, which allows businesses or individuals to receive a tax credit when they donate to a scholarship-granting nonprofit organization, is currently the country’s only tax-credit scholarship program open to homeschoolers, who can use scholarship funds for a variety of approved education expenses if they meet income eligibility requirements.

There also may be more personal reasons why states with flourishing education choice programs have a growing homeschooling population. If everyone in your neighborhood attends an assigned district school, it can be difficult to go against the grain. In an environment of educational choice, where alternatives are available, valued, and sought after, pursuing a different education path may seem more normal. Homeschooling becomes one of many viable education choices, and the more homeschoolers there are, the more likely other families will be to explore this option. This peer effect could be large in states that enact strong choice programs. A growing homeschooling population leads to more local resources for homeschoolers, such as more classes offered by local businesses, museums, and libraries, and may spark more private learning centers and parent-led co-ops to emerge. These resources, in turn, could be encouraging more families to pursue homeschooling.

Even in states like Wisconsin and Ohio that have voucher programs for private school tuition, but not ESAs or funds specifically for homeschooling, a climate of education choice could be influencing more families to choose homeschooling. Indeed, the growth in homeschooling in Wisconsin and Ohio, where public school enrollment declined, could indicate that when there is more education choice, more parents will make more choices. Even when they don’t directly benefit from a state choice program, like a voucher, the mere presence of mechanisms that empower some parents to take control of their child’s education may prompt more parents to do so. This is an important policy point for homeschooling advocates who oppose education choice programs that would include homeschoolers out of concern that such programs could lead to greater homeschooling regulation or oversight, which is a legitimate possibility. Homeschoolers should support education choice programs, whether or not they are personally included in such programs, because more choice can lead to more homeschoolers overall.

How Homeschooling Can Drive Education Innovation

In his influential 1955 paper popularizing the idea of vouchers, Milton Friedman explained how more education choice would break the government monopoly on schooling and lead to more diverse options and innovation. He wrote:

The result of these measures would be a sizable reduction in the direct activities of government, yet a great widening in the educational opportunities open to our children. They would bring a healthy increase in the variety of educational institutions available and in competition among them. Private initiative and enterprise would quicken the pace of progress in this area as it has in so many others. Government would serve its proper function of improving the operation of the invisible hand without substituting the dead hand of bureaucracy.17

By shifting power to families, education choice creates greater variety in how young people learn and triggers education entrepreneurship and experimentation. With its legal flexibility, homeschooling provides an ideal incubator for educational ingenuity.

In Nashville, Tennessee, for instance, two schools that focus on homeschoolers recently opened. Acton Academy Nashville is a hybrid homeschooling model in which students attend the school three days a week, and the Nashville Sudbury School offers students a full-time school track or a flexible homeschool track. Tuition at both schools is a fraction of the cost of other local private schools, and they share a commitment to student-directed, passion-driven learning. At Nashville Sudbury, more than half of the current students are registered homeschoolers. According to Sonia Fernandez LeBlanc, one of the founders of the Nashville Sudbury School: “Families love the flexibility that the homeschooling track allows and most take advantage of more than two days a week.” She adds: “We have a very eclectic homeschooling community in the greater Nashville area.”18

In California, Da Vinci Connect is a publicly funded, privately operated hybrid K-12 charter school network for homeschoolers where children attend the project-based school two days a week and spend the rest of the time at home and throughout their community. According to a recent Forbes article about the Da Vinci network: “Despite what one might consider a common homeschool family unit (two parents and one who is able to not work and stay at home), many Da Vinci Connect families do not fit that mold and are finding unique ways to make the homeschool option work for them.”19

As its population becomes more diverse, and as its versatility attracts both parents and entrepreneurs, homeschooling will likely continue to drive innovation—particularly in states supportive of education choice.

Conclusion

In just 50 years, the modern homeschooling movement has evolved from a smattering of ideologues to a widespread educational option for many families. Today’s homeschoolers increasingly mirror the larger American population and often use the legal designation of homeschooling to create a more personalized, child-directed approach to learning than is possible through the dominant compulsory-schooling model. While recent national data suggest homeschooling growth may be slowing, state-level data suggest that in some states with particularly favorable education choice programs, the homeschooling population is growing many times faster than the K-12 public school population.

Education choice through ESAs, tax-credit scholarships, and vouchers gives families options, and its benefits extend beyond direct recipients to homeschoolers and others who value educational freedom. An environment that supports choice empowers parents to take control of their child’s education, whether or not that child receives any specific education choice funding. A climate of choice can lead more families to explore alternatives to conventional schooling and can inspire entrepreneurial educators to establish new, more flexible models of learning better aligned with the realities of the 21st century.

In his book Instead of Education, homeschooling pioneer John Holt wrote: “You cannot have human liberty, and the sense of all persons’ uniqueness, dignity, and worth on which it must rest, if you give to some people the right to tell other people what they must learn or know, or the right to say officially and ‘objectively’ that some people are more able and worthy than others.”20 The promise of education choice is that families are free to opt out of compulsory mass schooling that dictates what all young people must learn and know and that officially judges them on their worth. Fortunately, U.S. homeschoolers have been free to do this legally for over 25 years, and they may very well be the ones best positioned to extend this educational liberty to others by supporting choice for all families.

Notes

1. Thomas D. Snyder, Cristobal de Brey, and Sally A. Dillow, Digest of Education Statistics 2017: 53rd Edition (Washington: U.S. Department of Education, 2019), p. 132.

2. Milton Gaither, Homeschool: An American History (New York: Palgrave Macmillan, 2008), p. 113.

3. Jeremy Redford, Danielle Battle, and Stacey Bielick, Homeschooling in the United States: 2012 (Washington: U.S. Department of Education, 2017).

4. Motoko Rich, “Home Schooling: More Pupils, Less Regulation,” New York Times, January 4, 2015.

5. Chris Weller, “Homeschooling Could Be the Smartest Way to Teach Kids in the 21st Century—Here Are 5 Reasons Why,” Business Insider, January 21, 2018.

6. Meghan McQuiggan and Mahi Megra, Parent and Family Involvement in Education: Results from the National Household Education Surveys Program of 2016 (Washington: U.S. Department of Education, 2017).

7. Melinda D. Anderson, “The Radical Self-Reliance of Black Homeschooling,” The Atlantic, May 17, 2018.

8. Snyder, de Brey, and Dillow, Digest of Education Statistics 2017, p. 132.

9. Corey A. DeAngelis and Angela K. Dills, “Is School Choice a ‘Trojan Horse?’ The Effects of School Choice Laws on Homeschool Prevalence,” Peabody Journal of Education 94, no. 3 (2019): pp. 342-54.

10. Florida Department of Education, Home Education in Florida: 2017-18 Annual Report (Tallahassee, FL: Office of Independent Education and Parental Choice, 2018); and “Student Enrollment,” Florida Department of Education, https://edstats.fldoe.org/SASWebReportStudio/gotoReportSection.do?sectionNumber=1.

11. “Home School Statistics,” North Carolina Department of Administration; and “Table 1—LEA Final Pupils by Grade,” North Carolina Department of Public Instruction, http://apps.schools.nc.gov/ords/f?p=145:11:::NO.

12. “Home Schooling,” Ohio Department of Education; and “Enrollment Data,” Ohio Department of Education.

13. “Home Based Private Instruction—Statistics,” Wisconsin Department of Public Instruction.

14. Comparison data on homeschooling and public school enrollment come from Snyder, de Brey, and Dillow, Digest of Education Statistics 2017, p. 132; and “Table 203.10. Enrollment in Public Elementary and Secondary Schools, by Level and Grade: Selected Years, Fall 1980 through Fall 2026,” Digest of Education Statistics, National Center for Education Statistics, U.S. Department of Education, December 2016.

15. Lindsey Burke and Jason Bedrick, Personalizing Education: How Florida Families Use Education Savings Accounts (Indianapolis: EdChoice, February 2018).

16. Jason Bedrick, “Re: Personalizing Education EdChoice report,” email correspondence received by Kerry McDonald, January 21, 2019.

17. Milton Friedman, “The Role of Government in Education,” in Economics and the Public Interest, ed. Robert A. Solo (New Brunswick, NJ: Rutgers University Press, 1955), pp. 123-44.

18. Sonia Fernandez LeBlanc, “Nashville Sudbury School response,” email correspondence received by Kerry McDonald, January 26, 2019.

19. Tom Vander Ark, “Da Vinci Schools Expand Opportunities in Los Angeles,” Forbes, November 2, 2018.

20. John Holt, Instead of Education: Ways to Help People Do Things Better (Medford, MA: Holt Associates, 2004), pp. 8-9.

Kerry McDonald is a senior education fellow at the Foundation for Economic Education and author of Unschooled: Raising Curious, Well-Educated Children Outside the Conventional Classroom (Chicago Review Press, 2019).
