Policy Institutes

President Trump announced on Saturday that he had a new plan to reopen the government that includes “a three-year extension of temporary protected status or TPS.” But as in the case of DACA—for reasons I explained here—the actual legislation that Senate Majority Leader Mitch McConnell introduced to implement his proposal does not extend TPS. Rather, it ends it as it exists now and replaces it with an entirely different program with much more restrictive criteria.

Temporary Protected Status, or TPS, is granted to nationals of countries to which the government feels it could not, at one time or another, send people back due to a crisis in those countries, such as a war or natural disaster. Cribbing a lot from what I’ve already written about the DACA provisions of this bill, here is a list of the changes to TPS in the bill:

  1. Ends TPS for 5 of the 9 TPS countries: Under President Trump, the government has terminated TPS for Nepal, Sierra Leone, Liberia, Guinea, Sudan, El Salvador, Haiti, Honduras, and Nicaragua. Yet only the last four nationalities will benefit from this bill at all (p. 1256). To treat this bill as if it reverses Trump’s decisions is incorrect. It maintains the majority of them—notably for Africans whom President Trump denigrated in a White House meeting last year.
  2. TPS recipients will lose their jobs: When the government extends TPS, work authorization is automatically extended along with it, but this bill (p. 1271) requires TPS recipients to apply for an entirely new work authorization (p. 1277), meaning that unless courts protect them, there will be a major gap in work eligibility. This is especially true because the government can take a year to implement this new program, virtually guaranteeing that everyone with TPS right now will lose their jobs.
  3. TPS recipients must reapply for initial status: When the government extends TPS, renewals of status are free. But this legislation requires a fee to apply to continue in status (p. 1265). Reapplying for initial status also requires that they reprove their eligibility, which is a costly process and often requires hiring an attorney.
  4. Much higher evidentiary burden: Reapplying will become even more onerous because p. 1256 increases the evidentiary standard to prove eligibility to receive TPS from a “preponderance of the evidence” to “clear and convincing.” The only higher standard of proof in the law is “beyond a reasonable doubt.” People win multi-million judgments based on the preponderance of the evidence standard. Clear and convincing is often used for cases like withdrawing life support. In the immigration context, USCIS explains that preponderance of the evidence is usually the standard—meaning that “even if the director has some doubt as to the truth,” he should approve “if the petitioner submits relevant, probative, and credible evidence that leads the director to believe that the claim is ‘probably true’ or ‘more likely than not.’” Clear and convincing is used rarely for cases like “to rebut the presumption of a prior fraudulent marriage” (i.e. for applicants the government has reason to be suspicious of). For TPS recipients, proving that they entered before 2011 or that they resided continuously, for example, just became much more difficult under this legislation.
  5. Massively Increases TPS Application Cost: P. 1243 contains a fine or penalty but rebrands it as a $500 “security fee” to pay for Trump’s “wall.” This is despite the fact that many TPS recipients entered legally and were stranded after hurricanes or earthquakes hit their home countries. This fine comes on top of the normal fees for processing the application, and it essentially increases the cost of the $50 application for TPS status by tenfold. It basically doubles the $495 cost of an extension of TPS work authorization.
  6. Minimum income requirement: P. 1261 would require that TPS recipients prove—again by clear and convincing evidence—that, unless they are a student, they can maintain an income of at least 125 percent of the poverty level during their time in the United States. TPS now has no such requirement. That means retirees, stay-at-home mothers, disabled people, etc. would not be eligible for TPS anymore.
  7. Pay back legally-obtained tax credits: In one of the most bizarre provisions, p. 1262 requires TPS applicants to pay to the U.S. Treasury the value of any legally-obtained tax credits that they have received. This could be thousands of dollars that have already been spent. Not only is this provision not in TPS, it is totally unprecedented in immigration law and would massively increase the cost for many applicants, particularly those with children.
  8. Bars those with pending criminal charges: TPS bars only those convicted of a felony or of two or more misdemeanors committed in the United States, not those merely charged with any offense at all. Misdemeanors include many minor traffic offenses. But p. 1260 renders anyone with a pending charge ineligible for the new status, even though a conviction for a single misdemeanor wouldn’t make the person ineligible anyway. Given that there’s only a six-month window to apply, this would prevent people from being able to apply at all.
  9. Bars employment “contrary to the national interest”: TPS applicants would now have to prove—by clear and convincing evidence—that their employment would not be “contrary to the national interest” (p. 1271). This provision is bizarre since the purpose of authorizing their employment is that they need to be able to support themselves, which should always be in the national interest, but under the Trump administration, the government may not see it this way.
  10. Keeps TPS recipients from getting permanent residence: Illegal immigrants who also entered illegally cannot adjust their status to legal permanent residence even if they are eligible due to (typically) a marriage to a U.S. citizen. They need to register a legal entry first. Tens of thousands of illegal immigrants received legal permanent residence in this manner under DACA, which offers a similar status to TPS. P. 1273 bars this practice by deeming such entries not a legal “admission” for purposes of adjusting status.
  11. The new status cannot be extended: Unlike TPS, this new status could never be extended by a future administration. All applications must be filed in a 6-month window (p. 1263), and the status would expire after 3 years (p. 1270).
  12. All illegal immigrants are banned from TPS in the future: P. 1275 would create a permanent change to the TPS program, banning anyone who is not lawfully present in the United States from TPS going forward. In other words, no future administration could ever use TPS to grant legal status to someone in the country illegally, even if deporting them was simply not an option.

Once again, this legislation should not be described as an extension of TPS when, in fact, it guts the program for existing recipients and removes it as an option for many future immigrants as well. This legislation does not follow through on the president’s promise.

Ninety years ago, two-thirds of government spending in America was state-local and one-third was federal. Today, it is the reverse with about two-thirds federal and one-third state-local. American government has become larger and much more centralized. That centralization has made winners and losers among the states as vast flows of taxpayer cash pour into Washington and are then dispersed through more than 2,200 federal spending programs.

In 2019, the federal government will vacuum $3.5 trillion from taxpayer pockets in the 50 states, borrow $1 trillion from global capital markets, and then turn on the leaf blower to scatter $4.5 trillion back out across the 50 states. Many billions of dollars will stay in and around Washington to pay for the leaf-blowing operations.

The Rockefeller Institute of Government has released a very useful report detailing these cash flows. The report calculates a “balance of payments” for each state in 2017, which is federal spending in each state less taxes paid to the federal government by individuals and businesses in each state. The winner states have a positive balance and the loser states a negative one. Federal spending includes four items: benefits (such as Social Security), state-local grants (such as Medicaid), procurement (such as fighter jets), and compensation paid to federal workers.

On a per capita basis, the biggest winner states are Virginia, Kentucky, and New Mexico. Virginia has a lot of federal employees, contractors, and the world’s largest naval base. Kentucky receives a lot in benefits, contracts, and grants, and has Senator Mitch McConnell. New Mexico has a lot of federal employees, contractors, and Los Alamos.

The biggest loser states are Connecticut, New Jersey, and Massachusetts. Those states have a large number of high-earning individuals who get hit hard under the “progressive” federal income tax.

Figure 1 below shows data from the Institute’s report. Taxes per capita are on the horizontal axis and spending per capita on the vertical axis. Each dot is a state. The totals allocated for 2017 were $3.1 trillion in taxes and $3.8 trillion in spending, leaving out five percent of taxing and spending that could not be allocated by state.

Generally, states on the bottom right are the losers and those on the top left are winners.

Connecticut is on the far right paying $15,462 in federal taxes per capita but receiving only $11,462 in federal spending. Connecticut would be better off in a decentralized United States with citizens paying most of their taxes to state and local governments rather than the federal government.

Every state is actually worse off than indicated in Figure 1 because federal borrowing in 2017 allowed spending to be 20 percent larger than taxes. But borrowing is not a free lunch. It creates a cost that will hit residents of every state down the road—borrowing is just deferred taxes.

For Figure 2 below, I scaled up federal taxes to include both current and deferred taxes. That is, I scaled up taxes for each state by the same percentage so that total federal taxes for the nation equal total federal spending. With that adjustment, Connecticut residents paid $18,586 in current and deferred taxes per capita and received only $11,462 in spending. Connecticut residents are getting back only 62 cents for every dollar owed to Washington.
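For readers who want to reproduce the Figure 2 adjustment, here is a minimal Python sketch of the method. It uses the rounded allocated totals quoted in this post ($3.1 trillion in taxes, $3.8 trillion in spending); the Rockefeller report’s exact allocated totals differ slightly, which is why the output is close to, but not exactly, the $18,586 and 62-cent figures cited above.

```python
# Minimal sketch: scale every state's per-capita federal taxes by one national
# factor so that total taxes equal total spending (current plus deferred taxes).
# Inputs are the rounded figures quoted in this post.

total_taxes_allocated = 3.1e12      # allocated federal taxes, 2017 (rounded)
total_spending_allocated = 3.8e12   # allocated federal spending, 2017 (rounded)
scale = total_spending_allocated / total_taxes_allocated

ct_taxes_per_capita = 15_462        # Connecticut: current federal taxes per capita
ct_spending_per_capita = 11_462     # Connecticut: federal spending per capita

ct_taxes_incl_deferred = ct_taxes_per_capita * scale
cents_back_per_dollar = ct_spending_per_capita / ct_taxes_incl_deferred

print(f"scale factor: {scale:.3f}")
print(f"CT current + deferred taxes per capita: ${ct_taxes_incl_deferred:,.0f}")
print(f"CT spending received per dollar owed: {cents_back_per_dollar:.2f}")
```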

The patterns exhibited in the figures have persisted for decades. High-income states such as Connecticut have been penalized by the overly progressive federal income tax for a very long time. The interesting political question is why do they stand for it? Why do members of Congress from high-income states support a highly progressive federal income tax that particularly harms their own states?

This weekend, President Trump promised an “extension” of DACA for the “700,000 DACA recipients brought here unlawfully by their parents at a young age many years ago.” But the bill that Senate Majority Leader Mitch McConnell introduced to implement his deal does not extend DACA but rather replaces it with a totally different program that will exclude untold thousands of Dreamers who would have been eligible under DACA. It is important to remember that all of these requirements are for less than 3 years of relief from deportation and work authorization, not a pathway to citizenship.

Here is a list of some of the changes:

  1. Requires Dreamers to reapply: P. 1235 requires Dreamers already in good standing in DACA to reapply for status, even though DACA would have allowed them simply to renew their status without refiling all of their paperwork and evidence. This requirement is a substantial burden, and most applicants will end up having to hire immigration attorneys to fulfill it.
  2. Much higher evidentiary burden: P. 1235 increases the evidentiary standard for Dreamers to prove their eligibility to receive DACA from a “preponderance of the evidence” to “clear and convincing.” The only higher standard of proof in the law is “beyond a reasonable doubt.” People win multi-million judgments based on the preponderance of the evidence standard. Clear and convincing is often used for cases like withdrawing life support. In the immigration context, USCIS explains that preponderance of the evidence is usually the standard—meaning that “even if the director has some doubt as to the truth,” he should approve “if the petitioner submits relevant, probative, and credible evidence that leads the director to believe that the claim is ‘probably true’ or ‘more likely than not.’” Clear and convincing is used rarely for cases like “to rebut the presumption of a prior fraudulent marriage” (i.e. for applicants the government has reason to be suspicious of). For Dreamers, proving that they entered before June 2007 or that they resided continuously, for example, just became much more difficult under this legislation.
  3. Imposes a Monetary Fine/Doubles Application Cost: DACA, the Dream Act, and other proposals to legalize Dreamers have usually left off the monetary fine for being in the country illegally that proposals to legalize other immigrants have customarily had. This is because no one—including Trump—blames Dreamers for being in the country illegally. They were brought here as children. Yet this bill does contain a fine or penalty but rebrands it as a $500 “security fee” (p. 1243). This fine comes on top of the normal fees for processing the application, and it essentially doubles the cost of the currently $495 application. According to the Migration Policy Institute’s analysis of why eligible Dreamers didn’t apply for DACA, not having $500 cash was the number 1 reason. Anecdotes from Dreamers themselves support this.
  4. “Public charge” rule: P. 1238 applies the public charge ground of inadmissibility in INA 212(a)(4) to Dreamers—something DACA did not require. While DACA recipients are currently ineligible, and would remain ineligible under this bill, for almost all federal benefits, the Trump administration’s pending public charge rule would ban anyone who is even 5 percent dependent on any level of government, even state or local aid, from receiving legal status. This could include numerous Dreamers in states such as California and New York, which offer state benefits to Dreamers. Dreamers in DACA have grown up in America since a very young age and have lived in the country for over a decade. They are Americans. Treating them as if they are new immigrants does not represent the view of most Americans.
  5. Minimum income requirement: P. 1239 would further require that Dreamers prove—again by clear and convincing evidence—that, unless they are a student, they can maintain an income of at least 125 percent of the poverty level during their time in the United States. DACA had no such requirement, and it would result in banning numerous Dreamers currently in DACA.
  6. Pay back legally-obtained tax credits: P. 1239 requires Dreamers to pay to the U.S. Treasury the value of any legally-obtained tax credits that they have received. Not only is this provision not in DACA, it is totally unprecedented in immigration law and would massively increase the cost for many applicants, particularly those with children.
  7. Excludes Dreamers who ever claimed to be U.S. citizens: Unlike DACA, p. 1238 also applies the ground of inadmissibility in INA 212(a)(6)(C) to those Dreamers who ever claimed to be U.S. citizens. This is an exceptionally common phenomenon because many Dreamers do not realize that they are here illegally until after they have already claimed otherwise.
  8. Excludes Dreamers with removal orders: Unlike DACA, p. 1238-9 would ban Dreamers who are in the country illegally due to a prior order of removal. Given that the whole point of DACA and similar programs is to give people here illegally legal status, this provision makes little sense and is solely designed to keep out Dreamers.
  9. Excludes Dreamers not in DACA: Nearly half of all Dreamers have dropped out of DACA or never applied in the first place, possibly out of fear of what Presidents Trump or Obama would do with their information, because of the cost, or for other reasons. Moreover, other Dreamers “age in” to the program when they turn 15 (younger immigrants cannot apply). P. 1239 makes it clear that anyone not currently in DACA cannot apply—another huge change from the DACA program.
  10. Keeps Dreamers from getting permanent residence: Illegal immigrants who also entered illegally cannot adjust their status to legal permanent residence even if they are eligible due to (typically) a marriage to a U.S. citizen. They need to register a legal entry first. DACA allowed them to travel and reenter, which permitted tens of thousands to receive legal permanent residence. P. 1252 bars this practice by deeming such entries not a legal “admission” for purposes of adjusting status.
  11. Dreamers cannot renew status: P. 1240 grants a 3-year status that cannot ever be renewed. This is a huge departure from DACA, which—despite giving just a 2-year status—has allowed renewals for 7 years already.

These are just some of the many changes that the bill makes to the DACA program. Commentators should not describe this bill as “extending DACA” or even as extending the status of DACA recipients. This is an entirely new program and an entirely new status.

“The oceans are heating up 40% faster than scientists realized,” screamed Business Insider last Saturday (January 12). Two days earlier, The New York Times broke the story with “the oceans are heating up 40 percent faster on average than a United Nations panel estimated five years ago.” It’s all from a January 10 article in Science by Lijing Cheng, of the Chinese Academy of Sciences in Beijing, along with three American coauthors, titled “How Fast are the Oceans Warming?”

Scary. Not. “40 percent” is a straw man.

The subject of all this attention is the change in the heat content of the world’s oceans. This is obviously related to their temperature—something that has proven rather difficult to measure precisely on the centennial scale because of changes in measurement techniques and data sources. (Quants: heat (in joules) divided by the heat capacity (joules required to warm the ocean a degree) gives temperature change).
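In equation form, the conversion in that parenthetical is simply the following (a sketch; the symbols are chosen here for illustration):

$$\Delta T \;\approx\; \frac{\Delta Q}{C}, \qquad C = c_p\, m,$$

where $\Delta Q$ is the change in ocean heat content (joules), $C$ is the heat capacity of the ocean layer in question (joules per degree), $c_p$ is the specific heat of seawater (roughly 4,000 J kg$^{-1}$ K$^{-1}$), and $m$ is the mass of water being warmed.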

At the outset, it’s important to note that this is not an original research article. It’s a “Perspectives” piece, kind of like a sciency op-ed that cites a collection of refereed publications (in this case, with a large number of self-citations) that determine the “perspective” of the writers. Quoting from Dr. Roy Spencer’s blog on January 16:

For those who read the paper, let me warn you: The paper itself does not have enough information to figure out what the authors did…

Further, Spencer notes:

One of the conclusions of the paper is that Ocean Heat Content (OHC) has been rising more rapidly in the last couple decades than in previous decades, but this is not a new finding, and I will not discuss it further here.

Of more concern is the implication that this paper introduces some new OHC dataset that significantly increases our previous estimates of how much the oceans have been warming.

As far as I can tell, this is not the case.

The “United Nations panel” in the first paragraph is, of course, its Intergovernmental Panel on Climate Change (IPCC), and in their most recent (2013) science compendium they most certainly did not estimate that the heat content of the ocean is 40% less than what it is from Cheng et al.’s “perspective.” In that report, they noted five different publications, but found problems with four of them and conferred credibility only upon the highest figure, published by Domingues et al. in 2008. The Cheng et al. study is only 11% higher than that, not 40%. To repeat: the average of the five studies mentioned in the 2013 IPCC report is 40% below the new Cheng et al. figure, but the one that the IPCC found most credible differs from Cheng et al. by only 11%.

The 40% figure is therefore a straw man. 

It’s also noteworthy that the “40 percent” claim is nowhere in the Science Perspective. It’s from a guest post by Cheng et al. in “Carbon Brief,”  principally funded by the European Climate Foundation, which describes itself as “a major philanthropic initiative to help Europe foster the development of a low-carbon society and play an even stronger international leadership role to mitigate climate change.”

Another Perspective

It is obviously very important to understand historical changes in ocean heat content. Another way to do this would be with the new “reanalysis” data sets, which combine heretofore separate atmospheric observations from the past via a dynamic model. Obviously, as one goes further back in time, important data, such as vertical weather-balloon soundings, drop out, as they did in the 1930s. One important note: the model is modulated with the changes in atmospheric radiation consistent with human emissions of greenhouse gases, ozone, and aerosols, as well as changes in solar radiation.

(The relevant paper is by Patrick Laloyaux of the European Centre for Medium-Range Weather Forecasts (ECMWF), the same people who produce the daily “Euro” model that mid-Atlantic forecasters love so much in snow situations. He has 14 co-authors, the majority of them from the ECMWF.)

Here’s what the ECMWF simulates for the historical heat content (in Joules/square meter) of the upper 300 meters (984 feet) of the globe’s oceans:

 

Oceanic heat content (joules/square meter) of the upper 300 meters of the ocean. From Laloyaux et al., 2018.

Somehow “ocean heat content as high as it was 75 years ago” isn’t quite so alarming. 

“It’s time to raise the alcohol tax,” declared Vox author German Lopez back in December.

Now let me state upfront that I am not confident I know what the correct tax rate on alcohol should be. Lopez may well be right that there is a rational case on economic grounds for an increase, based on high-quality, robust analysis. But his article does not satisfactorily make that case, nor does it link to such analysis.

In fact, it came to my attention as I was finalizing my new paper “How Market Failure Arguments Lead to Misguided Policy” (released today). And I’m convinced his piece is a classic of the genre. This article aims to highlight some of the key objections I have to his approach, which is increasingly common in public debate.

The traditional economic case for alcohol taxation

Libertarian theory aside, the classic case for taxing alcohol will be familiar to those with basic economic knowledge. Alcohol consumption is believed to impose, on net, external costs on people other than drinkers themselves.

When deciding whether to drink, individuals are thought to only consider the balance of private costs (the money it costs to drink, the hangover, the risk of disease or accidents for them etc) and the private benefits of consumption (the confidence, the enjoyment of the taste, the benefits to them of socializing etc).

But clearly, alcohol consumption can have external effects. The costs of alcohol-related crime and driving under the influence are borne by others. There may be net external costs relating to health care, too, given alcohol-related diseases and incidents could necessitate higher taxpayer subsidies or insurance premiums (though, applying such logic consistently, one would have to net off any “savings” that alcohol consumption might deliver in terms of lower Social Security and Medicare payments from reduced longevity).

The economic case for a tax then is this: if we observe net external costs associated with alcohol consumption, then allowing a free market would lead to higher levels of consumption than optimal. If a tax can be imposed that equates roughly to the marginal external costs of consumption, then drinkers are faced with a price reflective of the true costs of their actions.

Due to the “Law of Demand,” the amount of alcohol consumption will fall to the level at which marginal social costs equate to marginal benefits as this tax is imposed. Some of the negative external costs will occur less often, as will some of the private costs. Society as a whole will be better off because the tax means prices now reflect the true cost to society of the product’s consumption.
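Stated compactly, the textbook condition sketched above is the following (a stylized sketch; the symbols are introduced here for illustration):

$$MB(q^{*}) \;=\; MPC(q^{*}) + MEC(q^{*}) \;\equiv\; MSC(q^{*}), \qquad t^{*} = MEC(q^{*}),$$

where $q$ is alcohol consumption, $MB$ is the marginal private benefit of drinking, $MPC$ the marginal private cost, $MEC$ the marginal external cost borne by others, and $MSC$ the marginal social cost. A per-unit tax of $t^{*}$ confronts drinkers with the full social cost of the marginal drink, so consumption settles at $q^{*}$ rather than at the higher free-market level.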

In order to make the case for a hike in alcohol taxes then, Lopez simply needed to present clear evidence that current tax rates on alcohol are too low to account fully for the external costs of consumption we see. His line of reasoning does not make this case.

1.   Ignoring the benefits and costs framework

Lopez presents good evidence that higher alcohol prices reduce demand for alcohol (as one would expect). It stands to reason then that reduced alcohol consumption will mean fewer alcohol-related deaths and the other negative consequences outlined above.

But he does not seem to acknowledge that alcohol consumption has benefits too. He highlights how excessive drinking causes anywhere between 88,000 and 100,000 deaths per year as a reason to increase taxation, for example. Yet he never acknowledges that alcohol consumption (at least for most drinkers) brings satisfaction, hence why we drink.

Looking purely at the number of alcohol related deaths tells us little about the desirability of taxation on alcohol per se, because it is not clear how many deaths are rational decisions by well-informed individuals to “live for today,” as opposed to deaths resulting from “external costs” of alcohol consumption.

A newer literature embedded in behavioral economics squares this circle by arguing that heavy drinkers are not truly fulfilling their true lifetime preferences when they consume (because of hyperbolic discounting or other irrationalities). But that is not the argument Lopez makes, and nor would we expect this to be the case for all heavy alcohol consumers.

The traditional market failure framework recognizes that allowing people to decide what to consume enhances their welfare, provided that any third-party costs are accounted for. Lopez, on the other hand, sets out reducing deaths from alcohol as an aim in itself – unlinked to any acknowledgement of benefits. He assures us that significantly reducing alcohol-related deaths “doesn’t require prohibition.” But the logic of focusing on just deaths and ignoring pleasure is that we should indeed aim for zero consumption.

By implicitly presuming we all really want to be life-expectancy maximizing, he goes far beyond the market failure framework of dealing with externalities.

2.   Conflating private costs with external costs

Indeed, the traditional case for taxing alcohol is about pricing in external costs – costs imposed on others. Lopez cites a Centers for Disease Control and Prevention study that estimates the economic costs of excessive drinking in the U.S. as totaling $249 billion in 2010, or $2.05 a drink. Implicitly reaching for “externality” arguments, Lopez notes how this figure includes the costs of “crime, drunk driving, health problems, and more,” pitching this tax as a way of accounting for the externalities imposed on casual drinkers and teetotalers by heavy drinkers.

But an examination of the CDC reports shows the major cost to the economy comes not from these external costs, but from “a reduction in workplace productivity,” which accounts for a massive $179 billion of the total.

This, overwhelmingly, is a private cost and not an external cost. If individuals’ alcohol consumption affects their work performance, or their human capital accumulation, or the length of their working life, the vast proportion of that cost would ultimately be borne by the individual themselves through worse employment prospects and lower wages.

Some people may simply prefer a work-life balance where they stay out later to socialize and drink regularly, rather than maximizing at-work productivity. Using the CDC estimates as a proxy for the external costs of alcohol consumption would therefore lead to a tax rate far too high to deal with genuine external effects.

It is certainly true that some part of “worse productivity” would hurt the individual’s employer or the ultimate consumers of goods and services produced by heavy-drinking workers. Lost productivity could also be considered at least partially an external cost in that lower wages or worse employment prospects may reduce an individual’s net tax contribution. If this necessitates higher tax contributions from other taxpayers to maintain government revenues, there is a clear fiscal third-party effect.

But applying such reasoning consistently would profoundly change the scope of economic policymaking. Many decisions throughout our lives affect our measured productivity, pecuniary rewards, and net tax contributions. Implicitly assuming a baseline in which all individuals maximize measured productivity and net fiscal contributions, and considering deviations from this to be a market failure, would be an absurd principle.

Taking time off to have children or to care for a sick relative, staying up late to watch TV regularly and being tired at work, or choosing not to invest in one’s own human capital, might all have similar effects. This is to say nothing of career choices. Opting to become a French teacher or a public-interest lawyer, even when the opportunity exists for one to be a Wall Street trader, means people clearly do not always make decisions to maximize their net tax contributions.

Singling out the productivity effects of alcohol consumption as a unique externality in need of correction, when every day individuals make decisions that affect their productive potential and, indirectly, their net tax contributions, would be unworkable, arbitrary, and wrong.

3.   Arbitrarily comparing alcohol taxes today with the past

One piece of evidence Lopez cites in support of raising the alcohol tax is that taxes on alcohol “were one-sixth to one-third of their inflation-adjusted value in the early 2010s compared to the 1950s.”

This may offer insight into affordability, but it is irrelevant for what alcohol tax rates should be as justified by economics. The economic rationale here is not to reduce consumption to some level or hit any other arbitrary target for outcomes. It is to price in the external costs associated with marginal alcohol consumption, and then allow consumers to decide how much to drink according to their own preferences when faced with the full social costs of their actions.

It’s perfectly plausible that these external costs change over time. And even if they do not, Lopez presents no evidence that the 1950s tax rates were levels which accurately reflected external costs.

4.   Ignoring the implications of heterogeneity between consumers

Lopez rightly acknowledges that key opposition to increasing alcohol taxes arises because individuals who regard themselves as “responsible drinkers” will resent paying more because of the costs primarily imposed by heavy drinkers. He responds by reassuring us that most of the tax (in terms of revenue collected) would be borne by the higher-risk drinkers, simply because they drink more.

But this is in part because, as the academic literature suggests, heavy drinkers are far less responsive to price changes than non-heavy drinkers. A review of the literature by Jon P. Nelson of Pennsylvania State University found that only 2 of 19 studies on the consumption behavior of heavy drinkers found “a significant and substantial negative price response.”

If we are to make the assumption that heavy drinkers are responsible for most of the external costs of alcohol consumption, then raising the tax on alcohol will deter consumption for exactly the wrong group. Pricing according to some aggregate calculation to find the “marginal” external costs across the whole population could actually worsen economic efficiency. The tax would be too low for heavy drinkers, but way too high for casual drinkers.

Conclusion

Standard economics, which considers the external costs of alcohol consumption, makes a strong a priori case for government action to “price in” these costs. But German Lopez’s article does not get us any closer to understanding the correct way of accounting for these effects. And if taxation is the right tool, his analysis does not help us ascertain what the tax rate should be.

This past weekend at LibertyCon, I debated Andrew Yang, a progressive candidate for the Democratic presidential nomination, about whether the U.S. should adopt a Universal Basic Income.

Yang believes there is a need for a UBI because future technological progress will gradually destroy jobs for people with limited skills. This forecast has arisen for millennia, but it has consistently been wrong:

Emperor Vespasian: The Roman historian Suetonius writes, of the Emperor Vespasian (69-79 AD), that someone came to him with a new, cheaper technology for transporting heavy columns to Rome. The emperor rewarded the inventor but quashed the device on the grounds of displacing manual labor. Suetonius quotes Vespasian: “How will it be possible for me to feed the populace? You must allow my poor hauliers to earn their bread.”

The historian Arnold Toynbee writes of the Roman emperor to whom it had been reported, “as a piece of good news, that one of his subjects had invented a process for manufacturing unbreakable glass. The emperor gave orders that the inventor should be put to death and that the records of his invention should be destroyed. If the invention had been put on the market, the manufacturers of ordinary glass would have been put out of business; there would have been unemployment that would have caused political unrest, and perhaps revolution.”

William Lee/Queen Elizabeth I 1589: Invented the stocking frame knitting machine hoping that it would relieve workers of hand-knitting. Seeking patent protection for his invention, he travelled to London where he had rented a building for his machine to be viewed by Queen Elizabeth I. To his disappointment, the Queen was more concerned with the employment impact of his invention and refused to grant him a patent, claiming that: “Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars”

Thomas Mortimer 1772: Wrote that he wished never to see machines such as saw mills and stamps as they would “exclude the labour of thousands of the human race, who are usefully employed …”

David Ricardo 1817: “I am convinced, that the substitution of machinery for human labour, is often very injurious to the interests of the class of labourers.”

David Ricardo 1817: “All I wish to prove, is, that the discovery and use of machinery may be attended with a diminution of gross produce; and whenever that is the case, it will be injurious to the labouring class, as some of their number will be thrown out of employment, and population will become redundant, compared with the funds which are to employ it.”

Thomas Carlyle 1839: “[T]he huge demon of Mechanism smokes and thunders, panting at his great task, in all sections of English land; changing his shape like a very Proteus; and infallibly, at every change of shape, oversetting whole multitudes of workmen, as if with the waving of his shadow from afar, hurling them asunder, this way and that, in their crowded march and course of work or traffic; so that the wisest no longer knows his whereabout[s].”

Evan Clark 1928: “the onward march of machines into every corner of our industrial life had driven men out of the factory and into the ranks of the unemployed”.

Keynes 1930: “We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another. The increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption; the improvement in the standard of life has been a little too quick.”

Ewan Clague 1935: A labor economist, Clague wrote that “the present outlook is for the rate of displacement of labor to exceed the rate of reabsorption so that technological unemployment will continue to be large.”

TIME Magazine 1961: The number of jobs lost to more efficient machines is only part of the problem. What worries many job experts more is that automation may prevent the economy from creating enough new jobs. Says Pennsylvania’s Democratic Congressman Elmer J. Holland, whose subcommittee is about to study the matter: “One of the greatest problems with automation is not the worker who is fired, but the worker who is not hired.” 

John F. Kennedy 1962: “I regard it as the major domestic challenge, really, of the sixties, to maintain full employment at a time when automation, of course, is replacing men.”

Robert Heilbroner 1965: “As machines continue to invade society, duplicating greater and greater numbers of social tasks, it is human labor itself — at least, as we now think of ‘labor’ — that is gradually rendered redundant.”

Ian Turner 1978: Organized a symposium on the implications of the new technologies. The world, he predicted, was about to enter a period as significant as the Neolithic or Industrial revolutions. “By 1988, at least a quarter of the Australian workforce would be made redundant by technological change…”

International Metalworkers Federation 1989: Forecasted that within 30 years, as little as 2 percent of the world’s current labor force “will be needed to produce all the goods necessary for total demand.”

Jeremy Rifkin 1996: “In the years ahead more sophisticated software technologies are going to bring civilization ever closer to a near-workerless world.”

The failure of these predictions does not prove such concerns have no merit; perhaps the nature of technological progress will change.

But absent a reason why “this time is different,” history argues against these “Luddite” concerns.

Not to mention that past technological progress has meant incredible improvements in living standards, for poor and rich alike.

Private schools are held directly accountable to families. They must attract their customers and provide a high-quality educational product if they want to stay in business. School choice programs  allow families to access schools that are accountable to their children’s needs.

Government schools are not held accountable to children in the current system. A family that is not satisfied with their child’s residentially assigned government school typically only has three options: (1) buy an expensive house that’s near a better government school, (2) pay for a private school out of pocket while still paying for the government school through property taxes, or (3) complain to the government school leaders and hope things get better.

The high costs associated with each of those options leave most families powerless – especially the least advantaged.

This clip from Andrew Coulson’s award-winning School Inc. highlights the fact that school choice is all about accountability. Low-income families in India are asked the following question: “Why are you spending money on the private schools when the government schools are free?”

Their response is telling:

“In the government schools our children are abandoned.”

President Trump offered Democrats a new deal to reopen the government this weekend. The main components would see the president get nearly $8.7 billion for the wall and immigration enforcement and Democrats in Congress get to temporarily reverse his decisions to end legal protections for immigrants with DACA and Temporary Protected Status (TPS). Democratic leaders in the House and Senate have already rejected the offer, and while it is unlikely to pass the Senate and even less likely to pass the House, the Senate will vote on his proposal anyway this week.

Based on Trump’s comments on Saturday and the White House outline, the legislation would:

  • extend status for three years for a million immigrants already in DACA and TPS (mainly immigrants stranded in the United States after earthquakes and hurricanes in El Salvador, Honduras, and Haiti);
  • spend $5.7 billion to construct as much as 234 miles of massive border barriers at a cost of $24.4 million per mile;
  • spend another $2.95 billion to:
    • inspect for drugs at ports of entry;
    • hire 75 deportation judges to speed up the currently slow deportation process;
    • employ 2,750 more Border Patrol and “law enforcement” agents (i.e., ICE deportation agents); and
    • fund medical inspections and “temporary housing” (i.e., detention of migrants);
  • change immigration law to allow immediate deportations of children from Central America; and
  • provide a very limited pathway to apply for status in their home countries.

It’s not a fair deal: Trump can spend the rest of his life basking in the shadow of his vanity project, while the immigrants get only a 3-year reprieve from the de facto deportation orders that Trump himself issued when he canceled their statuses. A permanent status is the only fair trade for a permanent wall.

Moreover, President Trump’s annual price for letting the immigrants remain is about $2.9 billion ($8.7 billion/3 years), placing the price of lifetime protections at about $190 billion ($2.9 billion times 65 years). DACA and TPS recipients represent less than 10 percent of the entire illegal immigrant population. If Democrats give Trump his price for these immigrants, they would be accepting a valuation of permanent legalization for all illegal immigrants of about $2 trillion. And that is without any pathway to citizenship, more deportations, and fewer protections for children at the border.
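Here is a quick back-of-the-envelope check of that arithmetic, using the post’s own round numbers; the 65-year horizon and the roughly 10 percent population share are the assumptions stated above.

```python
# Back-of-the-envelope reproduction of the valuation described above.
# All inputs are this post's own rounded figures and assumptions.

total_price = 8.7e9               # enforcement spending in the proposed deal
years_of_status = 3               # length of the temporary protections
annual_price = total_price / years_of_status           # about $2.9 billion per year

lifetime_years = 65               # assumed horizon for "lifetime" protections
lifetime_price = annual_price * lifetime_years          # about $190 billion

daca_tps_share = 0.10             # DACA + TPS share of all illegal immigrants (upper bound)
implied_value_all = lifetime_price / daca_tps_share     # roughly $2 trillion

print(f"annual price: ${annual_price/1e9:.1f} billion")
print(f"lifetime price for DACA/TPS recipients: ${lifetime_price/1e9:.0f} billion")
print(f"implied valuation for all illegal immigrants: ${implied_value_all/1e12:.1f} trillion")
```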

Trump’s deal might be his best official offer yet, as it drops his demands for cuts to legal immigration, but it is so far from anything reasonable or politically feasible that it seems like it is more a product of negotiations within the White House—between Jared Kushner, Mike Pence, and Stephen Miller—than between Democrats and Republicans in Congress.

Liberty. It is America’s foundational value. We have failed to uphold it for far too many people much too often, but the freedom of Americans to choose what they will believe, and how they will live, is at the very heart of the American experiment. It is fitting, then, that we kick off five days of National School Choice Week posts on Cato@Liberty with a reminder of the fundamental good that is sacrificed when government controls education.

As we will be doing all week, I direct your attention to a clip from Andrew Coulson’s award-winning School Inc., a documentary series that ran on PBS stations nationwide in 2017 and can still be watched, in its entirety, on the website of Free to Choose Media. Here, after discussing sometimes even deadly fights that Americans have had over what the public schools will teach, Andrew invokes Thomas Jefferson’s warning about the tyranny of compelled support of others’ views, and explains how, by compelling such support, public schooling forces wrenching, divisive conflict. Such conflict could be avoided were people allowed to direct the funding for their children’s education to educators who share their values. In other words, by upholding liberty, school choice is both more just, and more conducive to social harmony, than public schooling.

 

The following is an excerpt from an op-ed I wrote assessing the Trump administration’s 2019 Missile Defense Review (MDR) and the impact that the document’s recommendations may have on nuclear stability: 

The MDR is a very ambitious document. It starts with calls for more midcourse interceptors and other existing defensive systems, then urges the development of new capabilities to defeat more kinds of adversary missiles across more stages of flight. Examples of these new systems include: laser-armed drones that could disrupt missiles before they leave the atmosphere, space-based sensors to improve early detection of missile launches, and F-35s equipped to hunt mobile missiles before they can be fired.

Supporters of a bigger and better U.S. missile defense capability argue that it improves deterrence by reducing adversaries’ confidence in their ability to launch successful attacks against the United States, its military forces, and allies. This argument has some merit, but it overlooks the negative effect missile defense has on nuclear stability when other factors are considered.

To read the rest of the article, visit Defense One: https://www.defenseone.com/ideas/2019/01/new-missile-defense-policy-wont-maker-us-safer/154295/?oref=d-river.  

We are in the fourth week of the partial federal shutdown, which is starting to disrupt the broader economy because the government exerts control over many major industries. These sorts of disruptions will become more frequent in coming years as deficits rise and partisan divisions persist.

To minimize the damage, we should privatize or devolve to the states activities that do not need to be run by Washington. Those activities include air traffic control, airport screening, parks, and services on Indian reservations, as discussed here and here.

The Wall Street Journal reports today that the “shutdown leaves small-business loans in limbo.” The newspaper says that Small Business Administration (SBA) “loans are a mainstay for many entrepreneurs… The shutdown has already delayed about $2 billion in SBA lending.”

Why is the government a “mainstay” of entrepreneurs? Why should the government subsidize businesses with loan guarantees? Banks have been providing business loans for hundreds of years, so it is not as if the government has unique lending skills unknown to the marketplace.

Tad DeHaven and Veronique de Rugy explain the folly of SBA loans in this Downsizing Government study. They discuss the history of the SBA and explain how politics sustains the agency’s existence rather than any coherent theory of market failure.

They argue that America’s impressive entrepreneurial achievements did not stem from small business subsidies and that the SBA is an unneeded agency that should be terminated. Tethering small businesses to Washington is misguided and the shutdown is illustrating the damage.

 

When Alexandria Ocasio-Cortez suggested a 60-70 percent federal income tax rate on those earning over $10 million, prominent economists and economic commentators Matt Yglesias, Paul Krugman, and Noah Smith argued that her policy prescription was simply mainstream economics.

But a new Chicago Booth IGM Survey poll suggests economists are generally much more skeptical of the wisdom of jacking up top federal tax rates than these commentators suggest.

The economists were asked whether a top federal marginal income tax rate of 70 percent within the current code would raise substantially more revenue than today’s 37 percent without lowering economic activity. Just 18 percent of those surveyed agreed, against 49 percent who disagreed (21 percent vs. 63 percent when weighted by confidence).[i]

In other words, a clear majority of economists believe there’s no free lunch from higher marginal rates on the top income bracket. Either it will raise revenue but with economic distortions, or it won’t raise revenue, or it will both fail to raise revenue and be detrimental to broader economic health.

It’s worth noting the wording of the question does not leave much room for nuance. Richard Thaler asked why it deviated from Ocasio-Cortez’s actual proposal. Kicking in at a much higher income level, and so on a group likely to be more responsive in terms of tax planning, her policy would certainly raise less revenue than the policy asked about in the question.

Several other economists said they would have changed the way they voted if a word like “substantially” had been inserted in front of “economic activity” too. But overall, many of the economists commented to the effect that such high marginal rates within the current code would lead to a whole host of new avoidance activity on the one hand, and reduced labor supply on the other.

Given the particular wording of the question, the most interesting vote cast was that of Emmanuel Saez, who has been responsible for much research in this area. Intriguingly, he was in the minority in voting that he agreed a 70 percent top marginal rate would raise revenue without lowering economic activity.

On one level, that’s not surprising. His work with Peter Diamond concluded that a total combined 73 percent top marginal tax rate would be revenue maximizing and “optimal” if we put zero weight on the welfare of the rich. They believe too that the real economic responses to higher top tax rates would be small. As such, their research is the academic go-to for those arguing for much higher top marginal tax rates.

But when you read the details of how they got to that result, it’s difficult to see how Saez answered this IGM question in the affirmative. The Diamond-Saez paper makes clear their 73 percent result only holds if you presume policymakers could redesign the tax code to eliminate deductions, exemptions and other possibilities for tax planning or avoidance.

If not, then presuming people in the top tax bracket are as responsive today to tax changes as in the 1980s, the revenue maximizing total combined marginal tax rate would be much lower at 54 percent – equating to around a 48 percent marginal federal income tax rate. This, incidentally, is very similar to the revenue-maximizing income tax rate calculated by the UK government.
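For context, the Diamond-Saez revenue-maximizing rate comes from the standard optimal-top-rate formula; the parameter values below are the ones commonly cited from that literature and are stated here as assumptions rather than drawn from the survey:

$$\tau^{*} = \frac{1}{1 + a\,e},$$

where $a$ is the Pareto parameter describing the thinness of the top of the income distribution (roughly 1.5 for the United States) and $e$ is the elasticity of taxable income of top earners, and $\tau^{*}$ is the revenue-maximizing total combined marginal rate. With $e = 0.25$ and a broadened base, $\tau^{*} \approx 1/(1 + 1.5 \times 0.25) \approx 0.73$; with an elasticity near $0.57$, reflecting avoidance responses under the existing code, $\tau^{*} \approx 1/(1 + 1.5 \times 0.57) \approx 0.54$.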

According to Saez’s own work then, raising the 37 percent top marginal income tax rate to 70 percent within the current code (as the question clearly sets out), would take us far beyond the revenue maximizing top marginal tax rate. It would be self-defeating in terms of revenue raising. We would be far on the wrong side of the Laffer curve.

It seems extraordinarily unlikely, in a world where 48 percent is the revenue maximizing rate, that a 70 percent rate would raise “substantially more revenue” than a 37 percent rate, as Saez’s answer implies.


 

[i]  In 2019, the 37 percent rate will apply to all single filers with more than $510,300 of taxable income.

Press reports have created the impression that the opioid overdose antidote naloxone is now available over the counter. But in fact, the drug is still classified in the US as prescription only, so states have developed workarounds to make it easier for patients to obtain it without going to a doctor for a prescription. In most states, patients can get naloxone by going up to the counter and asking the pharmacist, who is legally authorized by the state to dispense it. 

But some states prohibit third parties from obtaining a prescription for another person, so people in those states who wish to have the antidote available because they have a friend or relative who uses opioids cannot obtain it. And experience shows that many pharmacists choose to not stock naloxone or participate in any distribution program. Furthermore, the stigma now attached to opioid use has deterred many patients from going up to the pharmacy counter and explaining to a pharmacist why they need naloxone.

To get around such obstacles, Australia and Italy have designated naloxone as a truly over-the-counter drug. People can discreetly buy it off the shelf and check out at the cash register.

The Food and Drug Administration has been on record since at least 2016 as believing that it is probably appropriate for naloxone to be rescheduled as OTC and has encouraged manufacturers to petition the FDA to that end. Yesterday, FDA Commissioner Gottlieb announced that the FDA has gone to the trouble of designing the Drug Facts Labels (DFL) required of manufacturers for their products to be sold over the counter, and has even tested these labels for “consumer comprehension” in front of focus groups. The Commissioner stated in the announcement that this represents an unprecedented effort to facilitate and speed up the reclassification of naloxone from prescription-only to OTC.

This is commendable. But as I have written here, here, and here, the Commissioner does not have to wait for manufacturers, who may lack the incentive, to request the move to OTC. Under FDA regulations, the FDA can undertake reclassification review at the request of “any interested person,” or the Commissioner himself. States may petition the FDA for reclassification. Finally, if all else fails, Congress can order the reclassification.

The FDA should no longer wait for manufacturers to ask them to make this lifesaving drug more accessible to those in need.

Germany introduced a new economy-wide minimum wage for the first time in 2015, at a relatively high rate of €8.50 ($9.67 today). This rose to €8.84 in 2017. For reference: between 10 and 14 percent of eligible workers were thought to earn less than €8.50 before the policy was introduced.

This is interesting from a research perspective. Most minimum wage studies examine the impact of minimum wages at low levels or assess small changes to their rate. But here we have a case study of a whole regime change with a high rate introduced for the first time.

A new paper from the IZA Institute of Labor Economics provides a clear literature review of the effects so far. Studies have exploited three different strategies to assess the impact: utilizing regional variation in the “bite” of the minimum wage, using treatment and control groups, and assessing the impact on firms. As the findings summarized below show, a broad consensus is emerging, which sits well within the existing minimum wage literature:

  • Unsurprisingly, hourly wages have increased at the bottom of the income distribution, though there is little evidence of a ripple effect further up.
  • Most studies find a small but negative effect on overall employment (up to 260,000 fewer jobs), driven by reduced hiring (not layoffs) and a reduction of casual and atypical employment.
  • All studies that assessed it find a negative effect on contractual hours.
  • As a result, although hourly wages increased, the reduction in hours meant gross monthly earnings do not appear to have increased much for low-paid employees.
  • Since gross monthly earnings have not substantially increased, and those earning minimum wage are often not from the poorest households, the policy hasn’t seemingly reduced the risk of being in poverty.

For more on the state of the academic debate on minimum wages, read my Regulation article.

 

In his State of the City Address, New York mayor Bill de Blasio laid out his governing philosophy succinctly:

Here’s the truth, brothers and sisters, there’s plenty of money in the world. Plenty of money in this city. It’s just in the wrong hands!

The money, of course, is in the hands of those who earned it. In de Blasio’s view, people who earn too much are “the wrong hands.”

In the speech itself and in an interview with Jake Tapper on CNN’s “State of the Union,” he elaborated: the wealthy have too much money because they aren’t taxed enough.

There are whole books on the correct theory of taxation. De Blasio, like many politicians, seems to operate on the theory most clearly enunciated in 1990 by Sen. Barbara Mikulski (D-Md.):

Let’s go and get it from those who’ve got it.

There are many theories of taxation, such as Haig-Simons, the Tiebout model, and the Ramsey Principle. But I’d bet that the Mikulski Principle explains actual taxation best. And as “progressives” are feeling their oats, we can expect more politicians and pundits to be asking, “Who’s got the money? Let’s go get it.”

In his Oval Office speech, President Trump had this to say about immigrants:

This is a humanitarian crisis — a crisis of the heart and a crisis of the soul. Last month, 20,000 migrant children were illegally brought into the United States — a dramatic increase. These children are used as human pawns by vicious coyotes and ruthless gangs. One in three women are sexually assaulted on the dangerous trek up through Mexico. Women and children are the biggest victims, by far, of our broken system. This is the tragic reality of illegal immigration on our southern border. This is the cycle of human suffering that I am determined to end.

Here’s what his administration is doing to protect these women and children:

Previously, the administration had separated women from their children in order to criminally prosecute them for entering the country illegally.

Native-born American concerns about immigration are primarily about how immigration will affect the culture of the country as a whole and, to a lesser extent, how the newcomers will affect the economy.  One’s personal economic situation is not a major factor.  It’s reasonable to assume that the degree of cultural difference between native-born Americans and new immigrants affects the degree of cultural concern.  Thus, Americans would likely be less concerned over immigrants from Canada or Singapore than they would be over immigrants from Egypt or Azerbaijan. 

A large team of psychologists recently created an index of the cultural distance of people from numerous countries around the world relative to the United States.  The index is constructed from responses to the World Values Survey as well as linguistic and geographical distances.  Their index includes numerous different psychological factors such as individualism, power distance, masculinity, uncertainty avoidance, long term orientation, indulgence, harmony, mastery, embeddedness, hierarchy, egalitarianism, autonomy, tolerance for deviant behavior, norm enforcement, openness, conscientiousness, extraversion, agreeableness, neuroticism, creativity, altruism, and obedience.  These are all explained in more detail in the paper.

Their paper has an index where lower numbers indicate a culture more similar to that of the United States while a higher number indicates a culture more distant from that of the United States.  As some extreme examples, Canada’s cultural distance score is 0.025 and Egypt’s is 0.24. 

Using the cultural distance index, I calculated the cultural distance of the stock of immigrants in the United States in 2015 from native-born Americans.  I then compared the cultural distance of the stock to the cultural distance of the flow of immigrants who arrived in 2012-2015.  The immigration figures come from the Annual Social and Economic Supplement of the U.S. Census Bureau.  If the stock of immigrants in 2015 was more culturally similar to native-born Americans than the flow, then the recent flow is more culturally distinct.  If the stock of immigrants in 2015 was more culturally different from native-born Americans than the flow, then the recent flow is less culturally distinct. 

Table 1 shows the results.  The immigrant flow in 2012-2015 is more culturally different from native-born Americans than the stock of immigrants was in 2015.  In other words, today’s newest immigrants are more culturally distinct than those from the relatively recent past.  Relative to the stock, the cultural distinctiveness of the flow in 2012-2015 was greater by about one-fourth of a standard deviation.  To put that in perspective, the stock of American immigrants in 2015 was very culturally similar to people from Trinidad and Tobago (0.099), while the flow of new immigrants who arrived from 2012-2015 was more similar to Romanians (0.11).

Table 1

Cultural Distance of Immigrants Relative to Native-Born Americans

                      Cultural Distance
  Immigrant Stock     0.10
  Immigrant Flow      0.11

Sources: WEIRD Index, ASEC, and author’s calculations.
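
For readers who want the mechanics, here is a minimal sketch of the stock-versus-flow comparison described above. The country weights and most of the index values are illustrative placeholders (only the Canada and Egypt scores come from the paper), so the output will not reproduce Table 1; it simply shows the immigrant-weighted averaging involved.

    def weighted_cultural_distance(populations, distance_index):
        # Immigrant-weighted average of country-level cultural distance scores,
        # computed over the countries the index actually covers.
        covered = {c: n for c, n in populations.items() if c in distance_index}
        total = sum(covered.values())
        return sum(n * distance_index[c] for c, n in covered.items()) / total

    # Cultural distance from the United States. Canada (0.025) and Egypt (0.24)
    # are reported in the paper; the other values are hypothetical.
    distance_index = {"Canada": 0.025, "Mexico": 0.11, "India": 0.19, "Egypt": 0.24}

    # Hypothetical immigrant counts (thousands): 2015 stock vs. 2012-2015 arrivals.
    stock_2015 = {"Canada": 800, "Mexico": 11700, "India": 2400, "Egypt": 180}
    flow_2012_2015 = {"Canada": 60, "Mexico": 600, "India": 500, "Egypt": 40}

    print(weighted_cultural_distance(stock_2015, distance_index))
    print(weighted_cultural_distance(flow_2012_2015, distance_index))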

There are a few problems with my above calculations.  First, those who choose to move here are likely more similar to Americans than those who do not.  Cultural values obviously vary within a country, and the average person does not choose to emigrate to the United States.  Second, American immigration laws likely select immigrants with similar cultural values through various means, such as favoring the family members of Americans and those hired by American firms.  It’s reasonable to assume that foreigners who marry Americans and who are hired by American firms are more culturally similar than the average person from those countries.  Third, the cultural distance index only covers about two-thirds of the immigrant population in the United States.  It is possible that countries not on the list could shift the score significantly in either direction.

New immigrants to the United States are more culturally different than those of the past, but not by much.  This increase in the cultural difference of new immigrants could have had an outsized impact on Trump voters in 2016, but immigration overall is more popular with Americans than it used to be.  


Welcome to the Defense Download! This new round-up is intended to highlight what we at the Cato Institute are keeping tabs on in the world of defense politics every week. The three-to-five trending stories will vary depending on the news cycle and what policymakers are talking about, and will pull from all sides of the political spectrum. If you would like to receive more frequent updates on what I’m reading, writing, and listening to—you can follow me on Twitter via @CDDorminey.

  1. The Missile Defense Review dropped this morning. For those who have been patiently waiting to see this document for months, your time has finally arrived. Since this was just released hours ago, articles breaking down the details have yet to be posted. So stay tuned and check Twitter for commentary.
  2. “Pentagon preps for budget delay as historic shutdown drags on,” Tony Bertuca. The President’s FY2020 budget request was supposed to be publicly available and kick off the budget-making process on February 4th, 2019. With the government shutdown, and various topline numbers coming out of the White House, it looks like the budget request will be delayed.
  3. “Reform panel warns Congress to overhaul Pentagon acquisitions, or lose technological edge,” Joe Gould. The Section 809 Panel was gifted the herculean task of reforming how the Pentagon buys products—everything from cybersecurity software to major weapons system hardware. The report itself is mammoth (500+ pages), but includes recommendations aimed at streamlining the acquisition process and leveraging commercial advances.
  4. “The Myth of Cyber Offense: The Case For Restraint,” Brandon Valeriano and Benjamin Jensen. What does a new era of Great Power Politics mean for American cyber policy? Mostly that it’s still being defined and actively shaped by the changing balance of power—and that the choices America makes now could have either stabilizing or destabilizing effects on the evolution of this domain.

Two huge developments on Brexit this week.

First, Theresa May’s disastrous EU Withdrawal Agreement (negotiated and endorsed by the EU) suffered a crushing defeat in Parliament, going down by 432 votes to 202. This was a fundamental rejection of a deal with a host of problems. Under any normal circumstances, such a mammoth loss on a key policy would have ended a Prime Minister and a government.

Second, the leader of the opposition, Labour’s Jeremy Corbyn, called a subsequent vote of “no confidence” in the government. But with Brexiteers, including the Northern Irish DUP, swinging back behind the Prime Minister to avoid the possibility of a general election, the government survived (by 325 to 306).

What happens now? The default, set out by law, is that the U.K. leaves the EU on March 29th with or without a deal. It is well documented that there is a clear majority in Parliament who want to avoid leaving without a deal. But there is no clear majority for any of the options necessary to prevent a no deal exit.

I spent some time looking at the parliamentary arithmetic last night, from the perspective of Theresa May. She says that she a) wants to avoid no deal but b) wants to ensure she delivers Brexit. And there is no obvious means of achieving both of these goals.

Option 1: Operation Engage Conservatives

Her first option is to try to get more Conservatives on board to support a Withdrawal Agreement. But the difficulty of doing so is set out by the graphic below. The Brexiteer Conservative rebels either want to throw out the Withdrawal Agreement entirely for something new, remove a key provision (the backstop), or simply leave without a deal. Given the EU has said publicly it will not renegotiate or remove the backstop, this seems a dead end unless May is seriously willing to countenance no deal.

The polling suggests that the Brexiteers were right about the politics up front – if the Prime Minister had pursued an “extensive Free Trade Agreement” Brexit and had not got bogged down in the complex arrangements she’d agreed, then a majority could just about have been eked out on Conservative and DUP votes, with a smattering of Labour rebels (the no deal and no backstop crowds would have accepted it).

But we are where we are. Unless the EU is willing to reopen negotiations and offer a Canada+ deal for the whole of the UK (ending provisions to treat Northern Ireland differently) then tacking towards Brexiteers is endorsing the prospect of no deal, which May says she does not want.

It remains to be seen, of course, how many of these Brexiteers would actively support delivering Brexit through no deal if the EU rebuffed the opportunity to renegotiate outright. But through revealed preference (rejecting the Withdrawal Agreement), they have surely shown they are willing to countenance that risk.

One clear conclusion of this polling of Conservative rebels though is that there are only a tiny number of additional Conservative votes to be gained from a softer Brexit (single market *and* customs union membership – so-called Norway Plus). Given the commentariat all seem to think this week’s events must result in a softer Brexit, that means…

Option 2: Operation Engage The Opposition Parties

The second option is to give up on Conservative votes and try to reach out to opposition parties. Theresa May has offered Parliamentary talks to their leaders, and other groups of senior Parliamentarians. So far though, the leaders of the Labour party, the Lib Dems and the SNP have all said that their key demand is “taking no deal off the table.” Given no deal is the default Brexit, that essentially means “take guaranteeing Brexit off the table,” something the Prime Minister cannot do without her government likely falling.

The problem with dealing with the opposition parties is that they themselves are divided into two broad camps over what to do next.

Yesterday, 71 of 256 Labour MPs joined the campaign for a second referendum. Add in the Lib Dems, the SNP, the Greens, a smattering of Independents who want this too and, say, 20 Conservatives, and there’s still only a combined circa 150 in the Chamber who are strongly for a fresh public vote. Even if the government went in this direction, and took the payroll vote with it, that would not command a majority in the chamber either. Conservative and Labour MPs in working-class seats still overwhelmingly oppose a 2016 rerun. This could only happen if the Labour front-bench shifted their position.

But the only other option that opposition parties might be interested in is a much softer Brexit: either a full, permanent customs union (Labour’s official position) or a Norway-style option. Given 150 MPs would prefer a second referendum, it is unclear how many would opt for this if it were available. The only means of getting it through seems to be with Labour front-bench support, giving blessing to large numbers of Labour MPs to vote with the government. That would tear the Conservative party apart and probably guarantee a defeat in the next election, which would naturally appeal to Labour. But on the flipside, large numbers of Labour MPs in Leave constituencies would consider it highly risky, as much of the media would describe it as Brexit In Name Only, and the completely unreconciled Remainers would reject it for not fully ending Brexit.

Conclusion

Over the coming weeks, Parliament will likely host lots of indicative votes on all these options. The government has to bring forward a revised motion and try again. But so far the Prime Minister appears unwilling to change much of substance, and it’s not clear where she turns.

Crucial now will be the sequencing of votes by MPs for alternatives. If it gets to a stage where it’s the prospect of no deal against the last perceived line of defense against that happening, then Remainers and soft Brexiteers could unite. For now though, they are hopelessly divided too. Absent further constitutional vandalism endorsed by the Speaker of the House of Commons (a strong possibility), I still believe a no deal Brexit is highly possible, despite media claims to the contrary.

Irving Fisher’s classic treatise, The Purchasing Power of Money: Its Determination and Relation to Credit, Interest and Crises (1911), still offers valuable insights regarding monetary reform. This post examines some of Fisher’s insights and draws some lessons for Fed policy.

The Importance of Stable Money

Fisher recognized “the evils of monetary instability”—that is, “periodic changes in the level of prices, producing alternate crises and depressions of trade.”  He argued that “only by knowledge, both of the principles and of the facts involved, can such fluctuations … be prevented or mitigated, and only by such knowledge can the losses which they entail be avoided or reduced” (Fisher 1912: ix). [1]

The main principles that guided Fisher’s work were embodied in the quantity theory of money and the theory of monetary disequilibrium.  The former held that, ceteris paribus, the purchasing power of money (the reciprocal of the price level) depends on the quantity of money relative to real output (trade).  If the economy is at full employment and the velocity of money is stable, then the purchasing power of money will be inversely related to the stock of money.  If money moves in line with trade and velocity is stable, then monetary equilibrium will prevail and the value of money will also be stable (see Fisher 1912: 320).
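
In symbols, this is Fisher’s equation of exchange; the simple one-currency version below is enough to convey the point (the book’s fuller statement also adds deposit currency, MV + M′V′ = PT):

\[
MV = PT \quad\Longrightarrow\quad \frac{1}{P} = \frac{T}{MV},
\]

where M is the stock of money, V its velocity of circulation, T the volume of trade, and P the price level. With V stable and M growing in step with T, the right-hand side—and hence the purchasing power of money—stays constant.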

When Fisher wrote The Purchasing Power of Money, the United States was still on the classical gold standard; there was no central bank.  In his book, he defined money as “what is generally acceptable in exchange for goods” (p. 8).  He recognized that “money never bears interest except in the sense of creating convenience in the process of exchange” and that “this convenience is the special service of money and offsets the apparent loss of interest involved in keeping it in one’s pocket instead of investing” (p. 9).

Fisher also importantly recognized that “money itself belongs to a general class of property rights” known as “currency” or “circulating media.”  More specifically, “currency includes any type of property right which, whether generally acceptable or not, does actually, for its chief purpose and use, serve as a means of exchange” (p. 10).  While Fisher classified bank notes as money and circulating media, he viewed checkable bank deposits as currency, not money in the strict sense (p. 11).

“Primary money” referred to commodity money (gold coin at the time), while “fiduciary money” (notably bank notes) referred to money whose value depended “on the confidence that the owner can exchange it for other goods” (ibid.).  “The chief quality of fiduciary money,” wrote Fisher, “is its redeemability in primary money, or else its imposed character of legal tender” (p. 12).

Fisher refined the quantity theory of money to take account of monetary disequilibrium and used statistical methods to test the theory against historical data.  Like his contemporaries, he understood that the fundamental cause of business fluctuations was erratic money.

The Theory of Monetary Disequilibrium

The main tenets of the theory of monetary disequilibrium were well known to Fisher and Harry Gunnison Brown, who assisted in writing The Purchasing Power of Money (see chap. 4). Clark Warburton summarized those tenets in his monumental book, Depression, Inflation, and Monetary Policy (1966).  They are listed in Table 1.

TABLE 1
Assumptions of the Theory of Monetary Disequilibrium

  1. A change in the level of prices is a process which takes a period of time, and affects prices of various items sequentially rather than simultaneously.
  2. Some prices are greatly influenced by custom or contract and move less readily than other prices; specifically, wages and contractual elements in business costs tend to be sluggish relative to price of output.
  3. These differential movements of prices and also prospective further changes in prices have significant effects upon business profits and prospects and hence upon business plans, especially with respect to investment decisions and to holdings of cash relative to receipts and expenditures.
  4. The economy is not static; more specifically, we live in a world where population is growing, technological developments are increasing production per worker, and other developments tend to increase the volume of transactions (in quantity terms) relative to the output of final products.
  5. As a result of the foregoing and of the stability of customs (such as the periodicity of income payments) which affect the rate of circulation of money, the economy needs for equilibrium a continuous increase in the quantity of money.
  6. It is theoretically possible for monetary disequilibrium to persist for months or years, and observations indicate that many such situations have occurred.
  7. The actual quantity of money reflects primarily the behavior of banks or of a government treasury issuing circulating medium; and the nature of banks is such that they have a tendency to carry forward the expansion of money to the limit permitted by interbank relationships and the laws under which they operate.
  8. In the United States, subsequent to establishment of the national banking system, the chief restraint on the banks, limiting their expansion and occasionally necessitating contraction, is the amount of legal reserves.
  9. The impact of monetary disequilibrium is intensified by sequential changes in the rate of circulation of money [i.e., the velocity of money].
  10. Changes in the quantity of money which are not consonant with the rate of expansion needed for equilibrium also change the amount of funds available in the money loan market; thus they constitute the force which produces a departure of the market rate of interest from the equilibrium rate, and consequently disturbs property values and mutual adjustment of saving and investment decisions.
  11. If the force impinging on the quantity of money, such as the state of bank reserves, can be observed ahead of change in the quantity of money, or is itself of such character as to have a direct effect on the securities market, the disturbance to property values and to investment decisions may begin ahead of the monetary disequilibrium as observed in statistical data.

Source: Warburton (1966: 28–29).

Propositions 10 and 11 in Table 1 were of particular importance to Fisher. He argued that, while “it is generally recognized that the collapse of bank credit brought about by loss of confidence is the essential fact of every crisis,” it “is not generally recognized … that this loss of confidence … is a consequence of a belated adjustment in the interest rate” (p. 66). His purpose in writing The Purchasing Power of Money was to emphasize that “the monetary causes [of crises] are the most important when taken in connection with the maladjustments in the rate of interest. The other factors often emphasized are merely effects of this maladjustment” (ibid.).

In seeing monetary instability as the chief factor in business fluctuations, Fisher was following the tradition going back to David Hume whereby classical economic theory consisted of two parts: (1) a theory of equilibrium whereby market forces would restore relative wages and prices to their equilibrium levels, and (2) a theory of disequilibrium in which there is either an excess demand for, or supply of, money.  Although the theory of monetary disequilibrium—also known as the “dynamic theory of money”—was widely recognized and developed by the first quarter of the 20th century, the ascent of Keynesian economics diverted attention from that body of knowledge.[2]

Fisher argued that an excess supply of money will not immediately be reflected in a proportionate rise in the price level. The corresponding rise in the supply of bank credit will lower the rate of interest in the short run until inflation is fully anticipated, at which point the nominal interest rate will rise and the expected profitability of investment fall. During the transition to a new equilibrium, bankruptcies will occur and unemployment rise (because of sluggish adjustment of relative wages and prices).[3]
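
The interest-rate mechanism at work here is the one now summarized by the relation that bears Fisher’s name; stated compactly (my gloss, not a quotation from the book):

\[
i \approx r + \pi^{e},
\]

where i is the nominal interest rate, r the real rate, and \(\pi^{e}\) expected inflation. During the transition \(\pi^{e}\) lags actual inflation, so i stays too low, credit looks cheap, and borrowing expands; once expectations catch up, i rises and marginal projects turn out to be unprofitable.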

In looking at the case of “overinvestment,” Fisher notes:

The stockholder and enterpriser generally are beguiled by a vain reliance on the stability of the rate of interest, and so they overinvest. It is true that for a time they are gaining what the bondholder is losing and are therefore justified in both spending and investing more than if prices were not rising; and at first they prosper. But sooner or later the rate of interest rises above what they had reckoned on, and they awake to the fact that they have embarked on enterprises which cannot pay these high rates [p. 66].

He goes on to explain that “a curious thing happens: borrowers, unable to get easy loans, blame the high rate of interest for conditions which were really due to the fact that the previous rate of interest was not high enough. Had the previous rate been high enough, the borrowers never would have overinvested” (p. 67, emphasis added).

In sum, the importance of erratic money in Fisher’s theory of business fluctuations and his recognition that transition periods could last a considerable time make his theory part and parcel of the dynamic theory of money (see Warburton 1966: 4–5).  Fisher held that, in studying business fluctuations, one cannot ignore variations in the quantity of money relative to output. That is why he chose those variations as the  “chief factor” in his study of commercial crises (Fisher 1911: 55).  Moreover, he argued that “periods of transition are the rule and those of equilibrium the exception, [so that] the mechanism of exchange is almost always in a dynamic rather than a static condition” (p. 71). One of his major contributions was a rigorous discussion of “maladjustments in the rate of interest” in the process of adjustment to a new equilibrium by distinguishing between nominal and real rates of interest.

Proposed Reforms and Method of Persuasion

Fisher considered a number of reforms designed to stabilize the long-run price level, and thus maintain the purchasing power of money. They included changes in monetary law to:

  • “Make inconvertible paper the standard money, and to regulate its quantity.”
  • “Regulate the supply of metallic money by a varying seigniorage charge.”
  • “Issue paper money, redeemable on demand, not in fixed amounts of the basic precious metal, but in varying amounts, so calculated as to keep the level of prices unvarying.”[4]
  • “Adopt the gold-exchange standard combined with a tabular standard” [p. 348].[5]

In proposing any monetary reform designed to safeguard the long-run value of money, Fisher believed that “the first step” should be “to persuade the public, and especially the business public, to study the problem of monetary stability” (ibid.). When the time is ripe for reform, the intellectual groundwork will be ready for policymakers and the public to take the appropriate action. The fact that certain monetary reforms may not be politically feasible at the moment should not dissuade scholars from contemplating reforms that may improve the monetary arrangement and benefit society.  As Fisher wrote,

The necessary education once under way, it will then be time to consider schemes for regulating the purchasing power of money in the light of public and economic conditions of the time. All this, however, is in the future. For the present there seems nothing to do but to state the problem and the principles of its solution in the hope that what is now an academic question may, in due course, become a burning issue [ibid.].

As noted earlier, Fisher saw “the problem of stability and dependability in the purchasing power of money” as “the most serious problem” (p. 321).  Variations in the price level can occur due to (1) “transitional periods constituting credit cycles” and (2)  “secular variations” due to “incidents of industrial changes.”  Both those disturbances can be mitigated, according to Fisher, by increasing “knowledge as to prospective price levels.”  If the public anticipates changes in the price level, then those changes will be reflected in nominal interest rates: “a foreknown change in price levels might be so taken into account in the rate of interest as to neutralize its evils” (ibid.).

Fisher summed up by writing:

While we cannot expect our knowledge of the future ever to become so perfect as to reach this ideal, viz. compensations for every price fluctuation by corresponding adjustments in the rate of interest—nevertheless every increase in our knowledge carries us a little nearer that remote ideal [ibid.].

Giving people better information, however, may not change their behavior if they have a vested interest in maintaining the status quo. Thus, Fisher observes:

The prejudice of business men against the variability of, and especially against a rise of the rate of interest, probably stands in the way of prompt adjustment in that rate and helps to aggravate the far more harmful variability in the level of prices and its reciprocal, the purchasing power of money [p. 322].

Nevertheless, Fisher thought that “while there is much to be hoped for from a greater foreknowledge of price [level] changes, a lessening of the price changes themselves would be still more desirable” (p. 323).

The Search for Stable Money

The quantity theory of money attributes price-level changes mainly to “changes in money and trade.”  As Fisher remarks,

There has been for centuries, and promises to be for centuries to come, a race between money and trade. On the results of that race depends to some extent the fate of every business man. The commercial world has become more and more committed to the gold standard through a series of historical events having little if any connection with the fitness of that or any other metal to serve as a stable standard. So far as the question of monetary stability is concerned, it is not too much to say that we have hit upon the gold standard by accident [pp. 323–24].

While there is little support for a gold standard at present, that monetary regime was taken as a given in 1911, and there seemed to be little chance of replacing it:

Now that we have adopted a gold standard, it is almost as difficult to substitute another as it would be to establish the Russian railway gauge or the duodecimal system of numeration. And the fact that the question of a monetary standard is today so much an international question makes it all the more difficult [p. 324].

What is of interest here is that Fisher did “not attempt to offer any immediate solution of this great world problem of finding a substitute for gold.”   Rather, he reasoned that “before a substitute for gold can be found, there must be much investigation and education of the public” (ibid.).  His strategy was

to call attention to the necessity for this investigation and education, to examine such solutions as have been already proposed and, very tentatively, to make a suggestion which may possibly be acted upon at some future time, when, through the diffusion of knowledge, better statistics, and better government, the time shall become ripe [ibid.].

Fisher reviews a number of proposals for fundamental monetary reform in chapter 13, including “honest government regulation of the money supply” aimed at price-level stability. A simple scheme would be for the monetary authority to issue  “inconvertible paper money in quantities so proportioned to increase of business that the total amount of currency in circulation, multiplied by its rapidity, would have the same relation to the total business at one time as at any other time.” He argued that, “if the confidence of citizens were preserved, and this relation were kept, the problem [of achieving a stable price level] would need no further solution” (p. 329, emphasis added).
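
Put in terms of the equation of exchange above, the scheme amounts to a simple quantity rule (my paraphrase, not Fisher’s notation):

\[
\frac{M_t V}{T_t} = \frac{M_0 V}{T_0} \quad\Longrightarrow\quad P_t = P_0,
\]

that is, the issue of paper money grows in proportion to the volume of trade so that, with velocity stable, the price level never moves.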

However, Fisher rejects this proposed monetary rule, because “sad experience teaches that irredeemable paper money, while theoretically capable of steadying prices, is apt in practice to be so manipulated as to produce instability” (ibid.). His preferred reform was to introduce a “gold-exchange standard combined with a tabular standard.”[6] He recognized that a tabular standard alone, which could be introduced by private contractual parties without any government action, would not suffice to bring about monetary and price-level stability (see pp. 334–37).  But it should be pointed out that his mixed system also has problems: it is not a real gold standard, it is open to speculative attacks, and it depends on an unsustainable degree of central bank cooperation.

We now turn to lessons for the Fed and monetary reform from the insights of Fisher.

Lessons for the Fed and Monetary Reform   

Irving Fisher’s examination of monetary theory and history led him to refine the quantity theory of money and to offer various proposals for monetary reform.  He took a comparative institutions approach to reforming the monetary regime.  He did not expect immediate results, but emphasized the importance of laying the groundwork for future reform so that when the time was ripe he could offer well-developed alternatives to the existing system. He sought to improve the chances for price-level stability and lessen the chances for crises due to erratic money.

Fisher’s emphasis on the discoordination generated by monetary disequilibrium is still relevant today, but has been lost sight of in macroeconomic models devoid of money. His emphasis on a stable and predictable value of the dollar is useful as a guide to monetary policy, but ignores problems with a price-level target as opposed to targeting nominal spending. Instead of maintaining a constant inflation rate, a nominal GDP target would allow the rate of inflation to vary with changes in the growth rate of real output, declining in times of relatively rapid output growth, and rising in times of slower growth. As New Zealand economist Allan G. B. Fisher noted, “If prices are not allowed to fall in proportion to improvements in the efficiency of production, misleading indications will be given to producers as to the directions in which it is desirable to retard or accelerate the flow of capital; and the errors thus encouraged are likely to cause dislocation throughout the whole economic structure.”[7]
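
The arithmetic behind a nominal GDP target is straightforward. If nominal GDP grows at a targeted rate \(g_{NGDP}\) and real output grows at \(g_{Y}\), then to a first approximation

\[
\pi \approx g_{NGDP} - g_{Y}.
\]

With an illustrative 5 percent nominal GDP path, for example, 3 percent real growth implies roughly 2 percent inflation, while 1 percent real growth implies roughly 4 percent—exactly the pattern described above.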

Fisher’s attention to transition periods, and especially to the “maladjustments in the rate of interest” caused by an excess supply of money, is relevant for understanding financial crises. His analysis of financial booms and busts led to the idea that interest rates can be kept too low for too long and that “had the previous rate [of interest] been high enough, the borrowers never would have overinvested” (p. 67). This idea is evident in John Taylor’s and Anna Schwartz’s critiques of Fed policy prior to the 2008 financial crisis and the Great Recession.[8]

The 2008 financial crisis revealed the flaws in the current discretionary government fiat money system with a central bank that kept its policy rate too low for too long.  Although there were nonmonetary factors contributing to the 2008 crisis, especially misguided housing policy, the monetary policy mistakes were of critical importance.

John Taylor, in his reassessment of the 2008 financial crisis after 10 years, concluded:

There was a significant deviation in 2003–2005 from the more rules-based monetary policy strategy [the Taylor rule] that had worked well in the two prior decades. The resulting extra low policy interest rates were a factor leading to a search for yield, excessive risk taking, a boom and bust in the housing market, and eventually the financial crisis and recession… . These actions spread internationally as central banks tended to follow each other in setting their policy interest rate [Taylor 2018: 2].

To support his monetary theory of the 2008 crisis, Taylor used an econometric model to simulate what the housing market would have looked like if the Fed had followed the Taylor rule in setting its policy rate.  He found that “the [housing] boom and the bust disappeared”; and that, “if the Fed had not held rates too low, there would have been less search for yield, less risk-taking and fewer problems on the banks’ balance sheets” (pp. 3–4).[9]
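
For concreteness, here is a minimal sketch of the original Taylor (1993) rule around which such counterfactuals are built; the inputs below are illustrative, not figures taken from Taylor’s paper.

    def taylor_rule(inflation, output_gap, pi_star=2.0, r_star=2.0):
        # Taylor (1993): i = pi + 0.5*(pi - pi*) + 0.5*gap + r*,
        # with a 2 percent inflation target and a 2 percent equilibrium real rate.
        return inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap + r_star

    # Illustrative mid-2000s-style inputs: inflation near 2 percent and a small
    # positive output gap imply a prescribed funds rate around 4.5 percent,
    # well above the roughly 1 percent rate actually maintained in 2003-2004.
    print(taylor_rule(inflation=2.0, output_gap=1.0))  # 4.5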

Taylor’s “real concern” is with “preventing central banks from causing asset bubbles” by keeping rates too low for too long (p. 4). A rules-based monetary policy would help in that regard.  Moreover, “a rules-based monetary policy is an essential part of a well-functioning market economy” (p. 24).

Anna J. Schwartz also argued that “if monetary policy had been more restrictive, the asset price boom in housing could have been avoided.” She criticized Alan Greenspan for not seeing this fact (Schwartz 2009: 22–23). In Fisherian fashion, Schwartz (p. 19) stated:

The basic groundwork to the disruption of credit flows can be traced to the asset price bubble of the housing price boom. It has become a cliché to refer to an asset boom as a mania. The cliché, however, obscures why ordinary folk become avid buyers of whatever object has become the target of desire. An asset boom is propagated by an expansive monetary policy that lowers interest rates and induces borrowing beyond prudent bounds to acquire the asset.

Schwartz then laid out the sequence of monetary policy steps that helped fuel the housing boom:

The Fed was accommodative too long from 2001 on and was slow to tighten monetary policy, delaying tightening until June 2004 and then ending the monthly 25 basis point increase in August 2006. The rate cuts that began on August 10, 2007, and escalated in an unprecedented 75 basis point reduction on January 22, 2008, was announced at an unscheduled video conference meeting a week before a scheduled FOMC meeting. The rate increases in 2004 were too little and ended too soon [pp. 19–20].

The Fed’s unconventional monetary policies, characterized by near zero interest rates, large-scale asset purchases (i.e., “quantitative easing”), and forward guidance (keeping rates “lower for longer”), were designed to revive the economy after the panic of 2008.  There is no doubt that those policies boosted asset prices and had a wealth effect, but they also increased risk taking, incentivized leverage, and misallocated capital. Raising rates is more difficult than lowering them—as Fed Chairman Powell is experiencing from President Trump’s harsh criticism and the market’s reaction.  When rates do normalize, the wealth created by unconventional policies may well turn out to be a pseudo wealth effect rather than a real one; after all, “easy money” can’t permanently increase real economic growth.

Nevertheless, the Fed seems determined to keep unconventional monetary policy tools available in case of another recession, reverting once again to quantitative easing if the effective fed funds rate reaches the zero lower bound (i.e., a zero nominal rate).  In the meantime, there is the danger of drifting toward a higher inflation target to push nominal interest rates up and thus have more room to decrease the policy rate in case of a recession. Such interest-rate manipulation is part and parcel of our government fiat money regime.

The main lesson from Fisher’s work is that there needs to be a thorough study of the current discretionary regime and an examination of alternatives that would reduce regime uncertainty and mitigate monetary-induced business fluctuations.  Possible alternatives include a price level rule, nominal GDP targeting, Fisher’s compensated dollar plan, and Hayek’s free-market money proposal. The return to a commodity-based regime, in which there is no central bank and the supply of money is market determined, should also be part of the debate over the future of money, as should the use of cryptocurrencies.[10]

The Fed plans to host a conference later this year at the Chicago Fed to discuss its dual mandate and strategies to achieve full employment and price stability.  Hopefully, that discussion will include a close examination of the current operating procedure by which the Fed uses interest on excess reserves (IOER) and the overnight reverse repo rate (ONRRP) to set the range for its policy rate.  Paying IOER to banks above the opportunity cost of holding those reserves at the Fed plugs up the monetary transmission mechanism, increases the demand for reserves, and reduces the impact of changes in the monetary base on broader monetary aggregates—and thus on nominal GDP. Moving away from the “floor system” to a “corridor system” and reducing the size of the Fed’s balance sheet are necessary steps for normalizing policy.[11]

The Fed conference is a step in the right direction for increasing public debate over the role of the central bank, but it is insufficient.  Congress, in its constitutional duty of safeguarding the value of money, needs to take that responsibility seriously and establish the Centennial Monetary Commission that was proposed under the Financial CHOICE Act of 2017 (Title X, Sec. 1011) to examine the Fed’s performance since its creation in 1913, and to consider various reforms. In doing so, it should not neglect the importance of restoring constitutional money and understanding how alternative monetary regimes affect uncertainty.

Conclusion

In thinking about monetary alternatives, there is no better place to start than a review of Irving Fisher’s work, especially The Purchasing Power of Money.  His insights can guide all those interested in improving the current government fiat money regime and in avoiding the mistakes of the past.  The Fed, in particular, ought to listen to what Fisher had to say about sound money—that is, money of stable purchasing power.  There is no perfect monetary system, but one needs to understand what a “good system” would look like in order to move in the right direction.  A deep knowledge of monetary theory, monetary alternatives, and monetary history is essential in order to improve the present monetary regime.

Fisher (1911: 329) sought to avoid those reforms that “would be subject to the danger of unwise or dishonest political manipulation.” That is wise advice.  We cannot assume that public officials have perfect information or will act in “the public interest.”  That is why James Madison, the chief architect of the Constitution, wrote:

The only adequate guarantee for the uniform and stable value of a paper currency is its convertibility into specie—the least fluctuating and the only universal currency. I am sensible that a value equal to that of specie may be given to paper or any other medium, by making a limited amount necessary for necessary purposes; but what is to ensure the inflexible adherence of the Legislative Ensurers to their own principles and purposes? [Madison 1831, “Letter to Mr. Teackle,” Montpelier, March 15, emphasis added].

Fisher recognized that, even under the gold standard, the price level would vary in the short run; indeed, it had to in order to maintain stability over the long run. Because the price-specie-flow mechanism anchored the long-run price level, interest rates stayed low for long periods and governments could issue long-dated bonds (consols).  Fiscal rectitude accompanied monetary stability.

In the search for stable money, reform proposals that may seem farfetched today may become feasible in the future.  Those who lived under the classical gold standard would be shocked to learn of its demise and replacement with a central bank having a balance sheet of more than $4 trillion and the power to engage in large-scale asset purchases, including mortgage-backed securities.  It’s time for an audit of the Fed: not just its books, but its structure, conduct, and performance.  Revisiting the works of  great monetary thinkers like Fisher is not a bad place to start.

[1] All quotes are from the 1912 reprint of The Purchasing Power of Money (Macmillan). The 1922 edition can be found at https://www.econlib.org/library/YPDBooks/Fisher/fshPPM.html.

[2] See Warburton (1966: chaps. 1 and 4).  Also see Leland B. Yeager, The Fluttering Veil: Essays on Monetary Disequilibrium, Part 3 (Liberty Fund, 1997).

[3] See Fisher (1912: chap. 4).

[4] Under this so-called compensated dollar plan, “the amount of gold obtainable for a paper dollar would vary inversely with its purchasing power per ounce as compared with commodities, the total purchasing power of the dollar being always the same.”  In such a system, “the supply of money in circulation would regulate itself automatically” (Fisher 1911: 331). For a more thorough discussion of Fisher’s compensated dollar plan, see Don Patinkin, “Irving Fisher and His Compensated Dollar Plan,” Federal Reserve Bank of Richmond Economic Quarterly (79/3, Summer 1993):1–33. Also see Chapter 6, “The Quantity Theory Alternative,” in Thomas M. Humphrey and Richard H. Timberlake’s forthcoming book, Gold, The Real Bills Doctrine, and the Fed: Sources of Monetary Disorder, 1922–1938 (Cato Institute, 2019).

[5] A gold-exchange system would provide for a country not on the gold standard to exchange its currency at par with a country whose currency is linked to gold.  A tabular standard uses a price index to ensure that creditors are paid back in dollars of constant purchasing power.

[6] For a discussion of the operation of this standard, see Fisher (1911: 337–47).

[7] Allan G. B. Fisher, “Does an Increase in Volume of Production Call for a Corresponding Increase in Volume of Money?” American Economic Review 25/2 (June 1935): 197. Also see George Selgin’s Less than Zero: The Case for a Falling Price Level in a Growing Economy (Cato Institute, 2017).

[8] It is important to note, however, that the Taylor Rule is not the same as Fisher’s compensated dollar rule. Unlike Taylor’s rule, Fisher’s allows for no feedback from the state of output or employment. It is a price level or inflation rule pure and simple. It is also a general inflation rather than a core inflation rule. As such, it would have called for more tightening than Taylor’s rule, and even more than the Fed engaged in, during 2008. I am indebted to George Selgin for this point.

[9] In a study of 18 OECD countries from 1920 to 2011, Bordo and Landon-Lane (2013) found that “‘loose’ monetary policy—that is, having an interest rate below the target rate or having a growth rate of money above the target growth rate—does positively impact asset prices and this correspondence is heightened during periods when asset prices grew quickly and then subsequently suffered a significant correction.”

[10] For a discussion of alternative monetary rules, see Dorn (2018).

[11] For a detailed analysis of the pre- and post-crisis operating system, see Selgin (2018): Floored! How a Misguided Fed Experiment Deepened and Prolonged the Great Recession.

[Cross-posted from Alt-M.org]
