Collateralized shadow banking: still at risk of fire sales

A few basic points about shadow banking ten years after the crisis:

“What shadow banking is” isn’t very complicated if banking is defined as “borrowing short to lend long”

What makes banks unstable is that their liabilities are on demand (i.e. they borrow short) while their assets pay out only over the course of years (i.e. they lend long). A principal reason that we are worried about “shadow” banks is that they have the same instability as banks, but lack the protections in the form of a strict regulatory regime and a lender of last resort. When shadow banks have this instability it is because they borrow short to lend long.

This approach makes it easy to understand the world of shadow banking, because there are only a limited number of financial instruments that are used to borrow on a short-term basis. Thus, for the most part shadow banks have to finance themselves on the commercial paper market (unsecured financing) or on the repo market (secured financing) or, especially for investment banks, via derivatives collateral (e.g. collateral posted by prime brokerage clients). These are the major sources of wholesale short-term funding.

So typically when a financial product is run-prone (and therefore classified as a shadow bank), it’s because of the product’s relationship to the commercial paper market, to the repo market, and/or to the derivatives market.* The latter two, which comprise the collateralized segment of shadow banking, are the most complicated, because the run can come from many different directions: that is, lenders may stop lending (e.g. Lehman Bros), borrowers who post collateral may stop posting collateral (e.g. novation at Bear Stearns), and for derivatives contracts conditions may shift so that suddenly collateral posting requirements increase (e.g. AIG).

Collateralized shadow banking is governed by ISDA protocols and contracts, not the traditional law governing debt

While repos have been around for centuries, a “repo market” in which anyone can participate and where collateral other than government debt is posted is a relatively new phenomenon. Similarly derivatives contracts have been subject to margin requirements for more than a century, but in the past these contracts were exchange-traded and exchanges set the rules both for margin and for eligibility to trade on the exchange.

Thus, what made repo and derivatives financially innovative in the 1980s and 1990s was that suddenly there were unregulated over the counter (OTC) markets in them. What “unregulated” really meant, however, was that the big banks wrote the rules for this market themselves in the form of International Swaps and Derivatives Association (ISDA) protocols and contracts.

In the early days of repo and derivatives it was far from clear that they wouldn’t fall under the existing regulatory regime as securities (regulated by the SEC), or as commodities and/or futures (regulated by the CFTC). (The legal definitions of the SEC’s and the CFTC’s jurisdictions were deliberately made very broad in the implementing legislation, so an intuitive understanding of these terms will not coincide with their legal definitions.) Similarly, it was far from clear that the collateral posted in these OTC contracts would not be subject to the standard terms in the bankruptcy code governing collateralized debt. (Kettering, who describes repos in this era as “too big to fail” products, is great on this.)

Thus, one of the ISDA’s first projects was lobbying in the US for exceptions to the existing regulatory regime. Progress was incremental, but a long series of legislative amendments to the financial regulatory regime starting in 1982 and culminating in the bankruptcy reform act of 2005 effectively placed the whole system of repo and margin collateral outside the financial regulatory regime that had been set up in the 1930s and 1940s (for details see here, or ungated). These reforms also exempted these contracts from the bankruptcy code’s protections for debtors (see here or ungated).

Where the US led others followed. Gabor (2016) documents how Germany and Britain came to adopt the US model of collateralized lending, despite the central banks’ serious reservations about the system’s implications for financial stability. The world economy entered into 2008 with repo and derivatives markets effectively subject only to the private “regulation” of ISDA protocols and contracts.

Despite reforms, the instability at the heart of the collateralized shadow banking system has yet to be addressed

We saw in 2008 how the collateralized shadow banking system relies extremely heavily on the central bank for stability. (Federal Reserve programs to support the repo market included the TSLF and the PDCF.  Data released by the Fed indicates that at the peak of the crisis it accepted substantial amounts of very risky collateral.)

Indeed the International Capital Markets Association has put it quite bluntly that it considers the systemic risk associated with fire sales in repo and derivatives markets to be a problem that “the authorities” are expected to step in and address.

“The question is how to mitigate such systemic liquidity risk. We believe that systemic risks require systemic responses. In this case, the authorities can be expected to intervene as lenders of last resort to ensure the liquidity of the system as a whole. For their part, market users should be expected to remain creditworthy and to have liquidity buffers sufficient to sustain themselves until official intervention restores sufficient liquidity to obviate the need for fire sales.”

In short, the collateralized shadow banking system is constructed on the expectation of a “Fed put”. Instead of attempting to build a robust infrastructure of debt, shadow banking embraces the risk of fire sales and expects the governments that don’t make the shadow banking rules to bail it out.

The only sure-fire way to eliminate the risk of fire sales is to reduce the financial system’s reliance on repo- and margin-type contracts that allow a decline in the value of collateral to be a trigger for demanding additional funds. Based on financial market history this would almost certainly require an increase in the use of unsecured interbank debt markets. However, not much progress has been made on this front, especially since the EU’s proposed Financial Transactions Tax stalled in 2015.

On the other hand, significant reforms have been made since 2008 (please let me know if I’ve left out anything important):

  • Collateral has shifted mostly to sovereign debt. This helps stabilize the market, but perhaps only temporarily as a broad range of collateral is still officially acceptable (so deterioration of the quality of collateral can creep in).
  • Approximately 50% of derivatives now are held with central counterparties. (The estimate is based on a 2015 BIS report.) This reduces the risk that the failure of a small market participant sets off a chain of failures that results in a fire sale. There is some concern however that fire sale risk has been transformed into the risk of a failure of a central counterparty.
  • Derivatives are now officially regulated by either the CFTC or the SEC, and there has been an effort to harmonize OTC margining requirements internationally.
  • Under pressure from regulators, a voluntary stay protocol has been developed by the ISDA that is designed to work with the regulators’ special resolution regimes and to limit the right to terminate a contract due to the default of a related entity. In the US systemically important banks are required to include this protocol in their OTC derivatives contracts.
  • Bank liquidity regulations have been adopted that limit the degree to which regulated banks are exposed to significant risk in these markets.

Notice that these new regulations embrace the basic framework of collateralized shadow banking: much of the focus is on making sure that enough collateral is being used. Special rules are designed to protect the largest banks and the banking system more generally. But aside from protecting the banks, it’s not clear that significant measures have been taken to eliminate the risk of fire sales that originate outside the banking system. Assuming that these regulations are effective at protecting the banks, this raises the question: Who bears the fire sale risk in this new environment?

Thanks to @kiffmeister for requesting that I write up this blogpost.

* While one can usually figure this out after the run has occurred, current regulation does not necessarily make the relevant information available before a run has occurred. Mutual funds are a case in point: the vast majority of them have so little exposure to repo and derivatives markets that it can be ignored, but the few that take on significant risk may have disclosures that are hard to distinguish ex ante from those of the funds that don’t (e.g. Oppenheimer Core Bond Fund in 2008).

A regression discontinuity test error

This is post 3 in my HAMP and principal reduction series. For the introductory post see here.

The series is motivated by Peter Ganong and Pascal Noel’s argument that mortgage modifications that include principal reduction have no significant effect on either default or consumption for underwater borrowers. In post 1 I explained how the framing of their paper focuses entirely on the short-run, as if the long run doesn’t matter – and characterized this as the ideology of financialization. In post 2 I explained why financialization is a problem.

In this post I am going to discuss a very technical problem with Ganong and Noel’s regression discontinuity test of the effect of principal reduction on default. The idea behind a regression discontinuity test is to exploit a variable that is used to classify people into two categories: near the boundary where the classification takes place, there is no significant difference between the characteristics of the people assigned to the two groups. The test looks specifically at those who lie near the classification boundary and compares how the two groups’ outcomes differ. In this situation, the differences can be interpreted as having been caused by the classification.

Borrowers offered HAMP modifications were offered either standard HAMP or HAMP PRA, which is HAMP with principal reduction. In principle those who received HAMP modifications had a net present value (NPV) of the HAMP modification in excess of the NPV of the HAMP PRA modification, and those who received a HAMP PRA modification had an NPV of HAMP PRA greater than the NPV of HAMP. The relevant variable for classifying modifications is therefore ΔNPV (which is economists’ notation for the difference between the two net present values). Note that in practice, the classification was not strict and there was a bias against principal reduction (see Figure 2a). This situation is addressed with a “fuzzy” regression discontinuity test.

The authors seek to measure how principal reduction affects default. They do this by first estimating the difference in the default rates for the two groups as they converge to the cutoff point ΔNPV = 0, then estimating the difference in the rate of assignment to HAMP PRA for the two groups as they converge to the cutoff point ΔNPV = 0, and finally taking the ratio of the two (p. 12). The authors find that the difference in default rates is insignificant — and this is a key result that is actually used later in the paper (footnote 30) to assume that the effect of principal reduction can be discounted (apparently driving the results on p. 24).
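In the standard notation for a fuzzy regression discontinuity design (a sketch of the usual estimand, not necessarily the authors’ exact specification), with default indicator Y, treatment indicator T (assignment to HAMP PRA), and running variable ΔNPV, the measured effect is the ratio of two jumps at the cutoff:

```latex
\tau_{\mathrm{FRD}} =
\frac{\lim_{\Delta NPV \downarrow 0} E[\,Y \mid \Delta NPV\,] \;-\; \lim_{\Delta NPV \uparrow 0} E[\,Y \mid \Delta NPV\,]}
     {\lim_{\Delta NPV \downarrow 0} E[\,T \mid \Delta NPV\,] \;-\; \lim_{\Delta NPV \uparrow 0} E[\,T \mid \Delta NPV\,]}
```

The objection developed below is that the numerator is evaluated at exactly the point where, by construction of HAMP PRA, the amount of principal reduction is vanishing.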

My objection to this measure is that due to the structure of HAMP PRA, most of the time when ΔNPV is equal to or close to zero, that is because the principal reduction in HAMP PRA is so small that there is virtually no difference between HAMP and HAMP PRA. That is, as the ΔNPV converges to zero it is also converging to the case where there is no difference between the two programs and to the case where principal reduction is zero.

To see this consider the structure of HAMP PRA. If the loan to value (LTV) of the mortgage being modified is less than or equal to 115, then HAMP PRA does not apply and only HAMP is offered. If LTV > 115, then the principal reduction alternative must be considered. Under no circumstances will HAMP PRA reduce the LTV below 115. After the principal reduction amount has been determined for a HAMP PRA mod, the modification terms are set by putting the reduced principal loan through the standard HAMP waterfall. As a result of this process, when the LTV is near 115, a HAMP PRA is evaluated, but principal reduction will be very small and the loan will be virtually indistinguishable from a HAMP loan. In this case, HAMP and HAMP PRA have the same NPV (especially as the data was apparently reported only to one decimal point, see App. A Figure 5), and ΔNPV = 0.
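A stylized numerical sketch of this point (assuming, purely for illustration, that the PRA principal reduction simply brings the balance down to a 115 LTV floor; the actual program ran the reduced loan through the HAMP waterfall and an NPV model):

```python
# Stylized illustration of HAMP PRA's 115 LTV floor (hypothetical simplification).

def pra_principal_reduction(balance, home_value, ltv_floor=1.15):
    """Largest principal reduction consistent with never taking LTV below 115."""
    if balance <= ltv_floor * home_value:    # LTV <= 115: PRA is not considered
        return 0.0
    return balance - ltv_floor * home_value  # reduce the balance only down to LTV = 115

home_value = 100_000
for ltv in (110, 115, 116, 120, 140, 160):
    balance = home_value * ltv / 100
    reduction = pra_principal_reduction(balance, home_value)
    print(f"LTV {ltv}: maximum principal reduction = {reduction:,.0f}")

# As LTV falls toward 115, the principal reduction -- and with it the difference
# between HAMP and HAMP PRA, and hence their NPV difference -- shrinks to zero.
```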

While it may be the case that for a HAMP PRA modification with significant principal reduction the NPV happens to be the same as the NPV for HAMP, this will almost certainly be a rare occurrence. On the other hand, it will be very common that when the LTV is near 115, the ΔNPV = 0, which is just a reflection of the fact that the two modifications are virtually the same when LTV is near 115. Thus, the structure of the program means that there will be many results with ΔNPV = 0, and these loans will generally have LTV near 115 and very little principal modification. In short, as you converge to ΔNPV = 0 from the HAMP PRA side of the classification, you converge to a HAMP modification. Under these circumstances it would be extremely surprising to see a jump in default rates at ΔNPV = 0.

In short, there is no way to interpret the results of the test conducted by the authors as a test of the effect of principal reduction. Perhaps it should be characterized as a test of whether classification into HAMP PRA without principal reduction affects the default rate.

Note that the authors’ charts support this. In Appendix A, Figure 5(a) we see that almost 40% of the authors’ data for this test has ΔNPV = 0. On page 12 the authors indicate that they were told this was probably bad data, because it indicates that the servicer was lazy and only one NPV test was run. Thus this 40% of their data was thrown out as “bad.” Evidence that this 40% was heavily concentrated around LTV = 115 is given by Appendix A, Figure 4(d):

[Figure: Ganong and Noel, Appendix A, Figure 4(d)]

Here we see that as the LTV drops toward 120, ΔNPV converges to zero from both sides. Presumably the explanation for why it converges to 120 and not to 115 is because almost 40% of the data was thrown out. See also Appendix A Figure 6(d), which despite the exclusion of 40% of the data shows a steep decline in principal reduction as ΔNPV converges to 0 from the HAMP PRA side.

I think this is mostly a lesson that details matter and economics is hard. It is also important, however, to set the record straight: running a regression discontinuity test on HAMP data cannot tell us about the relationship between mortgage principal reductions and default.

What’s the problem with financialization?

This is post 2 in my HAMP and principal reduction series. For the introductory post see here.

The series is motivated by Peter Ganong and Pascal Noel’s argument that mortgage modifications that include principal reduction have no significant effect on either default or consumption for underwater borrowers. In post 1 I explained how the framing of their paper focuses entirely on the short-run, as if the long run doesn’t matter – and even uses language that indicates that people who take their long-run financial condition into account are behaving improperly. I call this exclusive focus on the short-run the ideology of financialization. I note at the end of post 1 that this ideology appears to have influenced both Geithner’s views and the structure of HAMP.

So this raises the question: What’s the problem with the ideology of financialization?

The short answer is that it appears to be designed to trap as many people into a state of debt peonage as possible. Debt peonage, by preventing people who are trapped in debt from realizing their full potential, is harmful to economic performance more generally.

Here’s the long answer.

By focusing attention on short-term payments and how sustainable they are today, while at the same time heaping heavy debt obligations into the future, modern finance has had devastating effects at both the individual and the aggregate levels. Heavy long-term debt burdens are guaranteed to be a problem for a subset of individual borrowers, such as those who are unexpectedly disabled or who see their income decline over time for other reasons. Mortgages with payments that balloon at some date in the future (such as those studied in Ganong and Noel’s paper) are by definition a gamble on future financial circumstances. This makes them entirely appropriate products for the small subset of borrowers who have the financial resources to deal with the worst case scenario, but the financial equivalent of Russian roulette for the majority of borrowers who don’t have financial backup in the worst case scenario. (Remember the probabilities are in your favor in Russian roulette, too.)

Gary Gorton once described the subprime mortgage model as one where the borrower is forced to refinance after a few years and this gives the bank the option every few years of whether or not to foreclose on the home. Because the mortgage borrower is in the position of having sold an option, the borrower’s position is closer to that of a renter than that of a homeowner. Mortgages that are structured to have payment increases a few years into the loan – which is the case for virtually all of the modifications offered to borrowers during the crisis – similarly tend to put the borrower into a situation more like that of a renter than a homeowner.

The ideology of financialization thus perverts the whole concept of debt. A debt contract is not a zero-sum transaction. Debt contracts exist because they are mutually beneficial and they should be designed to give benefits to both lenders and borrowers. Loans like subprime mortgages are literally designed to set the borrower up so the borrower will be forced into a renegotiation where the borrower can be held to his or her reservation value. That is, they are designed to shift the bargaining power in contracting in favor of the lender. HAMP modifications for underwater borrowers set up a similar situation.

Ganong and Noel treat this distorted bargaining situation as if it is normal in section 6 of their paper, where they purport to characterize “efficient modification design.” The first step in their analysis is to hold the borrowers who need modifications to their reservation values (p. 27).[1] Having done this, they then describe an “efficient frontier” that minimizes costs to lenders and taxpayers. A few decades ago when I studied Pareto efficiency, the characterization of the efficient frontier required shifting the planner’s weights on all members of the economy. What the authors have in fact presented is the constrained efficient frontier where the borrowers are held to their reservation values. Standard economic analysis indicates that starting from any point on this constrained efficient frontier, direct transfers from the lenders to the borrowers up until the point that the lenders are held to their reservation value should also be considered part of the efficient frontier.

In short, Ganong and Noel’s analysis is best viewed as a description of how the financial industry views and treats underwater borrowers, not as a description of policies that are objectively “efficient.” Indeed, when they “rank modification steps by their cost-effectiveness” they come very close to reproducing the HAMP waterfall (p. 31): the only difference is that maturity extension takes place before a temporary interest rate reduction. Perhaps the authors are providing valuable insight into how the HAMP waterfall was developed.

The unbalanced bargaining situation over contract terms that is presented in this paper should be viewed as a problem for the economy as a whole. As everybody realized post-crisis, the macroeconomics of debt has not been fully explored by the economics profession, and the profession is still in the early stages of addressing this lacuna. Thus, it is not surprising that this paper touches only very briefly on the macroeconomics of mortgage modification.

In my view the ideology of financialization with its short-term focus has contributed significantly to the growth of a heavily indebted economy. This burden of debt tends to reduce the bargaining power of the debtors and to interfere with their ability to realize their full potential in the economy. Arguably this heavily indebted economy is losing the capacity to grow because it is in a permanent balance sheet recession. At the same time, the ideology underlying financialization appears to be effectively a gamble that it’s okay to shift the debt off into the future, because we will grow out of it so it will not weigh heavily on the future. The risk is that, by taking it as given that g > r over the long run, this ideology may well be creating a situation of permanent balance sheet recession where g is necessarily less than r, even given optimal monetary policy.

[1] The authors justify this because they have “shown” that principal reductions for underwater borrowers do not reduce defaults or increase consumption. Of course, they have shown no such thing because they have only evaluated 5-10% of the life of the mortgage – and even that analysis is flawed.

The Ideology of Financialization

This is post 1 in my HAMP and principal reduction series. For the introductory post see here.

The analysis in Peter Ganong and Pascal Noel’s Liquidity vs. wealth in household debt obligations: Evidence from housing policy in the Great Recession is an object lesson in the ideological underpinnings of “financialization”. So this first post in my HAMP and principal reduction series dissects the general approach taken by this paper. Note that I have no reason to believe that these authors are intentionally promoting financialization. The fact that the framing may be unintentionally ideological makes it all the more important to expose the ideology latent in the paper.

The paper studies government and private mortgage modification programs and in particular seeks to differentiate the effects of principal reductions from those of payment reductions. The paper concludes “we find that principal reduction that increases housing wealth without affecting liquidity has no significant impact on default or consumption for underwater borrowers [and that] maturity extension, which immediately reduces payments but leaves long-term obligations approximately unchanged, does significantly reduce default rates” (p. 1). The path that the authors follow to arrive at these broad conclusions is truly remarkable.

The second paragraph of this paper frames the analysis of the relative effects of modifying mortgage debt by either reducing payments or forgiving mortgage principal. This first post will discuss only the first three sentences of this paragraph and what they imply. They read:

“The normative policy debate hinges on fundamental economic questions about the relative effect of short- vs long-term debt obligations. For default, the underlying question is whether it is primarily driven by a lack of cash to make payments in the short-term or whether it is a response to the total burden of long-term debt obligations, sometimes known as ‘strategic default.’ For consumption, the underlying question is whether underwater borrowers have a high marginal propensity to consume (MPC) out of either changes in total housing wealth or changes in immediate cash-flow.”

Each of the sentences in the paragraph above is remarkable in its own way. Let’s take them one at a time.

First sentence

“The normative policy debate hinges on fundamental economic questions about the relative effect of short- vs long-term debt obligations.”

This is a paper about mortgage debt – that is, long term debt – and how it is restructured. This paper is, thus, not about “the relative effect of short- vs long-term debt obligations,” it is about how choices can be made regarding how long-term debt obligations are structured. This paper has nothing whatsoever to do with short-term debt obligations, which are, by definition, paid off within a year and do not figure in the paper’s analysis at any point.

On the other hand, the authors’ analysis is short-term. It evaluates data only on the first two to three years (on average) after a mortgage is modified. The whole discussion takes it as given that it is appropriate to evaluate a long-term loan over a horizon that covers only 5 to 10% of its life, and that we can draw firm conclusions about the efficiency of a mortgage modification by only evaluating the first few years of the mortgage’s existence. Remember, the authors were willing to state that “principal reduction … has no significant impact on default or consumption for underwater borrowers” even though they have no data on 90-95% of the performance of the mortgages they study (that is, on the latter 30-odd years of the mortgages’ existence).

Note that the problem here is not the nature of the data in the paper. It is natural that topical studies of mortgage performance will typically only cover a portion of those mortgages’ lives. But it should be equally natural that every statement in the study acknowledges the inadequacy of the data. For example, the authors could have written: “principal reduction … has no significant impact on immediate horizon default or immediate horizon consumption for underwater borrowers.” Instead, the authors choose to discuss short-term performance as if it is all that matters.

This focus on the short-term, as if it is all that matters, is I would argue the fundamental characteristic of “financialization.” It is also the classic financial conman’s bait and switch. The key when selling a shoddy financial product is to focus on how good it is in the short-term and to fail to discuss the long-term risks. When questions arise regarding the long-term risks, these risks are minimized and are not presented accurately. This bait and switch was practiced on municipal borrowers who issued adjustable rate securities and purchased interest rate swaps, on adjustable rate mortgage borrowers who were advised that they would be able to refinance before the mortgage rate adjusted up, and even on the Trustees of Harvard University, who apparently entered into interest rate swaps without bothering to understand the long-term obligations associated with them.

The authors embrace this deceptive framework of financialization whole-heartedly throughout the paper by discussing the short-term performance of long-term loans as if it is all that matters. While it is true that there are a few nods in footnotes and deep within the paper to what is being left out, they are wholly inadequate to address the fact that the basic framing of the paper is extremely misleading.

Second sentence

“For default, the underlying question is whether it is primarily driven by a lack of cash to make payments in the short-term or whether it is a response to the total burden of long-term debt obligations, sometimes known as ‘strategic default.’”

The second sentence is based on the classic distinction between a temporary liquidity-driven stoppage of payments and a stoppage due to negative net worth – i.e. insolvency. (Note that these are the two long-standing reasons for filing bankruptcy.) But the framing in this sentence is remarkably ideological.

The claim that those defaults that are “a response to the total burden of long-term debt obligations” are “sometimes known as ‘strategic default’” is ideologically loaded language. Because the term “strategic default” has a pejorative connotation, this sentence has the effect of putting a moralistic framing on the problem of default: liquidity-constrained defaults are implicitly unavoidable and therefore non-strategic and proper, whereas all non-liquidity-constrained defaults are strategic and implicitly improper. This framing ignores the fact that a default may be due to balance sheet insolvency, which will necessarily be “a response to the total burden of long-term debt obligations” and yet cannot be classified a “strategic” default. What is commonly referred to as strategic default is the case where the debtor is neither liquidity constrained, nor insolvent, but considers only the fact that for this particular asset the payments are effectively paying rent and do not build any principal in the property.

By linguistically excising the possibility that the weight of long-term debt obligations leads to an insolvency-driven default, the authors are already demonstrating their bias against principal reduction and once again exhibiting the ideology of financialization: all that matters is the short-term, therefore balance sheet insolvency driven by the weight of long-term debt does not need to be taken into account.

In short, the implicit claim is that even if the borrower is insolvent and not only has a right to the “fresh start” offered by bankruptcy, but likely needs it to get onto his or her feet again, this would be “strategic” and improper. Overall, the moralistic framing of the paper’s approach to debt is not consistent with either the long-standing U.S. legal framework governing debt which acknowledges the propriety of defaults due to insolvency, or with social norms regarding debt where business-logic default (which is a more neutral term than strategic default) is common.

Third sentence

“For consumption, the underlying question is whether underwater borrowers have a high marginal propensity to consume (MPC) out of either changes in total housing wealth or changes in immediate cash-flow.”

The underlying assumption in this sentence is that mortgage policy had as one of its goals immediate economic stimulus, and that one of the choices for generating this economic stimulus was to use mortgage modifications to encourage troubled borrowers to increase current consumption at the expense of a future debt burden. In short, this is the classic financialization approach: get the borrower to focus only on current needs and discourage focus on the costs of long-term debt. Most remarkably it appears that Tim Geithner actually did view mortgage policy as having as one of its goals immediate economic stimulus and that this basic logic was his justification for preferring payment reduction to principal reduction.[1]

Just think about this for a moment: Policy makers in the midst of a crisis were so blinded by the ideology of financialization that they used the government mortgage modification program as a form of short-term demand stimulus at the cost of inducing troubled borrowers (i.e. the struggling middle class) to further mortgage their futures. And this paper is a full-throated defense of these decisions.

The ideology of financialization has become powerful indeed.

Financialization Post 2 will answer the question: What’s the problem with the ideology of financialization?

[1] See, e.g., the quote from Geithner’s book in Mian & Sufi, Washington Post, 2014

HAMP and principal reduction: an overview

I spent the summer of 2011 helping mortgage borrowers (i) correct bank documentation regarding their loans and (ii) extract permanent mortgage modifications from banks. One of the things I did was check the bank modifications for compliance with the government’s mortgage modification program, HAMP, and with the HAMP waterfall including the HAMP Principal Reduction Alternative. At that time I put together HAMP spreadsheets, and typically when I read articles about HAMP I go back to my spreadsheets to refresh my memory of the details of HAMP.

So when I learned about a paper that finds that HAMP “placed an inefficient emphasis on reducing borrowers’ total mortgage debt” and should have focused more on reducing borrowers’ payments in the short run — which goes contrary to everything I know about HAMP — I decided to read the paper.

Now I am an economist, so even though my focus is not quantitative data analysis, when I bother to put the time into reading an econometric study it’s not difficult to see problems with the research design. On the other hand, I usually avoid being too critical, on the principle that econometrics is a little outside the area of my expertise. In this case, however, I know that very few people have enough knowledge of HAMP to actually evaluate the paper — and that many of those who do are interested parties.

The paper is Peter Ganong and Pascal Noel’s Liquidity vs. wealth in household debt obligations: Evidence from housing policy in the Great Recession. It has been published as a working paper by the Washington Center for Equitable Growth and NBER, both of which provided funding for the research. Both the Wall Street Journal and Forbes have published articles on this paper. So as one of the few people who is capable of offering a robust critique of the paper, I am going to do a series of posts explaining why the main conclusion of this paper is fatally flawed and why the paper reads to me as financial industry propaganda.

Note that I am not making any claims about the authors’ motivation in writing this paper. I see some evidence in the paper to support the view that the authors were manipulated by some of the people providing them with the data and explaining it to them. Overall, I think this paper should however serve as a cautionary tale for all those who are dependent on interested parties for their data.

Here is the overview of the blogposts I will post discussing this paper:

HAMP and principal reduction post 1: The ideology of financialization

HAMP and principal reduction post 2: What’s the problem with financialization?

HAMP and principal reduction post 3: A regression discontinuity error
The principal result in the paper is invalid, because the authors did not have a good understanding of HAMP and of HAMP PRA, and therefore did not understand how the variable they use to distinguish treatment from control groups converges to their threshold precisely when principal reduction converges to zero. The structure of this variable invalidates the regression discontinuity test that the authors perform.

How to evaluate “central banking for all” proposals

The first question to ask regarding proposals to expand the role of the central bank in the monetary system is the payroll question: How is the payroll of a new small business that grows, for example, greenhouse crops with an 8-week life cycle handled in this environment? For this example let’s assume the owner had enough capital to get all the infrastructure of the business set up, but not enough to make a payroll of say $10,000 to keep the greenhouse in operation before any product can be sold.

Currently the opening of a small business account by a proprietor with a solid credit record will typically generate a solicitation to open an overdraft related to the account. Thus, it will in many cases be an easy matter for the small business to get the $10,000 loan to go into operation. Assuming the business is a success and produces regular revenues, it is also likely to be easy to get bank loans to fund slow expansion. (Note the business owner will most likely have to take personal liability for the loans.)

Thus, the first thing to ask about any of these policy proposals is: when a bank makes this sort of a loan how can it be funded?

In the most extreme proposals, the bank has to have raised funds in the form of equity or long-term debt before it can lend at all. This is such a dramatic change to our system that it’s hard to believe that the same level of credit that is available now to small business will be available in the new system.

Several proposals (including Ricks et al. – full disclosure: I have not read the paper) get around this problem by allowing banks to fund their lending by borrowing from the central bank. This immediately raises two questions:

(i) How is eligibility to borrow at the central bank determined? If it’s the same set of banks that are eligible to earn interest on reserves now, isn’t this just a transfer of the benefits of banking to a different locus? As long as the policy is not one of “central bank loans for all,” the proposal is clearly still one of two-tier access to the central bank.

(ii) What are the criteria for lending by the central bank? Notice that this necessarily involves much more “hands on” lending than we have in the current system, precisely because the central bank funds these loans itself. In the current system (or more precisely in the system pre-2008 when reserves were scarce), the central bank provides an appropriate (and adjustable) supply of reserves and allows the banks to lend to each other on the Federal Funds market. Thus, in this system the central bank outsources the actual lending decisions to the private sector, allowing market forces to play a role in lending decisions.

Overall, proposals in which the central bank will be lending directly to banks to fund their loans create a situation where monetary policy is being implemented by what used to be called “qualitative policy.” After all if the central bank simply offers unlimited, unsecured loans at a given interest rate to eligible borrowers, such a policy seems certain to be abused by somebody. So the central bank is either going to have to define eligible collateral, eligible (and demonstrable) uses of the funds, or some other explicit criteria for what type of loans are funded. This is a much more interventionist central bank policy than we are used to, and it is far from clear that central banks have the skills to do this well. (Indeed, Gabor & Ban (2015) argue that the ECB post-crisis set up a catastrophically bad collateral framework.)

Now if I understand the Ricks et al. proposal properly (which again I have not read), their solution to this criticism is to say, well, we don’t need to go immediately to full-bore central banking for all, we can simply offer central bank accounts as a public option and let the market decide.

This is what I think will happen in the hybrid system. Just as the growth of MMMFs in the 80s led to growth of financial commercial paper and repos to finance bank lending, so this public option will force the central bank to actively operate its lending window to finance bank loans. Now we have two competing systems: one is the old system of retail and wholesale banking funding; the other is the central bank’s lending policy.

The question then is: Do federal regulators have the skillset to get the rules right, so that destabilizing forces don’t build up in this system? I would analogize to the last time we set up a system of alternative funding for banks (the MMMF system) and expect regulators to set up something that is temporarily stable and capable of operating for a decade or two, before a fundamental regulatory flaw is exposed and it all comes apart in a terrifying crash. The last time we were lucky, as regulatory ingenuity and legal duct tape held the system together. In this new scenario, the central bank, instead of sitting somewhat above the fray, will sit at the dead center of the crisis and may have a harder time garnering support to save the system.

And then, of course, all “let the market decide” arguments are a form of the “competition is good” fallacy. In my view, before claiming that “competition is good,” one must make a prior demonstration that the regulatory structure is such that competition will not lead to a race to the bottom. Given our current circumstances where, for example, the regulator created by the Dodd-Frank Act to deal with fraud and near-fraud is currently being hamstrung, there is abundant reason to believe that the regulatory structure of the financial system is inadequate. Thus, appeals to a public option as a form of healthy competition in the financial system as it is currently regulated are not convincing.

Brokers, dealers and the regulation of markets: Applying finreg to the giant tech platforms

Frank Pasquale (h/t Steve Waldman) offers an interesting approach to dealing with the giant tech firms’ privileged access to data: he contrasts a Jeffersonian “just break ’em up” approach with a Hamiltonian “regulate them as natural monopolies” approach. Although Pasquale favors the Hamiltonian approach, he opens his essay by discussing Hayekian prices. Hayekian prices simultaneously aggregate distributed knowledge about the object sold and summarize it, reflecting the essential information that the individuals trading in the market need to know. While gigantic firms are an alternative way of aggregating data, there is little reason to believe that they could possibly produce the benefits of Hayekian prices, the whole point of which is to publicize for each good a specific and extremely important summary statistic, the competitive price.

Pasquale’s framing brings to mind an interesting parallel with the history of financial markets. Financial markets have for centuries been centralized in stock/bond and commodities exchanges, because it was widely understood that price discovery works best when everyone trades at a single location. The single location, by drawing almost all market activity, offers both “liquidity” and the best prices. The dealers on these markets have always been recognized as having a privileged position because of their superior access to information about what’s going on in the market.

One way to understand Google, Amazon, and Facebook is that they are acting as dealers in a broader economic marketplace. With their superior knowledge of supply and demand, they have an ability to extract gains that is perfectly analogous to that of dealers in financial markets.

Given this framing, it’s worth revisiting one of the most effective ways of regulating financial markets: a simple but strict application of a branch of common law, the law of agency, which governed the London Stock Exchange from the mid-1800s through the 1986 “Big Bang.” It was remarkably effective at both controlling conflicts of interest and producing stable prices, but post-World War II it was overshadowed and eclipsed by the conflict-of-interest-dominated U.S. markets. With the “Big Bang” British markets embraced the conflicted financial markets model — posing a regulatory challenge which was recognized at the time (see Christopher McMahon 1985), but was never really addressed.

The basic principles of traditional common law market regulation are as follows. When a consumer seeks to trade in a market, the consumer is presumed to be uninformed and to need the help of an agent. Thus, access to the market is through agents, called brokers. Because a broker is a consumer’s agent, the broker cannot trade directly with the consumer. Trading directly with the consumer would mean that the broker’s interests are directly adverse to those of the consumer, and this conflict of interest is viewed by the law as interfering with the broker’s ability to act as an agent. (Such conflicts can be waived by the consumer, but in early 20th-century British financial markets they generally were not waived.)

A broker’s job is to help the consumer find the best terms offered by a dealer. Because dealers buy and sell, they are prohibited from acting as the agents of the consumers — and in general prohibited from interacting with them directly at all. Brokers force dealers to offer their clients good deals by demanding two-sided quotes, revealing whether their client’s order is a buy or a sell only after learning both the bid and the ask. Brokers also typically get quotes from several different dealers to make sure that the prices on offer are competitive.
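A minimal sketch of the quote protocol just described (hypothetical dealer names and prices, purely illustrative):

```python
# The structured broker-dealer interaction: the broker collects two-sided quotes
# from several dealers and reveals the client's side (buy or sell) only afterwards.

dealers = {                     # hypothetical quotes: (bid, ask)
    "Dealer A": (99.50, 100.10),
    "Dealer B": (99.60, 100.20),
    "Dealer C": (99.40, 100.05),
}

def best_execution(side, quotes):
    """Choose the best price for the client after all two-sided quotes are in."""
    if side == "buy":           # the client buys at the lowest ask
        name, (_, ask) = min(quotes.items(), key=lambda kv: kv[1][1])
        return name, ask
    else:                       # the client sells at the highest bid
        name, (bid, _) = max(quotes.items(), key=lambda kv: kv[1][0])
        return name, bid

print(best_execution("buy", dealers))   # ('Dealer C', 100.05)
print(best_execution("sell", dealers))  # ('Dealer B', 99.6)

# Because each dealer must quote both sides before learning the order's direction,
# it cannot skew its quote against the client.
```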

Brokers and dealers are strictly prohibited from belonging to the same firm or otherwise working in concert. The validity of the price setting mechanism is based on the bright line drawn between the different functions of brokers and of dealers.

Note that this system was never used in the U.S., where the law of agency with respect to financial markets was interpreted very differently, and where financial markets were beset by conflicts of interest from their earliest origins. Thus, it was in the U.S. that the fixed fees paid to brokers were first criticized as anti-competitive and eventually eliminated. In Britain the elimination of fixed fees reduced the costs faced by large traders, but not those faced by small traders (Sissoko 2017). Because the change adversely affected the quality of the price setting mechanism, the actual costs to traders of eliminating the structured broker-dealer interaction were hidden. We now have markets beset by “flash-crashes,” “whales,” cancelled orders, 2-tier data services, etc. In short, our market structure, instead of being designed to control information asymmetry, is extremely permissive of the exploitation of information asymmetry.

So what lessons can we draw from the structured broker-dealer interaction model of regulating financial markets? Maybe we should think about regulating Google, Amazon, and Facebook so that they have to choose between either being the agents, in legal terms, of those whose data they collect, or being sellers of products (or agents of these sellers) and having no access to buyers’ data.

In short, access to customer data should be tied to agency obligations with respect to that data. Firms with access to such data can provide services to consumers that help them negotiate a good deal with the sellers of products that they are interested in, but their revenue should come solely from the fees that they charge to consumers on their purchases. They should not be able to either act as sellers themselves or to make any side deals with sellers.

This is the best way of protecting a Hayekian price formation process: by making sure that the information that causes prices to move is the flow of buy or sell orders that is generated by a dealer making two-sided markets and choosing a certain price point, and, concurrently, by allowing individuals to make their decisions in light of the prices they face. Such competitive pricing has the benefit of ensuring that prices are informative and useful for coordinating economic decision making.

When prices are not set by dealers who are forced to make two-sided markets and who are given no information about the nature of the trader, but instead prices are set by hyper-informed market participants, prices stop having the meaning attributed to them by standard economic models. In fact, given asymmetric information trade itself can easily degenerate away from the win-win ideal of economic models into a means of extracting value from the uninformed, as has been demonstrated time and again both in theory and in practice.

Pasquale’s claim that regulators need to permit “good” trade on asymmetric information (that which “actually helps solve real-world problems”) and prevent “bad” trade on asymmetric information (that which constitutes “the mere accumulation of bargaining power and leverage”) seems fantastic. How is any regulator to have the omniscience to draw these distinctions? Or does the “mere” in the latter case indicate the good case is to be presumed by default?

Overall, it’s hard to imagine a means of regulating informational behemoths like Google, Amazon and Facebook that favors Hayekian prices without also destroying entirely their current business models. Even if the Hamiltonian path of regulating the beasts is chosen, the economics of information would direct regulators to attach agency obligations to the collection of consumer data, and with those obligations to prevent the monetization of that data except by means of fees charged to the consumer for helping them find the best prices for their purchases.

When can banks create their own capital?

A commenter directed me to an excellent article by Richard Werner comparing three different approaches to banking. The first two are commonly found in the economics literature, and the third is the credit creation theory of banking. Werner’s article provides a very good analysis of the three approaches, and weighs in heavily in favor of the credit creation theory.

Werner points out that when regulators use the wrong model, they inadvertently allow banks to do things that they should not be allowed to do. More precisely, Werner finds that when regulators try to impose capital constraints on banks without understanding how banks function, they leave open the possibility that the banks find a way to create capital “out of thin air,” which clearly is not the regulator’s intent.

In this post I want to point out that Werner does not give the best example of how banks can sometimes create their own capital. I offer two more examples of how banks created their own capital in the years leading up to the crisis.

1. The SIVs that blew up in 2007

You may remember Hank Paulson running around Europe in the early fall of 2007 trying to drum up support for something called the Master Liquidity Enhancement Conduit (MLEC) or more simply the Super-SIV. He was trying to address the problem that structured vehicles called SIVs were blowing up left, right, and center at the time.

These vehicles were essentially ways for banks to create capital.  Here’s how:

According to a Bear Stearns report at the time, 43% of the assets in the SIVs were bank debt, and commentators at the time made it clear that the kind of bank debt in the SIVs was a special kind of debt that was acceptable as capital for the purposes of bank capital requirements because of the strong rights given to the issuer to forgo making interest payments on the debt.

The liability side of a SIV was composed of 4-6% equity and the rest senior liabilities: Medium Term Notes (MTNs) of a few years maturity and Commercial Paper (CP) that had to be refinanced every few months. Obviously SIVs had roll-over (or liquidity) risk, since their assets were much longer-dated than their liabilities. The rating agencies addressed this roll-over risk by requiring the SIVs to have access to a liquidity facility provided by a bank. More precisely, the reason a SIV shadow bank was allowed to exist was because there was a highly rated traditional bank that had a contractual commitment to provide funds to the SIV on a same-day basis in the event that the liquidity risk was realized. Furthermore, triggers in the structured vehicle’s paperwork required it to go into wind-down mode if, for example, the value of its assets fell below a certain threshold. All the SIVs breached their triggers in Fall 2007.
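A stylized SIV balance sheet along the lines just described (the 43% bank-debt share and the 4-6% equity share are the figures cited here; the CP/MTN split is a hypothetical illustration):

```python
# Stylized SIV balance sheet, expressed as shares of total assets/liabilities.

siv_assets = {
    "bank capital-qualifying debt": 0.43,        # Bear Stearns figure cited above
    "other long-dated structured assets": 0.57,
}

siv_liabilities = {
    "commercial paper (rolls every few months)": 0.40,  # hypothetical split
    "medium term notes (a few years)": 0.55,            # hypothetical split
    "equity / capital notes": 0.05,                     # 4-6% per the text
}

assert abs(sum(siv_assets.values()) - 1.0) < 1e-9
assert abs(sum(siv_liabilities.values()) - 1.0) < 1e-9

# The rollover problem: roughly 95% of the funding matures far sooner than the
# assets, which is why the structure depends on a bank's same-day liquidity facility.
```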

Those with an understanding of the credit creation theory of banking would recognize immediately that the “liquidity facility” provided by the traditional bank was a classic way for a bank to transform the SIV’s liabilities into monetary assets. That’s why money market funds and others seeking very liquid assets were willing to hold SIV CP and MTNs. In short, a basic understanding of an SIV’s asset and liability structure and of the banks’ relationship to it would have been a red flag to a regulator conversant with the credit creation theory that banks were literally creating their own capital.

2. The pre-2007 US Federal Home Loan Bank (FHLB) System

In the early naughties all of the FHLBs revised their capital plans. For someone with an understanding of the credit creation theory, these capital plans were clearly consistent with virtually unlimited finance of mortgages.

The FHLBs form a system with a single regulator and together offer a joint guarantee of all FHLB liabilities. The FHLB system is one of the “agencies” that can easily raise money at low cost on public debt markets. Each FHLB covers a specific region of the country and is cooperatively owned by its member banks. In 2007 every major bank in the US was a member of the FHLB system. As a result, FHLB debt was effectively guaranteed by the whole of the US banking system. Once again using the credit creation theory, we find that the bank guarantee converted FHLB liabilities into monetary assets.

The basic structure of the FHLBs’ support of the mortgage market was this (note that I will frequently use the past tense, because I haven’t looked up what the current capital structure is and believe that it has changed):

The FHLBs faced a 4% capital requirement on their loans. Using the Atlanta FHLB’s capital plan as an example, we find that whenever a member bank borrowed from the Atlanta FHL bank, it was required to increase its capital contribution by 4.5% of the loan. This guaranteed that the Atlanta FHL bank could never fall foul of its 4% capital requirement — and that there was a virtually unlimited supply of funds available to finance mortgages in the US.
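The arithmetic behind that guarantee, as a quick sketch (the 4% requirement and the 4.5% stock purchase are the figures above; the starting balance sheet is purely illustrative):

```python
# Each new advance adds more capital (4.5% of the loan) than it adds to the
# requirement (4% of the loan), so the capital ratio can never fall below 4%
# once it starts at or above that level.

def capital_ratio_after_advance(capital, assets, advance, stock_purchase_rate=0.045):
    new_capital = capital + stock_purchase_rate * advance  # member buys FHLB stock
    new_assets = assets + advance                          # the advance is a new asset
    return new_capital / new_assets

capital, assets = 4.0, 100.0    # start exactly at the 4% requirement
for advance in (10, 50, 100, 1000):
    ratio = capital_ratio_after_advance(capital, assets, advance)
    print(f"advance of {advance}: capital ratio = {ratio:.2%}")  # always at least 4%
```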

The only constraint exercised by FHLBs on this system was that they would not lend for the full value of any mortgage. Agency MBS faced a 5% haircut, private label MBS faced a minimum 10% haircut, and individual mortgages faced higher haircuts.

In short, the FHLB system was designed to make it possible for the FHLBs to be lenders of last resort to mortgage lenders. As long as a member bank’s assets were mortgages that qualified for FHL bank loans, credit was available for a bank that was in trouble.

The system was designed in the 1930s — by people who understood the credit creation theory of banking — to deliberately exclude commercial banks, which financed commercial activity and whose last-resort lender was the Federal Reserve. Only when FIRREA was passed in 1989, in the wake of the Savings and Loan crisis, were commercial banks permitted to become FHLB members.

From a credit creation theory perspective, this major shift in US bank regulation united the full credit creation capacity of the commercial banking system with the US mortgage lending system, making it possible for the FHLBs to create their own capital and use it to provide virtually unlimited funds to finance mortgage lending in the US.

 

In Defense of Banking II

Proposals for reform of the monetary system based either on public access to accounts with the central bank or on banking systems that are 100% backed by central bank reserves and government debt have proliferated since the financial crisis. A few have crossed my path in the past few days (e.g. here and here).

I have been making the point in a variety of posts on this blog that these proposals are based on the Monetarist misconception of the nature of money in the modern economy and are likely to prove disastrous. While much of my time lately is being spent working up a formal “greek” presentation of these ideas, explaining them in layman’s terms is equally important. Thanks to comments from an attentive reader, here is a more transparent explanation. Let me start by quoting from an earlier post that drew a schematic outline of Goodhart’s “private money” model:

The simplest model of money is a game with three people, each of whom produces something another seeks to consume: person 2 produces for person 1, person 3 produces for person 2, person 1 produces for person 3. Trade takes place over the course of three sequential pairwise matches: (1,2), (2,3), (3,1). Thus, in each match there is never a double coincidence of wants, but always a single coincidence of wants. We abstract from price by assuming that our three market participants can coordinate on an equilibrium price vector (cf. the Walrasian auctioneer). Thus, all these agents need is liquidity.

Let the liquidity be supplied by bank credit lines that are sufficiently large and are both drawn down by our participants on an “as needed” basis, and repaid at the earliest possible moment. Assume that these credit lines – like credit card balances that are promptly repaid – bear no interest. Then we observe, first, that after three periods trade has taken place and every participant’s bank balance is zero; and, second, that if the game is repeated forever, the aggregate money supply is zero at the end of every three periods.

In this model the money supply expands only to meet the needs of trade, and automatically contracts in every third round because the buyer holds bank liabilities sufficient to meet his demand.

Consider the alternative of using a fiat money “token” to solve the infinitely repeated version of the game. Observe that in order for the allocation to be efficient, if there is only one token to allocate, we must know ex ante who to give that token to. If we give it to person 3, no trade will take place in the first two rounds, and if we give it to person 2 no trade will take place in the first round. While this might seem a minor loss, consider the possibility that people who don’t consume in the first stage of their life may have their productivity impaired for the rest of time. This indicates that the use of fiat money may require particularized knowledge about the nature of the economy that is not necessary if we solve the problem using credit lines.

Why don’t we just allocate one token to everybody so that we can be sure that the right person isn’t cash constrained in early life? This creates another problem. Person 2 and person 3 will both have 2 units of cash whenever they are making their purchases, but in order to reach the equilibrium allocation we need them to choose to spend only one unit of this cash in each period. In short, this solution would require people to hold onto money for eternity without ever intending to spend it. That clearly doesn’t make sense.

This simple discussion illustrates a fundamental problem with fiat money, and it explains why an incentive-compatible credit system is never worse than fiat money and in many environments is strictly better. This is one of the most robust results to come out of the formal study of economic environments with liquidity frictions (see e.g. Kocherlakota 1998).

In response to this I received the following question by email:

In your 3 person model, [why not allocate] a token to everybody? – I don’t understand how you reached the conclusion that “this solution would require people to hold onto money for eternity without ever intending to spend it”. If people have more units of cash than they need for consumption, the excess would be saved and potentially lent to others who need credit?

This question arises because, in the excerpt from my post above, I failed to explain what the implications of “allocating a token to everybody” are when translated into a real-world economy. In order for an efficient outcome to be achieved, you need to make sure that everybody has enough money at the start of the monetary system that they can never be cash-constrained at any point in time. In my simple model this just means that everybody is given one token at the start of time. In the real world it means that every newborn child is endowed at birth with more than enough cash to pay the full cost of U.S. college tuition at an elite institution (for example).

Turning back to the context of the model: if the two people with excess currency save and lend it, we have the problem that the one person who consumes at the given date already has enough money to make her purchases. In short, there is three times as much currency in the economy as is needed for purchases. This implies that we do not have an equilibrium, because the market for debt can’t clear at the prices we have assumed in our model. If we add lending to the model, then the equilibrium price will have to rise, with the result that nobody is endowed with enough money to make the purchases they want to make. Whether or not an efficient allocation can be obtained by this means will depend on the details of how the lending process is modeled. (The alternative that I considered was that there was no system of lending, so each person had to hold the token and, when the opportunity to buy arrived, choose to spend only one token even though they were holding two. This is the sense in which the token must be held “for eternity” without being spent.)
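
To make the arithmetic behind this explicit, here is a small back-of-the-envelope check (my own, under the model’s assumption that each good initially trades at a price of 1; the “market-clearing” price of 3 is purely illustrative).

```python
# Back-of-the-envelope arithmetic for the "one token to everybody" case
# (my own illustration, not part of the original post).

people = 3
tokens_per_person = 1
money_supply = people * tokens_per_person   # 3 tokens in circulation
price_at_issue = 1
cash_needed_per_match = price_at_issue      # only one purchase happens per match

print(money_supply / cash_needed_per_match) # 3.0 -- three times the cash needed

# If lending pushes the price level up until the extra cash is absorbed
# (say, to a price of 3), no single person's endowment covers a purchase:
new_price = money_supply / cash_needed_per_match  # illustrative higher price
print(tokens_per_person >= new_price)             # False
```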

Tying this discussion back to the college tuition example: if, in fact, you tried to implement a policy where every child is endowed at birth with enough cash to pay elite U.S. college tuition, what we would expect to happen is that by the time these children were going to college, the cost would have increased so that they no longer had enough to pay tuition. But then, of course, you have failed to implement the policy. In short, it is impossible to “allocate a token to everybody”, because as soon as you do, you affect prices in a way that ensures that the token’s value has fallen below the value that you intended to allocate. There’s no way to square this circle.

Connecting this up with bitcoin or deposit accounts at the central bank: the currently rich have a huge advantage in a transition to such a system, because they get to start out with more bitcoins or larger deposit accounts. By contrast, in a credit-based monetary system everybody has the opportunity to borrow against their future income.

The problem with the credit-based monetary system that we have is that guaranteeing the fairness of the mechanisms by which credit is allocated is essential to the efficiency of the system. That is, in a credit-based monetary system fairness-based considerations are not in conflict with efficiency-based considerations, but are instead essential to making efficiency an achievable goal.

Because we have failed to model our monetary system properly, we have failed to understand the importance of regulation that protects and supports the fair allocation of credit, and we have failed to maintain the efficiency of the monetary system. In my view, appropriate reforms will target the mechanisms by which credit is allocated, because there’s no question that in the current system credit is allocated very unfairly.

The problem with proposals to eliminate the debt-based system is that, as far as I can tell, doing so is likely to just make the unfairness worse by giving the currently rich a huge advantage that they would not have in a reformed and well-designed credit-based monetary system.

Corporate liability and the “crimes were committed” approach to law enforcement

In light of Attorney General Loretta Lynch’s welcome change in DoJ policy, it occurred to me that an old draft post of mine might actually merit being posted, so here goes:

After listening to a presentation on the impressive growth in enforcement actions resulting in corporate criminal liability a few months ago, it occurred to me that people without legal training might not actually understand the reasoning behind the critique that individual prosecutions should almost always accompany corporate criminal liability. (The presenter at one point framed such critiques as claiming that prosecutors were colluding with management against the shareholders.)

The problem with corporate criminal liability is this: every crime has a mens rea, or element of intent, that must be proved as part of the prosecutor’s case. Negligence is one of the lower levels of mens rea, but many instances of negligence are not crimes. Often a “knowing” or “should have known” standard is applied in criminal law.

When a prosecutor chooses to seek corporate criminal liability without bringing any cases of individual criminal liability, the question is whether it makes logical sense to argue that the corporation had the mens rea for the crime, but no individual in the corporation did (or that the individual with the mens rea managed not to take any relevant action in furtherance of the crime). Now one can dream up special circumstances where this position would actually be logical, but it seems to a lot of people that such situations should be rare.

Critics of corporate liability (I’m thinking of Judge Rakoff and Bill Black here, for example) would probably argue that pursuing corporate criminal liability without pursuing individual liability is tantamount to stating that a crime was committed, but we don’t know by whom. (Note that the reverse, where there is individual criminal liability without corporate criminal liability, is likely to be much more common. Rogue employees and a genuine effort on the part of the corporation to avoid the criminal activity would both be good reasons – though not necessarily successful ones – for not extending criminal liability from an individual to the corporation.)

Overall, an important criticism of the growth of deferred prosecution agreements and non-prosecution agreements is that finding this growth acceptable in the absence of individual prosecutions essentially lowers the standards for what a prosecutor is supposed to do. “A crime was committed, but I don’t know by whom” should not be the normal stopping point for a prosecutor’s case.

The argument is, of course, not that there should never be corporate criminal liability without an accompanying case for individual liability, but simply that this outcome should be relatively rare. In general, we want our prosecutors to think of their jobs as going all the way to finding out “who done it,” and not stopping with “a crime was committed” and a fine was paid.

In short, the argument against treating a finding of corporate criminal liability as an end point is not about “collusion,” but instead goes to the heart of what it means to enforce the law.