A regression discontinuity test error

This is post 3 in my HAMP and principal reduction series. For the introductory post see here.

The series is motivated by Peter Ganong and Pascal Noel’s argument that mortgage modifications that include principal reduction have no significant effect on either default or consumption for underwater borrowers. In post 1 I explained how the framing of their paper focuses entirely on the short-run, as if the long run doesn’t matter – and characterized this as the ideology of financialization. In post 2 I explained why financialization is a problem.

In this post I am going to discuss a very technical problem with Ganong and Noel’s regression discontinuity test of the effect of principal reduction on default. The idea behind a regression discontinuity test is to exploit the fact that, when a variable is used to classify people into two categories, the people who lie just on either side of the classification boundary have essentially the same characteristics. The test looks specifically at those who lie near the boundary and compares how the two groups differ in outcomes. Because the two groups are otherwise comparable, the differences can be interpreted as having been caused by the classification.

Borrowers offered HAMP modifications were offered either standard HAMP or HAMP PRA, which is HAMP with principal reduction. In principle, those who received standard HAMP modifications had a net present value (NPV) of the HAMP modification in excess of the NPV of the HAMP PRA modification, and those who received a HAMP PRA modification had an NPV of HAMP PRA greater than the NPV of HAMP. The relevant variable for classifying modifications is therefore ΔNPV (which is economists’ notation for the difference between the two net present values). Note that in practice, the classification was not strict and there was a bias against principal reduction (see Figure 2a). This situation is addressed with a “fuzzy” regression discontinuity test.

The authors seek to measure how principal reduction affects default. They do this by first estimating the difference in the default rates for the two groups as they converge to the cutoff point ΔNPV = 0, then estimating the difference in the rate of assignment to HAMP PRA for the two groups as they converge to the cutoff point ΔNPV = 0, and finally taking the ratio of the two (p. 12). The authors find that the difference in default rates is insignificant — and this is a key result that is actually used later in the paper (footnote 30) to assume that the effect of principal reduction can be discounted (apparently driving the results on p. 24).
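
To make the mechanics of this estimator concrete, here is a minimal sketch in Python of the fuzzy regression discontinuity ratio described above. The variable names, the local linear fits, and the bandwidth are my own illustrative assumptions; this is not the authors’ code.

```python
# Minimal sketch of a fuzzy RD estimate at the cutoff dNPV = 0.
# Hypothetical inputs: delta_npv, treated (received HAMP PRA), and defaulted
# are numpy arrays; the bandwidth is an illustrative choice.
import numpy as np

def rd_jump(x, y, cutoff=0.0, bandwidth=5.0):
    """Jump in E[y | x] at the cutoff, from local linear fits on each side."""
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    left_fit = np.polyfit(x[left], y[left], 1)    # returns (slope, intercept)
    right_fit = np.polyfit(x[right], y[right], 1)
    return np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)

def fuzzy_rd_effect(delta_npv, treated, defaulted):
    """Ratio of the jump in default rates to the jump in treatment rates at dNPV = 0."""
    return rd_jump(delta_npv, defaulted) / rd_jump(delta_npv, treated)
```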

My objection to this measure is that, due to the structure of HAMP PRA, most of the time when ΔNPV is equal to or close to zero, it is because the principal reduction under HAMP PRA is so small that there is virtually no difference between HAMP and HAMP PRA. That is, as ΔNPV converges to zero, it also converges to the case where there is no difference between the two programs and principal reduction is zero.

To see this, consider the structure of HAMP PRA. If the loan-to-value ratio (LTV) of the mortgage being modified is less than or equal to 115, then HAMP PRA does not apply and only HAMP is offered. If LTV > 115, then the principal reduction alternative must be considered, but under no circumstances will HAMP PRA reduce the LTV below 115. After the principal reduction amount has been determined for a HAMP PRA mod, the modification terms are set by putting the reduced-principal loan through the standard HAMP waterfall. As a result of this process, when the LTV is near 115 a HAMP PRA modification is evaluated, but the principal reduction will be very small and the loan will be virtually indistinguishable from a HAMP loan. In this case, HAMP and HAMP PRA have the same NPV (especially as the data was apparently reported only to one decimal point, see App. A Figure 5), and ΔNPV = 0.
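
A stylized sketch (in Python, with illustrative numbers; this is not the actual PRA waterfall) shows why the 115 LTV floor forces the principal reduction to vanish as LTV approaches 115:

```python
# Stylized sketch of the 115-LTV floor: the principal reduction can never push
# post-modification LTV below 115, so it shrinks to zero as LTV approaches 115.
def pra_principal_reduction(balance, home_value, reduction_sought):
    """reduction_sought is whatever the rest of the waterfall would call for
    (a hypothetical input used purely for illustration)."""
    cap = max(0.0, balance - 1.15 * home_value)   # reduction allowed by the LTV floor
    return min(reduction_sought, cap)

# With home value fixed at $200,000, the allowed reduction collapses near LTV = 115,
# so the HAMP PRA loan converges to the plain HAMP loan and dNPV converges to 0.
for ltv in (140, 125, 117, 115):
    balance = ltv / 100 * 200_000
    print(ltv, pra_principal_reduction(balance, 200_000, 50_000))
# prints: 140 50000.0 / 125 20000.0 / 117 4000.0 / 115 0.0
```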

While it may be the case that for a HAMP PRA modification with significant principal reduction the NPV happens to be the same as the NPV for HAMP, this will almost certainly be a rare occurrence. On the other hand, it will be very common that when the LTV is near 115, ΔNPV = 0, which simply reflects the fact that the two modifications are virtually the same when LTV is near 115. Thus, the structure of the program means that there will be many observations with ΔNPV = 0, and these loans will generally have LTV near 115 and very little principal reduction. In short, as you converge to ΔNPV = 0 from the HAMP PRA side of the classification, you converge to a HAMP modification. Under these circumstances it would be extremely surprising to see a jump in default rates at ΔNPV = 0.

In short, there is no way to interpret the results of the test conducted by the authors as a test of the effect of principal reduction. Perhaps it should be characterized as a test of whether classification into HAMP PRA without principal reduction affects the default rate.

Note that the authors’ charts support this. In Appendix A, Figure 5(a) we see that almost 40% of the authors’ data for this test has ΔNPV = 0. On page 12 the authors indicate that they were told this was probably bad data, because it indicates that the servicer was lazy and only one NPV test was run. Thus this 40% of their data was thrown out as “bad.” Evidence that this 40% was heavily concentrated around LTV = 115 is given by Appendix A, Figure 4(d):

[Figure: Ganong and Noel, Appendix A, Figure 4(d)]

Here we see that as the LTV drops toward 120, ΔNPV converges to zero from both sides. Presumably it converges to 120 rather than 115 because almost 40% of the data was thrown out. See also Appendix A Figure 6(d), which, despite the exclusion of 40% of the data, shows a steep decline in principal reduction as ΔNPV converges to 0 from the HAMP PRA side.

I think this is mostly a lesson that details matter and economics is hard. It is also important, however, to set the record straight: running a regression discontinuity test on HAMP data cannot tell us about the relationship between mortgage principal reductions and default.


What’s the problem with financialization?

This is post 2 in my HAMP and principal reduction series. For the introductory post see here.

The series is motivated by Peter Ganong and Pascal Noel’s argument that mortgage modifications that include principal reduction have no significant effect on either default or consumption for underwater borrowers. In post 1 I explained how the framing of their paper focuses entirely on the short-run, as if the long run doesn’t matter – and even uses language that indicates that people who take their long-run financial condition into account are behaving improperly. I call this exclusive focus on the short-run the ideology of financialization. I note at the end of post 1 that this ideology appears to have influenced both Geithner’s views and the structure of HAMP.

So this raises the question: What’s the problem with the ideology of financialization?

The short answer is that it appears to be designed to trap as many people into a state of debt peonage as possible. Debt peonage, by preventing people who are trapped in debt from realizing their full potential, is harmful to economic performance more generally.

Here’s the long answer.

By focusing attention on short-term payments and how sustainable they are today, while at the same time heaping heavy debt obligations into the future, modern finance has had devastating effects at both the individual and the aggregate levels. Heavy long-term debt burdens are guaranteed to be a problem for a subset of individual borrowers, such as those who are unexpectedly disabled or who see their income decline over time for other reasons. Mortgages with payments that balloon at some date in the future (such as those studied in Ganong and Noel’s paper) are by definition a gamble on future financial circumstances. This makes them entirely appropriate products for the small subset of borrowers who have the financial resources to deal with the worst case scenario, but the financial equivalent of Russian roulette for the majority of borrowers who don’t have financial backup in the worst case scenario. (Remember the probabilities are in your favor in Russian roulette, too.)

Gary Gorton once described the subprime mortgage model as one where the borrower is forced to refinance after a few years and this gives the bank the option every few years of whether or not to foreclose on the home. Because the mortgage borrower is in the position of having sold an option, the borrower’s position is closer to that of a renter than to that of a homeowner. Mortgages that are structured to have payment increases a few years into the loan – which is the case for virtually all of the modifications offered to borrowers during the crisis – similarly tend to put the borrower into a situation more like that of a renter than a homeowner.

The ideology of financialization thus perverts the whole concept of debt. A debt contract is not a zero-sum transaction. Debt contracts exist because they are mutually beneficial and they should be designed to give benefits to both lenders and borrowers. Loans like subprime mortgages are literally designed to set the borrower up so the borrower will be forced into a renegotiation where the borrower can be held to his or her reservation value. That is, they are designed to shift the bargaining power in contracting in favor of the lender. HAMP modifications for underwater borrowers set up a similar situation.

Ganong and Noel treat this distorted bargaining situation as if it is normal in section 6 of their paper, where they purport to characterize “efficient modification design.” The first step in their analysis is to hold the borrowers who need modifications to their reservation values (p. 27).[1] Having done this, they then describe an “efficient frontier” that minimizes costs to lenders and taxpayers. A few decades ago when I studied Pareto efficiency, the characterization of the efficient frontier required shifting the planner’s weights on all members of the economy. What the authors have in fact presented is the constrained efficient frontier where the borrowers are held to their reservation values. Standard economic analysis indicates that starting from any point on this constrained efficient frontier, direct transfers from the lenders to the borrowers up until the point that the lenders are held to their reservation value should also be considered part of the efficient frontier.
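
To make the distinction concrete, here is a schematic statement in my own notation (not the authors’), where m is a modification design, U_B and U_L are the borrower’s and lender’s payoffs, C(m) is the cost to lenders and taxpayers, and U_B-bar is the borrower’s reservation value:

```latex
% Unconstrained Pareto frontier: traced out by varying the planner's weight lambda.
\max_{m}\; \lambda\, U_B(m) + (1-\lambda)\, U_L(m), \qquad \lambda \in [0,1]

% The constrained frontier characterized in the paper: costs to lenders and
% taxpayers are minimized with the borrower held at the reservation value.
\min_{m}\; C(m) \quad \text{subject to} \quad U_B(m) \ge \bar{U}_B
```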

In short, Ganong and Noel’s analysis is best viewed as a description of how the financial industry views and treats underwater borrowers, not as a description of policies that are objectively “efficient.” Indeed, when they “rank modification steps by their cost-effectiveness” they come very close to reproducing the HAMP waterfall (p. 31): the only difference is that maturity extension takes place before a temporary interest rate reduction. Perhaps the authors are providing valuable insight into how the HAMP waterfall was developed.

The unbalanced bargaining situation over contract terms that is presented in this paper should be viewed as a problem for the economy as a whole. As everybody realized post-crisis, the macroeconomics of debt has not been fully explored by the economics profession, and the profession is still in the early stages of addressing this lacuna. Thus, it is not surprising that this paper touches only very briefly on the macroeconomics of mortgage modification.

In my view the ideology of financialization, with its short-term focus, has contributed significantly to the growth of a heavily indebted economy. This burden of debt tends to reduce the bargaining power of the debtors and to interfere with their ability to realize their full potential in the economy. Arguably this heavily indebted economy is losing the capacity to grow because it is in a permanent balance sheet recession. At the same time, the ideology underlying financialization appears to be effectively a gamble that it’s okay to shift the debt off into the future, because we will grow out of it so it will not weigh heavily on the future. The risk is that, by taking it as given that g > r over the long run, this ideology may well be creating a situation of permanent balance sheet recession where g is necessarily less than r, even given optimal monetary policy.

[1] The authors justify this because they have “shown” that principal reductions for underwater borrowers do not reduce defaults or increase consumption. Of course, they have shown no such thing because they have only evaluated 5-10% of the life of the mortgage – and even that analysis is flawed.

The Ideology of Financialization

This is post 1 in my HAMP and principal reduction series. For the introductory post see here.

The analysis in Peter Ganong and Pascal Noel’s Liquidity vs. wealth in household debt obligations: Evidence from housing policy in the Great Recession is an object lesson in the ideological underpinnings of “financialization”. So this first post in my HAMP and principal reduction series dissects the general approach taken by this paper. Note that I have no reason to believe that these authors are intentionally promoting financialization. The fact that the framing may be unintentionally ideological makes it all the more important to expose the ideology latent in the paper.

The paper studies government and private mortgage modification programs and in particular seeks to differentiate the effects of principal reductions from those of payment reductions. The paper concludes “we find that principal reduction that increases housing wealth without affecting liquidity has no significant impact on default or consumption for underwater borrowers [and that] maturity extension, which immediately reduces payments but leaves long-term obligations approximately unchanged, does significantly reduce default rates” (p. 1). The path that the authors follow to arrive at these broad conclusions is truly remarkable.

The second paragraph of this paper frames the analysis of the relative effects of modifying mortgage debt by either reducing payments or forgiving mortgage principal. This first post will discuss only the first three sentences of this paragraph and what they imply. They read:

“The normative policy debate hinges on fundamental economic questions about the relative effect of short- vs long-term debt obligations. For default, the underlying question is whether it is primarily driven by a lack of cash to make payments in the short-term or whether it is a response to the total burden of long-term debt obligations, sometimes known as ‘strategic default.’ For consumption, the underlying question is whether underwater borrowers have a high marginal propensity to consume (MPC) out of either changes in total housing wealth or changes in immediate cash-flow.”

Each of the sentences in the paragraph above is remarkable in its own way. Let’s take them one at a time.

First sentence

“The normative policy debate hinges on fundamental economic questions about the relative effect of short- vs long-term debt obligations.”

This is a paper about mortgage debt – that is, long-term debt – and how it is restructured. This paper is, thus, not about “the relative effect of short- vs long-term debt obligations,” it is about how choices can be made regarding how long-term debt obligations are structured. This paper has nothing whatsoever to do with short-term debt obligations, which are, by definition, paid off within a year and do not figure in the paper’s analysis at any point.

On the other hand, the authors’ analysis is short-term. It evaluates data only on the first two to three years (on average) after a mortgage is modified. The whole discussion takes it as given that it is appropriate to evaluate a long-term loan over a horizon that covers only 5 to 10% of its life, and that we can draw firm conclusions about the efficiency of a mortgage modification by evaluating only the first few years of the mortgage’s existence. Remember, the authors were willing to state that “principal reduction … has no significant impact on default or consumption for underwater borrowers” even though they have no data on 90-95% of the performance of the mortgages they study (that is, on the latter 30-odd years of the mortgages’ existence).

Note that the problem here is not the nature of the data in the paper. It is natural that topical studies of mortgage performance will typically only cover a portion of those mortgages’ lives. But it should be equally natural that every statement in the study acknowledges the inadequacy of the data. For example, the authors could have written: “principal reduction … has no significant impact on immediate horizon default or immediate horizon consumption for underwater borrowers.” Instead, the authors choose to discuss short-term performance as if it is all that matters.

This focus on the short-term, as if it is all that matters, is, I would argue, the fundamental characteristic of “financialization.” It is also the classic financial conman’s bait and switch. The key when selling a shoddy financial product is to focus on how good it is in the short-term and to fail to discuss the long-term risks. When questions arise regarding the long-term risks, these risks are minimized and are not presented accurately. This bait and switch was practiced on municipal borrowers who issued adjustable rate securities and purchased interest rate swaps, on adjustable rate mortgage borrowers who were advised that they would be able to refinance before the mortgage rate adjusted up, and even on the Trustees of Harvard University, who apparently entered into interest rate swaps without bothering to understand the long-term obligations associated with them.

The authors embrace this deceptive framework of financialization whole-heartedly throughout the paper by discussing the short-term performance of long-term loans as if it is all that matters. While it is true that there are a few nods in footnotes and deep within the paper to what is being left out, they are wholly inadequate to address the fact that the basic framing of the paper is extremely misleading.

Second sentence

“For default, the underlying question is whether it is primarily driven by a lack of cash to make payments in the short-term or whether it is a response to the total burden of long-term debt obligations, sometimes known as ‘strategic default.’”

The second sentence is based on the classic distinction between a temporary liquidity-driven stoppage of payments and a stoppage due to negative net worth – i.e. insolvency. (Note that these are the two long-standing reasons for filing bankruptcy.) But the framing in this sentence is remarkably ideological.

Describing the defaults that are “a response to the total burden of long-term debt obligations” as “sometimes known as ‘strategic default’” is ideologically loaded language. Because the term “strategic default” has a pejorative connotation, this sentence has the effect of putting a moralistic framing on the problem of default: liquidity-constrained defaults are implicitly unavoidable and therefore non-strategic and proper, whereas all non-liquidity-constrained defaults are strategic and implicitly improper. This framing ignores the fact that a default may be due to balance sheet insolvency, which will necessarily be “a response to the total burden of long-term debt obligations” and yet cannot be classified as a “strategic” default. What is commonly referred to as strategic default is the case where the debtor is neither liquidity constrained nor insolvent, but considers only the fact that for this particular asset the payments are effectively paying rent and do not build any principal in the property.

By linguistically excising the possibility that the weight of long-term debt obligations leads to an insolvency-driven default, the authors are already demonstrating their bias against principal reduction and once again exhibiting the ideology of financialization: all that matters is the short-term, therefore balance sheet insolvency driven by the weight of long-term debt does not need to be taken into account.

In short, the implicit claim is that even if the borrower is insolvent and not only has a right to the “fresh start” offered by bankruptcy but likely needs it to get back on his or her feet, default would still be “strategic” and improper. Overall, the moralistic framing of the paper’s approach to debt is consistent neither with the long-standing U.S. legal framework governing debt, which acknowledges the propriety of defaults due to insolvency, nor with social norms regarding debt, where business-logic default (a more neutral term than strategic default) is common.

Third sentence

“For consumption, the underlying question is whether underwater borrowers have a high marginal propensity to consume (MPC) out of either changes in total housing wealth or changes in immediate cash-flow.”

The underlying assumption in this sentence is that mortgage policy had as one of its goals immediate economic stimulus, and that one of the choices for generating this economic stimulus was to use mortgage modifications to encourage troubled borrowers to increase current consumption at the expense of a future debt burden. In short, this is the classic financialization approach: get the borrower to focus only on current needs and discourage focus on the costs of long-term debt. Most remarkably it appears that Tim Geithner actually did view mortgage policy as having as one of its goals immediate economic stimulus and that this basic logic was his justification for preferring payment reduction to principal reduction.[1]

Just think about this for a moment: Policy makers in the midst of a crisis were so blinded by the ideology of financializaton that they used the government mortgage modification program as a form of short-term demand stimulus at the cost of inducing troubled borrowers (i.e. the struggling middle class) to further mortgage their futures. And this paper is a full-throated defense of these decisions.

The ideology of financialization has become powerful indeed.

Financialization Post 2 will answer the question: What’s the problem with the ideology of financialization?

[1] See, e.g., the quote from Geithner’s book in Mian & Sufi, Washington Post, 2014

HAMP and principal reduction: an overview

I spent the summer of 2011 helping mortgage borrowers (i) correct bank documentation regarding their loans and (ii) extract permanent mortgage modifications from banks. One of the things I did was check the bank modifications for compliance with the government’s mortgage modification program, HAMP, and with the HAMP waterfall including the HAMP Principal Reduction Alternative. At that time I put together HAMP spreadsheets, and typically when I read articles about HAMP I go back to my spreadsheets to refresh my memory of the details of HAMP.

So when I learned about a paper that finds that HAMP “placed an inefficient emphasis on reducing borrowers’ total mortgage debt” and should have focused more on reducing borrowers’ payments in the short run, which goes contrary to everything I know about HAMP, I decided to read the paper.

Now I am an economist, so even though my focus is not quantitative data analysis, when I bother to put the time into reading an econometric study, it’s not difficult to see problems with the research design. On the other hand, I usually avoid being too critical, on the principle that econometrics is a little outside the area of my expertise. In this case, however, I know that very few people have enough knowledge of HAMP to actually evaluate the paper — and that many of those who do are interested parties.

The paper is Peter Ganong and Pascal Noel’s Liquidity vs. wealth in household debt obligations: Evidence from housing policy in the Great Recession. This paper has been published as a working paper by the Washington Center for Equitable Growth and NBER, both of which provided funding for the research. Both the Wall Street Journal and Forbes have published articles on this paper. So as one of the few people who is capable of offering a robust critique of the paper, I am going to do a series of posts explaining why the main conclusion of this paper is fatally flawed and why the paper reads to me as financial industry propaganda.

Note that I am not making any claims about the authors’ motivation in writing this paper. I see some evidence in the paper to support the view that the authors were manipulated by some of the people providing them with the data and explaining it to them. Overall, I think this paper should, however, serve as a cautionary tale for all those who are dependent on interested parties for their data.

Here is the overview of the blogposts I will post discussing this paper:

HAMP and principal reduction post 1: The ideology of financialization

HAMP and principal reduction post 2: What’s the problem with financialization?

HAMP and principal reduction post 3: A regression discontinuity error
The principal result in the paper is invalid, because the authors did not have a good understanding of HAMP and of HAMP PRA, and therefore did not understand how the variable they use to distinguish treatment from control groups converges to their threshold precisely when principal reduction converges to zero. The structure of this variable invalidates the regression discontinuity test that the authors perform.

How to evaluate “central banking for all” proposals

The first question to ask regarding proposals to expand the role of the central bank in the monetary system is the payroll question: how is the payroll handled in this environment for a new small business that grows, for example, greenhouse crops with an 8-week life cycle? For this example, let’s assume the owner had enough capital to get all the infrastructure of the business set up, but not enough to make a payroll of, say, $10,000 to keep the greenhouse in operation before any product can be sold.

Currently the opening of a small business account by a proprietor with a solid credit record will typically generate a solicitation to open an overdraft related to the account. Thus, it will in many cases be an easy matter for the small business to get the $10,000 loan to go into operation. Assuming the business is a success and produces regular revenues, it is also likely to be easy to get bank loans to fund slow expansion. (Note the business owner will most likely have to take personal liability for the loans.)

Thus, the first thing to ask about any of these policy proposals is: when a bank makes this sort of a loan how can it be funded?

In the most extreme proposals, the bank has to have raised funds in the form of equity or long-term debt before it can lend at all. This is such a dramatic change to our system that it’s hard to believe that the same level of credit that is available now to small business will be available in the new system.

Several proposals (including Ricks et al. – full disclosure: I have not read the paper) get around this problem by allowing banks to fund their lending by borrowing from the central bank. This immediately raises two questions:

(i) How is eligibility to borrow at the central bank determined? If it’s the same set of banks that are eligible to earn interest on reserves now, isn’t this just a transfer of the benefits of banking to a different locus? As long as the policy is not one of “central bank loans for all,” the proposal is clearly still one of two-tier access to the central bank.

(ii) What are the criteria for lending by the central bank? Notice that this necessarily involves much more “hands on” lending than we have in the current system, precisely because the central bank funds these loans itself. In the current system (or more precisely in the system pre-2008 when reserves were scarce), the central bank provides an appropriate (and adjustable) supply of reserves and allows the banks to lend to each other on the Federal Funds market. Thus, in this system the central bank outsources the actual lending decisions to the private sector, allowing market forces to play a role in lending decisions.

Overall, proposals in which the central bank will be lending directly to banks to fund their loans create a situation where monetary policy is being implemented by what used to be called “qualitative policy.” After all, if the central bank simply offers unlimited, unsecured loans at a given interest rate to eligible borrowers, such a policy seems certain to be abused by somebody. So the central bank is going to have to define eligible collateral, eligible (and demonstrable) uses of the funds, or some other explicit criteria for what types of loans are funded. This is a much more interventionist central bank policy than we are used to, and it is far from clear that central banks have the skills to do this well. (Indeed, Gabor & Ban (2015) argue that the ECB post-crisis set up a catastrophically bad collateral framework.)

Now if I understand the Ricks et al. proposal properly (which again I have not read), their solution to this criticism is to say, well, we don’t need to go immediately to full-bore central banking for all, we can simply offer central bank accounts as a public option and let the market decide.

This is what I think will happen in the hybrid system. Just as the growth of MMMFs in the 80s led to the growth of financial commercial paper and repos to finance bank lending, so this public option will force the central bank to actively operate its lending window to finance bank loans. Now we have two competing systems: one is the old system of retail and wholesale bank funding, the other is the central bank’s lending policy.

The question then is: Do federal regulators have the skillset to get the rules right, so that destabilizing forces don’t build up in this system? I would analogize to the last time we set up a system of alternative funding for banks (the MMMF system) and expect regulators to set up something that is temporarily stable and capable of operating for a decade or two, before a fundamental regulatory flaw is exposed and it all comes apart in a terrifying crash. The last time we were lucky, as regulatory ingenuity and legal duct tape held the system together. In this new scenario, the central bank, instead of sitting somewhat above the fray, will sit at the dead center of the crisis and may have a harder time garnering support to save the system.

And then, of course, all “let the market decide” arguments are a form of the “competition is good” fallacy. In my view, before claiming that “competition is good,” one must make a prior demonstration that the regulatory structure is such that competition will not lead to a race to the bottom. Given our current circumstances where, for example, the regulator created by the Dodd-Frank Act to deal with fraud and near-fraud is currently being hamstrung, there is abundant reason to believe that the regulatory structure of the financial system is inadequate. Thus, appeals to a public option as a form of healthy competition in the financial system as it is currently regulated are not convincing.

Brokers, dealers and the regulation of markets: Applying finreg to the giant tech platforms

Frank Pasquale (h/t Steve Waldman) offers an interesting approach to dealing with the giant tech firms’ privileged access to data: he contrasts a Jeffersonian “just break ’em up” approach with a Hamiltonian “regulate them as natural monopolies” approach. Although Pasquale favors the Hamiltonian approach, he opens his essay by discussing Hayekian prices. Hayekian prices simultaneously aggregate distributed knowledge about the object sold and summarize it, reflecting the essential information that the individuals trading in the market need to know. While gigantic firms are an alternative way of aggregating data, there is little reason to believe that they could possibly produce the benefits of Hayekian prices, the whole point of which is to publicize for each good a specific and extremely important summary statistic: the competitive price.

Pasquale’s framing brings to mind an interesting parallel with the history of financial markets. Financial markets have for centuries been centralized in stock/bond and commodities exchanges, because it was widely understood that price discovery works best when everyone trades at a single location. The single location, by drawing almost all market activity, offers both “liquidity” and the best prices. The dealers on these markets have always been recognized as having a privileged position because of their superior access to information about what’s going on in the market.

One way to understand Google, Amazon, and Facebook is that they are acting as dealers in a broader economic marketplace: with their superior knowledge about supply and demand, they have an ability to extract gains that is perfectly analogous to that of dealers in financial markets.

Given this framing, it’s worth revisiting one of the most effective ways of regulating financial markets: the simple, but strict, application of a branch of common law, the law of agency, to the regulation of the London Stock Exchange from the mid-1800s through the 1986 “Big Bang.” It was remarkably effective at both controlling conflicts of interest and producing stable prices, but post-World War II it was overshadowed and eclipsed by the conflict-of-interest-dominated U.S. markets. In the “Big Bang” British markets embraced the conflicted financial markets model — posing a regulatory challenge which was recognized at the time (see Christopher McMahon 1985), but was never really addressed.

The basic principles of traditional common law market regulation are as follows. When a consumer seeks to trade in a market, the consumer is presumed to be uninformed and to need the help of an agent. Thus, access to the market is through agents, called brokers. Because a broker is a consumer’s agent, the broker cannot trade directly with the consumer. Trading directly with the consumer would mean that the broker’s interests are directly adverse to those of the consumer, and this conflict of interest is viewed by the law as interfering with the broker’s ability to act as an agent. (Such conflicts can be waived by the consumer, but in early 20th-century British financial markets they were generally not waived.)

A broker’s job is to help the consumer find the best terms offered by a dealer. Because dealers buy and sell, they are prohibited from acting as the agents of the consumers — and in general prohibited from interacting with them directly at all. Brokers force dealers to offer their clients good deals by demanding two-sided quotes, revealing whether their client’s order is a buy or a sell only after learning both the bid and the ask. Brokers also typically get quotes from different dealers to make sure that the prices on offer are competitive.
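
As a toy illustration of this quote discipline (all names and numbers are hypothetical, and this is only a sketch of the logic, not any exchange’s actual rules): the broker collects two-sided quotes before disclosing the client’s side, then deals at the best price on the relevant side.

```python
# Toy sketch: the broker asks each dealer for both a bid and an ask, and only
# then executes the client's (still undisclosed) buy or sell at the best price.
def best_execution(two_sided_quotes, side):
    """two_sided_quotes maps dealer name -> (bid, ask); side is 'buy' or 'sell'."""
    if side == "buy":
        return min(two_sided_quotes.items(), key=lambda kv: kv[1][1])  # lowest ask
    return max(two_sided_quotes.items(), key=lambda kv: kv[1][0])      # highest bid

quotes = {"Dealer A": (99.50, 100.10), "Dealer B": (99.60, 100.25)}
print(best_execution(quotes, "buy"))   # ('Dealer A', (99.5, 100.1))  -- client pays 100.10
print(best_execution(quotes, "sell"))  # ('Dealer B', (99.6, 100.25)) -- client receives 99.60
```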

Brokers and dealers are strictly prohibited from belonging to the same firm or otherwise working in concert. The validity of the price setting mechanism is based on the bright line drawn between the different functions of brokers and of dealers.

Note that this system was never used in the U.S., where the law of agency with respect to financial markets was interpreted very differently, and where financial markets were beset by conflicts of interest from their earliest origins. Thus, it was in the U.S. that the fixed fees paid to brokers were first criticized as anti-competitive and eventually eliminated. In Britain the elimination of fixed fees reduced the costs faced by large traders, but not those faced by small traders (Sissoko 2017). Because eliminating the structured broker-dealer interaction adversely affected the quality of the price-setting mechanism, its actual costs to traders were hidden. We now have markets beset by “flash crashes,” “whales,” cancelled orders, two-tier data services, etc. In short, our market structure, instead of being designed to control information asymmetry, is extremely permissive of the exploitation of information asymmetry.

So what lessons can we draw from the structured broker-dealer interaction model of regulating financial markets? Maybe we should think about regulating Google, Amazon, and Facebook so that they have to choose between being, in legal terms, the agents of those whose data they collect, and being sellers of products (or agents of these sellers) with no access to buyers’ data.

In short, access to customer data should be tied to agency obligations with respect to that data. Firms with access to such data can provide services to consumers that help them negotiate a good deal with the sellers of products that they are interested in, but their revenue should come solely from the fees that they charge to consumers on their purchases. They should not be able to either act as sellers themselves or to make any side deals with sellers.

This is the best way of protecting a Hayekian price formation process: it makes sure that the information that causes prices to move is the flow of buy and sell orders generated by dealers making two-sided markets and choosing a price point, while individuals make their decisions in light of the prices they face. Such competitive pricing has the benefit of ensuring that prices are informative and useful for coordinating economic decision making.

When prices are not set by dealers who are forced to make two-sided markets and who are given no information about the nature of the trader, but instead prices are set by hyper-informed market participants, prices stop having the meaning attributed to them by standard economic models. In fact, given asymmetric information trade itself can easily degenerate away from the win-win ideal of economic models into a means of extracting value from the uninformed, as has been demonstrated time and again both in theory and in practice.

Pasquale’s claim that regulators need to permit “good” trade on asymmetric information (that which “actually helps solve real-world problems”) and prevent “bad” trade on asymmetric information (that which constitutes “the mere accumulation of bargaining power and leverage”) seems fantastic. How is any regulator to have the omniscience to draw these distinctions? Or does the “mere” in the latter case indicate the good case is to be presumed by default?

Overall, it’s hard to imagine a means of regulating informational behemoths like Google, Amazon and Facebook that favors Hayekian prices without also destroying entirely their current business models. Even if the Hamiltonian path of regulating the beasts is chosen, the economics of information would direct regulators to attach agency obligations to the collection of consumer data, and with those obligations to prevent the monetization of that data except by means of fees charged to the consumer for helping them find the best prices for their purchases.

When can banks create their own capital?

A commenter directed me to an excellent article by Richard Werner comparing three different approaches to banking. The first two are commonly found in the economics literature, and the third is the credit creation theory of banking. Werner’s article provides a very good analysis of the three approaches, and weighs in heavily in favor of the credit creation theory.

Werner points out that when regulators use the wrong model, they inadvertently allow banks to do things that they should not be allowed to do. More precisely, Werner finds that when regulators try to impose capital constraints on banks without understanding how banks function, they leave open the possibility that the banks find a way to create capital “out of thin air,” which clearly is not the regulator’s intent.

In this post I want to point out that Werner does not give the best example of how banks can sometimes create their own capital. I offer two more examples of how banks created their own capital in the years leading up to the crisis.

1. The SIVs that blew up in 2007

You may remember Hank Paulson running around Europe in the early fall of 2007 trying to drum up support for something called the Master Liquidity Enhancement Conduit (MLEC) or more simply the Super-SIV. He was trying to address the problem that structured vehicles called SIVs were blowing up left, right, and center at the time.

These vehicles were essentially ways for banks to create capital.  Here’s how:

According to a Bear Stearns report at the time, 43% of the assets in the SIVs were bank debt, and commentators at the time made it clear that the kind of bank debt in the SIVs was a special kind of debt that was acceptable as capital for the purposes of bank capital requirements because of the strong rights given to the issuer to forgo making interest payments on the debt.

The liability side of an SIV comprised 4-6% equity, with the rest senior liabilities: Medium Term Notes (MTNs) of a few years’ maturity and Commercial Paper (CP) that had to be refinanced every few months. Obviously SIVs had roll-over (or liquidity) risk, since their assets were much longer-term than their liabilities. The rating agencies addressed this roll-over risk by requiring the SIVs to have access to a liquidity facility provided by a bank. More precisely, the reason an SIV shadow bank was allowed to exist was that there was a highly rated traditional bank with a contractual commitment to provide funds to the SIV on a same-day basis in the event that the liquidity risk was realized. Furthermore, triggers in the structured vehicle’s paperwork required it to go into wind-down mode if, for example, the value of its assets fell below a certain threshold. All the SIVs breached their triggers in Fall 2007.

Those with an understanding of the credit creation theory of banking would recognize immediately that the “liquidity facility” provided by the traditional bank was a classic way for a bank to transform the SIV’s liabilities into monetary assets. That’s why money market funds and others seeking very liquid assets were willing to hold SIV CP and MTNs. In short, a basic understanding of an SIV’s asset and liability structure and of the banks’ relationship to it would have been a red flag to a regulator conversant with the credit creation theory that banks were literally creating their own capital.

2. The pre-2007 US Federal Home Loan Bank (FHLB) System

In the early noughties all of the FHLBs revised their capital plans. For someone with an understanding of the credit creation theory, these capital plans were clearly consistent with virtually unlimited financing of mortgages.

The FHLBs form a system with a single regulator and together offer a joint guarantee of all FHLB liabilities. The FHLB system is one of the “agencies” that can easily raise money at low cost on public debt markets. Each FHLB covers a specific region of the country and is cooperatively owned by its member banks. In 2007 every major bank in the US was a member of the FHLB system. As a result, FHLB debt was effectively guaranteed by the whole of the US banking system. Once again using the credit creation theory, we find that the bank guarantee converted FHLB liabilities into monetary assets.

The basic structure of the FHLBs’ support of the mortgage market was this (note that I will frequently use the past tense, because I haven’t looked up what the current capital structure is and believe that it has changed):

The FHLBs faced a 4% capital requirement on their loans. Using the Atlanta FHLB’s capital plan as an example, we find that whenever a member bank borrowed from the Atlanta FHL bank, it was required to increase its capital contribution by 4.5% of the loan. This guaranteed that the Atlanta FHL bank could never fall foul of its 4% capital requirement — and that there was a virtually unlimited supply of funds available to finance mortgages in the US.
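
A minimal sketch of this arithmetic (the numbers are illustrative; the 4% and 4.5% figures are the ones cited above): because every new advance brings with it a stock purchase larger than the capital it requires, expanding the book of advances can only push the capital ratio up.

```python
# Illustrative arithmetic: a 4.5% stock purchase on each new advance means the
# FHLB's capital ratio never falls below the 4% requirement, however much it lends.
def capital_ratio(capital, advances, new_advance, stock_purchase_rate=0.045):
    capital += stock_purchase_rate * new_advance
    advances += new_advance
    return capital / advances

# Start from a book of $10bn in advances that exactly meets the 4% requirement.
for new_advance in (0, 1e9, 100e9):
    print(new_advance, round(capital_ratio(0.04 * 10e9, 10e9, new_advance), 4))
# 0 -> 0.04, 1e9 -> 0.0405, 100e9 -> 0.0445: the ratio only rises toward 4.5%.
```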

The only constraint exercised by FHLBs on this system was that they would not lend for the full value of any mortgage. Agency MBS faced a 5% haircut, private label MBS faced a minimum 10% haircut, and individual mortgages faced higher haircuts.
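
For illustration, these haircuts simply capped how much a member could borrow against a given pool of collateral (a sketch with hypothetical collateral values):

```python
# Sketch: haircut-adjusted borrowing capacity against $1,000,000 of collateral.
HAIRCUTS = {"agency_mbs": 0.05, "private_label_mbs": 0.10}  # whole loans: higher still

def max_advance(collateral_value, collateral_type):
    return collateral_value * (1 - HAIRCUTS[collateral_type])

print(max_advance(1_000_000, "agency_mbs"))         # 950000.0
print(max_advance(1_000_000, "private_label_mbs"))  # 900000.0
```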

In short, the FHLB system was designed to make it possible for the FHLBs to be lenders of last resort to mortgage lenders. As long as a member bank’s assets were mortgages that qualified for FHL bank loans, credit was available for a bank that was in trouble.

The system was designed in the 1930s — by people who understood the credit creation theory of banking — to deliberately exclude commercial banks, which financed commercial activity and whose last-resort lender was the Federal Reserve. Only when FIRREA was passed in 1989, subsequent to the Savings and Loan crisis, were commercial banks permitted to become FHLB members.

From a credit creation theory perspective, this major shift in US bank regulation ensured that the full credit creation capacity of the commercial banking system was united with the US mortgage lending system, making it possible for the FHLBs to create their own capital and use it to provide virtually unlimited funds to finance mortgage lending in the US.