Why claims that the 2008 bailout was a “success” should make you angry

In 2008 we needed a bailout – or at least significant government/central bank intervention – but the bailout we got was unfair and almost certainly hampered the recovery. Furthermore, claims that “the bailout made money in the end” need to address the actual structure of the bailout.

So let’s talk about how the 2008-10 bailout of mortgage-related securities and instruments was structured. I focus on the mortgage-related bailout because, even when you’re talking about much more complicated instruments like CDOs, a lot of the trouble came from the outrageous practices that had been going on for the previous few years in the US mortgage market. Here I’m not going to get into how the various instruments were related to mortgages; I’m just going to break down how the US used government funds to bail out the issuers of and investors in private housing market-related instruments. There were three steps.

STEP 1: The Fed provided temporary assistance from March 2008 through February 2010, supporting asset prices by accepting just about everything as collateral at the TSLF and PDCF and thus preventing fire sales of assets. The Fed also wrote supervisory letters granting bank holding companies (BHCs) the right to exceed normal limits on aid from the FDIC-insured bank to the investment bank, so that a lot of the support of these asset markets took place on the balance sheets of the BHCs.

STEP 2: Many of the mortgages underlying the troubled assets were refinanced with the support of government guarantees against credit risk. The process of refinancing a mortgage requires the existing mortgage to be paid off in full. Thus, these refis had the effect of transferring poorly originated mortgages out of private portfolios and into government-insured portfolios. This would not be a problem if the government-insured mortgages were carefully originated, but that would not have solved the private sector’s problem, so that’s not what happened. Step 2 required both immense purchases by the government of mortgage backed securities and a simultaneous, massive expansion of the insurance offered for riskier loans.

1.  Massive purchases of GSE MBS.
The goals were to make sure the GSEs could continue to be active in the mortgage market, to drive down the 30 year mortgage rate to facilitate refinancing as well as purchases, and to raise the price of housing.

a. On Sept. 7, 2008, when Fannie Mae and Freddie Mac were put into conservatorship, Treasury also announced a plan to purchase MBS. Apparently this program only ever reached about $200 billion in size (SIGTARP Report July 2010, p. 136). It was soon superseded by:

b. The Federal Reserve’s QE1: In November 2008 the Federal Reserve announced a massive program of supporting mortgage markets by buying mortgage backed securities issued by Fannie Mae, Freddie Mac and Ginnie Mae. This purchase program ended up buying $1.25 trillion in MBS and continued until February 2010.

  • By the end of 2008 the 30 year fixed mortgage rate had fallen by a full percentage point, and it would only decline further in later years.

[Chart: 30 year fixed mortgage rate, FRED graph: https://fred.stlouisfed.org/graph/graph-landing.php?g=ldKE]

  • Private sector MBS issues had declined to almost nothing by mid-2008, and even GSE MBS issues had dropped over the course of 2008. In 2009 GSE MBS came roaring back, so that by mid-2009 monthly MBS issues were almost as high as they had ever been. The fact that in several months Fed purchases in the form of QE1 exceeded GSE MBS issues undoubtedly played a role in this dramatic recovery of the MBS market.

[Chart: the 2008 housing market, from “Charting the Financial Crisis” by Brookings & Yale SOM]

2.   FHA insurance grew to account for almost 1/3 of the mortgage market.
From mid-2009 to mid-2010 alone, FHA and GNMA insured loans increased by $500 billion (SIGTARP Report July 2010, p. 119).

FHA insured loans became a growing and then significant portion of the mortgage market after the major subprime lenders collapsed in early 2007 and FHA became the only choice for borrowers who couldn’t make much of a down payment. Prior to the crisis FHA loans accounted for as little as 3% of the market. By June 2009 they accounted for 30% of the market and would continue to do so for several years. (See Golobay 2009 and Berry 2011a.)

By mid-2011 all the major banks held billions in FHA insured loans that were 90 days or more past due: BoA $20 billion, WFC $14 billion, JPM $10 billion, Citi $5 billion. Eventually every major bank would end up settling lawsuits over misrepresentations in FHA insurance applications. In the meanwhile they were using FHA insurance as a cover to avoid taking writedowns on the loans. (See Berry 2011b.)

Here is the FHA’s 2015 report on how the loans it guarantees have been performing. Note that the FHA insured $73 billion in single-family mortgages in FY 2006, $84 billion in FY 2007, $205 billion in FY 2008, and $365 billion in FY 2009 (see Table 1 here).

[Chart: FHA loan performance, from the FHA’s 2015 report]
(Note that the decision to separate fiscal year 2009 into first half (October 2008 to March 2009) and second half (April 2009 to September 2009) appears to be a genuine effort to show how different the two cohorts are, and as far as I can tell should not be interpreted as questionable data manipulation.)

3. Expansion of the loans eligible for securitization by Fannie Mae and Freddie Mac, by increasing the conforming loan limit to $729,750 in high-cost areas (which lasted until October 1, 2011).

  • The Special Inspector General for the Troubled Asset Relief Program concluded that the government had adopted an explicit policy of supporting housing market prices (SIGTARP Report January 2010, p. 126). These programs stopped the national decline in house prices (the yellow line in the chart below) for the year 2009 and slowed the drop in house prices thereafter. As a result, the national bottom in housing prices wasn’t reached until January 2012. This meant that the massive 2009 government-guaranteed refinancing of mortgages was deliberately executed at higher-than-market prices.

[Chart: Case-Shiller house price indices, from Calculated Risk]

Before going on to Step 3, let’s pause for a moment to get a good picture of what is going on here. By late 2008, it had become abundantly clear that Private Label Securitization was a shitshow. Tanta, who had 20-odd years of mortgage industry experience and spent the months before her death blogging at Calculated Risk, put it well in a July 2007 blog post:

“we as an industry have known how to prevent a lot of fraud for a long time; we just didn’t do it. It costs too much, and too many bonuses were at stake to carve out the percent of loan production it would take to get a handle on fraud. The only thing that got anybody’s attention, finally, was a flood of repurchase demands on radioactive EPD [early payment default, i.e. 3 missed payments in first 6 months of loan] loans and other violations of reps and warranties. If [you] want[] to accomplish something, I’d suggest [you] … start slapping some issuers around on their pre-purchase or pre-securitization quality control and due diligence.”

So what was going on in 2007 and 2008 is that the market was recognizing that the “Non-Agency MBS” in the chart below was going to perform very badly, because it was so full of loans that should never have been made.

[Chart: the collapse of private label MBS (PLMBS) issuance]

In many cases the originators who were theoretically on the hook for the reps and warranties they had made when they sold the loans to Wall Street had been driven into bankruptcy by – you guessed it – claims based on their reps and warranties. The bag they had in theory been holding had most definitely been passed on to someone else, but it wasn’t clear yet to whom. The obvious candidate was the issuers who had packaged these loans – with utterly inadequate due diligence – into securities for investors to buy. The catch was that the issuers were all the big banks: Bank of America, JP Morgan Chase, Citibank, Goldman Sachs, etc.

And we had financial regulators who were like deer in the headlights, transfixed by terror, when they heard that one of the big retail banks might be in danger. These regulators threw themselves headlong into the project of rescuing the big banks from their failure to perform the due diligence necessary to issue mortgage-backed securities according to the terms in their securities documentation. While I suspect that Ben Bernanke never quite wrapped his head around these issues (he had plenty of other things to worry about), it seems fairly clear that Hank Paulson and Timothy Geithner worked consciously to “save the financial system” by hiving loans that should never have been made off onto the Government. Geithner, in particular, would almost certainly claim that this was the right thing to do in the interests of financial stability.[1]

Thus, the mortgage sector bailout was designed so that the mortgages underlying the private label mortgage backed securities (PLMBS), the bulk of which had been made at the peak of the bubble, would be refinanced out of the PLMBS securities as quickly as possible. The private sector had no interest in financing such an endeavor itself, so the only way to do it was through the government sponsored entities.

By engineering a drop in the 30 year mortgage rate (the announcement of QE1 was apparently enough to do this), an incentive was created for mortgagors to refinance their loans. The same Fed program ensured that Fannie Mae, Freddie Mac, and Ginnie Mae would have no problem getting the funds to buy the refinanced mortgages. There was only one catch: a nontrivial segment of the PLMBS mortgages were not of a quality that could be sold to Fannie and Freddie – and the same would be true of any refis of those mortgages. That’s where the FHA comes in: by guaranteeing 30% of all mortgages in the crucial years 2009-2010, the FHA provided a way for some of the more dubious mortgages in the PLMBS to be refinanced and paid in full. FHA loans are typically securitized by Ginnie Mae and may also be held on a bank’s balance sheet. The PLMBS loans that were paid in full – due solely to the presence of government guarantees in the mortgage market – almost certainly played a huge role in protecting the returns on the PLMBS, in reducing the losses to investors, and in reducing the liability of the issuers for their due diligence failures.
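
To see the refi arithmetic concretely, here is a minimal sketch in Python (the loan balance, rates, and house value are hypothetical numbers, not data): the borrower’s payment falls with the engineered drop in rates, while the PLMBS investor is paid off at par regardless of what the collateral is worth.

```python
# Minimal sketch of the refi mechanics described above (hypothetical numbers).

def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage annuity payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

old_balance = 300_000                 # hypothetical bubble-era loan balance
old_rate, new_rate = 0.065, 0.05      # hypothetical pre- and post-QE1 rates
house_value_now = 220_000             # collateral now worth far less than the loan

# The refi must pay off the old loan in full, so the PLMBS investor
# recovers par no matter what the collateral is worth.
print(f"old payment: {monthly_payment(old_balance, old_rate):,.0f}")   # ~1,896
print(f"new payment: {monthly_payment(old_balance, new_rate):,.0f}")   # ~1,610
print(f"PLMBS recovers {old_balance:,} on a house now worth {house_value_now:,}")
```

The lower payment gives the borrower a reason to refinance, and the payoff at par is precisely what protects the PLMBS investors and issuers.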

The key point to remember here is that there was nothing “market” about this whole process. The Fed was both providing the funds and driving down the interest rates, while a government backstop for the credit risk on the loans was provided by the GSEs. Multiple experts described the housing finance market as having been “nationalized” or put “on government life support” in this period.

Because of the degree to which the government took over the mortgage market in these crucial years, it becomes a little silly to focus on the fact that no money was lost (in aggregate) due to the government’s support of PLMBS and related assets. (As far as I can tell the costs included in bailout figures never include the losses that the GSEs incurred on the loans guaranteed from 2008Q4 to 2010Q4.) Overall it can hardly be a surprise that the government made money on the officially recognized bailout loans, given that the government also took steps to make sure that many of the underlying assets were paid off in full.

At this point you may be saying: Well okay, but given that the Fed and Treasury were successful in returning the banks to health and the GSEs are all doing okay now too, was there really any harm done by a few years of de facto nationalization of the housing market?

This is where Step 3 comes in. The whole scheme only works because of Step 3, and Step 3 is what has most of those who understand what happened absolutely smoking mad about the bailouts. The key to the PLMBS performing well was that the mortgages in them had to be paid off in full. In order for the existing mortgage to be paid in full, the refi that pays it off will have to be for the same amount as the existing mortgage or a little more.

STEP 3: No principal reduction for mortgage holders. It was essential to make sure that people who hold mortgages don’t have access to a program that allows principal to be reduced. Effectively, since the banks can’t be the bagholders because of the terror of financial instability and the government can’t just be handed the bag because that has very bad visuals, the public had to be made the bagholders. The only way to do this was to make sure the public was not cut any breaks.

1. Prevent cramdown legislation from being passed
Cramdown is how bankruptcy law treats collateral that has fallen in value below the value of the loan. If the debtor declares bankruptcy, the lender only has a security interest up to the value of the collateral, and the remainder of the loan is not treated as collateralized debt. An exception was written into the 1978 Bankruptcy Code excluding mortgages on primary residences from cramdown. (The claim at the time was that this would be better for borrowers. LOLWT[2].) In short, the bankruptcy code takes the position that finding a good solution to someone’s inability to pay debt requires recognizing the economic reality of the situation in virtually every case except for mortgages on primary residences.

Forcing lenders to come to the table on the basis of economic reality is something that every collateralized borrower can do – except for the little guy whose only collateralized loan is on his or her primary residence. Fixing the cramdown inequity was one of President Obama’s promises before he was elected. But lo and behold, Treasury staffers in his administration “stressed the effects of cramdown on the nation’s biggest banks, which were still fragile. The banks’ books could take a beating if too many consumers [were] lured into bankruptcy by cramdown” (Kiel & Pierce 2011). Treasury’s position on this should be read: we need to bail out the banks, so we can’t allow the economic reality of the situation to affect the cut that the banks get.
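
For concreteness, here is the cramdown split in a minimal Python sketch (the numbers are hypothetical): bankruptcy law divides a collateralized claim at the value of the collateral, and the primary-residence exception denies homeowners exactly this division.

```python
# Minimal sketch of cramdown arithmetic (hypothetical numbers).

def cramdown(loan_balance, collateral_value):
    """Split a collateralized claim at the value of the collateral."""
    secured = min(loan_balance, collateral_value)
    unsecured = loan_balance - secured
    return secured, unsecured

# A bubble-era mortgage on a house that has lost a third of its value:
secured, unsecured = cramdown(300_000, 200_000)
print(f"secured claim:   {secured:,}")    # 200,000 survives as mortgage debt
print(f"unsecured claim: {unsecured:,}")  # 100,000 treated like other unsecured debt

# Under the primary-residence exception, this split is available for a yacht
# or an investment property, but not for the mortgage on the debtor's home.
```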

2. Failure to establish an effective principal reduction program until 2012
In July 2010 SIGTARP called Treasury out for its failure to establish an effective principal reduction program as part of its mortgage modification program (SIGTARP Report July 2010, pp. 174ff). It was not until May 2011, however, that Treasury had been sufficiently shamed over the lack of principal reductions to begin reporting data on the Principal Reduction Alternative (PRA). By May 2011 fewer than 5,000 permanent modifications that included principal reduction had been started, less than 1% of the permanent modifications started under the HAMP program (MHA Report May 2011).

This delay was important, because if borrowers had been offered modifications with principal reduction in the crucial years 2009-10, it undoubtedly would have affected decisions to refinance loans that had been made at the peak of the bubble. By May 2012 the number of permanent modifications started with PRA had jumped to 83,362, over 8% of all permanent modifications started (MHA Report May 2012). More recent reports indicate that ultimately 17% of all permanent modifications started included principal reduction (MHA Report 2017Q4, p. 4).

3. Failure of the FHA short refinance program. In August 2010 the FHA established a short refinance program which imposed strict rules on lenders, including 10% first-lien principal writedowns. A year later the program had helped only 246 borrowers, in part because Fannie and Freddie refused to participate, and the program was slated to be closed (Prior 2011).

So what’s my conclusion? Everybody who wants to tout the success of the bailout needs to tackle the reality of the bailout’s structure. There was a housing bubble. Somebody was going to have to absorb the losses that are created when lending takes place against overpriced assets.

Because in the name of financial stability the Fed and Treasury decided that banks weren’t going to bear any of the losses on the origination and securitization of bad mortgages, they had to find a way to put the tab to the government and to the public.

It was put to the government by putting the mortgage market on government life support from late 2008 to 2010, so that people would refinance out of the bad mortgages in PLMBS securitizations into FHA loans and into GSE MBS.

It was put to the public by making sure that their mortgages were not written down in value, even though the value of the house being used as collateral had collapsed. This means that the housing price bubble of 2006-07 is still with us today. It is being paid off by homeowners who are still paying these mortgages, who can’t spend that money on consumption, and who are scheduled to keep paying off bubble-level housing prices right up until 2050.

[Chart: US household savings vs. predicted savings, from Deutsche Bank via Tracy Alloway: https://twitter.com/tracyalloway/status/1040391962090590209]

So when you see a chart like the one just above, which shows US consumers saving far more than predicted, you should recall that paying down mortgage principal counts as savings and a lightbulb should go off in your head. You should be thinking when you see this chart: “Aha. Look at all the US consumers who are still paying for the housing bubble. The 2008 crisis should have been handled differently.”

P.S. While we’re talking about anger and crisis housing policy let me offer two notes on HAMP modifications.

  1. Look at this chart from “Charting the Financial Crisis” by Brookings & Yale SOM (part of a project advised by Tim Geithner)

[Chart: HAMP modifications by count]

They very carefully report the number of borrowers helped, but not the principal value of the mortgages before modification and the principal value after modification. Most HAMP modifications included significant increases in the principal owed, as not only the interest accrued during trial modifications but also a variety of fees that borrowers rarely understood or reviewed were capitalized into the loans.

  2. In general the HAMP program is performing execrably, as might have been expected given its design. (See here for details.) After 60 months the program increases the payments that were carefully set at the maximum the borrower could afford when the modification was made. The program may continue to increase payments each year for two to three years, that is, at 72 and 84 months. In short, the program was designed to give borrowers as little as possible: borrowers get five years of respite in payments without reducing the present value of the modified loan on bank balance sheets. To avoid hitting bank balance sheets, payments have to go up for the remaining 35 years of the loan. On pages 7 and 9 of the 2017Q4 MHA Report, the data on performance is very carefully presented only up to 60 months. One has to read the appendices – specifically Appendix 6 – to learn that for each vintage with 84 months of data at least 50% (and up to 65%) of loans have become delinquent.
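
To see what this step-up design means in dollar terms, here is a rough sketch in Python. The shape of the rate path follows the design described above (a reduced rate for five years, then annual increases until a cap); the 2% floor, the 5% cap, and the loan balance are illustrative assumptions rather than program data.

```python
# Rough sketch of a HAMP-style payment step-up (hypothetical parameters).

def hamp_rate(year, modified_rate=0.02, cap_rate=0.05):
    """Interest rate in each year of the modification."""
    if year <= 5:
        return modified_rate                                  # years 1-5
    return min(modified_rate + 0.01 * (year - 5), cap_rate)   # +1pt per year

def payment(balance, rate, years_left=35):
    """Standard annuity payment on the remaining term."""
    r, n = rate / 12, years_left * 12
    return balance * r / (1 - (1 + r) ** -n)

balance = 250_000   # hypothetical post-modification balance (fees capitalized in)
for year in (5, 6, 7, 8):
    rate = hamp_rate(year)
    # Amortization of the balance is ignored to keep the comparison simple.
    print(f"year {year}: rate {rate:.0%}, payment {payment(balance, rate):,.0f}")
# Output: the payment climbs from ~828 at the 2% rate to ~1,262 once the
# rate caps out, a rise of roughly 50% between months 60 and 96.
```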

[1] I have a draft paper in which I draw the analogy between Geithner and a couple of early 19th c. Bank of England directors who had been similarly traumatized by their early experiences dealing with financial crises and also advocated throwing money at them no matter what. The difference is that these two directors were lambasted by their contemporaries including Ricardo, and their claims have gone down in history as “answers that have become almost classical by their nonsense” (Bagehot 1873, p. 86).

[2] LOLWT = Laugh out loud with tears.


Brokers, dealers and the regulation of markets: Applying finreg to the giant tech platforms

Frank Pasquale (h/t Steve Waldman) offers an interesting approach to dealing with the giant tech firms’ privileged access to data: he contrasts a Jeffersonian approach (just break ’em up) with a Hamiltonian approach (regulate them as natural monopolies). Although Pasquale favors the Hamiltonian approach, he opens his essay by discussing Hayekian prices, which simultaneously aggregate distributed knowledge about the object sold and summarize it, reflecting the essential information that individuals trading in the market need to know. While gigantic firms are an alternate way of aggregating data, there is little reason to believe that they could possibly produce the benefits of Hayekian prices, the whole point of which is to publicize for each good a specific and extremely important summary statistic: the competitive price.

Pasquale’s framing brings to mind an interesting parallel with the history of financial markets. Financial markets have for centuries been centralized in stock, bond, and commodities exchanges, because it was widely understood that price discovery works best when everyone trades at a single location. The single location, by drawing almost all market activity, offers both “liquidity” and the best prices. The dealers on these markets have always been recognized as having a privileged position because of their superior access to information about what’s going on in the market.

One way to understand Google, Amazon, and Facebook is that they are acting as dealers in a broader economic marketplace: with their superior knowledge about supply and demand, they have an ability to extract gains that is perfectly analogous to that of dealers in financial markets.

Given this framing, it’s worth revisiting one of the most effective ways of regulating financial markets: a simple but strict application of a branch of common law, the law of agency, which governed the regulation of the London Stock Exchange from the mid-1800s through the 1986 “Big Bang.” It was remarkably effective at both controlling conflicts of interest and producing stable prices, but after World War II it was eclipsed by the conflict-of-interest-dominated U.S. markets. In the “Big Bang” British markets embraced the conflicted financial markets model, posing a regulatory challenge that was recognized at the time (see Christopher McMahon 1985) but never really addressed.

The basic principles of traditional common law market regulation are as follows. When a consumer seeks to trade in a market, the consumer is presumed to be uninformed and to need the help of an agent. Thus, access to the market is through agents, called brokers. Because a broker is a consumer’s agent, the broker cannot trade directly with the consumer: trading directly with the consumer would mean that the broker’s interests are directly adverse to those of the consumer, and this conflict of interest is viewed by the law as interfering with the broker’s ability to act as an agent. (Such conflicts can be waived by the consumer, but in early 20th-century British financial markets they generally were not.)

A broker’s job is to help the consumer find the best terms offered by a dealer. Because dealers buy and sell, they are prohibited from acting as the agents of the consumers – and in general prohibited from interacting with them directly at all. Brokers force dealers to offer their clients good deals by demanding two-sided quotes, revealing whether their client’s order is a buy or a sell only after learning both the bid and the ask. Brokers also typically get quotes from several dealers to make sure that the prices on offer are competitive.

Brokers and dealers are strictly prohibited from belonging to the same firm or otherwise working in concert. The validity of the price setting mechanism is based on the bright line drawn between the different functions of brokers and of dealers.
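
A toy Python model may help fix ideas (the dealer names, fair value, and spreads are invented): because each dealer must quote both a bid and an ask before learning the order’s direction, it cannot skew the price against the client, and the broker’s shopping across dealers keeps the quotes competitive.

```python
import random

class Dealer:
    """A dealer who must quote both sides before learning the order's direction."""
    def __init__(self, name, fair_value):
        self.name = name
        self.fair = fair_value

    def two_sided_quote(self):
        # Not knowing whether the client buys or sells, the dealer cannot
        # skew the quote against the client; it can only widen its spread.
        half_spread = random.uniform(0.05, 0.15)
        return self.fair - half_spread, self.fair + half_spread   # (bid, ask)

def broker_executes(side, dealers):
    """The broker, as the client's agent, collects two-sided quotes from
    several dealers and only then reveals the direction of the order."""
    quotes = {d.name: d.two_sided_quote() for d in dealers}
    if side == "buy":
        name, (_, ask) = min(quotes.items(), key=lambda kv: kv[1][1])
        return name, ask      # lowest ask across competing dealers
    name, (bid, _) = max(quotes.items(), key=lambda kv: kv[1][0])
    return name, bid          # highest bid across competing dealers

dealers = [Dealer("D1", 100.0), Dealer("D2", 100.0), Dealer("D3", 100.0)]
print(broker_executes("buy", dealers))
```

Note that the dealer with the tightest spread wins the order, which is exactly the competitive pressure the bright line between brokers and dealers is meant to preserve.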

Note that this system was never used in the U.S., where the law of agency with respect to financial markets was interpreted very differently, and where financial markets were beset by conflicts of interest from their earliest origins. Thus, it was in the U.S. that the fixed fees paid to brokers were first criticized as anti-competitive and eventually eliminated. In Britain the elimination of fixed fees reduced the costs faced by large traders, but not those faced by small traders (Sissoko 2017). Because the damage fell on the quality of the price-setting mechanism, the actual costs to traders of eliminating the structured broker-dealer interaction were hidden. We now have markets beset by “flash crashes,” “whales,” cancelled orders, two-tier data services, etc. In short, our market structure, instead of being designed to control information asymmetry, is extremely permissive of the exploitation of information asymmetry.

So what lessons can we draw from the structured broker-dealer interaction model of regulating financial markets? Maybe we should think about regulating Google, Amazon, and Facebook so that they have to choose between being, in legal terms, the agents of those whose data they collect, and being sellers of products (or agents of those sellers) with no access to buyers’ data.

In short, access to customer data should be tied to agency obligations with respect to that data. Firms with access to such data can provide services that help consumers negotiate a good deal with the sellers of products they are interested in, but their revenue should come solely from the fees they charge consumers on their purchases. They should not be able either to act as sellers themselves or to make any side deals with sellers.

This is the best way of protecting a Hayekian price formation process: it ensures that the information that moves prices is the flow of buy and sell orders generated as a dealer makes two-sided markets and chooses a price point, while individuals concurrently make their decisions in light of the prices they face. Such competitive pricing has the benefit of ensuring that prices are informative and useful for coordinating economic decision making.

When prices are not set by dealers who are forced to make two-sided markets and who are given no information about the nature of the trader, but instead prices are set by hyper-informed market participants, prices stop having the meaning attributed to them by standard economic models. In fact, given asymmetric information trade itself can easily degenerate away from the win-win ideal of economic models into a means of extracting value from the uninformed, as has been demonstrated time and again both in theory and in practice.

Pasquale’s claim that regulators need to permit “good” trade on asymmetric information (that which “actually helps solve real-world problems”) and prevent “bad” trade on asymmetric information (that which constitutes “the mere accumulation of bargaining power and leverage”) seems fanciful. How is any regulator to have the omniscience to draw these distinctions? Or does the “mere” in the latter case indicate that the good case is to be presumed by default?

Overall, it’s hard to imagine a means of regulating informational behemoths like Google, Amazon and Facebook that favors Hayekian prices without also destroying their current business models entirely. Even if the Hamiltonian path of regulating the beasts is chosen, the economics of information would direct regulators to attach agency obligations to the collection of consumer data, and with those obligations to prevent the monetization of that data except by means of fees charged to consumers for helping them find the best prices for their purchases.

Access to Credit is the Key to a Win-Win Economy

Matt Klein directs our attention to an exchange between Jason Furman and Dani Rodrik that took place at the “Rethinking Macroeconomic Policy” Conference. Both argued that, while economists tend to focus on efficiency gains or “growing the pie,” most policy proposals have a small or tiny efficiency effect and a much, much larger distributional effect. Matt Klein points out that in a world like this, political competition for resources can get ugly fast.

I would like to propose that one of the reasons we are in this situation is that we have rolled back too much of a centuries-old legal structure that used to promote fairness — and therefore efficiency — in the financial sector.

Adam Tooze discusses 19th century macro in a follow-up to Klein’s post:

Right the way back to the birth of modern macroeconomics in the late 19th century, the promise of productivist national economic policy was that one could suspend debate about distribution in favor of “growing the pie”.

In Britain, where this approach had its origins, access to bank credit was extremely widespread (at least for those with Y chromosomes). While the debt was typically short-term, it was also the case that typically even as one bill was paid off, another was originated. Such debt wasn’t just generally available; it was usually available at rates of 5% per annum or less. No collateral was required to access the system of bank credit, though newcomers to the system typically had to have one or two people vouch for them.

I’ve just completed a paper that argues that this kind of bank credit is essential to the efficiency of the economy. While it’s true that in the US discrimination has long prevented certain groups from having equal access to financial services (and the consequences of this discrimination show up in current wealth statistics), it seems to me that one of the disparities that has become more exaggerated across classes over the past few decades is access to lines of credit.

The facts are harder to establish than they should be, because as far as I can tell the collection of business lending data in the bank call reports has never carefully distinguished between loans secured by collateral other than real estate and loans that are unsecured. (Please let me know if I’m wrong and there is somewhere to find this data.) In the early years of the 20th century, the “commercial and industrial loans” category would, I believe, have comprised mostly unsecured loans. Today not only has the C&I category shrunk as a fraction of total bank loans, but given current bank practices it seems likely that the fraction of unsecured loans within the category has also shrunk.

This is just a long-form way of stating that the availability of cheap unsecured credit to small and medium-sized businesses appears to have declined significantly from what it was back when early economists were arguing that we could focus on efficiency and not distribution. Today small business credit is far more collateral-dependent than it was in the past – with the exception, of course, of credit card debt. Credit cards, however, charge more than 19% per annum for a three-month loan, which is about a 300% markup on what an unsecured business borrower would have been charged in the 19th century. To the degree that the credit that is easily available today is collateralized, it will obviously favor the wealthy and aggravate distributional issues.

In my paper the banking system makes it possible for allocative efficiency to be achieved, because everybody has access to credit on the same terms. As I explained in an earlier post, in an economy with monetary frictions there is no good substitute for credit. For this reason it seems obvious that an economy with unequal access to short-term bank credit will result in allocations that are bounded away from an efficient allocation. In short, in the models with monetary frictions that I’m used to working with, equal access to credit is a prerequisite for efficiency.

If we want to return to a world where economics is win-win, we need a thorough restructuring of the financial sector, so that access to credit is much more equal than it is today.

Discount Markets, Liquidity, and Structural Reform

Bengt Holmstrom has a paper explaining the “diametrically opposite” foundations of money markets and capital markets.* This dichotomy is also a foundation of traditional banking theory, and of the traditional functional separation that was maintained in the U.S. and Britain between money and capital markets.

Holmstrom explains that “the purpose of money markets is to provide liquidity,” whereas price discovery is an important function of capital markets. In a paper I extend this view a step further: money markets don’t just provide liquidity but a special form of price stable liquidity that is founded on trade in safe short-term assets; by contrast capital markets provide market liquidity which promotes price discovery, not price stability.

A century ago in Britain privately issued money market assets were, like capital market assets, actively traded on secondary markets. The two types of assets traded, however, on completely different markets with completely different structures that reflected the fact that money market assets needed to be “safe.”

To understand why the markets had different structures, consider this question: how does one ensure that the safety of the money market is not undermined by asymmetric information? More specifically, when the owners of money market assets have information that the assets are likely to default, how does one ensure that they do not use the market to offload those assets, adversely affecting the safety of the market itself and therefore its efficacy as a source of price stable liquidity? The answer is to structure the market as a discount market.

In a discount market, every seller offers a guarantee that the asset sold will pay in full. (You do this yourself when you endorse a check, signing its value over to a bank while at the same time indemnifying the bank against the possibility that the check is returned unpaid.) This structure was one of the foundations upon which the safety of the London money market was built. The structure ensures that the owner of a dubious asset has no incentive to attempt to sell it, and in fact the owner is likely to avoid selling it in order to hide from the public the fact that it is exposed to such assets.

From their earliest days it was well understood that discount markets were designed to align the incentives of banks originating money market assets and to promote the safety of the assets on the money market (see van der Wee in Cambridge Economic History of Europe, 1977). Any bank that originates or owns a money market asset can never eliminate its exposure to that asset until it is paid in full. For this reason a discount market is specifically designed to address problems of liquidity only: a bank that is illiquid can get relief by selling its money market assets, but if it has originated so many bad assets that it is insolvent, the money market will do nothing to help.
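
A toy sketch may make the incentive alignment concrete (the bank names and face value are invented): every sale adds the seller’s endorsement, so the chain of guarantees only grows, and selling a dubious bill never sheds the exposure.

```python
class Bill:
    """A money market bill traded on a discount market (hypothetical names)."""
    def __init__(self, face_value, acceptor):
        self.face = face_value
        self.liable = [acceptor]      # the acceptor is liable from the start

    def sell(self, seller):
        """The seller endorses the bill over to the buyer, guaranteeing
        that it will pay in full; the seller's exposure is never shed."""
        if seller not in self.liable:
            self.liable.append(seller)

bill = Bill(10_000, "Bank A")   # Bank A accepts (originates) the bill
bill.sell("Bank A")             # A sells to B: A endorses (already liable)
bill.sell("Bank B")             # B resells to C: B endorses and stays liable
print(f"on default, the holder can claim {bill.face:,} from any of {bill.liable}")
```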

Contrast the structure of a discount market with that of an open market. On an open market, the seller is able to eliminate its exposure to the risks of the asset. This has the effect of attracting sellers (and buyers) with asymmetric information, and as a result it both increases the riskiness of the market and creates the incentives that make the prices of the risky assets that trade on an open market informative. Thus, it is because price discovery is important to capital markets that they are structured as open markets. Capital markets can only offer market liquidity – liquidity with price discovery – rather than the price stable liquidity of the money market. On the other hand, an entity with asymmetric information about the assets that it holds can use the open market structure of capital markets to improve its solvency as well as its liquidity position.

Historically it appears that in order for a money market to have active secondary markets, it must be structured as a discount market. (Does anyone have counterexamples?) That is, it appears that when the only option for secondary trading of money market instruments is an open market, secondary markets in such instruments will be moribund. This implies not only that the absence of incentives to exploit asymmetric information plays an important role in the liquidity available on money markets (cf. Holmstrom), but also that price stable liquidity is an important benefit of the discount market structure.

Both discount markets and open markets can be adversely affected by extraordinary liquidity events. But only one of the two markets is premised on safe assets and price stable liquidity. Thus, the lender of last resort role of the central bank developed in Britain to support the money (discount) market only. (In fact, I would argue that the recognized need for a provider of liquidity support to the discount market explains why the Bank of England was structured as it was when it was founded, but that goes beyond the scope of this post. See Bowen, Bank of England during the Long 18th c.) One consequence of the fact that the central bank supported only assets that traded on a discount market is that it was able to support the liquidity of the banks, without also supporting their solvency.

Given the common claim that one hears today that it is unreasonable to ask a central bank to distinguish illiquidity from insolvency in a crisis, perhaps it is time to revisit the discount market as a useful market structure, since acting through such a market makes it easier for a central bank to provide liquidity support without providing solvency support.

 

*His focus is actually money markets and stock markets, but in my view he draws a distinction between debt and equity that is far less clear in practice than in theory. In a modern financial system unsecured long-term bonds are not meaningful claims on the assets of a firm, because as the firm approaches bankruptcy it is likely to take on more and more secured debt leaving a remnant of assets that is literally unknowable at the time that one buys an unsecured long-term bond.

What Gorton and Holmstrom get right and get wrong

Mark Thoma directs us to David Warsh on Gorton and Holmstrom’s view of the role of banking. I’ve written about this view in several places. My own view of banking is very different, and here is a quick summary of my key points.

The source of Gorton and Holmstrom’s errors: Taking U.S. banking history as a model

In my view Gorton and Holmstrom err by basing their view of what banking is on the pre-Fed U.S. banking system. Nobody argues that the U.S. had a “state-of-the-art” banking system in the late 19th century. In fact, in the late 19th century the U.S. banking system was still recovering from the reputational consequences of the combination of state and bank defaults in the 1840s that had led many Europeans to conclude that American institutions facilitated fraud. By the end of the 19th century, however, the U.S. did have access to European markets, and there is evidence that the U.S. banking system relied heavily on the much more advanced European banking system for liquidity (e.g., the flow of European capital during seasonal fluctuations). Indeed, the crisis of 1907, during which the none-too-respected U.S. banking system was at least partially cut off from the London money market, was so severe that it led to the decision to emulate European banking by establishing the Federal Reserve.

What Gorton and Holmstrom get right: the fundamental difference between money market and capital market liabilities, or as Warsh puts it: “Two fundamentally different financial systems [are] at work in the world”

In particular, it is essential for the debt that circulates on the money market to be price stable or “safe.” This distinguishes money markets from capital markets, where price discovery is essential. Holmstrom writes:

Among economists, the mistake is to apply to money markets the lessons and logic of stock markets. … Stock markets are … aimed at sharing and allocating aggregate risk … [and this] requires a market that is good at price discovery. … [By contrast,] The purpose of money markets is to provide liquidity for individuals and firms. The cheapest way to do so is by … obviat[ing] the need for price discovery.

What Gorton and Holmstrom get wrong:

1.  The historical mechanisms by which the banking system created “safe” money market assets.

Holmstrom writes: “Opacity is a natural feature of money markets and can in some instances enhance liquidity.” This is the basic thesis of Gorton and Holmstrom’s work.

A study of the early 20th century London money market indicates, however, that the best way to create safe money market assets is to (i) offset the implications of “opacity” by aligning incentives: any bank originating or selling a money market asset is liable for its full value, and (ii) establish a central bank that (a) has the capacity to expand liquidity and thereby prevent a crisis of confidence from causing a shift to a “bad” equilibrium, and (b) controls the assets that are traded on the money market by (1) establishing a policy of providing central bank liquidity only against assets guaranteed by at least two banks, and (2) withdrawing support from assets guaranteed by low-quality originators. (ii)(b) plays a crucial role in making the money market safe: no bank can discount its own paper at the central bank, so it has to hold the paper of other banks; at the same time, no bank wants to hold paper that the central bank will reject. Thus, the London money market was designed to ensure that the banks police each other – and there is no American-style problem of competition causing the origination practices of banks to deteriorate.
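
The rules in (ii)(b) are mechanical enough to write down. Here is a toy encoding in Python (the function and data layout are mine, purely for illustration):

```python
def eligible_for_discount(bill, presenter, low_quality_names):
    """Toy version of rules (ii)(b)(1)-(2): would the central bank
    provide liquidity against this bill?"""
    guarantors = set(bill["guarantors"])
    # (b)(1): at least two banks must guarantee the paper, and no bank
    # can discount its own paper at the central bank.
    if len(guarantors) < 2 or bill["originator"] == presenter:
        return False
    # (b)(2): support is withdrawn from paper carrying low-quality names.
    return guarantors.isdisjoint(low_quality_names)

# Hypothetical example: a bill originated by Bank A and also guaranteed by
# Bank B is eligible when Bank C presents it, but not when Bank A does.
bill = {"originator": "Bank A", "guarantors": {"Bank A", "Bank B"}}
print(eligible_for_discount(bill, "Bank C", {"Bank Z"}))   # True
print(eligible_for_discount(bill, "Bank A", {"Bank Z"}))   # False
```

The point of the encoding is simply that a bank’s paper is only useful to it if other banks are willing to guarantee it, which is what makes the banks police each other.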

The Gorton-Holmstrom approach is based on the historical U.S. banking system and sometimes assumes that deterioration of origination quality is inevitable – it is this deterioration that is “fixed” by financial crises, which have the effect of publicizing information and thereby resetting the financial system. In short, by showing us how a banking system can function in the presence of both opacity and misaligned incentives, Gorton and Holmstrom show us how a low-quality banking system – like that of the late 19th century U.S., which could only create opaque (not safe) assets – can be better than no banking system.

Surely, however, what we want to understand is how to have a high-quality banking system. The kind of system represented by the London market is ruled out by assumption in the Gorton-Holmstrom framework, which focuses on collateralized rather than unsecured debt. An alternative model for high-quality banking may be given by the 1930s reforms in the U.S., which improved the origination practices of U.S. banks and – temporarily at least – stopped the continuous lurching of the U.S. banking system from one crisis to another that is implied by opaque (rather than safe) money market assets.

2. Gorton and Holmstrom err by focusing on collateral rather than on overlapping guarantees.

Holmstrom writes: “Trading in debt that is sufficiently over-collateralised is a cheap way to avoid adverse selection.” His error, however, is to use both language and a model that emphasize collateral in the literal sense. The best form of “over-collateralization” for a $10,000 privately-issued bill is to add to the borrower’s liability the personal guarantee of Jamie Dimon – or even better, of both Jamie Dimon and Warren Buffett. This is the principle on which the London money market was built (and because both extended liability for bank shares and management ownership of shares were the norm until the 1950s in Britain, personal liability played a non-negligible role in the way the banking system worked). This is rather obviously an excellent mechanism for ensuring that money market debt is “safe.”

The fact that it may seem outlandish in 21st century America to require that a bank manager have some of his or her personal wealth at stake whenever a money market asset is originated is really just evidence of the degree to which origination practices have deteriorated in the U.S.

Note also that there is no reason to believe that the high-quality money market I am describing will result in restricted credit. Nothing prevents banks from making the same loans they do now; the only issue is whether the loans are suitable for trade on the money market. Given that our current money market is very heavily reliant on government (including agency) assets and that these would continue to be suitable money market assets, there is little reason to believe that the high-quality money market I am describing will offer less liquidity than our current money market. On the other hand, it will offer less liquidity than, say, the 2006 money market – but I would argue that this characteristic is a plus, not a minus.

3. Holmstrom errs by focusing on debt vs. equity, rather than money markets vs. capital markets

Holmstrom claims that: “Equity is information-sensitive while debt is not.” He clearly was not holding GM bonds in the first decade of the current century. A more sensible statement (which is also consistent with the general theme of his essay) is that capital market assets including both equity and long-term debt are information sensitive, whereas it is desirable for money market assets not to be informationally sensitive.

Conclusion

In short, I argue that in a well-structured banking system money market assets are informationally insensitive because they are safe. For institutionally-challenged countries, a second-best banking system may well be that presented by Gorton and Holmstrom, where money market assets are “safe” – at least temporarily – because they are informationally insensitive.

In my view, however, we should establish that a first-best banking system is unattainable, before settling on the second-best solution proposed by Gorton and Holmstrom.

Liquidity provision and total informational “efficiency” are incompatible goals

Matt Levine writes:

Prices very quickly reflect information, specifically the information that there are big informed buyers in the market.

That’s good! That’s good. It’s good for markets to be efficient. It’s good for prices to reflect information.

Let’s take this argument to the limit. Every order contains some small amount of information. Therefore every order should move the market, as orders do in building-block models of market microstructure – and of course big orders should move the market even more than small orders. Matt Levine is claiming that this is the definition of efficiency.
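
To see what that building-block logic looks like, here is the canonical linear price-impact rule in a few lines of Python (a Kyle-style market maker; the impact coefficient and order sizes are invented):

```python
# Kyle-style linear price impact: the market maker moves the price in
# proportion to net order flow, because any order might be informed.

def price_after_orders(p0, order_flow, lam=0.1):
    """Apply p <- p + lam * q for each signed order q (q > 0 is a buy)."""
    p = p0
    for q in order_flow:
        p += lam * q   # every order moves the price; big orders move it more
    return p

# A margin-call liquidation (pure selling, no fundamental news) still
# drives the price down, which is exactly the problem raised below.
print(price_after_orders(100.0, [-5, -5, -10]))   # 98.0
```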

But wait: What is the purpose of markets? Do we want them to be informationally efficient about the fundamental value of the assets, or do we want them to be informationally efficient about who needs or wants to buy and sell in the market? These are conflicting goals. When a hedge fund is forced to liquidate by margin calls, those sales contain no information about the fundamental value of the asset. Should prices reflect market phenomena or should they reflect fundamental value? According to Matt Levine they should reflect the market, not the fundamentals.

Matt Levine supports his view by referencing an academic paper that assumes on p. 3 that all orders contain some information about fundamental value – and thus assumes away the problem that some market information has nothing to do with fundamental value. With only a few exceptions, the academic literature supporting the view that trade makes markets informationally efficient assumes (i) that informed traders trade on the basis of fundamental information about the value of the asset and (ii) that informed traders have no opportunity to use their information strategically by delaying its deployment. Almost nobody models the issue of intermediaries trading on the basis of market information. And the whole literature by definition has nothing to say about efficiency in the sense of welfare (i.e., the Pareto criterion), because it assumes that liquidity traders are made strictly worse off by participating in markets.

It has long been recognized that liquidity is one of the most important services provided by secondary markets, if not the most important. Liquidity is the ability to buy or sell an asset in sizable amounts with little or no effect on the price.

Matt Levine’s version of informational efficiency presumes that there is no value to liquidity in markets. Every single order should move the market because there is some probability that it contains information.

I thought the reason that financial markets attract vast amounts of money from the uninformed was that they were carefully structured to provide liquidity and to ensure that the uninformed could get a fair price. Now it’s true that U.S. markets were never designed to be fair – and were undoubtedly described in extremely deprecating terms by London brokers and dealers for decades prior to 1986. But there’s a big difference between arguing that markets don’t provide liquidity as well as they should and arguing, as Matt Levine does, that the provision of liquidity should be sacrificed on the altar of some poorly defined concept of informational efficiency.

If Matt Levine is expressing the views of a large chunk of the financial world, then I guess we were all wrong about the purpose of financial markets: as far as the intermediaries are concerned the purpose of financial markets is to improve the welfare of the intermediaries because they’re the ones with access to information about the market.  Good luck with that over the long run.

Time priority is the key to fair trading

A true national market system would have the following property: there are clearly defined points of entry to the system, so that when an order is placed on a specific exchange, ECN, or ATS, it counts as part of the system. These orders are time-stamped by a perfectly synchronized process. In other words, it doesn’t matter where your point of entry is; the time-stamp on your order will put it in the correct sequence relative to every other order in the system.

Order matching engines are then required to take the time to check that time priority is respected across the national market system as a whole.

This structure would eliminate many of the nefarious aspects of speedy trading, while at the same time allowing high-speed traders to provide liquidity within the constraints of a strict time-priority system. Speedy orders couldn’t step in front of existing orders, because time priority would be violated. Cancellations couldn’t be executed until after the matching engine had swept the market to look for an order preceding the cancellation that required a fill. In short, speedy traders would be forced to take the actual risk of market making, by always being at risk of having their limit orders matched before they can be cancelled.
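
Here is a minimal sketch of such a matching rule in Python (the event format and the example are invented): events from every venue are processed strictly in timestamp order, so a cancel stamped after a marketable order cannot be used to escape the fill.

```python
from dataclasses import dataclass

@dataclass
class Event:
    ts: int            # synchronized timestamp, comparable across all venues
    kind: str          # "limit", "market", or "cancel"
    oid: int = 0       # order id (for limits and cancels)
    side: str = ""     # "buy" or "sell"
    price: float = 0.0

def run(events):
    """Process events in global time priority, regardless of venue of entry."""
    book, fills = {}, []
    for ev in sorted(events, key=lambda e: e.ts):   # the time-priority sweep
        if ev.kind == "limit":
            book[ev.oid] = ev
        elif ev.kind == "cancel":
            # Effective only at its own timestamp: any earlier-stamped
            # marketable order has already been matched against this quote.
            book.pop(ev.oid, None)
        elif ev.kind == "market":
            resting = [o for o in book.values() if o.side != ev.side]
            if resting:   # best price first, then earliest timestamp
                best = min(resting, key=lambda o: ((o.price, o.ts)
                           if ev.side == "buy" else (-o.price, o.ts)))
                fills.append((ev.ts, best.oid, best.price))
                del book[best.oid]
    return fills

# A sell quote posted at t=1 is hit by a market buy at t=2; the cancel at
# t=3 arrives too late in time priority to step in front of the fill.
print(run([Event(1, "limit", oid=1, side="sell", price=10.0),
           Event(2, "market", side="buy"),
           Event(3, "cancel", oid=1)]))   # -> [(2, 1, 10.0)]
```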

Overall, it seems to me that the error the SEC made was in creating a so-called “national market system” without a time-priority rule.

Note: this post was probably influenced by @rajivatbarnard’s tweets about this same topic today.

Update: Clark Gaebel explains very clearly that we don’t have anything remotely resembling a “national market system.” We have a plethora of independent trading venues, and your trade execution is highly dependent on your routing decisions.