The government cannot be responsible for systemic credit risk

Brad DeLong’s latest has me sputtering.  (He seems to have fallen for Caballero’s view that the government has to insure private markets against tail risk by insuring private assets — which I have addressed many times.  Note that DeLong already has a second post up on this topic.)

When there is excess demand for safe, liquid, high-quality financial assets, the rule for which economic policy to pursue – if, that is, you want to avoid a deeper depression – has been well-established since 1825. If the market wants more safe, high-quality, liquid financial assets, give the market what it wants.

The policy “well-established since 1825” to which DeLong refers is the Bank of England’s practice of lending generously into a financial panic.  Unfortunately, the historical episodes he has in mind are not really comparable to our current situation.

(i) Bankers had personal liability for their debts in England — and because the par value of shares was rarely fully paid up by investors this was effectively true for joint stock banks too.  Thus, when England’s financial system was shaken to its core by the failure of Overend and Gurney (roughly comparable to Lehman Brothers), not one creditor lost a dime.  The panic that ended the bank had been triggered by the sale of the Gurneys’ personal assets to honor debts that they had guaranteed before selling shares.  Even so, shareholders were forced to put in additional funds to honor the firm’s remaining debts.  (Ackrill and Hannah, Barclays: the business of banking, p. 46.)

(ii)  The combination of personal liability and capital calls on shareowners was an effective preventative against “moral hazard” for most of the 19th century in England:  the banking system as a whole simply didn’t issue bad debt — though individual banks could make mistakes, or be mismanaged.  Thus 19th-century crises were systemic liquidity crises, not systemic solvency crises.

(iii)  The strongest evidence that the 19th-century crises were liquidity, not solvency, crises is that the central bank actions were almost always completed, such that markets had returned to normal, within three months.  The idea of providing extraordinary liquidity over a period of more than a year as the Federal Reserve has done — or worse the practice of allowing the long-term instability of the financial sector to drive interest rates to zero for years — would have been unthinkable.

(iv) In short, the 19th century environment where borrowers were held to extremely high standards is not comparable to our moral hazard ridden financial system.  For this reason, DeLong’s appeal to ancient truths is misguided.

However, the assertion that really set me off is the following:

Creditworthy governments around the world can create more safe, liquid, high-quality financial assets through a number of channels. They can spend more or tax less and borrow the difference. They can guarantee the debt of private-sector entities, thus transforming now-risky leaden assets back into golden ones. Their central banks can borrow and use the money to buy up some of the flood of risky assets in the market.

Which of these steps should the world’s creditworthy governments take in response to the asset-price movements of May? All of them, because we really are not sure which would be the most effective and efficient at the task of draining excess demand for high-quality assets.

I find it ironic that DeLong compares a government guarantee of private debt to alchemy, because those of us who are concerned about the government over-reaching in trying to support the value of intrinsically flawed private sector assets are concerned precisely because we suspect that it is a project doomed to failure.  While it is certain that governments can spend their resources directly and indirectly reflating private sector asset bubbles, it is far from clear that anything good will come of doing so.

When Bagehot argued that the Bank of England should not be overly discriminating in the bills it purchased, he did so explicitly because he knew that the British financial system was sound and produced only a tiny fraction of bad assets.  Unfortunately the modern US financial system does not meet this standard.  What good can come of putting a government guarantee behind debt that is sure to default?  What reason is there to believe that delaying a default by a year or two or three or four is in the interests of either the debtor or the creditor?

The answer presumably is in the second post:

The hope is that, by Walras’s Law which tells us that excess demands across all markets must sum to zero, that relieving excess demand for AAA assets will produce as a consequence the relief of excess supply and full-employment balance in the markets for goods, services, and labor as well.

The problem with this answer is that it is purely aspirational.  Somehow the government purchase of bad debt is supposed to return the economy to the growth path that existed when the bad debt was being issued and nobody realized how unreasonable the expectation of repayment really was.  That is, instead of recognizing that the boom times were just that and that the economy has no choice, one way or another, but to shift to a more sustainable growth path, the formula promoted by DeLong and Caballero is for government to support asset prices until either (i) growth returns and my view is proven wrong or (ii) the government is no longer capable of supporting asset prices.  My real concern is that neither DeLong nor Caballero takes the possibility of (ii) seriously — they are unwilling to consider the possibility that government does not in fact have the capacity to levitate the whole economy.  They don’t appear to have a plan B for what to do after we implement their recommendations and they fail.  In fact, Caballero explicitly assumes the success of his proposal (my emphasis):

Instead, if the government only provides an explicit insurance against systemic events to the micro-AAA assets produced by the private sector, we could have a significant expansion in the supply of safe assets without the corresponding expansion of public debt. Of course there would [be] a significant expansion of the notional liabilities of the government, but it is nearly certain that the ex-post cost would be much less than in any of the real alternatives.

To which I can only say that I agree with the anonymous blogger at Macroeconomic Resilience that Jon Stewart has offered a remarkably accurate diagnosis of our current problems:

Why is it that whenever something happens that the people who should’ve seen it coming didn’t see coming, it’s blamed on one of these rare, once in a century, perfect storms that for some reason take place every f–king two weeks. I’m beginning to think these are not perfect storms. I’m beginning to think these are regular storms and we have a sh–ty boat.

When perfect storms (i.e. systemic crises) are taking place with ever increasing regularity, maybe it’s not a good idea to sign the government up for systemic risk insurance.

In short, I’d sputter much less if DeLong and Caballero would spend a significant amount of their time addressing the problem of bad debt and how to deal with it.  Remember that the US government has already spent almost two years underwriting mortgage refinances at extremely low rates that do exactly what DeLong and Caballero recommend.  The remaining mortgage borrowers are simply not good credit risks (look at the back-end DTI here).

The evidence all points to the fact that what we have experienced is a systemic solvency crisis, that we are in a balance sheet recession and that the only way out is to rebuild private sector balance sheets.  There are two basic methods to do the latter:  (i) shock therapy, where bankruptcy wipes out debt and transfers assets to creditors and (ii) decades-long stagnation, where private sector debt is rolled over repeatedly — allowing debtors to avoid bankruptcy, pay mostly interest and very slowly pay down their debt.  As long as the debt is not wiped out by bankruptcy (or inflation), an economy in a balance sheet recession simply cannot flourish.  Pretending — or praying — that the economy will flourish despite the debt overhang, which seems to me to be DeLong and Caballero’s plan, amounts to extreme risk-taking of the first order and is not the domain of sound governance.

While I am strongly opposed to the government getting into the business of underwriting private credit risk, I do believe there is an important role for fiscal policy in the current crisis.  Supporting the economy by protecting the jobs of municipal and state employees — especially teachers — and funding programs to help the unemployed, etc., seems to me to be just plain common sense, given the straits in which the economy finds itself today.  Keeping a cap on the federal deficit by allowing children to go uneducated or hungry has never been, and will never be, good policy.

On the other hand, I’m a bit of an apostate when it comes to the role of monetary policy at the current juncture.  I think we will get out of the “liquidity trap” sooner if we give savers who want to put their money in safe assets a small return on their funds.  Instead of keeping interest rates at the zero lower bound, and pushing money managers into unreliable and unworthy risk assets, the Fed should raise rates to 1 or 1.5%.  We should also not ignore the benefits of the bankruptcy process in allowing firms and individuals crushed by debt to get a fresh start while transferring assets to creditors.

At the same time, aggressive fiscal policy needs to be used to offset the more pernicious consequences of these policies.  But the overall task must be to help the economy find its new sustainable growth path.  Because zero rates are most definitely not part of this new path, they really only succeed in creating new distortions.  The Fed should give the economy a sustainable baseline from which to work and allow market forces to sort out the details, with relatively generous support via fiscal policy.


Release the Flash Crash Data

Simon Johnson is right to worry about the SEC being overmatched by existing interests in securities markets.  I think the biggest problem is that the people with access to the data that needs to be analyzed are embedded in the status quo.

So I think the CME — in the interests of proving the liquidity benefits of HFT — should release the tape of the 100,000 or so trades that took place on the June e-mini contract on May 6, 2010 from 2:43 pm to 2:50 pm with the trader names removed, but identified as trader 1, trader 2, trader … etc.  Ideally the tape should include orders that were outstanding at 2:43 and orders that were placed, but not filled.  Of course the tape needs to include precise time tags down to the microsecond, if that is relevant to the sequencing of trades.
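The anonymization step proposed above is mechanically trivial, which is part of the point: privacy is no obstacle to releasing the tape.  Here is a minimal sketch in Python; the field names (`timestamp_us`, `trader`, and so on) are my own assumptions about the tape's layout, not the CME's actual schema.

```python
from itertools import count

def anonymize_tape(trades):
    """Replace trader names with stable pseudonyms ("trader 1", "trader 2", ...)
    while preserving microsecond time tags and the sequencing of trades.
    Field names are hypothetical, not the CME's actual schema."""
    pseudonyms = {}
    counter = count(1)
    anonymized = []
    # Sort by the microsecond time tag so sequencing is preserved exactly.
    for trade in sorted(trades, key=lambda t: t["timestamp_us"]):
        name = trade["trader"]
        if name not in pseudonyms:
            pseudonyms[name] = f"trader {next(counter)}"
        masked = dict(trade)
        masked["trader"] = pseudonyms[name]
        anonymized.append(masked)
    return anonymized
```

Because the mapping is stable, an analyst could still trace any one trader's behavior across the tape — including unfilled orders, if the same mapping is applied to the order book — without ever learning who the trader is.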

Note that the release of trade and order data (without any trader tags) for all exchange-traded products should be considered the norm — after all, these are public exchanges and none of this data should be in any way privileged.  The information on traders is clearly a much more delicate matter — but if high frequency traders wish to defend their role in markets they will have to do so on the basis of data and not on the basis of unverifiable claims.

Failure to release this data for analysis will mean that HFT remains a complete black box and that the public is being asked once again to trust entrenched interests to tell the truth about what goes on in these markets.

Update:  It’s very disturbing to realize that according to an SEC commissioner (via Alphaville) the SEC even now may not have this data set.  Thus, the challenge for the SEC isn’t just one of analyzing millions of trades, but of gathering the raw data to analyze.  Not good.

What does it mean to be a market maker?

The auction rate securities debacle (which was discussed recently here) raises the question of what should be the legal responsibilities of any entity that claims to be a market maker.

It is generally acknowledged (but not necessarily accepted) that market makers exit the market in extreme events.  This can be viewed as understandable given that the market makers themselves are unlikely to understand the cause of extreme fluctuations and therefore will face significant uncertainty as to what constitutes a profitable pricing scheme in these circumstances.  On the other hand, once the market has settled into a new pricing “equilibrium”, the market makers are expected to come back and perform their role in setting prices.  That is, market making involves continuously posting prices, with a few short-lived exceptions.

It’s not clear that a “market maker” can reserve the right to stop making a market and still call itself a market maker.  That is, a crucial element of the definition of a market maker is that it is a firm that enters into an obligation to continuously post prices.  Every participant in the market is an occasional trader.  If the definition of a market maker is allowed to include entities that can decide not to trade in the given market, then there really is no difference between a market maker and an occasional trader — that is, the term market maker has been stripped of its meaning.

Proposal:  Regulators should create formal definitions for the marketing materials for over-the-counter markets.

For example, a firm that claims to be a market maker for an over-the-counter product commits itself to act as a market maker for the life of the product.

The FT gives us a new definition of market-making

According to the FT:  “The conflict between the concept of fiduciary duty and the practicalities of market-making, where a bank such as Goldman brings together a buyer and a seller of a security, appears to have been little understood by Congress.”

I guess every securities broker is actually a market maker — who knew?  In fact, if “security” isn’t a crucial part of that definition, maybe my local realtor is a market maker too.

To be more serious, it’s pretty obvious that the distinctions between “broker”, “dealer” and “market maker” are important in financial markets, and confusing the terms helps market makers avoid their responsibilities.

Volume is no longer a measure of stock market liquidity

Reading the CFTC-SEC preliminary report on the crash of 2:45, I think we can draw at least one very clear conclusion:  In a world with high frequency trading, volume is not a measure of stock market liquidity.

Why?  Because the report states very clearly that volume was at its highest when prices were at their lows.  (See figures 13, 29, 30.)  In other words, the data doesn’t show that the high-frequency traders left the market en masse, only that they dropped their bids by more than 5% in a matter of minutes.

i.  What happened in the e-mini market

In particular, in the e-mini market a 2.5% decline from 1097 to 1069 took place over about one minute in an orderly market with unusually high volumes of trade and at normal bid-offer spreads.  Since the best spread throughout this process was less than 0.025% of the price, the speed of the price adjustments indicates that algorithms were very active in the market over this one-minute time period.

At 2:45:27 the price of the contract fell a full percent in half a second to 1056 and the bid-offer spread gapped to more than half a percent of the value of the contract.  These events triggered a five-second pause in the market — and the price recovered from there.
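For what it's worth, the magnitudes above check out against the round-number prices.  A quick back-of-the-envelope in Python (these are the approximate prices from the report, not exact tick data):

```python
def pct_drop(start, end):
    """Decline from start to end, as a percentage of the starting price."""
    return (start - end) / start * 100

one_minute_drop = pct_drop(1097, 1069)   # the "2.5%" one-minute decline
half_second_drop = pct_drop(1069, 1056)  # the gap down at 2:45:27
tick_as_pct = 0.25 / 1097 * 100          # minimum e-mini tick vs. the price

print(round(one_minute_drop, 2))   # 2.55
print(round(half_second_drop, 2))  # 1.22
print(round(tick_as_pct, 3))       # 0.023 -- the "less than 0.025%" best spread
```

Note that the minimum tick of 0.25 index points is itself the floor on the bid-offer spread, which is why a best spread under 0.025% of the price counts as "normal" here.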

This series of trades successfully removed an existing (and persistent) imbalance in the limit order book that had favored sellers.  Thus, the algorithms were merrily trading away in this environment, until at 2:45:27 something — perhaps the accelerating drop in the market, perhaps the reversal of the limit order imbalance, or perhaps human intervention — led to a gap in the bid-offer spread.

It is interesting that this gap in the bid-offer spread — which would appear to be a sign of a disorderly market — was the harbinger of the recovery of the market.  It really only took one minute of high-volume trade at extreme bid-offer spreads before the e-mini market returned to a state of perfectly normal liquidity — with trading volumes declining to normal as prices rose and the limit order book returning to balance.

In short, the CFTC data makes it appear that the algorithms trading on the e-mini market worked very efficiently to eliminate a persistent limit order book imbalance by driving the price of the contract down until the book was balanced again.

It is interesting that the CME’s analysis of the crash concludes:

However, there is no visible support of the notion that algorithmic trading models deployed in the context of stock index futures traded on CME Group exchanges caused the market fluctuations in question.

Rather, we believe that automated trading contributes to market efficiencies, generally bolsters liquidity and thereby contributes to the price discovery function served by futures markets. This view is supported in the academic literature where one study found that “the move to screen trading strengthens the simultaneity of price discovery in the cash and futures markets and lessens the existence of a lead-lag relationship.”4 Another study concluded that their “results are consistent with the hypothesis that screen trading accelerates the price discovery process.”5

Further, we find no evidence in CME stock index futures of any undue concentration of activity amongst algorithmic or any other types of traders. In fact, activity levels amongst various CME constituencies on May 6th were quite consistent with normally observed patterns.

Trading by the most active of these traders was generally balanced between buy and sell orders during the period from 13:30 to 14:00 (CT). It is difficult to attribute the declining market action to any concentration of high frequency traders. Rather, we suggest that HFTs may have had the effect of providing a buoyant function in the market.

I would be interested in seeing the CME’s second by second analysis of the trades that took place from 2:44 to 2:46.  It is simply hard to believe that humans could have managed a 2.5% one minute drop in an orderly market.  It also seems to me that market participants should do their best to leave their “beliefs” out of their analysis and avoid citing academic studies that are inherently ambiguous in their conclusions.

The last paragraph addresses concerns that are only peripheral to the issue of algorithmic involvement in the crash.  Since high frequency algorithms are designed to buy and sell with extraordinary speed, we would not expect the crash to result in an imbalance between the algorithms’ buy and sell orders — presumably they were buying and selling all the way down and all the way up.  The only point at which one would expect a small concentration of algorithmic activities is in the minute where the market fell from 1097 to 1069 — it seems likely that many human traders would have paused to evaluate the market activity during this period and thus that a disproportionate share of the non-algorithmic trades would be outstanding limit or stop loss orders.  Finally, I don’t think anyone disputes the view that algorithms were healthy participants in the recovery of the market from its low and in this sense buoyed the market.

In short, despite the CME’s argument that there is no support for the view that algorithms caused the fluctuation, the orderly speed of the drop in the market belies this claim.  Until the CME releases a second by second analysis of trades involved in the descent, it seems to me that we must lean towards the common-sense view that only computers could manage such an orderly collapse.  It seems that the CFTC and SEC agree with me.

ii.  Why volume is irrelevant

Between 2:45:19 and 2:45:29, 17,000 e-mini contracts traded at an average price of 1056, and in a ten-second interval approaching 2:49 about 13,000 contracts traded at a price of about 1085.  Both volume figures are well over 10x typical trading volume.  That is, prices collapsed and recovered in a period of about four minutes on record trading volume.
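To put those ten-second bursts in per-second terms, using only the figures quoted above:

```python
# Contracts per second implied by the two ten-second windows cited above.
crash_rate = 17_000 / 10     # near the 1056 low, 2:45:19 to 2:45:29
recovery_rate = 13_000 / 10  # near 1085, approaching 2:49

print(crash_rate)     # 1700.0 contracts/second
print(recovery_rate)  # 1300.0 contracts/second
```

Well over a thousand contracts changing hands every second, at both the low and on the way back up — which is the sense in which the crash and the recovery alike took place on record volume.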

The flash crash is a clear indicator that trade now takes place over such small increments of time that measures of volume have no meaning.  By dividing time into infinitesimally small increments, modern markets have created an environment where it is extremely difficult for long-term investor-buyers to trade simultaneously with investor-sellers.  Instead, at the particular microsecond in which an investor’s order hits the market, the only other traders available to match it are high frequency traders.  Any imbalances between buyers and sellers that last for a second or more will be resolved by changes in prices.

What we observed in the e-mini market on May 6 was an imbalance that lasted for several minutes — and was very efficiently resolved by the algorithms trading on the market.  This is presumably just a magnified version of the micro-sized process that goes on all the time in our algorithm-driven markets.

The way to think about the relationship between volume and trading intervals seems to me to be the following:  every time the trading interval over which prices are set shrinks by an order of magnitude, volume must increase by an order of magnitude in order to maintain the same effective level of liquidity as before.  Since the trading interval has been shrinking much faster than volume has been growing, we end up with markets that are very thin, despite unprecedented trading volume — and on these “thin” markets a whole day’s worth of trading volatility can take place in a matter of minutes.
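The order-of-magnitude claim can be illustrated with a toy calculation.  Here "effective liquidity" is my own stand-in measure (volume available within a single repricing interval) and all of the numbers are hypothetical, chosen only to show the scaling:

```python
def effective_liquidity(session_volume, repricing_interval_s,
                        session_length_s=6.5 * 3600):
    """Toy stand-in for depth: volume available per repricing window.

    If prices can move every `repricing_interval_s` seconds, the volume
    that can clear within one window is the session's volume divided by
    the number of windows in the session."""
    windows_per_session = session_length_s / repricing_interval_s
    return session_volume / windows_per_session

# Hypothetical: volume grows 10x while the repricing interval shrinks 1000x.
older_market = effective_liquidity(session_volume=100_000,
                                   repricing_interval_s=1.0)
modern_market = effective_liquidity(session_volume=1_000_000,
                                    repricing_interval_s=0.001)

print(older_market / modern_market)  # 100.0 -- the slower market is "thicker"
```

On this measure a tenfold increase in volume is swamped by a thousandfold shrinkage of the trading interval, leaving the faster market one hundred times thinner per repricing window — exactly the sense in which record volume can coexist with a "thin" market.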

I suspect that unless regulators impose a limit on technology’s ability to fragment markets over time, “flash crashes” will become a regular occurrence in our markets.  While finding ways to slow down trade in individual stocks is useful, if it is in fact the case that algorithmic trading triggered the market dislocations of May 6, then the solution to the problem will lie in the regulation of algorithmic trading, not the underlying stocks.

The de(con)struction of the “market maker”

Sometimes I think that the financial crisis is driven by a collapse in the meaning of words.  The whole financial industry has gone completely post-modern on us:  Derivatives are “investments”, even when they are as likely to be liabilities as assets.  The asset side of a synthetic CDO is effectively an insurance obligation.  In a world where liabilities are assets and assets are liabilities, it can be far from clear how to interpret a balance sheet.

So I guess I shouldn’t really be surprised that the term “market maker” doesn’t mean what you think it means any more.  For a little history, let’s start with the definition of “market maker” from NASDAQ, one of the earlier OTC markets.

A market maker is a NASDAQ member firm that buys and sells securities at prices it displays in NASDAQ for its own account (principal trades) and for customer accounts (agency trades).

Traditionally, market makers have been required to post bid and ask prices for the securities they quote — but there are exceptions to that rule.  For example, on Thursday, in the midst of the stock market crash of 2:45, for several minutes there were no quotes on some option contracts.  Also, for a NASDAQ-listed company the market must have at least three market makers quoting the stock.

Now for a view of the post-modern version of “market making”, let’s look at the discussion of the CDO market in the risk factors section of Goldman’s Abacus prospectus (thanks to Danny Black for pointing me here).

Limited Liquidity and Restrictions on Transfer.
There is currently no market for the Notes.
Although the Initial Purchaser has advised the Issuers that it intends to make a market in the Notes, the Initial Purchaser is not obligated to do so, and any such market-making with respect to the Notes may be discontinued at any time without notice. There can be no assurance that any secondary market for any of the Notes will develop, or, if a secondary market does develop, that it will provide the Holders of such Notes with liquidity of investment or that it will continue for the life of such Notes. Consequently, a purchaser must be prepared to hold the Notes for an indefinite period of time or until Stated Maturity.

Here Goldman is making it clear that no market exists for the Abacus CDO and there is no reason to believe that a market will exist.  At the same time Goldman states that the firm “intends to make a market” without entering into any obligation whatsoever to do so.

Here are my questions:  Given our understanding of what a market maker is based on the NASDAQ OTC market,

(i)  Does it make any sense to state that a single firm will “make a market” where no market exists?  What Goldman appears to mean in its statement is that Goldman intends to quote bid and ask prices for the CDO on demand.

(ii)  Does the statement that a firm “intends to make a market” in a security have any meaning whatsoever when it is followed by the qualification that “any such market making may be discontinued at any time without notice”?

In short, Goldman is (appropriately) disclosing that there is no secondary market in the Abacus CDO, and that, while Goldman may choose to buy the CDO back in the future, the firm is under no obligation to do so.

The mystery is why the terms “make a market” and “market-making” are used in the disclosure that there is no market. The effect of this new usage is to create a new definition of “to make a market”:

To quote a bid price at which a security will be purchased and an ask price at which the security will be sold.

In other words, market making has gone synthetic too:  It’s no longer necessary to maintain inventory and buy and sell an asset class in order to make a market in it;  in the financial world’s newspeak all a firm needs to do to make a market is to quote bid and ask prices — without actively trading in the product class at all.

And one consequence of this synthetic market making is that a new asset class was created — the ABS CDO — that for accounting purposes could be marked to a market that the prospectuses stated very clearly did not exist.  Only to be marked down to zero when the little matter of cash flow entered the picture.

Maybe Ann Rutledge is right:  the first step in fixing financial markets is to clearly define the words we are using.

The Myth of the Market-Maker in CDOs

The repeated appeals to the market-maker excuse for Goldman’s CDO sales merits a rant.  (Note:  inspired by zerobeta tweets).

The secondary market for CDOs has always been very thin.  Basically the bank that issued the CDO theoretically stood ready to buy the CDO back, but, well, it almost never happened.

So when people claim that banks were making markets in CDOs, I think the question is:  “Well, then, where was your CDO trading inventory?”  CDO trading inventory — as I am using the term — can only include CDOs that were placed by the issuing investment bank with an investor and were subsequently repurchased by the same or another investment bank.  Such trading inventory is entirely distinct from the CDO inventory that was created by issuing new CDO securities and failing to sell them.  (Citigroup, Merrill Lynch and UBS were chock full of new-issue CDO inventory.)

Now, maybe somebody will correct me, but it’s my understanding that there really wasn’t any secondary market to speak of in CDOs and that the investment banks held minuscule quantities, if any, of second-hand CDOs in their trading inventories.  If this understanding of the market is correct, I would like to know how anybody can claim that investment banks “made markets” in CDOs.  They may have originated CDOs, issued CDOs and placed CDOs, but unless they carried trading inventories in second-hand CDOs, the investment banks cannot claim to have made markets in CDOs in any meaningful sense of the term.

Using models to give theoretical prices to clients who need to mark their CDOs to market is not market making — for the simple reason that these prices are not tested by the market unless transactions are actually taking place at them.  Unless we have the evidence of a transaction to demonstrate that the market maker was willing to take the CDO onto its books at the price in question, the quoted price has very, very limited meaning in a market economy.

In short, one of the biggest failures of the investment banks in the CDO market was precisely the failure to make markets in CDOs.  The creation of an illiquid, untradeable product that the issuers themselves did not want to hold in trading inventory on their books was a disaster.  And for these same investment banks to turn around now and claim market making as a shield in their defense is almost beyond belief.

A lesson from the Great Depression

Central bankers have to be realistic about the political environment in which they operate.  All the central bank cooperation in the world can’t solve a problem of imbalances, if the politicians don’t decide to cooperate too.  And betting the economy on the theory that the politicians just *have* to cooperate is not good central banking.

Central bank policy needs to take into account the possibility of political failure — even when that failure is “unthinkable”.

Chicago’s defense of speculation assumes synthetic assets don’t exist

Gary Becker has kindly explained “The Value of Profitable Speculation”:

As a good rule of thumb-there are some exceptions to this rule- speculators in competitive speculation markets, whether long or short, contribute to a more efficient functioning of the economy when they make money, and they help make the economy less efficient when they lose money.

Notice that Prof. Becker assumes that when speculation takes place in competitive markets, each act of speculation either results in profits or losses for the speculator.  Of course, many speculative contracts have a speculator on both sides of the transaction.  (In fact we have heard the argument that when trading derivatives “one counterparty must be long … and one counterparty must be short” pretty frequently these days.)

When there is a speculator on each side of a transaction, then one party necessarily loses and the other necessarily gains.  According to Prof. Becker’s analysis this transaction both increases economic efficiency and decreases it.  Hmm.

The resolution of this conundrum is, of course, that Prof. Becker is assuming that “speculators” are interacting with the real economy, not with each other.  In the Anglo-American legal tradition, however, when a so-called “speculator” is interacting with the real economy — and thus taking on real economic risk — the transaction is not a “wager” and therefore there is no speculation going on.

In short, it is precisely when speculators are not speculating that they can contribute positively to economic efficiency by making money.  (Note: I recognize — just like SCOTUS in the 1880s and early 1900s — that regulated futures markets merit a notable exception to this principle.)

I beg to differ with Prof. Becker when he discusses housing as well:

Applied to the financial crisis, if when housing prices were rising so rapidly, more speculators had been shorting the housing market, or shorted mortgage-backed securities whose value depended on what happened in the housing market, their actions would have reduced the sharp increase in housing prices, and reduced the subsequent steep fall in these prices.

There was no lack of short speculators in the housing market.  The problem was that the vast majority of their trades were offset by long speculators like AIG, the monoline insurance companies, German banks in search of yield, etc.  In my view, the problem was not a deficit of short speculators, but a failure of the short speculators to interact with the real economy and affect the underlying prices.

Note: 5-4-10 Title revised