In defense of economic theory

I’ve just read JW Mason’s post “The Wit and Wisdom of Trygve Haavelmo.” I read this post as an empiricist’s view of economics, and I think that there is an equally valid theorist’s view of economics. The difference, in my view, lies more in how we think about what economics is than in the more practical question of how we do economics.

That is, I agree “that we study economics not as an end in itself, but in response to the problems forced on us by the world,” but I disagree strongly with the claim that “the content of a theory is inseparable from the procedures for measuring the variables in it.”

JW Mason writes “Within a model, the variables have no meaning, we simply have a set of mathematical relationships that are either tautologous, arbitrary, or false. The variables only acquire meaning insofar as we can connect them to concrete social phenomena.” Oddly, while I disagree vehemently with the first sentence, I have a lot of sympathy with the second.

So how does a theorist think about economic modelling?

To me the purpose of an economic model is to define a vocabulary that we can use to discuss economic phenomena. So the inherent value of a variable in an economic model is the way that the economic model gives the variable a very specific concrete meaning. “Consumer demand” means something very specific and clear in the context of a neoclassical model, and the fact that we can agree on this — separate and apart from economic data — is useful for the purposes of economic discourse.

Of course, it is also true that we need to be able to map this vocabulary over to real economic phenomena in order for the value of the vocabulary to be realized. Thus, the hardest and most important part of economic theory is mapping the theory back into real world phenomena. Thus while I don’t agree that “the content of a theory is inseparable from the procedures for measuring the variables in it,” I wouldn’t have a problem with the claim that “the usefulness of a theory is inseparable from the procedures for measuring the variables in it.”

Economic models are dictionaries, whereas a brilliant economic paper is more like a literary classic. As someone who is always using dictionaries to check the meaning of words, I consider dictionaries valuable in and of themselves, even though I don’t by any means consider that value to be the same as the value of a literary classic.

I hope JW Mason won’t see this as splitting hairs, but I think it’s important to understand economic modelling as a means of creating a vocabulary for discussing the economy. The power of theory is that if it is mastered, it can be used to create new words and new ways of understanding the economy. Such a new vocabulary will only be truly useful if it can be brought to the data and if it helps explain the real world. But I think it is essential to understand the power of theory, lest this point be lost in a sea of data.


Brokers, dealers and the regulation of markets: Applying finreg to the giant tech platforms

Frank Pasquale (h/t Steve Waldman) offers an interesting approach to dealing with the giant tech firms’ privileged access to data: he contrasts a Jeffersonian approach — just break ’em up — with a Hamiltonian approach — regulate them as natural monopolies. Although Pasquale favors the Hamiltonian approach, he opens his essay by discussing Hayekian prices. Hayekian prices simultaneously aggregate distributed knowledge about the object sold and summarize it, reflecting the essential information that the individuals trading in the market need to know. While gigantic firms are an alternate way of aggregating data, there is little reason to believe that they could possibly produce the benefits of Hayekian prices, the whole point of which is to publicize for each good a specific and extremely important summary statistic: the competitive price.

Pasquale’s framing brings to mind an interesting parallel with the history of financial markets. Financial markets have for centuries been centralized in stock/bond and commodities exchanges, because it was widely understood that price discovery works best when everyone trades at a single location. The single location, by drawing almost all market activity, offers both “liquidity” and the best prices. The dealers on these markets have always been recognized as having a privileged position because of their superior access to information about what’s going on in the market.

One way to understand Google, Amazon, and Facebook is that they are acting as dealers in a broader economic marketplace: with their superior knowledge about supply and demand, they have an ability to extract gains that is perfectly analogous to that of dealers in financial markets.

Given this framing, it’s worth revisiting one of the most effective ways of regulating financial markets: a simple but strict application of a branch of common law, the law of agency, to the regulation of the London Stock Exchange from the mid-1800s through the 1986 “Big Bang.” This regime was remarkably effective at both controlling conflicts of interest and producing stable prices, but after World War II it was overshadowed and eclipsed by the conflict-of-interest-dominated U.S. markets. In the “Big Bang” British markets embraced the conflicted financial markets model — posing a regulatory challenge which was recognized at the time (see Christopher McMahon 1985), but was never really addressed.

The basic principles of traditional common law market regulation are as follows. When a consumer seeks to trade in a market, the consumer is presumed to be uninformed and to need the help of an agent. Thus, access to the market is through agents, called brokers. Because a broker is a consumer’s agent, the broker cannot trade directly with the consumer. Trading directly with the consumer would mean that the broker’s interests are directly adverse to those of the consumer, and this conflict of interest is viewed by the law as interfering with the broker’s ability to act as an agent. (Such conflicts can be waived by the consumer, but in early 20th-century British financial markets they generally were not.)

A broker’s job is to help the consumer find the best terms offered by a dealer. Because dealers buy and sell, they are prohibited from acting as the agents of the consumers — and in general prohibited from interacting with them directly at all. Brokers force dealers to offer their clients good deals by demanding two-sided quotes, revealing whether their client’s order is a buy or a sell only after learning both the bid and the ask. Brokers also typically get quotes from several dealers to make sure that the prices on offer are competitive.
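To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical dealer names and quote values) of the broker’s quote-gathering protocol described above:

```python
# A minimal sketch of the broker's quote-gathering protocol.
# Dealer names and quote values are hypothetical illustrations.

def best_execution(dealer_quotes, side):
    """Collect two-sided (bid, ask) quotes from every dealer *before*
    revealing the client's side, then execute at the best price.

    dealer_quotes: dict mapping dealer name -> (bid, ask)
    side: 'buy' or 'sell' -- the client's order, revealed only at the end
    """
    # Step 1: the broker demands a firm two-sided quote from each dealer.
    quotes = dict(dealer_quotes)  # in practice, gathered one by one

    # Step 2: only now does the broker reveal the side and execute.
    if side == 'buy':
        # The client buys at the lowest ask on offer.
        dealer = min(quotes, key=lambda d: quotes[d][1])
        return dealer, quotes[dealer][1]
    else:
        # The client sells at the highest bid on offer.
        dealer = max(quotes, key=lambda d: quotes[d][0])
        return dealer, quotes[dealer][0]

quotes = {'Dealer A': (99.0, 101.0),
          'Dealer B': (99.5, 100.5),
          'Dealer C': (98.5, 101.5)}
print(best_execution(quotes, 'buy'))   # ('Dealer B', 100.5)
print(best_execution(quotes, 'sell'))  # ('Dealer B', 99.5)
```

The point of the ordering is visible in the code: because every dealer must commit to both a bid and an ask before the side of the order is revealed, no dealer can skew its quote against the client.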

Brokers and dealers are strictly prohibited from belonging to the same firm or otherwise working in concert. The validity of the price setting mechanism is based on the bright line drawn between the different functions of brokers and of dealers.

Note that this system was never used in the U.S., where the law of agency with respect to financial markets was interpreted very differently, and where financial markets were beset by conflicts of interest from their earliest origins. Thus, it was in the U.S. that the fixed fees paid to brokers were first criticized as anti-competitive and eventually eliminated. In Britain the elimination of fixed fees reduced the costs faced by large traders, but not those faced by small traders (Sissoko 2017). Because the change adversely affected the quality of the price setting mechanism, the actual costs to traders of eliminating the structured broker-dealer interaction were hidden. We now have markets beset by “flash-crashes,” “whales,” cancelled orders, 2-tier data services, etc. In short, our market structure, instead of being designed to control information asymmetry, is extremely permissive of the exploitation of information asymmetry.

So what lessons can we draw from the structured broker-dealer interaction model of regulating financial markets? Maybe we should think about regulating Google, Amazon, and Facebook so that they have to choose between being, in legal terms, the agents of those whose data they collect, or being sellers of products (or agents of those sellers) with no access to buyers’ data.

In short, access to customer data should be tied to agency obligations with respect to that data. Firms with access to such data can provide services to consumers that help them negotiate a good deal with the sellers of products that they are interested in, but their revenue should come solely from the fees that they charge to consumers on their purchases. They should not be able either to act as sellers themselves or to make any side deals with sellers.

This is the best way of protecting a Hayekian price formation process: it ensures that the information that causes prices to move is the flow of buy and sell orders generated by a dealer making two-sided markets and choosing a certain price point, and, concurrently, it allows individuals to make their decisions in light of the prices they face. Such competitive pricing has the benefit of ensuring that prices are informative and useful for coordinating economic decision making.

When prices are not set by dealers who are forced to make two-sided markets and who are given no information about the nature of the trader, but are instead set by hyper-informed market participants, prices stop having the meaning attributed to them by standard economic models. In fact, given asymmetric information, trade itself can easily degenerate away from the win-win ideal of economic models into a means of extracting value from the uninformed, as has been demonstrated time and again both in theory and in practice.

Pasquale’s claim that regulators need to permit “good” trade on asymmetric information (that which “actually helps solve real-world problems”) and prevent “bad” trade on asymmetric information (that which constitutes “the mere accumulation of bargaining power and leverage”) seems fanciful. How is any regulator to have the omniscience to draw these distinctions? Or does the “mere” in the latter case indicate that the good case is to be presumed by default?

Overall, it’s hard to imagine a means of regulating informational behemoths like Google, Amazon, and Facebook that favors Hayekian prices without also entirely destroying their current business models. Even if the Hamiltonian path of regulating the beasts is chosen, the economics of information would direct regulators to attach agency obligations to the collection of consumer data, and with those obligations to prevent the monetization of that data except by means of fees charged to consumers for helping them find the best prices for their purchases.

When can banks create their own capital?

A commenter directed me to an excellent article by Richard Werner comparing three different approaches to banking. The first two are commonly found in the economics literature, and the third is the credit creation theory of banking. Werner’s article provides a very good analysis of the three approaches, and weighs in heavily in favor of the credit creation theory.

Werner points out that when regulators use the wrong model, they inadvertently allow banks to do things that they should not be allowed to do. More precisely, Werner finds that when regulators try to impose capital constraints on banks without understanding how banks function, they leave open the possibility that the banks find a way to create capital “out of thin air,” which clearly is not the regulator’s intent.

In this post I want to point out that Werner does not give the best example of how banks can sometimes create their own capital. I offer two more examples of how banks created their own capital in the years leading up to the crisis.

1. The SIVs that blew up in 2007

You may remember Hank Paulson running around Europe in the early fall of 2007 trying to drum up support for something called the Master Liquidity Enhancement Conduit (MLEC) or more simply the Super-SIV. He was trying to address the problem that structured vehicles called SIVs were blowing up left, right, and center at the time.

These vehicles were essentially ways for banks to create capital. Here’s how:

According to a Bear Stearns report at the time, 43% of the assets in the SIVs were bank debt, and commentators at the time made it clear that the bank debt in the SIVs was a special kind of debt that was acceptable as capital for the purposes of bank capital requirements, because of the strong rights given to the issuer to forgo making interest payments on the debt.

The liability side of a SIV comprised 4-6% equity, with the rest senior liabilities: Medium Term Notes (MTNs) of a few years’ maturity and Commercial Paper (CP) that had to be refinanced every few months. Obviously SIVs had roll-over (or liquidity) risk, since their assets were of much longer maturity than their liabilities. The rating agencies addressed this roll-over risk by requiring the SIVs to have access to a liquidity facility provided by a bank. More precisely, the reason a SIV shadow bank was allowed to exist was that a highly rated traditional bank had a contractual commitment to provide funds to the SIV on a same-day basis in the event that the liquidity risk was realized. Furthermore, triggers in the structured vehicle’s paperwork required it to go into wind-down mode if, for example, the value of its assets fell below a certain threshold. All the SIVs breached their triggers in Fall 2007.
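A back-of-the-envelope sketch may help here. All of the numbers below are hypothetical illustrations (the text gives only the 4-6% equity range, and the trigger threshold is an assumption), but they show how thin the equity cushion was and how small an asset-value decline could force a wind-down:

```python
# Hypothetical SIV balance sheet: 5% equity funding a $10bn portfolio,
# with a wind-down trigger when assets no longer cover senior debt
# plus a 2% cushion. All figures are illustrative assumptions.
assets = 10_000_000_000
equity = 0.05 * assets          # within the 4-6% range from the text
senior = assets - equity        # MTNs and CP held by money funds etc.

def breaches_trigger(asset_value, senior, cushion=0.02):
    # Wind-down is forced once assets fall below senior liabilities
    # plus the cushion (the cushion size is an assumed figure).
    return asset_value < senior * (1 + cushion)

print(breaches_trigger(assets, senior))          # False at par
print(breaches_trigger(0.96 * assets, senior))   # True after a 4% fall
```

With only 5% equity, a mark-to-market decline of a few percent on the asset portfolio is enough to breach the trigger, which is consistent with the entire sector hitting its triggers at once in Fall 2007.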

Those with an understanding of the credit creation theory of banking would recognize immediately that the “liquidity facility” provided by the traditional bank was a classic way for a bank to transform the SIV’s liabilities into monetary assets. That’s why money market funds and others seeking very liquid assets were willing to hold SIV CP and MTNs. In short, a basic understanding of a SIV’s asset and liability structure and of the banks’ relationship to it would have been a red flag to a regulator conversant with the credit creation theory: banks were literally creating their own capital.

2. The pre-2007 US Federal Home Loan Bank (FHLB) System

In the early noughties all of the FHLBs revised their capital plans. For someone with an understanding of the credit creation theory, these capital plans were clearly consistent with virtually unlimited finance of mortgages.

The FHLBs form a system with a single regulator and together offer a joint guarantee of all FHLB liabilities. The FHLB system is one of the “agencies” that can easily raise money at low cost on public debt markets. Each FHLB covers a specific region of the country and is cooperatively owned by its member banks. In 2007 every major bank in the US was a member of the FHLB system. As a result, FHLB debt was effectively guaranteed by the whole of the US banking system. Once again using the credit creation theory, we find that the bank guarantee converted FHLB liabilities into monetary assets.

The basic structure of the FHLBs’ support of the mortgage market was this (note that I will frequently use the past tense, because I haven’t looked up the current capital structure and believe that it has changed):

The FHLBs faced a 4% capital requirement on their loans. Using the Atlanta FHLB’s capital plan as an example, we find that whenever a member bank borrowed from the Atlanta FHL bank, it was required to increase its capital contribution by 4.5% of the loan. This guaranteed that the Atlanta FHL bank could never fall foul of its 4% capital requirement — and that there was a virtually unlimited supply of funds available to finance mortgages in the US.
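The arithmetic of this mechanism is easy to verify. Here is a sketch using the 4.5% stock-purchase requirement described above (the starting balance-sheet figures and the size and number of advances are hypothetical):

```python
# Sketch of the Atlanta FHLB capital mechanics described in the text.
# Starting balance-sheet figures are hypothetical illustrations.
capital, loans = 450.0, 10_000.0   # starts at a 4.5% capital ratio

def new_advance(capital, loans, amount):
    # The borrowing member must increase its capital contribution by
    # 4.5% of the advance, so every new loan adds capital faster than
    # the 4% requirement consumes it.
    return capital + 0.045 * amount, loans + amount

for _ in range(50):                # expand lending 50 times over
    capital, loans = new_advance(capital, loans, 1_000.0)

print(round(capital / loans, 4))   # 0.045: always above the 4% minimum
```

Because each advance brings in 4.5 cents of capital per dollar lent, the capital ratio can never be diluted below 4.5%, and lending capacity is limited only by member demand — which is the sense in which the supply of mortgage finance was virtually unlimited.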

The only constraint exercised by FHLBs on this system was that they would not lend for the full value of any mortgage. Agency MBS faced a 5% haircut, private label MBS faced a minimum 10% haircut, and individual mortgages faced higher haircuts.

In short, the FHLB system was designed to make it possible for the FHLBs to be lenders of last resort to mortgage lenders. As long as a member bank’s assets were mortgages that qualified for FHL bank loans, credit was available for a bank that was in trouble.

The system was designed in the 1930s — by people who understood the credit creation theory of banking — deliberately to exclude commercial banks, which financed commercial activity and whose lender of last resort was the Federal Reserve. Only when FIRREA was passed in 1989, subsequent to the Savings and Loan crisis, were commercial banks permitted to become FHLB members.

From a credit creation theory perspective, this major shift in US bank regulation ensured that the full credit creation capacity of the commercial banking system was united with the US mortgage lending system, making it possible for the FHLBs to create their own capital and use it to provide virtually unlimited funds to finance mortgage lending in the US.


Access to Credit is the Key to a Win-Win Economy

Matt Klein directs our attention to an exchange between Jason Furman and Dani Rodrik that took place at the “Rethinking Macroeconomic Policy” conference. Both argued that, while economists tend to focus on efficiency gains or “growing the pie,” most policy proposals have a small or tiny efficiency effect and a much, much larger distributional effect. Matt Klein points out that in a world like this, political competition for resources can get ugly fast.

I would like to propose that one of the reasons we are in this situation is that we have rolled back too much of a centuries-old legal structure that used to promote fairness — and therefore efficiency — in the financial sector.

Adam Tooze discusses 19th-century macro in a follow-up to Klein’s post:

Right the way back to the birth of modern macroeconomics in the late 19th century, the promise of productivist national economic policy was that one could suspend debate about distribution in favor of “growing the pie”.

In Britain, where this approach had its origins, access to bank credit was extremely widespread (at least for those with Y chromosomes). While the debt was typically short-term, it was also the case that typically, even as one bill was paid off, another was originated. Such debt wasn’t just generally available, it was usually available at rates of 5% per annum or less. No collateral was required to access the system of bank credit, though newcomers to the system typically had to have one or two people vouch for them.

I’ve just completed a paper that argues that this kind of bank credit is essential to the efficiency of the economy. While it’s true that in the US discrimination has long prevented certain groups from having equal access to financial services — and that the consequences of this discrimination show up in current wealth statistics — it seems to me that one of the disparities that has become more exaggerated across classes over the past few decades is access to lines of credit.

The facts are harder to establish than they should be, because as far as I can tell the collection of business lending data in the bank call reports has never carefully distinguished between loans secured by collateral other than real estate and loans that are unsecured. (Please let me know if I’m wrong and there is somewhere to find this data.) In the early years of the 20th century, the “commercial and industrial loans” category would, I believe, have comprised mostly unsecured loans. Today not only has the C&I category shrunk as a fraction of total bank loans, but given current bank practices it seems likely that the fraction of unsecured loans within the category has also shrunk.

This is just a long-form way of stating that the availability of cheap unsecured credit to small and medium-sized businesses appears to have declined significantly from what it was back when early economists were arguing that we could focus on efficiency and not distribution. Today small business credit is far more collateral-dependent than it was in the past — with the exception, of course, of credit card debt. Credit cards, however, charge more than 19% per annum for a three-month loan, which is roughly a 300% markup on what would have been charged to an unsecured business borrower in the 19th century. To the degree that it is collateralized credit that is easily available today, it will obviously favor the wealthy and aggravate distributional issues.
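For what it’s worth, the markup claim is easy to check. Taking the two approximations used above — 19% per annum on card debt and 5% per annum on a 19th-century unsecured bill — the rate on card debt is nearly four times the old unsecured rate:

```python
# Back-of-the-envelope check of the markup claim in the text.
# Both rates are the approximations used above, not precise data.
card_rate = 0.19   # per annum, credit card debt
bill_rate = 0.05   # per annum, 19th-century unsecured bank credit

markup = card_rate / bill_rate - 1
print(f"{markup:.0%}")   # 280%, i.e. roughly the ~300% markup cited
```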

In my paper the banking system makes it possible for allocative efficiency to be achieved, because everybody has access to credit on the same terms. As I explained in an earlier post, in an economy with monetary frictions there is no good substitute for credit. For this reason it seems obvious that an economy with unequal access to short-term bank credit will result in allocations that are bounded away from an efficient allocation. In short, in the models with monetary frictions that I’m used to working with, equal access to credit is a prerequisite for efficiency.

If we want to return to a world where economics is win-win, we need a thorough restructuring of the financial sector, so that access to credit is much more equal than it is today.

Bank deposits as short positions

A quick point about monetary theory and banking.

Monetary economics has a basic result: nobody wants to hold non-interest-bearing fiat money over time unless the price level is falling, so that the value of money is increasing over time. Many, if not most, theoretical discussions of money are premised on the assumption that fiat money is an object, and that therefore one can hold no money or a positive quantity of money, but one cannot hold a short position in fiat money.

Maybe this is one of macroeconomics’ greatest errors. Perhaps the whole point of the banking system is to allow the economy as a whole to hold a short position in fiat money. After all, from the perspective of a bank, what is a bank deposit if not a naked short position in cash? And by lending to businesses and consumers, banks allow the rest of us to be short cash, too. This makes sense, because the basic principles of intertemporal economic efficiency state that we should all be short cash.
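A toy balance sheet makes the point concrete. The numbers below are arbitrary illustrations; what matters is the sign of the bank’s net cash position:

```python
# A toy balance-sheet view of a bank's short position in cash.
# All figures are arbitrary illustrations.
bank = {
    'assets': {'cash': 10, 'loans': 90},
    'liabilities': {'deposits': 100},
}

# Net cash position: cash held minus cash owed on demand to depositors.
net_cash = bank['assets']['cash'] - bank['liabilities']['deposits']
print(net_cash)   # -90: the bank holds 10 in cash but owes 100 on demand
```

The borrower on the other side of the 90 in loans, having spent the deposit the loan created, is likewise short cash until the loan is repaid — which is how bank lending lets non-banks hold the short position too.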

Is medicine as flawed as finance?

Events that took place this past holiday season have set me to thinking not just about the awful nexus that takes place when illness, addictive drugs, and the American medical system meet, but also about the nature of observation-based (as opposed to controlled-study-based) science and the relationship between the practice of this science and the giant corporations that have an interest in this practice. In short, I’ve been thinking about how failures in the world of medicine look very similar to failures in the world of finance.

What happens in an environment where data is important, but its interpretation is necessarily imprecise, and there are corporations whose goal is to profit off of any structural weaknesses in the methods used to interpret the data? The combination of weak antitrust enforcement that has placed immense power in the hands of a very small number of corporations and a corporate focus on shareholder value rather than stakeholder value means that there simply aren’t that many influential corporations left whose core business strategy is to serve those who buy their products to the best of the corporation’s ability.

In finance this means that clients are often treated as “the mark,” and client losses are justified by those who generate them on the Darwinian principle that good things will happen when dumb or uneducated people lose money. Financiers know that the nature of the data ensures that they can almost always come up with some kind of explanation for why the product they use to garner some “dumb money” is in some way beneficial and should not be banned. (For example: “In an efficient market, only people who need product X will buy product X, so we don’t need to worry about the losses of the ‘dumb money,’ which exists to make the market more efficient.”) The tools of the academics are used, not for the purpose for which they were invented, but to make the world a worse place to live in.

Unfortunately I’m beginning to suspect that our drug companies function on the same principles as the financial industry. It seems to me that doctors have been trained not to listen too closely to patient complaints about side effects. Now there are probably good reasons for this: if the doctor is conservative about prescribing medicine, so that you really need the medicine when you get it, then the side effects will need to be quite severe in order for them to outweigh the need for the medication. And it is true that doctors almost certainly receive many complaints about perceived side effects that are in fact due to other causes. In short, doctors have a very hard job.

It seems to me that pharmaceutical companies have turned the challenge of medicine into a profit opportunity through two mechanisms. First, they work aggressively to get doctors to prescribe their medications for minor ailments that could be addressed through over-the-counter or non-pharmaceutical means. When the pharmaceutical companies are successful, doctors end up prescribing drugs that are net “bads” for their patients, and frequently choose to address side effects not by taking the patient off the medication, but by prescribing another medication to address the side effect. A patient with a minor complaint can end up on a cocktail of drugs that causes far more damage to the patient’s health than the minor complaint itself. Who has not heard a doctor, when the patient questions whether her growing health problems are in fact being caused by the cocktail of medications, pooh-pooh the patient’s concerns with “It’s not cause and effect”? While there are certainly very good doctors out there (and I recommend that you seek them out), the medical profession has done far too little to offset the nefarious influence of drug company incentives.

Second, it appears that drug companies have learned that addictive drugs are some of the most profitable. In my view this is likely due to the fact that these drugs often have the side effect of causing the malady they are prescribed to cure. That is, once you have become addicted to the drug, trying to get off the med will often cause you to experience the illness that you took it to address — but even worse than before you took it. It’s not unusual for patients to get into a pattern where the doctor keeps prescribing higher and higher doses of these medications and the patient ends up facing very strong disincentives to go off the medicine. A profit-maximizing pharmaceutical company will likely prefer to develop this type of medicine rather than a medicine that can treat the ailment but is non-addictive. That is, the profit motive is very much adverse to what is in patients’ best interests. When you add to this dynamic the tendency of many doctors to pooh-pooh patient concerns about side effects — and in particular concerns that the medication may be worsening the condition (“It’s not cause and effect. Your symptoms are probably just the progression of your ailment.”) — it is hardly surprising that the way these medications are being used is often toxic.

Overall, when I hear complaints about how too much of the public doesn’t believe in science anymore, I can’t help wondering: Well, what is their experience of how science is applied in the modern world?

Re-imagining Money and Banking

I’ve written a new paper motivated by my belief that the recent financial crisis was in no small part a failure of economic theory, and therefore of economic thinking. In particular, there is a missing model of banking that was well understood a century ago, but is completely unfamiliar to modern scholars and practitioners. The goal of this paper is to introduce modern students of money and banking to the model of money that shaped the 19th-century development of a financial infrastructure that both supported modern economic growth for more than 100 years and was passed down to us as our heritage, before we, in our hubris, tore that infrastructure apart.

Another goal is to illustrate what I believe is a fundamental property of environments with (i) liquidity frictions and (ii) a large population with no public visibility but a discount factor greater than zero: in such an environment, anyone with a notepad, some arithmetic skills, and some measure of public visibility can offer – and profit from – the account-keeping services that make incentive-feasible a much better allocation than autarky for the general populace. Importantly, collateral is completely unnecessary in a bank-based payments system.

This model has two key components. First, banks transform non-bank debt into monetary debt. Thus, the transformative function of banking is not principally a matter of maturity, but instead of the nature of the debt itself — that is, of its acceptability as a means of exchange. Second, monetary debt is money (contra Kocherlakota 1998). There is no hierarchy of moneys in which some assets have more monetary characteristics than others. Instead there is only monetary debt and non-monetary debt. When we study this very simple model of money in an environment with liquidity frictions using the tools of mechanism design, we see that the economic function of the banking system is to underwrite a payments system based on unsecured debt and thereby to make intertemporal budget constraints enforceable — or, equivalently, to make it possible for the non-banks in our economy to monetize the value of the weight that they place on the future in the form of a discount factor. Banking transforms an autarkic economy into one that flourishes because credit is abundantly available. In this model, constraints on the economy’s capacity to support debt are not determined by “deposits” or by “collateral,” but instead by the incentive constraints associated with banking.

In this environment, banking provides the extraordinary liquidity that is only possible when the payments system is based on unsecured debt. Underlying this form of liquidity is the banks’ profound understanding of the incentive structures faced by non-banks, as it is this understanding that makes it possible for banks to structure the system of monetary debt so that it is to all intents and purposes default-free. (This is actually a fairly accurate description of 19th-century British banking. The only people who lost money were the bank owners who guaranteed the payments system. See Sissoko 2014.) Although this concept of price-stable liquidity is unfamiliar to many modern scholars, Bengt Holmstrom (2015) has given it a name: money market liquidity.[1] In such a system the distinctions between funding liquidity and market liquidity collapse, because the whole point of the banking system is to ensure that default occurs with negligible probability. Thus, the term money market liquidity references the idea that in money markets the process by which assets are originated must be close to faultless, or instability will be the result, because the relationship between money — when it takes the form of monetary debt — and prices is not inherently stable (cf. Smith 1776, Sargent & Wallace 1982).

This paper employs the tools of New Monetarism, mechanism design, and more particularly the model of Gu, Mattesini, Monnet, and Wright (2013) to explain the extraordinary economic importance of the simplest and most ancient function of a bank: in this paper banks are account-keepers, whose services support a payment system based on unsecured credit. Unsecured credit is incentive feasible, because banks provide account-keeping services and can use the threat of withdrawing access to account-keeping services to make the non-bank budget constraint enforceable.

The basic elements of the argument are these: an environment with anonymity, liquidity frictions, and somewhat patient agents is an environment that begs for an innovation that both remedies the problem of anonymity and realizes the value of the unsecured credit that the patience of the agents in the economy supports. I argue that the standard way in which economies from ancient Rome to medieval Europe to modern America address this problem is by introducing banking – or fee-based account-keepers – in order to alleviate the problem of anonymity that prevents agents from realizing the value inherent in the weight they place on the future. I demonstrate that in this environment the introduction of a bank improves welfare. The improvement in welfare can be dramatic when the discount factor is not close to zero.

This paper uses the environment of Gu, Mattesini, Monnet, and Wright (2013) but is distinguished from that model because the focus here is on a different aspect of banking: we study how the account-keeping function of banks serves to support unsecured credit, whereas GMMW study how the deposit-taking function of banks is able to support fully collateralized credit.

The model of banking in this paper has implications that are very different from much of the existing literature on banking. This literature typically assumes the anonymity of agents and then argues – contrary to real-world experience – that unsecured non-bank credit is unimaginable (see, e.g., Gorton & Ordonez 2014, Monnet & Sanches 2015). In other words, the existing literature takes the position that in the presence of anonymity, no paid account-keeper will arise who will make it possible for agents in the economy to realize the value of unsecured credit that their discount factor supports. In the absence of unsecured credit, lending is generally constrained as much by the available collateral or deposits as by incentive constraints themselves. This paper argues that standard assumptions – such as that loans must equal deposits (see, e.g., Berentsen, Camera & Waller 2007) or that debt must be supported by collateral (see, e.g., Gu, Mattesini, Monnet & Wright 2013, Gorton & Ordonez 2014) – are properly viewed as ad hoc assumptions that should be justified by some explanation for why banking has not arisen and made unsecured credit available to anonymous agents.

[1] While Holmstrom (2015) and this paper agree on the principle that money market liquidity is characterized by price stability, the mechanism by which that price stability is achieved is very different in the two papers: for Holmstrom it is the opacity of collateral that makes price stability possible.