In defense of economic theory

I’ve just read JW Mason’s post “The Wit and Wisdom of Trygve Haavelmo.” I read this post as an empiricist’s view of economics, and I think that there is an equally valid theorist’s view of economics. The difference, in my view, lies more in how we think about what economics is than in the more practical question of how we do economics.

That is, I agree “that we study economics not as an end in itself, but in response to the problems forced on us by the world,” but I disagree strongly with the claim that “the content of a theory is inseparable from the procedures for measuring the variables in it.”

JW Mason writes “Within a model, the variables have no meaning, we simply have a set of mathematical relationships that are either tautologous, arbitrary, or false. The variables only acquire meaning insofar as we can connect them to concrete social phenomena.” Oddly, while I disagree vehemently with the first sentence, I have a lot of sympathy with the second.

So how does a theorist think about economic modelling?

To me the purpose of an economic model is to define a vocabulary that we can use to discuss economic phenomena. So the inherent value of a variable in an economic model is the way that the economic model gives the variable a very specific concrete meaning. “Consumer demand” means something very specific and clear in the context of a neoclassical model, and the fact that we can agree on this — separate and apart from economic data — is useful for the purposes of economic discourse.

Of course, it is also true that we need to be able to map this vocabulary over to real economic phenomena in order for the value of the vocabulary to be realized. Thus, the hardest and most important part of economic theory is mapping the theory back into real-world phenomena. So while I don’t agree that “the content of a theory is inseparable from the procedures for measuring the variables in it,” I wouldn’t have a problem with the claim that “the usefulness of a theory is inseparable from the procedures for measuring the variables in it.”

Economic models are dictionaries, whereas a brilliant economic paper is more like a literary classic. As someone who is always using dictionaries to check the meaning of words, I consider dictionaries valuable in and of themselves, even though I don’t by any means consider that value to be the same as the value of a literary classic.

I hope JW Mason won’t see this as splitting hairs, but I think it’s important to understand economic modelling as a means of creating a vocabulary for discussing the economy. The power of theory is that if it is mastered, it can be used to create new words and new ways of understanding the economy. Such a new vocabulary will only be truly useful if it can be brought to the data and if it helps explain the real world. But I think it is essential to understand the power of theory, lest this point be lost in a sea of data.


Brokers, dealers and the regulation of markets: Applying finreg to the giant tech platforms

Frank Pasquale (h/t Steve Waldman) offers an interesting approach to dealing with the giant tech firms’ privileged access to data: he contrasts a Jeffersonian approach (just break ’em up) with a Hamiltonian approach (regulate them as natural monopolies). Although Pasquale favors the Hamiltonian approach, he opens his essay by discussing Hayekian prices. Hayekian prices simultaneously aggregate distributed knowledge about the object sold and summarize it, reflecting the essential information that the individuals trading in the market need to know. While gigantic firms are an alternate way of aggregating data, there is little reason to believe that they could possibly produce the benefits of Hayekian prices, the whole point of which is to publicize for each good a specific and extremely important summary statistic: the competitive price.

Pasquale’s framing brings to mind an interesting parallel with the history of financial markets. Financial markets have for centuries been centralized in stock/bond and commodities exchanges, because it was widely understood that price discovery works best when everyone trades at a single location. The single location, by drawing almost all market activity, offers both “liquidity” and the best prices. The dealers on these markets have always been recognized as having a privileged position because of their superior access to information about what’s going on in the market.

One way to understand Google, Amazon, and Facebook is that they are acting as dealers in a broader economic marketplace: with their superior knowledge of supply and demand, they have an ability to extract gains that is perfectly analogous to that of dealers in financial markets.

Given this framing, it’s worth revisiting one of the most effective ways of regulating financial markets: a simple but strict application of a branch of common law, the law of agency, which governed the London Stock Exchange from the mid-1800s through the 1986 “Big Bang.” It was remarkably effective at both controlling conflicts of interest and producing stable prices, but after World War II it was eclipsed by the conflict-of-interest-dominated U.S. markets. In the “Big Bang” British markets embraced the conflicted financial markets model, posing a regulatory challenge that was recognized at the time (see Christopher McMahon 1985) but was never really addressed.

The basic principles of traditional common law market regulation are as follows. When a consumer seeks to trade in a market, the consumer is presumed to be uninformed and to need the help of an agent. Thus, access to the market is through agents, called brokers. Because a broker is the consumer’s agent, the broker cannot trade directly with the consumer. Trading directly with the consumer would mean that the broker’s interests are directly adverse to those of the consumer, and this conflict of interest is viewed by the law as interfering with the broker’s ability to act as an agent. (Such conflicts can be waived by the consumer, but in early 20th-century British financial markets they generally were not.)

A broker’s job is to help the consumer find the best terms offered by a dealer. Because dealers buy and sell, they are prohibited from acting as the agents of consumers — and in general prohibited from interacting with them directly at all. Brokers force dealers to offer their clients good deals by demanding two-sided quotes, revealing whether the client’s order is a buy or a sell only after learning both the bid and the ask. Brokers also typically get quotes from several dealers to make sure that the prices on offer are competitive.
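A minimal sketch of this quote protocol may make the mechanism concrete. (This is purely illustrative: the `Dealer` and `Broker` classes, the dealer names, and all prices are invented for the example, not drawn from any actual market.)

```python
class Dealer:
    """A market maker who must quote both a bid and an ask on request."""

    def __init__(self, name, mid, half_spread):
        self.name = name
        self.mid = mid                # dealer's private estimate of value
        self.half_spread = half_spread  # margin charged on each side

    def two_sided_quote(self):
        # The dealer does not yet know whether the client will buy or
        # sell, so both sides of the quote must be kept competitive.
        return (self.mid - self.half_spread, self.mid + self.half_spread)

class Broker:
    """The consumer's agent: collects quotes first, reveals the order last."""

    def best_execution(self, dealers, side):
        # Step 1: demand two-sided quotes from every dealer.
        quotes = {d.name: d.two_sided_quote() for d in dealers}
        # Step 2: only now act on the client's side of the trade.
        if side == "buy":
            name, (bid, ask) = min(quotes.items(), key=lambda kv: kv[1][1])
            return name, ask   # client buys at the lowest ask
        else:
            name, (bid, ask) = max(quotes.items(), key=lambda kv: kv[1][0])
            return name, bid   # client sells at the highest bid

dealers = [
    Dealer("A", 100.00, 0.50),
    Dealer("B", 100.25, 0.25),
    Dealer("C", 99.75, 0.50),
]
broker = Broker()
print(broker.best_execution(dealers, "buy"))   # → ('C', 100.25)
print(broker.best_execution(dealers, "sell"))  # → ('B', 100.0)
```

Because each dealer must commit to both sides of the quote before learning the direction of the order, padding the ask (or shaving the bid) to exploit the client is penalized on the other side of the quote — which is exactly how the bright line between broker and dealer disciplines prices.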

Brokers and dealers are strictly prohibited from belonging to the same firm or otherwise working in concert. The validity of the price setting mechanism is based on the bright line drawn between the different functions of brokers and of dealers.

Note that this system was never used in the U.S., where the law of agency with respect to financial markets was interpreted very differently, and where financial markets were beset by conflicts of interest from their earliest origins. Thus, it was in the U.S. that the fixed fees paid to brokers were first criticized as anti-competitive and eventually eliminated. In Britain the elimination of fixed fees reduced the costs faced by large traders, but not those faced by small traders (Sissoko 2017). Because eliminating the structured broker-dealer interaction degraded the quality of the price setting mechanism, its actual costs to traders were hidden. We now have markets beset by “flash crashes,” “whales,” cancelled orders, two-tier data services, etc. In short, our market structure, instead of being designed to control information asymmetry, is extremely permissive of the exploitation of information asymmetry.

So what lessons can we draw from the structured broker-dealer interaction model of regulating financial markets? Maybe we should think about regulating Google, Amazon, and Facebook so that they have to choose between being, in legal terms, the agents of those whose data they collect, or being sellers of products (or agents of those sellers) with no access to buyers’ data.

In short, access to customer data should be tied to agency obligations with respect to that data. Firms with access to such data can provide services to consumers that help them negotiate a good deal with the sellers of the products they are interested in, but their revenue should come solely from the fees that they charge consumers on their purchases. They should not be able either to act as sellers themselves or to make side deals with sellers.

This is the best way of protecting a Hayekian price formation process: it makes sure that the information that causes prices to move is the flow of buy and sell orders received by dealers who make two-sided markets and choose a price point, while concurrently allowing individuals to make their decisions in light of the prices they face. Such competitive pricing has the benefit of ensuring that prices are informative and useful for coordinating economic decision making.

When prices are not set by dealers who are forced to make two-sided markets and who are given no information about the nature of the trader, but are instead set by hyper-informed market participants, prices stop having the meaning attributed to them by standard economic models. In fact, given asymmetric information, trade itself can easily degenerate away from the win-win ideal of economic models into a means of extracting value from the uninformed, as has been demonstrated time and again both in theory and in practice.

Pasquale’s claim that regulators need to permit “good” trade on asymmetric information (that which “actually helps solve real-world problems”) and prevent “bad” trade on asymmetric information (that which constitutes “the mere accumulation of bargaining power and leverage”) seems fanciful. How is any regulator to have the omniscience to draw these distinctions? Or does the “mere” in the latter case indicate that the good case is to be presumed by default?

Overall, it’s hard to imagine a means of regulating informational behemoths like Google, Amazon and Facebook that favors Hayekian prices without also destroying entirely their current business models. Even if the Hamiltonian path of regulating the beasts is chosen, the economics of information would direct regulators to attach agency obligations to the collection of consumer data, and with those obligations to prevent the monetization of that data except by means of fees charged to the consumer for helping them find the best prices for their purchases.