A stylized fact about post-crisis economies is that asset markets have become segmented with “safe assets” trading differently from assets more generally. I have argued elsewhere that the collateralization of financial sector liabilities has played an important role in this segmentation of markets.
I believe that this creates a puzzle for the implementation of monetary policy that provides at least a partial explanation for why we are stuck at the zero lower bound. Consider the consequences of an increase in the policy rate of 25 bps. This lowers the price of ultra-short-term Treasury debt, and, particularly when it is part of a general policy of raising the policy rate over a period of months or years, it should lower the price of longer-term Treasuries as well (since long-term yields can be arbitraged by rolling over short-term debt).
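The rolling-over arbitrage in the parenthetical can be made concrete with a small sketch. Under the expectations hypothesis, the n-year yield must roughly equal the (geometric) average of the expected one-year rates over the same horizon; all numbers and names below are illustrative, not from the original:

```python
def long_yield(expected_short_rates):
    """n-year yield implied by rolling over one-year debt (geometric mean)."""
    n = len(expected_short_rates)
    gross = 1.0
    for r in expected_short_rates:
        gross *= (1.0 + r)
    return gross ** (1.0 / n) - 1.0

# Hypothetical tightening cycle: the short rate starts at 0.25% and is
# expected to rise 25 bp per year for five years.
path = [0.0025 + 0.0025 * t for t in range(5)]
y5 = long_yield(path)
print(f"implied 5-year yield: {y5:.3%}")  # close to the 0.75% average of the path
```

A rate hike that raises the whole expected path therefore raises the implied long yield, i.e. lowers long-term Treasury prices, which is the channel the text describes.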
A decline in the price of long-term Treasuries will have the effect of reducing the dollar value of the stock of outstanding Treasuries (as long as the Treasury does not have a policy of responding to the price effects of monetary policy by issuing more Treasuries). But now consider what happens in the segmented market for Treasury debt. If demand for Treasuries is downward sloping, then the fact that contractionary monetary policy tends to shrink the stock of Treasuries itself puts upward pressure on the price of Treasuries. Particularly when demand for Treasuries is inelastic, this will tend to offset, and may even entirely counteract, the tendency for the yield on long-term Treasuries to rise. (Presumably in a world where markets aren’t segmented, demand for Treasuries is fairly elastic and shifts into other financial assets quash this effect.)
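A back-of-the-envelope sketch of this offset, with all numbers hypothetical and the elasticity channel deliberately stylized:

```python
# Toy illustration (hypothetical numbers) of how inelastic, segmented
# demand for Treasuries can offset a policy-induced price decline.

DURATION = 10.0    # modified duration of the long Treasury (years)
HIKE = 0.0025      # 25 bp policy-rate increase
ELASTICITY = 0.1   # |elasticity| of segmented demand for Treasuries

# Step 1: arbitrage channel -- the hike alone lowers the long bond's price.
arbitrage_price_drop = DURATION * HIKE  # roughly a 2.5% price decline

# Step 2: with the face quantity fixed (no new issuance), the lower price
# shrinks the dollar value of the outstanding stock one-for-one.
stock_shrinkage = arbitrage_price_drop

# Step 3: with downward-sloping demand of elasticity eps, a 1% fall in the
# stock puts roughly (1/eps)% of upward pressure on the price.
offsetting_price_rise = stock_shrinkage / ELASTICITY

net = offsetting_price_rise - arbitrage_price_drop
print(f"arbitrage-channel decline: {arbitrage_price_drop:.1%}")
print(f"segmented-demand pressure: {offsetting_price_rise:.1%}")
print(f"net pressure on price:     {net:+.1%}")
```

With demand this inelastic, the demand-side pressure (25%) swamps the 2.5% arbitrage-channel decline, which is the sense in which the yield rise can be entirely counteracted.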
In short, a world where safe assets trade in segmented markets may be one where implementing monetary policy using the interest rate as a policy tool is particularly difficult. Can short-term and long-term safe assets become segmented markets as well? Given arbitrage, it’s hard to imagine how this is possible.
These thoughts are, of course, motivated by the behavior of Treasury yields following the Federal Reserve’s 25 bp rate hike in December 2015.
My research agenda employs a deconstructive method to motivate a reconsideration of the meaning of neoclassical economics. Thus, an economic theory paper introduces a liquidity friction into a competitive model to study how standard models are constructed on the assumption of perfect intermediation (Sissoko 2007); an economic history paper demonstrates that in fact the markets of industrializing Britain relied on a carefully calibrated banking system that successfully stabilized money growth (Sissoko 2016a); another economic theory paper uses new monetarist methods to model banking and how it stabilizes the relationship between unsecured debt and the money supply (Sissoko 2016b); and a fourth paper analyzes modern finance and explains how the growth of market-based lending has disrupted market liquidity by circumventing the stabilizing force of banks (Sissoko 2016c). Together these papers invert the mainstream view by arguing that, starting in late 18th-century Britain, the banking system effectively stabilized the money supply, allowing it to be treated as a stable background condition: this made modern capitalism possible and neoclassical economics itself imaginable. This post explains the “big picture” of the role played by innovations in banking in European industrialization, and this chapter of my dissertation gives even more detail.
I’ve just read Eugene White’s Bank Underground post on the Baring liquidation in 1890. He is notable for getting the facts of what he calls the “rescue” mostly right. He accurately portrays the “good bank-bad bank” structure and the fact that the partners who owned the original bank bore the losses of the failure. What he doesn’t explain clearly is the degree to which the central bank demanded insurance from the private sector banks before agreeing to extend a credit line that would allow the liquidation of the bad bank to take place slowly.
These facts matter, because a good central banker has to make sure that the incentives faced by those in the financial community are properly aligned. In the case of Barings, macroeconomic incentives were aligned by making it clear to the private banks that when a SIFI fails, the private banking sector will be forced to bear the losses of that failure. This brings every bank on board to the agenda of making sure the financial system is safely structured.
In the 19th c. the Bank of England understood that few things could be more destabilizing to the financial system than the expectation that the government or the central bank was willing to bear the losses of a SIFI failure. Thus, the Bank of England protected the financial system from the liquidity consequences of a fire sale due to the SIFI, but was very careful not to take on more than a small fraction (less than 6%) of the credit losses that would be created by the SIFI failure.
This is the comment I posted:
While this is one of the better discussions of the 1890 Barings liquidation, for some reason modern economic historians have a lot of difficulty acknowledging the degree to which moral hazard concerns drove central bank conduct in the 19th c. White writes:
The Barings rescue or “lifeboat” was announced on Saturday November 15, 1890. The Bank of England provided an advance of £7.5 million to Barings to discharge their liabilities. A four-year syndicate of banks would ratably share any loss from Barings’ liquidation. The guarantee fund of £17.1 million included all institutions, and some of the largest shares were assigned to banks whose inattentive lending had permitted Barings to swell its portfolio.
Clapham (cited by White), however, makes it clear that the way the Bank of England drummed up support for the guarantee fund was by making a very credible threat to let Barings fail. Far from what is implied by the statement “The Bank of England provided an advance of £7.5 million to Barings to discharge their liabilities”, the Bank of England point blank refused to provide such an advance until and unless the guarantee fund was funded by private sector banks to protect the central bank from losses (Clapham pp. 332-33).
In short, treating the £7.5 million (which is actually the maximum liability supported by the guarantee fund over a period of four years, Clapham p. 336) as a Bank of England advance may be technically correct because of the legal structure of the guarantee fund (which was managed by the Bank), but gets the economics of the situation dead wrong.
19th century and early 20th century British growth could only take place in an environment where central bankers in London were obsessed with the twin problems of aligning incentives and controlling moral hazard. Historians who pretend that anything else was the case are fostering very dangerous behavior in our current economic climate.
Note: Updated to make the last paragraph specific to Britain.
I have a paper forthcoming in the Financial History Review that studies the role played by the Bank of England in the London money market at the turn of the 20th century. The Bank of England in this period is, of course, the archetype of a lender of last resort, so its activities shed light on what precisely it is that a lender of last resort does.
The most important implication of my study is that the standard understanding of what a lender of last resort does gets the Bank’s role precisely backwards. It is often claimed that the way that a lender of last resort functions is to make assets safe by standing ready to lend against them.
My study of the Bank of England makes it clear, however, that the duties of a lender of last resort go far beyond simply lending against assets to make them safe. What the Bank of England was doing was monitoring the whole of the money market, including the balance sheets of the principal banks that guaranteed the value of money market assets, to ensure that the assets that the Bank was engaged to support were of such high quality that it would be a good business decision for the Bank to support them.
In short, a lender of last resort does not just function in a crisis. A lender of last resort plays a crucial role in normal times of ensuring that the assets that are eligible for last resort lending have an extremely low risk of default. This function of the central bank was known as “qualitative control” (although of course quantitative measures were used to predict when quality was in decline).
Overall, if we take the Bank of England as our model of a lender of last resort, then we must recognize that the duty of such a lender is not just to lend, but also to constantly monitor the money market and limit the assets that trade on the money market to those that are of such high quality that when they are brought to the central bank in a crisis, it will be a good business decision for the bank to support them.
A central bank that fails to exercise this kind of control over the money market can expect, in a crisis, to be forced, as the Fed was in 2008, to support the value of all kinds of assets that it does not have the capacity to value itself.
Note: the forthcoming paper is a new and much improved version of this paper.
Steve discusses the decomposition of financial positions on which MMT is based. He points out that the term “net financial assets” is used for the “private sector domestic financial position” which refers exclusively to the aggregate netted financial position of both households and firms and explicitly excludes “real” savings such as any housing stock that is fully paid up. By definition, if the “private sector domestic financial position” is positive, then it must be the case that on net the private sector holds claims on either the government or on foreign entities. Of course, the value of such claims depends entirely on the credibility of the underlying promises — this is the essential characteristic that distinguishes a claim to a financial asset from a claim to a real asset.
For Steve, there is a tradeoff between holding financial claims and holding real claims, and a principal reason for holding financial claims is to offset the risk of the real claims. Thus, Steve goes on to claim that to the degree that such a positive private sector financial position is due to claims on government, the government is using its credibility to provide a kind of insurance against real economy risk.
This is where I think Steve both gets what happened in 2008 right, and gets the big picture of the relationship between the financial and the real, and between the private sector and the public sector wrong. Steve is completely correct that in 2008 the issue of public sector liabilities played a huge insurance and stabilization role. But Steve extends his argument to the claim that: “The domestic private sector simply cannot produce assets that provide insurance against systematic risks of the domestic economy without the help of the state.”
The key point I want to make in this post is this: the financial and the real are so interdependent that they cannot actually be divorced. The same is true of the private and the public sectors. Financial activity and real activity, public sector activity and private sector activity are all just windows into a single, highly-integrated economy. Thus, I would argue that it is equally correct to state that: “The domestic public sector simply cannot produce assets that provide insurance against systematic risks of the domestic economy without the help of the private sector.”
That financial activity and real activity are two sides of the same coin is most obvious when one considers that the credibility of private sector financial liabilities depends fundamentally on the performance of the real economy. But it is equally true that the credibility of public sector liabilities (when measured in real terms) depends fundamentally on the robustness of the real economy as well. Those countries that have very highly rated debt did not achieve this status ex nihilo, but because of the historical performance of their economies and the robustness of their private sectors.
Thus, it is entirely correct that the public sector can temporarily step in to provide insurance for the private sector when it is struggling, but the view that it is the public sector that is the primary provider of insurance fails to capture the genuine interdependence that lies at the heart of a modern economy.
Indeed, Steve recognizes the danger of framing the financial and the real and the public and the private in this way in his last paragraph, where he acknowledges that this publicly-issued insurance is in fact provided in real terms at the expense of a segment of the private sector — the segment that does not hold the claims on government.
Michael Pettis on Creating Money out of Thin Air
Now let’s turn to Michael Pettis (whom I’ve never met, so I’ll call him by his last name). Pettis has long stood out as an economist with a uniquely strong understanding of the relationship between the financial and the real. He argues that “When banks or governments create demand, either by creating bank loans, or by deficit spending, they are always doing one or some combination of two things, as I will show. In some easily specified cases they are simply transferring demand from one sector of the economy to themselves. In other, equally easily specified, cases they are creating demand for goods and services by simultaneously creating the production of those goods and services. They never simply create demand out of thin air, as many analysts seem to think, because doing so would violate the basic accounting identity that equates total savings in a closed system with total investment.”
His two cases are a full employment economy (without growth) and an economy with an output gap. He argues that it is only in the latter case that the funding provided by banks (or government) can have an effect on output. In a comment on Pettis’ post I observed that his first case fails to take into account Schumpeter’s theory of growth. An economy is at full employment only for a given technology. Once there is a technical innovation, the full employment level of output will increase. Schumpeter’s theory was that the role of banking in the economy was to fund such innovation. Thus, there is a third case in which bank finance in a full employment economy does not just transfer resources to a different activity, but transfers them to an innovative activity that fundamentally alters the full employment level of output. Thus, it is not only when the economy is performing below potential that bank funding can create the production that makes savings equal to investment. When banks fund fundamental technological innovation, it is “as if” the original economy were functioning below potential (which, of course, if we hold technology constant at the higher level, was in fact the case — but this deprives the concept of “potential GDP” of its meaning entirely).
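The three cases can be summarized in a minimal accounting sketch; every number here is hypothetical, chosen only to make the contrast visible:

```python
# Hypothetical numbers illustrating the three cases discussed above.

POTENTIAL = 100.0  # full-employment output under the existing technology

# Case 1: full employment, no innovation -- bank credit can only transfer
# demand between sectors; aggregate output cannot rise.
output_case1 = POTENTIAL

# Case 2: output gap -- credit-financed spending can create output,
# but only up to the existing full-employment level.
output_before = 90.0
credit_financed_production = 10.0
output_case2 = min(output_before + credit_financed_production, POTENTIAL)

# Case 3 (Schumpeter): credit funds innovation that raises the
# full-employment level itself, so demand and production are created
# together even though there was no initial output gap.
innovation_gain = 0.05  # assumed 5% rise in full-employment output
output_case3 = POTENTIAL * (1 + innovation_gain)

print(output_case1, output_case2, output_case3)
```

The point of the sketch is only that in case 3 the constraint itself moves, which is why "potential GDP" loses its meaning as a fixed benchmark.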
Schumpeter was well aware that the same bank funding mechanisms that finance fundamental technological innovation also finance technological failures and a vast amount of other business activity. Indeed, he argued that even though the banking system was needed to finance innovation and growth, the consequences of the decision-making process by which banks performed this role included both business cycles and — when the banking system performed badly — depressions.
In short, there is very good reason to believe that even in a “full-employment” economy when banks create debt, some fraction of that process creates additional demand. The problem is that the fraction in question depends entirely on the institutional structure of the banking system and its ability to direct financing into genuine innovation. It’s far from clear that this fraction will exhibit any stability over time.
How Did We Get Here: The Fault Lies in Our Models
So why do economists fall into the trap of treating the financial and the real as separable phenomena? Why do macroeconomists of all persuasions look for solutions in the so-called public sector?
The answer to the first question is almost certainly the heavy reliance of the economics profession on “market-clearing” based models. In models with market-clearing everybody buys and sells at the same time and liquidity frictions are eliminated by assumption. Of course, one of the most important economic roles played by financial assets is to address the problem of liquidity frictions. As a result, economists are generally trained to be blind to the connections between the financial and the real. People like Michael Pettis and proponents of MMT are trying to remove the blindfold. They are, however, attempting to do so without the benefit of formal models of liquidity frictions. This is a mistake, because the economics profession now has models of liquidity frictions. The future lies in the marriage of Schumpeter and Minsky’s intuition with New Monetarist models.
The answer to the second question is that we have a whole generation of macroeconomic policy-makers who think that the principal macroeconomic debate lies between Keynesians and Monetarists, when in fact both of these schools assume that the government is the insurer of last resort. The only distinction between these schools is whether the insurance is provided by fiscal or by monetary means. (To understand why our economies are struggling right now one need only understand how the assumption that the government is the fundamental source of liquidity has completely undermined the quality of our financial regulation.)
The concept of liquidity as a fundamentally private sector phenomenon that both drives the process of growth and periodically requires a little support from the government (e.g. giving the private sector time to weather a financial panic without the government actually bearing a penny of the losses) has been entirely lost. Only the future can tell us the price of this intellectual amnesia.
Given that my preferred explanation for low real interest rates is the set of changes that have been wrought within the developed world’s financial markets (and in particular the growth in collateralized inter-bank lending), I read with interest the newly released report, Low for Long? Causes and Consequences of Persistently Low Interest Rates by Sir Charles Bean, Christian Broda, Takatoshi Ito, and Randall Kroszner.
The authors establish the basic facts:
the world long-term real risk-free rate has been drifting down remorselessly from around 4% in the late 1990s to just below zero today.
The very low level of long-term risk-free real interest rates in the advanced economies is historically most unusual. Real rates have rarely been so low; when they have been, it has almost invariably been during or after a war, when there was a degree of financial repression and/or inflation was elevated. The present configuration of low real rates with low inflation appears to be unprecedented.
In Chapter 2 the authors discuss the various explanations for why real interest rates have declined “remorselessly” from 1998 or 1999 on. While the authors do discuss shifts in preferences in favor of “safe assets,” they do not even mention what I would consider the most important explanation for such a demand shift: the increasing collateralization of financial exposures on the part of our biggest financial institutions. (As was noted in my previous post, data on this is available from ISDA.) Indeed, the only source of an increase in demand for safe assets that the report cites which is consistent with the timing of the drop in rates is the increase in emerging market demand for foreign exchange buffers subsequent to the Asian financial crisis. All of the other explanations in this subsection refer to sources of increased demand subsequent to the crisis (post-crisis recognition of the extent of possible bad outcomes, post-crisis regulatory requirements for banks to hold larger buffers of safe assets, and the demand for safe assets created by central bank quantitative easing policies).
What is missing from the report is this: the latter stage of the Asian financial crisis coincided with the LTCM failure. The LTCM failure led to a significant increase in the collateralization of financial sector exposures. Collateralization ramped up continuously over the early years of the current millennium as laws supporting collateralization regimes were adopted in the US, the UK, and Europe. It is remarkable that this potential explanation of prolonged low rates on “safe assets”, which has the same timing as the emerging-markets savings glut explanation and therefore meets one of the most important criteria considered by the report, has been entirely omitted from it.
This lapse strikes me as evidence that the financial sector is invisible to modern macroeconomists. I, of course, can’t help wondering whether this blindness is generated by the models they work with. In my view, we need to take a long and hard look at modern finance and how it has changed the way the real economy operates.
Barry Ritholtz sends us to Brad DeLong using a sports analogy to criticize the economics profession:
The first principle of success in practically any endeavor is to move not toward where the ball is, but where it is going to be. Economists, as a rule, ignore this principle, indulging in the likely-to-be-vain hope that policies that would have worked yesterday will still work tomorrow.
The only sport I have much understanding of is soccer, and this principle is far from correct in soccer. I have long felt that economists are seriously handicapped by their failure to follow basic team sport principles, so here’s my sports analogy for economics.
In soccer, the job of every player on the field is to move the ball to the position that maximizes the likelihood of scoring. The job of a coach at the developmental level is to teach the players to “see the field” before the ball is received in order to read where the ball needs to move next. Usually there are only one or two good answers to this maximization problem. Good players create danger by making the right choices of where to pass, and mediocre players don’t. (If you want to see a realization of this maximization process in action, watch Germany’s 2014 World Cup games.)
So “the first principle of success” in soccer is to create scoring opportunities by moving the ball effectively (and not always forward) on the field. (The second principle of success is to move when you don’t have the ball to a position that will make you the optimizing choice for the player receiving the ball. That is, contra DeLong, a good player creates the place where the ball will be next.) The problem with the economics profession is that it plays like Spain in the 2014 World Cup: it’s very good at passing side to side, but much weaker when it comes to creating and finishing scoring opportunities.
Thus, the economics profession must, first, define the scoring opportunities — or the big questions that economics must answer — and, second, play as a team, always trying to move the ball into a position from which a teammate can score, except in the unusual case that you are best positioned to take the shot.
This, I think, is one way to understand Paul Romer’s critique of the economics profession. He is concerned that the current process promotes prima donna-like behavior, where the authors of articles set themselves the task of dribbling through four defenders and succeed only at losing possession of the ball entirely. And when there is team play it is wasted, dithering about, not even trying to answer important questions.
In short, the economics profession has an extraordinary wealth of formal analytic tools. Now that those tools have been developed, the challenge is to deploy them effectively to answer the big — mostly macroeconomic — questions. In order for this deployment to be effective, teamwork is necessary. The masters of the formal tools need to stop passing the ball laterally and start working on deploying them to answer the big questions. (The segment of the profession that seems to me to be making the greatest progress in the latter effort is “new monetarism.”) The proponents of big questions need to stop sniping at the formalists, acknowledge the weaknesses in their own economic toolbox, and, I would argue, set to work building models that take money and finance more seriously than New Keynesian models do.
I am optimistic that this challenge can be met. Teamwork was in evidence when the New Keynesian models were developed. And post-crisis, there is widespread acknowledgment of the need for better incorporation of finance into economic models, and many steps along this path have already been taken. But the economics profession needs to remember that the fastest way to progress is to act always with awareness of where the goal is, with a broad view of the entire field of players, and with each individual making careful decisions as to how to move the ball so that someone else can score.
Noah Smith reviews the debate over negative real rates, and Brad DeLong remarks on “how profoundly strange and unexpected” is the current environment. While Noah covers all the most common explanations for real rates, I think that he — and most of the econo-blogosphere — are missing a key factor that is probably driving this data.
First, recall that the problem of negative real rates is very much focused on the “safe” side of the market. That is, it is Treasuries (and similar assets like Bunds) that bear negative real rates. The market rates available to non-public borrowers are much higher than the rate on “safe assets.” (The distinction between these two rates is the premise behind Caballero and Farhi’s work.)
In my view the missing element of the discourse on the low yields of safe assets is the remarkable change in the structure of the financial system that started very slowly in the 1990s, accelerated at the end of that decade, and was a full-fledged financial revolution by the end of the next decade. This change is the collateralization of inter-bank lending, which previously was unsecured and funded on the basis of reputation-type mechanisms.
ISDA data shows that with the growth of swaps starting in the early 1990s, collateralization of bilateral derivatives contracts became fairly common, though far from ubiquitous. Subsequent to the 1998 LTCM crisis, collateralization of derivatives contracts became much more widespread. The 2000 Commodities Futures Modernization Act pre-empted long-standing common law and state law constraints on derivatives markets, which subsequently grew dramatically — along with the use of collateral. The 2005 bankruptcy reform act dramatically changed markets for collateral, and in particular made it possible for the repos of just about any asset to trade on a par with derivatives collateral.
In addition, in the early noughties, the growth of structured finance made possible synthetic assets in which “investors” sold protection on bonds (instead of investing in actual bonds) and held the collateral that was used to guarantee payment on the protection contracts in “safe assets.” Finally, financial market participants have sometimes commented that the Basel rules for banks promoted collateralized interbank lending over unsecured interbank lending (though I’ve never really investigated this point).
In short, the same data that Ben Bernanke explained in terms of a “savings glut” can also be explained by the financial industry’s massive increase in demand for safe assets that serve as high quality collateral over the same time period. The financial industry’s demand is a demand for safety and cannot be met by risky assets, so it is an excellent explanation for the 21st-century divergence between the behavior of “safe” interest rates and risky interest rates.
Furthermore, since the 2008 crisis the financial industry’s demand for collateral has only increased. In 2008 the unsecured interbank markets, including both the Federal Funds market and the Libor market, collapsed. They have not recovered. Interbank lending has shifted almost entirely to a collateralized basis. While it is true that the demand for collateral that was created by structured finance products has largely disappeared, this is most likely offset by regulatory changes that increase the demand for collateral.
In short, the best explanation for why private markets are forcing interest rates to zero is that the banking system is broken. The system which functioned for centuries on the basis of unsecured, reputation-based, interbank lending no longer exists. ZIRP is just evidence that the financial industry is turning to government as a source of the liquidity that the financial industry is no longer capable of creating on its own.
My recent work has led me to study classical banking theory, which informed both Wicksell’s and Schumpeter’s understanding of the economy. Classical banking theory views bank liabilities as the primary form of money and argues that, given a well-structured financial system, the quantity of money is endogenously determined by the demands of the business community for the finance of accounts receivable and similar short-term loans.
This theory of money views the role of banks and the supply of money as passively responding to the needs of the real economy. And this view was, probably correctly, targeted as one of the reasons U.S. officials failed to act aggressively during the Great Depression in the 1930s. Certainly both the Monetarists and the Keynesians who would develop what is now known as macroeconomics saw classical banking theory as a school of thought that was best exterminated. And these days extraordinarily sophisticated works that took a banking theory approach in the mid-20th century are now relegated to the moth-balled “depository” shelves of university book stacks (e.g. the work of R.S. Sayers).
In short, in the 1930s there was a predominant model of the money and banking system. This model failed when it was applied to circumstances far beyond the realm of its usual operation. When it failed, proponents could not recognize that failure and instead used the model to justify the view that real economic performance was a “natural” phenomenon about which there was nothing they could do. They were firmly convinced there was no need to act.
Reading David Beckworth and Paul Krugman today, I couldn’t help wondering whether history is beginning to rhyme. After describing the views of those who doubt the continued efficacy of policy rates that are held at zero, Beckworth writes:
What I wish George Will, Bill Gross, and other free market advocates would consider is the possibility that the Fed itself is not the source of the low rates, but simply is a follower of where market forces have pushed interest rates. That is, the Great Recession and the prolonged slump that followed caused interest rates to be depressed and the Fed did its best to keep short-term interest rates near this low market-clearing level.
Krugman, discussing Beckworth’s post, gives a very clear description of how this view is the output of the current predominant model of the macroeconomy:
He’s completely right about the economics. . . . we have a very clear model that tells us what interest rates would be in the absence of distortions and rigidities, the Wicksellian natural rate — the rate of interest consistent with an economy subject neither to inflationary overheating nor deflationary excess supply. And with inflation consistently below the generally accepted 2 percent target, this model says that the actual interest rate, at zero, is above the natural rate, not below.
And all I can hear reading this is the rhyme of history. We have a model and we rely on it to be right. That model tells us that low rates are “natural.” There is nothing more to be done. We must keep rates at zero until the economy improves. But just as in the 1930s the model is being applied far beyond the region of the data in which we have knowledge that it works.
And my guess is that, just as in the 1930s, we will find that we need to develop a completely different model, built on completely different premises, in order to develop policy recommendations for our current problems. My own view is that this is a good time to revive classical banking theory and relearn what it has to say about central bank policy and what makes the economy tick.
David Andolfatto has given a very simple explanation of why such new models are needed: the data can be explained by debt constraints just as well as by a negative real interest rate. This accords very closely with Schumpeter’s view that every economic “catastrophe” can be attributed to dysfunction in the banking sector. Perhaps it will be only after we have relearned how to model the monetary role of the banking system that we will be able to escape the tragedy of ZIRP.
Pursuant to Attorney General Loretta Lynch’s welcome change in DoJ policy, it occurred to me that an old draft post of mine might actually merit being posted, so here goes:
After listening to a presentation on the impressive growth in enforcement actions resulting in corporate criminal liability a few months ago, it occurred to me that people without legal training might not actually understand the reasoning behind the critique that individual prosecutions should almost always accompany corporate criminal liability. (The presenter at one point framed such critiques as claiming that prosecutors were colluding with management against the shareholders.)
The problem with corporate criminal liability is this: every crime has a mens rea, or element of intent, that must be proved as part of the prosecutor’s case. Negligence is one of the lower levels of mens rea, but many instances of negligence are not crimes. Often a “knowing” or “should have known” standard is applied in criminal law.
When a prosecutor chooses to seek corporate criminal liability, without bringing any cases of individual criminal liability, the problem is whether it makes logical sense to argue that the corporation had the mens rea for the crime, but no individual in the corporation had the mens rea (or the one with the mens rea managed not to take relevant action in promotion of the crime). Now one can dream up special circumstances where this position would actually be logical, but it seems to a lot of people that this situation should be rare.
Critics of corporate liability (I’m thinking of Judge Rakoff and Bill Black here, for example) would probably argue that pursuing corporate criminal liability, without pursuing individual liability is tantamount to stating that a crime was committed, but we don’t know by whom. (Note that the reverse where there is individual criminal liability without corporate criminal liability is likely to be much more common. Rogue employees and a genuine effort on the part of the corporation to avoid the criminal activity would both be good reasons – though not necessarily successful reasons – for not extending criminal liability from an individual to the corporation.)
Overall, an important criticism of the growth of deferred prosecution agreements and non-prosecution agreements is that finding this growth acceptable in the absence of individual prosecutions essentially lowers the standards for what a prosecutor is supposed to do. “A crime was committed, but I don’t know by whom” should not be the normal stopping point for a prosecutor’s case.
The argument is, of course, not that there should never be corporate criminal liability without an accompanying case for individual liability, but simply that this outcome should be relatively rare. In general, we want our prosecutors to think of their jobs as going all the way to finding out “who done it,” and not stopping with “a crime was committed” and a fine was paid.
In short, the argument against treating a finding of corporate criminal liability as an end point is not about “collusion,” but instead goes to the heart of what it means to enforce the law.