Thursday, December 01, 2011

Price Coherence on Intrade

A couple of days ago, Richard Thaler tweeted this:
Intrade prices seem incoherent. How can Newt nomination price soar but Obama win stay at 50%?
Here's what Thaler is talking about. Over the past couple of weeks, the price of a contract that pays $10 in the event that Gingrich is nominated has risen sharply from about a dollar to above $3.50:

Over the same period, a contract that pays $10 if Obama is reelected has barely moved, trading within a ten-cent band a shade above $5:

Thaler considers this pattern to be incoherent because Gingrich is widely believed to be a weaker general election candidate than Romney. For instance, in head-to-head poll averages Obama currently leads Gingrich by 5.7%, but leads Romney by the much smaller margin of 1.5%.

But even if Gingrich really is the weaker candidate against Obama under any set of conditions that might prevail on election day, it does not follow (as a point of logic) that a rise in the Gingrich nomination price must be associated with a rise in the Obama reelection price. For instance, a belief among voters that Obama is more vulnerable would ordinarily result in a decline in his likelihood of reelection, but this could be offset if the same belief also leads to the nomination by the GOP of a more conservative but less electable candidate.
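The arithmetic behind this point is just the law of total probability: the Obama reelection price is a weighted average of his chances against each possible nominee. A back-of-the-envelope sketch (the conditional probabilities here are illustrative assumptions, not market data) shows how a large swing in the nomination odds can leave the reelection price almost unchanged when those conditionals straddle 50%:

```python
# Illustrative (assumed) conditional probabilities: Obama is likelier to
# beat Gingrich than Romney, consistent with the poll gap cited above.
p_obama_if_gingrich = 0.52
p_obama_if_romney = 0.50

def p_obama_wins(p_gingrich_nominated):
    """Law of total probability over the two nomination outcomes."""
    return (p_gingrich_nominated * p_obama_if_gingrich
            + (1 - p_gingrich_nominated) * p_obama_if_romney)

before = p_obama_wins(0.10)  # Gingrich contract near $1
after = p_obama_wins(0.35)   # Gingrich contract above $3.50

print(round(before, 3), round(after, 3))  # 0.502 0.507
```

On a $10 contract, a twenty-five point swing in nomination odds moves the reelection price by about five cents under these assumptions; and if perceptions of Obama's vulnerability shift the conditionals at the same time, even that small movement can vanish.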

This reasoning is consistent with the so-called Buckley Rule, which urges a vote for the most conservative candidate who is also electable. As perceptions about the electability of the incumbent shift, so does the perceived viability of more ideologically extreme members of the opposition. These countervailing effects can dampen fluctuations in the electability of the incumbent. Hence the market data alone cannot decisively settle the question of price coherence. 

Friday, October 07, 2011

Notes on a Worldly Philosopher

The very first book on economics that I remember reading was Robert Heilbroner's magisterial history of thought The Worldly Philosophers. I'm sure that I'm not the only person who was drawn to the study of economics by that wonderfully lucid work. Heilbroner managed to convey the complexity of the subject matter, the depth of the great ideas, and the enormous social value that the discipline at its best is capable of generating.

I was reminded of Heilbroner's book by Robert Solow's review of Sylvia Nasar's Grand Pursuit: The Story of Economic Genius. Solow begins by arguing that the book does not quite deliver on the promise of its subtitle, and then goes on to fill the gap by providing his own encapsulated history of ideas. Like Heilbroner before him, he manages to convey with great lucidity the essence of some pathbreaking contributions. I was especially struck by the following passages on Keynes:
He was not without antecedents, of course, but he provided the first workable intellectual apparatus for thinking about what determines the level of “output as a whole.” A generation of economists found his ideas the only available handle with which to grasp the events of the Great Depression of the time... Back then, serious thinking about the general state of the economy was dominated by the notion that prices moved, market by market, to make supply equal to demand. Every act of production, anywhere, generates income and potential demand somewhere, and the price system would sort it all out so that supply and demand for every good would balance. Make no mistake: this is a very deep and valuable idea. Many excellent minds have worked to refine it. Much of the time it gives a good account of economic life. But Keynes saw that there would be occasions, in a complicated industrial capitalist economy, when this account of how things work would break down.

The breakdown might come merely because prices in some important markets are too inflexible to do their job adequately; that thought had already occurred to others. It seemed a little implausible that the Great Depression of the 1930s should be explicable along those lines. Or the reason might be more fundamental, and apparently less fixable. To take the most important example: we all know that families (and other institutions) set aside part of their incomes as saving. They do not buy any currently produced goods or services with that part. Something, then, has to replace that missing demand. There is in fact a natural counterpart: saving today presumably implies some intention to spend in the future, so the “missing” demand should come from real capital investment, the building of new productive capacity to satisfy that future spending. But Keynes pointed out that there is no market or other mechanism to express when that future spending will come or what form it will take... The prospect of uncertain demand at some unknown time may not be an adequately powerful incentive for businesses to make risky investments today. It is asking too much of the skittery capital market. Keynes was quite aware that occasionally a wave of unbridled optimism might actually be too powerful an incentive, but anyone in 1936 would take the opposite case to be more likely.

So a modern economy can find itself in a situation in which it is held back from full employment and prosperity not by its limited capacity to produce, but by a lack of willing buyers for what it could in fact produce. The result is unemployment and idle factories. Falling prices may not help, because falling prices mean falling incomes and still weaker demand, which is not an atmosphere likely to revive private investment. There are some forces tending to push the economy back to full utilization, but they may sometimes be too weak to do the job in a tolerable interval of time. But if the shortfall of aggregate private demand persists, the government can replace it through direct public spending, or can try to stimulate additional private spending through tax reduction or lower interest rates. (The recipe can be reversed if private demand is excessive, as in wartime.) This was Keynes’s case for conscious corrective fiscal and monetary policy. Its relevance for today should be obvious. It is a vulgar error to characterize Keynes as an advocate of “big government” and a chronic budget deficit. His goal was to stabilize the private economy at a generally prosperous level of activity.
This is as clear and concise a description of the fundamental contribution of the General Theory as I have ever read. And it reveals just how far the so-called Keynesian economics of our textbooks has strayed from the original vision of Keynes. The downward inflexibility of wages and prices is viewed in many quarters today as the hallmark of Keynesian theory, and yet the opposite is closer to the truth. The key problem for Keynes is the mutual inconsistency of individual plans: the inability of those who defer consumption to communicate their demand for future goods and services to those who would invest in the means to produce them.

The place where this idea gets buried in modern models is in the hypothesis of "rational expectations." A generation of graduate students has come to equate this hypothesis with the much more innocent claim that individual behavior is "forward looking." But the rational expectations hypothesis is considerably more stringent than that: it requires that the subjective probability distributions on the basis of which individual decisions are made correspond to the objective distributions that these decisions then give rise to. It is an equilibrium hypothesis, and not a behavioral one. And it amounts to assuming that the plans made by millions of individuals in a decentralized economy are mutually consistent. As Duncan Foley recognized a long time ago, this is nothing more than "a disguised form of the assumption of the existence of complete futures and contingencies markets."
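The distinction between forward-looking behavior and the rational expectations hypothesis can be made concrete with a toy model (all numbers assumed for illustration). Suppose the outcome depends on the average forecast. Being forward-looking only says that agents forecast; rational expectations additionally requires the forecast to equal the outcome it generates, which is an equilibrium fixed point:

```python
# Toy model (parameters assumed): realized outcome p = a + b * forecast.
a, b = 2.0, 0.5

def outcome(forecast):
    return a + b * forecast

# Rational-expectations equilibrium: solve p = a + b * p for p.
p_re = a / (1 - b)   # the forecast that reproduces itself

# Any other forward-looking forecast is internally coherent but not
# "rational expectations": the realized outcome contradicts it. Here a
# naive revision rule happens to converge to the fixed point (|b| < 1),
# but that convergence is itself a substantive assumption.
forecast = 1.0
for _ in range(50):
    forecast = outcome(forecast)

print(round(p_re, 6), round(forecast, 6))  # 4.0 4.0
```

The hypothesis, in other words, amounts to imposing the fixed point directly rather than modeling the revision process that might (or might not) lead to it.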

It is gratifying, therefore, to see increasing attention being focused on developing models that take expectation revision and calculation seriously. A conference at Columbia earlier this year was devoted entirely to such lines of work. And here is Mike Woodford on the INET blog, making a case for this research agenda:
This postulate of “rational expectations,” as it is commonly though rather misleadingly known... is often presented as if it were a simple consequence of an aspiration to internal consistency in one’s model and/or explanation of people’s choices in terms of individual rationality, but in fact it is not a necessary implication of these methodological commitments. It does not follow from the fact that one believes in the validity of one’s own model and that one believes that people can be assumed to make rational choices that they must be assumed to make the choices that would be seen to be correct by someone who (like the economist) believes in the validity of the predictions of that model. Still less would it follow, if the economist herself accepts the necessity of entertaining the possibility of a variety of possible models, that the only models that she should consider are ones in each of which everyone in the economy is assumed to understand the correctness of that particular model, rather than entertaining beliefs that might (for example) be consistent with one of the other models in the set that she herself regards as possibly correct...

The macroeconomics of the future, I believe, will still make use of general-equilibrium models in which the behavior of households and firms is derived from considerations of intertemporal optimality, but in which the optimization is relative to the evolving beliefs of those actors about the future, which need not perfectly coincide with the predictions of the economist’s model. It will therefore build upon the modeling advances of the past several decades, rather than declaring them to have been a mistaken detour. But it will have to go beyond conventional late-twentieth-century methodology as well, by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.
I think that the vigorous pursuit of this research agenda could lead to a revival of interest in theories of economic fluctuations that have long been neglected because they could not be reformulated in ways that were methodologically acceptable to the professional mainstream. I am thinking, in particular, of nonlinear models of business cycles such as those of Kaldor, Goodwin, Tobin and Foley, which do not depend on exogenous shocks to account for departures from steady growth. This would be an interesting, ironic, and welcome twist in the tangled history of the worldly philosophy.

Monday, August 08, 2011

David Levey on the Ratings Downgrade

David Levey (Managing Director, Sovereign Ratings, Moody's Investors Service, 1985-2004) sent out the following statement yesterday to a number of publications, including the New York Times, Wall Street Journal, Financial Times, and Bloomberg. Since I haven't seen it published anywhere and he has granted permission to freely reproduce it, I'm posting it here (I thank Sam Bowles for forwarding the statement to me):
The recent S&P downgrade of the credit rating of US Treasury bonds is unwarranted for the following reasons: 
  1. The US dollar remains the dominant global currency and no viable competitor is on the horizon. The euro is heading into dangerous and uncharted waters while deep and difficult political, economic and financial reforms will be required before the renminbi could become fully convertible for capital flows and Chinese government bonds a safe reserve asset. 
  2. US Treasury bills and bonds, along with government-guaranteed bonds and highly-rated corporates, will for the foreseeable future remain the assets of choice for global investors seeking a "safe haven", due to the unparalleled institutional strength, depth and liquidity of the market. Although there are several advanced Aaa-rated OECD countries with lower debt ratios and better fiscal outlooks than the US, their markets are generally too small to play that role. Since ratings are intended to function as a market signal, it makes little sense to implicitly suggest to investors seeking "risk-free" reserve assets that they reallocate their portfolios toward these relatively illiquid markets. 
  3. Despite the above positive factors for the US, it is certainly the case that the US long-term debt outlook is deteriorating under the pressure of rising entitlement costs and an inefficient, distortionary tax system. Failure to reverse that trajectory would eventually make a downgrade unavoidable. But the recent discussions signal to me that -- finally -- public awareness of the fiscal crisis is growing and beginning to influence Washington. There is still a window of time -- perhaps as much as a decade -- within which structural reforms to spending programs and the tax system could reverse the negative debt trajectory.
  4. The bottom line is that the global role of the dollar and the central position of US bond markets make somewhat elevated debt ratios more compatible with a Aaa rating than is the case for other countries, another version of the US's "exorbitant privilege". But that extra leeway is finite and serious reforms to entitlement programs, particularly Medicare, must be made in a reasonable time horizon. If not, global investors will eventually conclude that our political system is incapable of making the needed changes and turn away from US assets, regardless of the institutional strengths of US markets.
This is consistent with Warren Buffett's view of the downgrade.

Even more interesting than Levey's statement was his preamble, in which he states that he has "no connection with Moody's nor any non-public knowledge of what its analysts think about the rating or what they intend to do" and then adds the following: 
As I see our current situation, the Federal Reserve, with its too-tight monetary stance since the summer of 2008, has allowed nominal GDP to fall far below trend, causing a collapse of output and employment -- as described by the monetary bloggers Scott Sumner, David Beckworth, Bill Woolsey, and David Glasner. Had the Fed acted properly (by, for example, setting a nominal GDP level target) the recession would have been much shallower and fiscal stimulus might not have been undertaken. As it was, the collapse of nominal GDP drove the "fiscal multiplier" to zero, leaving us with more debt and nothing to show for it.
Whether or not the Fed had the capacity and the commitment to have substantially mitigated the recession in the absence of fiscal policy, I'm not qualified to judge. But I remain skeptical that the rating agencies have the ability to evaluate credit risk with greater accuracy than the market itself would do in their absence. Were it not for the fact that capital requirements for financial institutions are set on the basis of their ratings, I doubt that there would be much of a market for their services, or that they would have such visibility and influence. And as far as sovereign debt is concerned, I'm not sure that they provide us with any useful information or guidance.

Saturday, August 06, 2011

Rating the Agencies

It's being argued that yesterday's downgrade of the credit rating of the United States government by Standard and Poor's could increase borrowing costs throughout the economy, worsen the burden of debt, retard a recovery that already appears to be faltering, affect political brinkmanship in future negotiations, and further tarnish our national reputation.

Unless, of course, we choose to collectively ignore it, as Dan Alpert recommends:
Effectively – the S&P pronouncement last evening amounted to not much more than a guest in your house telling your children to clean up their rooms “or else.” I don’t know about you, but in my case, at least, I would ask such a guest to apologize or leave. 
But it's difficult to ignore events on which everyone else is lavishing such great attention, and this seems like an appropriate time to examine how these agencies managed to gain such visibility and influence. As Ross Levine notes in his recent autopsy of the financial crisis, this is where we stood forty years ago:
Until the 1970s, credit rating agencies were comparatively insignificant, moribund institutions that sold their assessments of credit risk to subscribers. Given the poor predictive performance of these agencies, the demand for their services was limited for much of the twentieth century (Partnoy, 1999). Indeed, academic researchers found that credit rating agencies produce little additional information about the firms they rate; rather, their ratings lag stock price movements by about 18 months (Pinches and Singleton, 1978).
But then a policy shift occurred that continues to have major ramifications to this day. The SEC provided a special designation to a class of rating agencies and then proceeded to use their opinions as a basis for setting capital requirements. The selected agencies suddenly found themselves endowed with vastly increased market power and a very lucrative business model:
In 1975, the SEC created the Nationally Recognized Statistical Rating Organization (NRSRO) designation, which it granted to the largest credit rating agencies. The SEC then relied on the NRSRO's credit risk assessment in establishing capital requirements on SEC-regulated financial institutions.

The creation of – and reliance on – NRSROs by the SEC triggered a cascade of regulatory decisions that increased the demand for their credit ratings. Bank regulators, insurance regulators, federal, state, and local agencies, foundations, endowments, and numerous entities around the world all started using NRSRO ratings to establish capital adequacy and portfolio guidelines. Furthermore, given the reliance by prominent regulatory agencies on NRSRO ratings, private endowments, foundations, and mutual funds also used their ratings in setting asset allocation guidelines for their investment managers. NRSRO ratings shaped the investment opportunities, capital requirements, and hence the profits of insurance companies, mutual funds, pension funds, and a dizzying array of other financial institutions.

Unsurprisingly, NRSROs shifted from selling their credit ratings to subscribers to selling their ratings to the issuers of securities. Since regulators, official agencies, and private institutions around the world relied on NRSRO ratings, virtually every issuer of securities was compelled to purchase an NRSRO rating if it wanted a large market for its securities. Indeed, Partnoy (1999) argues that NRSROs essentially sell licenses to issue securities; they do not primarily provide assessments of credit risk.
This shift in business model by the selected agencies raised some rather obvious conflicts of interest, since their customers were now issuers of debt who stood to gain from overly optimistic assessments of their credit risk. As is common in such cases, the counterargument was made that the need to preserve one's reputation for accuracy would provide adequate incentives for objective ratings:
There are clear conflicts of interest associated with credit rating agencies selling their ratings to the issuers of securities. Issuers have an interest in paying rating agencies more for higher ratings since those ratings influence the demand for and hence the pricing of securities. And, rating agencies can promote repeat business by providing high ratings...

Nevertheless, credit rating agencies convinced regulators that reputational capital reduces the pernicious incentive to sell better ratings. If a rating agency does not provide sound, objective assessments of a security, the agency will experience damage to its reputation with consequential ramifications on its long-run profits. Purchasers of securities will reduce their reliance on this agency, which will reduce demand for all securities rated by the agency. As a result, issuers will reduce their demand for the services provided by that agency, reducing the agency's future profits. From this perspective, reputational capital is vital for the long-run profitability of credit rating agencies and will therefore contain any short-run conflicts of interest associated with “selling” a superior rating on any particular security.
I have previously discussed some of the limitations of this argument in a different context, and such limitations were clearly evident in the case of the agencies:
Reputational capital will reduce conflicts of interest, however, only under particular conditions. First, the demand for securities must respond to poor rating agency performance, so that decision makers at rating agencies are punished for issuing bloated ratings on even a few securities. Second, decision makers at rating agencies must have a sufficiently long-run profit horizon, so that the long-run costs to the decision maker from harming the agency's reputation outweigh the short-run benefits from selling a bloated rating.

These conditions do not hold, however... regulations weaken the degree to which a decline in the reputation of a credit rating agency reduces demand for its services. Specifically, regulations induce the vast majority of the buyers of securities to use NRSRO ratings in selecting assets. These regulations hold regardless of NRSRO performance, which moderates the degree to which poor ratings performance reduces the demand for NRSRO services. Such regulations mitigate the positive relation between rating agency performance and profitability.
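Levine's two conditions can be captured in a stylized repeat-game calculation (all numbers assumed for illustration): inflating a rating pays off only when the one-shot gain exceeds the discounted stream of future losses from a damaged reputation. Regulation that props up demand regardless of performance shrinks that future loss, and the discipline evaporates:

```python
# Stylized sketch (assumed numbers): does selling one bloated rating pay?
def inflation_pays(one_shot_gain, per_period_loss, discount):
    """True if the immediate gain exceeds the discounted future losses
    from reputational damage, summed over an infinite horizon."""
    future_loss = per_period_loss * discount / (1 - discount)
    return one_shot_gain > future_loss

# Demand sensitive to performance, long horizon: reputation binds.
print(inflation_pays(10, 5, 0.9))    # False -- honest rating is optimal
# Regulation sustains demand regardless of performance (small loss):
print(inflation_pays(10, 0.5, 0.9))  # True  -- inflation pays
```

The same comparison fails in the other direction too: a short profit horizon (a low discount factor) makes inflation pay even when demand is performance-sensitive, which is Levine's second condition.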
This brings us to the role of the agencies in the financial crisis. The rapid growth of structured products provided the agencies with a substantial new source of demand, as well as the problem of assessing credit risk for securities of much greater complexity. Minor changes in modeling assumptions could lead to significantly different ratings for such assets. Nevertheless, there were strong incentives in place for the agencies to act as if they could make competent assessments of credit risk:
The explosive growth of securitized and structured financial products from the late 1990s onward materially intensified the conflicts of interest problem. Securitization and structuring involved the packaging and rating of trillions of dollars worth of new financial instruments. Huge fees associated with processing these securities flowed to banks and NRSROs. Impediments to this securitization and structuring process, such as the issuance of low credit rating on the securities, would gum-up the system, reducing rating agency profits.

In fact, the NRSROs started selling ancillary consulting services to facilitate the processing of securitized instruments, increasing NRSRO incentives to exaggerate ratings on structured products. Besides purchasing ratings from the NRSROs, the banks associated with creating structured financial products would first pay the rating agencies for guidance on how to package the securities to get high ratings and then pay the rating agencies to rate the resultant products.

Other evidence also indicates that rating agencies adjusted their behavior to capture the profits made available by securitization and the design of new structured financial products. Lowenstein's (2008) excellent description of the rating of a MBS by Moody's demonstrates the speed with which complex products had to be rated, the poor assumptions on which these ratings were based, and the profits generated by rating structured products... Indeed, internal e-mails indicate that the rating agencies lowered their rating standards to expand the business and boost revenues... A collection of documents released by the US Senate suggests that NRSROs consciously adjusted their ratings to maintain clients and attract new ones.

The short-run profits from these activities were mind bogglingly large and made the future losses from the inevitable loss of reputational capital irrelevant. For example, the operating margin at Moody's between 2000 and 2007 averaged 53 percent. This compares to operating margins of 36 and 30 percent at Microsoft and Google, or 17 percent at Exxon... Thus, rating agencies faced little market discipline, had no significant regulatory oversight, were protected from competition by regulators and legislators, and enjoyed a burgeoning market for their services... It was good to be an NRSRO.
Levine's bottom line is this:
While the crisis does not have a single cause, the behavior of the credit rating agencies is a defining characteristic. It is impossible to imagine the current crisis without the activities of the NRSROs. And, it is difficult to imagine the behavior of the NRSROs without the regulations that permitted, protected, and encouraged their activities.
Perhaps the time has come to consider a complete overhaul of this dysfunctional system. Withdraw the special designation accorded to the major agencies, so that they compete on a level playing field with new entrants. If they really do have the expertise to make assessments of credit risk that are more accurate than the market, let them build reputation and find clients willing to pay for their pronouncements. Make capital requirements for financial institutions independent of ratings, thus stripping the agencies of their monopoly power and guaranteed sources of income. And in the meantime, greet their pronouncements on sovereign debt not with an anxious wringing of hands, but with a collective yawn.


Update (8/10). Andrew Gelman follows up:
Another way to look at this is: Given all the above, those S&P dudes must really really think the U.S. is at risk of defaulting. Keeping the AAA rating would’ve been the safe default choice. Deciding to downgrade—that’s political dynamite, with a risk of losing their lucrative quasimonopoly. That’s a decision you’d only make for a really good reason. Or maybe they’re just overcompensating for all those bad AAA ratings they gave out a few years ago?
I certainly see his point. But I don't think that the agencies are in much danger of losing their quasimonopoly, which makes the decision a bit harder to interpret as a bold act driven by conviction.

Sunday, July 24, 2011

Greek Games

I haven't made up my mind yet about the wisdom of the latest plan to secure the financial viability of Greece within the eurozone, but as a piece of financial engineering it has some very intriguing features.

Current bondholders have the option of exchanging their assets for new issues that promise less and deliver later, but are considerably more secure. There are four new issues to choose from, varying with respect to maturity, interest rate, and the proportion of principal that is guaranteed (by highly rated zero coupon bonds or funds held in an escrow account). But these options are designed to be roughly equivalent in present value terms, and it is expected that they will be selected in approximately equal measure by those who choose to participate in the exchange.
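Rough present-value equivalence across instruments with different terms is a matter of tuning coupons against maturities at a common discount rate. A hedged sketch (the coupons, maturities, and 9% discount rate below are illustrative assumptions, not the actual menu terms) shows how two quite different bonds can be made nearly interchangeable in value:

```python
# Hypothetical menu options on 100 of face value, discounted at an
# assumed common rate of 9%.
def present_value(coupon, years, principal=100.0, rate=0.09):
    """PV of an annual-coupon bond: discounted coupons plus principal."""
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    return pv_coupons + principal / (1 + rate) ** years

pv_short = present_value(coupon=4.5, years=15)   # lower coupon, sooner
pv_long = present_value(coupon=5.47, years=30)   # higher coupon, later

print(round(pv_short, 1), round(pv_long, 1))  # 63.7 63.7
```

Holders with different liquidity needs or views on the discount rate will still sort themselves across the menu, which is presumably why roughly equal take-up of the four options was expected.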

Participation is voluntary, so current bondholders can simply choose to do nothing. For this reason, the financing offer does not trigger payouts on credit default swaps.

What makes the mechanism strategically interesting is that the payoffs from participation are highly sensitive to the overall participation rate. The higher the participation rate, the greater will be the ability of Greece to meet its financial obligations not only on the new issues but also on the outstanding ones. Participation by some raises the value of the assets held by the remainder. If the target participation rate of 90% is met, then those who decline to participate will find themselves holding bonds that are much less likely to default than is currently the case. In anticipation of this effect, yields on Greek bonds (and the cost of insuring them with credit derivatives) fell sharply following the announcement.
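The externality described above can be put in a stylized sketch (every parameter below is an assumption for illustration; recovery value in default is ignored): if higher participation lowers the default probability on the bonds that remain outstanding, the expected payoff to a holdout rises with everyone else's participation.

```python
# Stylized model: participation eases the debt burden, so the default
# probability on old bonds falls as participation rises (assumed linear,
# from 60% at zero participation to 20% at full participation).
def default_prob(participation):
    return 0.60 - 0.40 * participation

def holdout_value(participation, face=10.0):
    """Expected payoff of an old bond held by a non-participant."""
    return face * (1 - default_prob(participation))

print(round(holdout_value(0.0), 2))  # 4.0 -- status quo, no exchange
print(round(holdout_value(0.9), 2))  # 7.6 -- at the 90% target, holdouts gain
```

This is exactly the free-rider tension the mechanism has to overcome: each holder would prefer that others participate, which is why the options must be attractive enough in their own right.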

It's interesting to think about who gains and who loses from this. Contrary to most accounts in the media, current bondholders benefit from the existence of the financing offer, regardless of whether or not they choose to participate. Those who decline to participate experience a capital gain on their assets (relative to the status quo without the offer). And those who participate are choosing to forgo this capital gain and must therefore be even better off. Of course, there will be many bondholders who purchased their assets at times when Greek default was considered highly unlikely, and they will experience a loss on their original investment. But this loss has already been inflicted on them: the financing offer just gives them an opportunity to capitalize it in a manner that eases Greece's debt burden, as an alternative to selling their bonds in the open market. 

The fact that credit default swaps are not triggered by the offer, coupled with the lowered likelihood of default on current bonds, benefits sellers of protection on Greek debt. In fact, such sellers have strong incentives to buy up Greek bonds and participate in the exchange, thus lowering the probability that a credit event will arise in the near future. I would not be surprised if some of the buying that raised prices on the heels of the announcement came from such sources.

So who loses as a result of the financing offer? First and foremost, those who bought naked credit default swaps, thus making a directional bet on a credit event that is now less likely to occur. It is quite conceivable that the plan was designed to have precisely this effect. Speculators betting on sovereign default have come in for a fair amount of public criticism by political leaders in Europe, and stand accused of raising the cost of borrowing and the likelihood of default. (I have argued in joint work with Yeon-Koo Che that there is some theoretical basis for this claim.)

Costs will also be imposed on the countries of the eurozone core, who are providing the collateral to guarantee principal on the new issues (Greece remains solely responsible for all interest payments). But these countries are motivated by the belief that a formal default by Greece would have contagion effects across the periphery, leading to a chaotic collapse of the currency union. The biggest risk entailed in the current initiative is that it may not, in the end, be enough to prevent this.

Monday, July 18, 2011

Some Thoughts on the Unthinkable

In his April 4 letter to Congress on the urgent need to raise the debt limit, the Secretary of the Treasury made the following claims:
As the leaders of both parties in both houses of Congress have recognized, increasing the limit is necessary to allow the United States to meet obligations that have been previously authorized and appropriated by Congress. Increasing the limit does not increase the obligations we have as a Nation; it simply permits the Treasury to fund those obligations that Congress has already established.

If Congress failed to increase the debt limit, a broad range of government payments would have to be stopped, limited or delayed, including military salaries and retirement benefits, Social Security and Medicare payments, interest on the debt, unemployment benefits and tax refunds. This would cause severe hardship to American families and raise questions about our ability to defend our national security interests. In addition, defaulting on legal obligations of the United States would lead to sharply higher interest rates and borrowing costs, declining home values and reduced retirement savings for Americans. Default would cause a financial crisis potentially more severe than the crisis from which we are only now starting to recover.

For these reasons, default by the United States is unthinkable.
Unthinkable as it may be, it's worth giving this a little thought.

Strictly speaking, the Treasury could continue to make payments on all obligations authorized by Congress, simply by sending out checks as they come due. Commercial banks would undoubtedly accept these from depositors, confident in the knowledge that the Fed would create the reserves necessary to credit their accounts. If the Fed were concerned about the resulting expansion of the monetary base, it could neutralize this by selling bonds on the open market. The result would be an increase in the debt held by the public, with no change in the monetary base, which is exactly what would transpire if the deficit were financed by the issue of new bonds.

The problem, of course, is that the Treasury's account at the Fed would then be vastly overdrawn and the debt limit thereby exceeded. Instead of borrowing from bondholders, the Treasury would be borrowing, so to speak, from the Federal Reserve. I'm quite certain that in the current political climate this would be treated by Congress as a usurpation of its power, resulting in a constitutional crisis and possible impeachment. Not surprisingly, the Treasury Secretary is reluctant to go down this road.

The only alternative is for the Treasury to meet some of the obligations authorized by Congress while failing to meet others. For this to happen, someone in the executive branch would have to decide which prior appropriations made by Congress to respect, and which to ignore. Interest and principal on the debt would probably receive the highest priority, given constitutional imperatives. But everything beyond that, it seems, would be fair game. By respecting one law -- the debt ceiling -- the Treasury would be forced to disregard others. Payments to contractors, congressional and agency staff, state and local governments, social security recipients, and health care providers would all need to be prioritized. This is a bizarre and highly undemocratic manner of repealing legislation.

I doubt very much that it will come to this. If the Treasury were able to communicate its priorities credibly to the public, making clear exactly who would get paid and who would not, I suspect that we would have an agreement in short order. But even if we get past the current crisis unscathed, the same scenario is likely to be repeated whenever government is acrimoniously divided in the future. Accordingly, it's worth thinking about the kind of structural changes that could help us avoid a periodic repetition of this farce.

One possibility is to absorb an increase in the debt ceiling into any legislation that has budget implications, and to do so in a manner that allows for all the implied borrowing needs to be met. Any tax cuts or increases in appropriations should be accompanied by an authorization of borrowing so that all anticipated shortfalls in revenues relative to expenditures could be accommodated.

This would be a very imperfect solution, because severe shortfalls in revenues relative to expenditures are often unanticipated. Unusual economic conditions (such as those we are currently navigating) can devastate revenues just as expenditures are rising sharply, thus pushing the deficit outside bounds that were forecast when the legislation was enacted.

The only sure way to eliminate the contradictions implicit in current laws would be to repeal the debt ceiling itself. This is what common sense would dictate. But expecting common sense to guide the legislative process in the present climate... now that is truly unthinkable.

Thursday, May 05, 2011

Commodity Corrections

As we close in on the one year anniversary of the flash crash, there are some fireworks on display in the commodities markets:
Commodities plunged the most since 2009, led by oil and silver... The Standard & Poor’s GSCI index of 24 commodities sank 6.5 percent... and has lost 9.9 percent this week. Oil tumbled 8.6 percent, the most in two years, to $99.80 a barrel. Silver dropped 8 percent, extending the biggest four-day slump since 1983 to 25 percent...

Selling swept commodities markets as investors sold positions following gains of more than 23 percent in 2011 through April 29 by silver, oil, gasoline, coffee and cotton... Futures on Brent crude, crude oil, gas oil, heating oil, gasoline and natural gas plunged more than 6.9 percent today. Crude oil dropped below $100 a barrel for the first time since March 17. Copper futures slumped 3.3 percent, falling below $4 a pound for the first time in five months. Among agricultural commodities, cocoa, cotton, corn and wheat retreated more than 2.3 percent in futures trading.
Adherents of the efficient markets hypothesis will look for fundamental explanations for the sell-off, and will doubtless come up with some plausible triggers. But as John Kemp observes in an excellent post, it's impossible to understand the plunge without first recognizing that prices in speculative asset markets can become disconnected from fundamental values from time to time:
It will be entertaining to read the thousands of gallons of ink spilled over the next couple of days as journalists and analysts try to rationalise the sudden turn around and identify the one or few factors that were the “tipping point.”

In reality, commodity prices and other assets rise because investors and hedgers anticipate further gains. The market needs a steady stream of net buying orders to keep rising. But at some point the risk of a setback outweighs the prospect of further gains. Long liquidation offsets fresh buying orders, and the process heads into reverse as the length cascades out of the market.

Given the powerful role of expectations and sentiment in building and sustaining coalitions of long (or on occasion short) investors and hedgers, there does not really have to be a rational cause for the market to turn on its tail, if by rational we are looking for a trigger that seems proportionate to the effect caused.

Even in retrospect, and after thousands of hours of econometric analysis, it has proved impossible to identify rational triggers for big market movements ranging from the stock market crashes of 1907, 1929 and 1987, to the flash crash of May 2010, the implosion of the technology bubble in 2000 or the sudden collapse of the subprime madness in 2007-2008.

None of the prior market movements was in any rational sense sustainable. But when it comes to identifying a specific trigger that caused the market to peak and then head into sudden reverse, it has proved impossible in every case to find the rational cause.
I have little to add to this, except to suggest a more disaggregated view of speculative behavior and an explicit recognition of belief heterogeneity. At any point in time there are a variety of price views within the population of speculators, and trading based on this distribution of beliefs causes prices to move. Prices rise if those expecting appreciation are more confident or better capitalized than those expecting depreciation. The rise then reinforces the price views of buyers and further increases their capitalization advantage relative to sellers. This propels further appreciation.

The main check on the process, as Kemp says, is the increasing perception among some investors that "the risk of a setback outweighs the prospect of further gains." When such fears become sufficiently widespread, further price appreciation is arrested. But the crash does not follow until selling is synchronized, an event whose precise timing is essentially impossible to predict. 
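This feedback can be illustrated with a toy simulation (all parameters invented for illustration, not a calibrated model): traders hold heterogeneous price views, the price moves with the capital-weighted order imbalance, and traders on the winning side of each move gain capital, reinforcing the trend.

```python
import random

random.seed(1)

# Hypothetical sketch of the belief-heterogeneity feedback: each trader
# has a target price ("view"); price moves with the capital-weighted
# imbalance of buyers and sellers; winners accumulate capital.
views = [random.uniform(0.95, 1.25) for _ in range(100)]  # target prices
capital = [1.0] * 100
price = 1.0
path = [price]
for t in range(50):
    buy = sum(c for v, c in zip(views, capital) if v > price)
    sell = sum(c for v, c in zip(views, capital) if v <= price)
    imbalance = (buy - sell) / (buy + sell)
    price *= 1 + 0.01 * imbalance        # price moves with net demand
    for i, v in enumerate(views):        # winners become better capitalized
        if (v > price) == (imbalance > 0):
            capital[i] *= 1.02
    path.append(price)
# With most views above the initial price, appreciation is self-reinforcing
# until the price approaches the capital-weighted balance of views.
```

The reinforcement loop here is deliberately crude: it captures only the claim that a capitalization advantage for buyers propels further appreciation, not the synchronization of selling that triggers the crash.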

There is some evidence that bubbles can be identified in real time by examining the prices of securities that provide crash insurance. But regardless of whether or not this can be done, the presence of non-fundamental volatility in speculative asset prices is important to consider in the execution of monetary policy. Headline inflation has recently exceeded core inflation largely due to pressures from commodity prices. This has put Fed officials in a bind, uncertain of the relative weights to place on the two measures. If there's a lesson in today's events, it is that the speculative components of inflation measures should not have first order effects on monetary policy, at least until the economy is operating closer to its capacity.

Thursday, April 07, 2011

The Self-Subversion of Albert Hirschman

Albert Hirschman is 96 years old today.

A year ago I marked the occasion with a post on Exit, Voice and Loyalty, a masterpiece full of deep and original insights into the mechanisms that can restore performance in failing organizations and states. This work continues to shed light on events of enormous contemporary importance, from the effects of forced migration to the maintenance of good governance.

This year, I'd like to focus instead on a little-known interview that Hirschman gave to a trio of Italian writers in 1993. The interview was translated into English by Hirschman himself a few years later and published (with minor revisions) in a slim volume called Crossing Boundaries. It covers his early life in a turbulent Europe, his escape to the United States, his work on the economic development of Latin America, and his thoughts on methodology and language.

Interesting lives make for interesting ideas, and Hirschman's is a case in point. Born to a German family of Jewish origin in 1915, he was baptized (but never confirmed) as a Protestant. His education was in French and German, though he would later become fluent in Italian, and eventually in Spanish and English. By the age of sixteen he had joined the youth movement of the Social Democratic Party. Through his sister Ursula (who was a major influence on his life and thought) he met Eugenio Colorni, whose Berlin hotel room was used for the production of anti-fascist pamphlets and fliers. Ursula would later marry Colorni, and one of their daughters, Eva, would go on to become an economist in her own right and marry Amartya Sen. (Eva's untimely death and her influence on Sen's thought is acknowledged in the touching leading footnote of this paper.)

Hirschman watched the rise of Hitler with increasing alarm, and fled Berlin for Paris alone at the age of 18 just a couple of months after the Reichstag fire. Over the course of the next few years he would live in France, England, Spain, and Italy. He spent a year at the London School of Economics in 1935-36, taking courses with Robbins and Hayek, but finding greater intellectual affinity with a younger group of economists among whom was Abba Lerner.

When war broke out in 1939 he joined the French Army and, for fear of being shot as a traitor by approaching German forces, was compelled to adopt a new identity as a Frenchman, Albert Hermant. By 1941 he had migrated to the United States, where he met and married Sarah Hirschman. (They have now been married for seventy years.) He joined the US Army in 1943, and found himself back in Italy as part of the war effort soon thereafter.

At the end of the war Hirschman returned to the US and was involved with the development of the Marshall plan. He subsequently spent four years in Bogota, first as an adviser to the government on development policy, and then as a private economic consultant. After a sequence of appointments at Yale, Stanford, Columbia and Harvard, he moved to the Institute for Advanced Study in Princeton where he and Sarah remain. 

As far as methodology is concerned, Hirschman expresses "a dislike for too unilateral and uniform diagnoses," preferring instead to imagine the unexpected:
I have always had a certain dislike for general principles and abstract prescriptions. I think it is necessary to have an "empirical lantern" or a "visit with the patient" before being able to understand what is wrong with him. It is crucial to understand the peculiarity, the specificity, and also the unusual aspects of the case.
I know well that the social world is most variable, in continuous change, that there are no permanent laws. Unexpected events constantly happen, new causality relations are being installed... with age one's new ideas are predominantly those that contradict the old.
Self-subversion has been a permanent trait of my intellectual personality...
I also feel the need to engage from time to time in abstract theory. This means that I am not totally "anti-theoretical," that I am not totally opposed to parsimony, nor totally in favor of complexity. Some of my ideas are essentially theories of economic development, on the importance of unbalanced growth, for example; the "exit/voice" schema may also derive from a new way of looking at social reality... The success of a theory consists precisely in that suddenly everyone begins to reason according to the new categories.
The idea of trespassing is basic to my thinking... Attempts to confine me to a specific area make me unhappy. When it seems that an idea can be verified in another field, then I am happy to venture in this direction...
I have always been against that methodology of certain social scientists... who study what has happened in some fifty or so countries and then proceed to draw deductions from there on what is likely to happen in the future. Of course, they find themselves without instruments in the face of "important exceptions," such as the case of Hitler in Germany. This is the reason that I have always disliked certain types of social research. I am always more interested in widening the area of the possible, of what may happen, rather than in prediction, on the basis of statistical reasoning, of what will actually happen. The inquiry into the statistical probability that certain social events will actually take place interests me little... I have always found that when something good happens, it occurs as a result of a conjunction of extraordinary circumstances... I am simply not much interested in forecasts; they are not part of my theoretical impulses.
Hirschman has always enjoyed playing with language, taking words with negative connotations and endowing them with fresh and positive meanings. Trespassing, subversion, bias, and doubt all start to carry strangely bright associations in his writing. But for Hirschman this act of appropriation is not simply a source of joy; it can also generate genuine insight:
I enjoy playing with words,  inventing new expressions. I believe there is much more wisdom in words than we normally assume.... Here is an example. One of my recent antagonists, Mancur Olson, uses the expression "logic of collective action" in order to demonstrate the illogic of collective action, that is, the virtual unlikelihood that collective action can ever happen. At some point I was thinking about the fundamental rights enumerated in the Declaration of Independence and that beautiful expression of American freedom as "the right to life, liberty, and the pursuit of happiness.'' I noted how, in addition to the pursuit of happiness, one might also underline the importance of the happiness of pursuit, which is precisely the felicity of taking part in collective action. I simply was happy when that play on words occurred to me.
Hirschman's love of words led him to invent a number of palindromes over the course of his life. Some of these he collected together under the title Senile Lines, signed by Dr. Awkward, for his daughter Katya upon her graduation (both title and author are, of course, palindromes).

Reading this interview made me wonder whether graduate programs in economics place enough emphasis on facility of expression when screening students for admission. A high level of mathematical preparation can certainly ease one's passage through the required coursework. But it does not seem too far-fetched to conjecture that an appreciation for language and a gift for expression might be valuable inputs in the generation of interesting new ideas.

Monday, March 14, 2011

From Order Books to Belief Distributions

In my last post I argued that the price at last trade in a prediction market doesn't generally correspond to the belief of an average or representative trader in any meaningful sense, and ought not to be interpreted loosely as the "perceived likelihood according to the market" that the underlying event will occur.

In contrast, the order book, which is the collection of all unexpired bids and offers that cannot currently be matched against each other, contains a wealth of information about the distribution of trader beliefs. Under certain assumptions about the risk preferences of market participants, one can deduce a distribution of trader beliefs from this collection of standing orders. The imputed distribution may then be used to infer what the average trader (in a well-defined sense) perceives the likelihood of the underlying event to be. Furthermore, it can be used to gauge the extent of disagreement about this likelihood within the trading population.

To illustrate, consider the order book for the contract PRESIDENT.DEM.2012, which pays $10 if the official nominee of the Democratic Party wins the next presidential election. At the time of writing my last post, the collection of unexpired and unfilled orders looked like this:


Prices are expressed as percentages of face value, so the highest bidder was willing to pay $6.25 per contract, while the lowest offer was at $6.29 per contract. The frequency distributions of orders on the two sides of the market were as follows:

These distributions necessarily have disjoint supports; otherwise some orders would be matched and exit the book. The median bid was 62.0 per cent of face value, while the median offer was 64.6.

How might one deduce a belief distribution from this data?

The first point to note is that anyone placing an order that cannot immediately be filled (and does not immediately expire) faces two kinds of risk. There is the obvious risk that the event may or may not occur; even those whose orders trade immediately are exposed to this. But for an order that remains in the book for some time there is a second source of risk: new information might arrive that substantially alters the likelihood that the event will occur, and results in the order being matched before it can be removed. Given these two sources of risk, traders will post bids at prices that are below their subjective estimates of the likelihood that the event in question will occur. Similarly, those who post offers to sell will do so at prices that lie above their subjective estimates of this likelihood.

A simple way to take these effects into account is to assume that the risk preferences of traders are given by a linear mean-variance objective function of the kind that may be found in any standard text on Investments, with risk aversion parameter A. As an example to illustrate the procedure, suppose that all traders have the same degree of risk-aversion given by A = 0.15, and that buyers post the highest price that they are willing to pay for the asset, while sellers post the lowest price that they are willing to accept. Then the order distribution implies the following distributions of beliefs on the two sides of the market:

Note first that buyers on the whole assign greater likelihood to the occurrence of the underlying event than sellers do, even though all bid prices lie strictly below the lowest of the offer prices. This is a direct consequence of risk-aversion, which induces buyers to post prices below their subjective beliefs and sellers to offer at prices above theirs. The median buyer belief is 0.65 while the median seller belief is 0.59.

Second, buyer beliefs are spread across a wider range than are the beliefs of sellers. This simply replicates a pattern in the order book, which is characterized by many large bids at varying prices but a concentration of offers in a narrower price range.

Third, the belief supports are not disjoint: there is a range of beliefs that is represented on both sides of the market. These beliefs probably correspond to orders placed by market makers who simultaneously place bids and offers with the aim of profiting from the spread. For instance, the bid for 200 contracts at 56.6 and the offer of 200 at 65.0 both imply an imputed belief of about 0.60 under the assumed value for the risk-aversion parameter. It is quite conceivable, indeed very likely, that these orders were placed by the same individual.
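The imputation for that bid/offer pair can be sketched as follows. The post does not spell out the exact mean-variance formula, so the functional form below (bid = p − A·p(1−p) for buyers, offer = p + A·p(1−p) for sellers, where p(1−p) is the variance of the binary payoff) is an assumption chosen to be consistent with the numbers above.

```python
import math

A = 0.15  # risk-aversion parameter from the post's example

def belief_from_bid(b, A=A):
    # Invert b = p - A*p*(1-p), i.e. A*p^2 + (1-A)*p - b = 0, for p.
    return (-(1 - A) + math.sqrt((1 - A) ** 2 + 4 * A * b)) / (2 * A)

def belief_from_offer(s, A=A):
    # Invert s = p + A*p*(1-p), i.e. A*p^2 - (1+A)*p + s = 0, for p.
    return ((1 + A) - math.sqrt((1 + A) ** 2 - 4 * A * s)) / (2 * A)

# The bid at 56.6 and the offer at 65.0 both map to a belief of roughly
# 0.6 under this assumed specification, consistent with a market maker
# quoting both sides of the book around a single price view.
print(round(belief_from_bid(0.566), 3))    # ~0.602
print(round(belief_from_offer(0.650), 3))  # ~0.614
```

Note that the imputed buyer belief lies above the bid and the imputed seller belief below the offer, which is the direction of adjustment the post describes.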

Aggregating the buyer and seller belief distributions yields the belief distribution for the market as a whole:

Since bids are more numerous than offers, this aggregate belief distribution lies closer to that for buyers. The median belief in this case is 0.63, which happens to be slightly above the price at last trade.

This is one way of making precise the idea of "the perceived likelihood according to the market." Under the specifications adopted here, this perception is close to the equilibrium price. But it need not be in general. Higher values of the risk-aversion parameter would generate belief distributions for buyers and sellers that are further apart. While the theoretical effect of this on the median belief is ambiguous, for the particular example considered here, a risk-aversion parameter of A = 0.25 would generate a median belief of 0.65.

Furthermore, changes in beliefs within the population of traders could make their presence known through changes in bids and offers without any change in the equilibrium price. That was essentially the point of my last post: any given equilibrium price is consistent with a broad range of belief distributions. By focusing on the complete distribution (rather than just a point estimate) one can get a better sense of where market perceptions really lie.

One interesting question that follows from the arguments advanced here is this: could one use an imputed belief distribution to predict short-term movements in the equilibrium price?

Not necessarily. Even if one felt that the belief corresponding to the median order was in some sense the best forecast regarding the likelihood of the underlying event, one would not be induced to place an order that moves the equilibrium price. This is simply due to the fact that bids lie below subjective beliefs while offer prices lie above them. If large numbers of individuals simply adopted the imputed median belief as their own forecast, they would be induced to enter bids and offers around this belief, affecting the variance of the belief distribution but not necessarily its median. Nevertheless, it is worth noting that high-frequency trading outfits in US equity markets do manage to use proprietary data feeds to make effective short-run price forecasts.

As Andrew Gelman put it in his (very kind) response to my earlier post:
Markets are impressive mechanisms for information aggregation but they're not magic. The information has to come from somewhere, and markets are inherently always living in the phase transition between stability and instability... This is not to say that prediction markets are useless, just that they are worth studying seriously in their own right, not to be treated as oracles.
Prediction markets are indeed worth studying seriously not only because they are complex and interesting mechanisms for information aggregation, but also because the simplicity of the contracts traded can allow strong inferences to be made about the behavior of market participants. And some of these insights could be generalized to apply to speculative asset markets with much greater volume, liquidity, and economic importance.

Sunday, March 06, 2011

On the Interpretation of Prediction Market Data

As the election season draws closer, considerable attention will be paid to prices in prediction markets such as Intrade. Contracts for potential presidential nominees are already being scrutinized for early signs of candidate strength. In a recent post on the 2012 Republican field, Nate Silver used prediction market data (among other sources of information) to generate the following very interesting chart:

Source: FiveThirtyEight: Nate Silver's Political Calculus

While Nate's post was concerned primarily with the positioning of candidates along two dimensions of the political spectrum, he used market prices as a proxy for the probabilities of eventual nomination:
[The] area of each candidate’s circle is proportional to their perceived likelihood of winning the nomination, according to the Intrade betting market. Mitt Romney’s circle is drawn many times the size of the one for the relatively obscure talk-radio host Herman Cain because Intrade rates Mr. Romney many times as likely to be nominated.
This interpretation of prices as probabilities is common and will be repeated frequently over the coming months. But what could the "perceived likelihood according to the market" possibly mean?

Markets don't have perceptions. Traders do, but there is considerable heterogeneity in trader beliefs at any point in time. Prediction market prices contain valuable information about this distribution of beliefs, but there is no basis for the common presumption that the price at last trade represents the beliefs of a hypothetical average trader in any meaningful sense. In fact, to make full use of market data to make inferences about the distribution of beliefs, one needs to look beyond the price at last trade and examine the entire order book.

As an example, consider Intrade's market for the presidential election winner by party. This market consists of three contracts comprising a mutually exclusive and exhaustive set of outcomes. One contract pays out if the winner is a Democrat, a second if the winner is a Republican, and the third if the winner is not the official nominee of either party. The current prices of these contracts are as follows:

Contract                Bid    Ask    Last
PRESIDENT.DEM.2012      62.5   62.9   62.5
PRESIDENT.REP.2012      35.1   35.5   35.0
PRESIDENT.OTHER.2012     2.2    2.3    2.2

These prices are expressed as percentages of contract face value, which in each case is $10. That is, the price at last trade of the DEM contract was $6.25. The buyer risks this amount (per contract purchased) and stands to receive $10 if (and only if) the specified event occurs. The seller risks $3.75 to take the opposite side of the bet.

It's tempting to interpret the price at last trade as a probability because these prices sum to approximately 100% of the contract face value. The reason is that the sum of the ask prices must be no less than 100; otherwise an arbitrage opportunity would exist: one could buy one of each contract and be certain that exactly one will expire at face value, thus generating a risk-free profit. Similarly, the sum of bid prices must be no greater than 100. If the market is liquid, so that bid-ask spreads are small, then all three sets of prices (bid, ask, and last) will sum to approximately 100. This is the basis for the claim that, at current prices, the "market" is predicting that the Democratic nominee will win the White House with probability 62.5%.
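The arbitrage bounds are easy to verify against the quoted prices. A minimal sketch (prices in per cent of face value, taken from the table above):

```python
# Check the no-arbitrage bounds on a mutually exclusive and exhaustive
# set of contracts: exactly one will expire at face value (100).
asks = {"DEM": 62.9, "REP": 35.5, "OTHER": 2.3}
bids = {"DEM": 62.5, "REP": 35.1, "OTHER": 2.2}

cost_of_buying_all = sum(asks.values())    # buying one of each pays 100
proceeds_selling_all = sum(bids.values())  # selling one of each owes 100

# If asks summed to less than 100, buying one of each would lock in a
# risk-free profit of 100 - cost; symmetrically for bids above 100.
buy_arb = max(0.0, 100.0 - cost_of_buying_all)
sell_arb = max(0.0, proceeds_selling_all - 100.0)
print(buy_arb, sell_arb)  # both 0.0 at the quoted prices
```

At these quotes the ask prices sum to 100.7 and the bids to 99.8, so both bounds hold with a small margin, which is why the prices "approximately" sum to 100 rather than exactly.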

But is this interpretation reasonable? All that the price at last trade can tell us about is the beliefs of the two parties to this transaction. If both are risk-averse or risk-neutral, they each must believe that entering their respective positions will yield a positive expected return. Hence the buyer must assign probability at least 62.5% to the event that the Democrat is elected, while the seller assigns a likelihood of at most 62.5% to this event.

This tells us nothing about the beliefs of traders who are not party to this transaction. However, additional information about the distribution of beliefs in the trader population can be obtained by looking at the order book, which at present looks like this:


Note that there are several large orders (in excess of 100 contracts) but these are unevenly distributed on the two sides of the market. Consider, for instance, the bid for 500 contracts at 62. Whenever such an order is placed, Intrade freezes funds in the trader's account equal to the worst case loss, which in this case is $3,100. Upon expiration, these contracts will be worth either $5,000 (if the event occurs) or they will be worthless. Again, assuming risk-aversion or risk-neutrality, one can impute to the potential buyer a belief that the event will occur with probability at least 62%.
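The margin arithmetic for this order, and the break-even belief of a risk-neutral bidder, can be sketched as follows (the figures come from the post; the frozen-funds rule is as described there, not Intrade's actual margin code):

```python
# Margin and break-even arithmetic for a standing bid of 500 contracts
# at 62 (per cent of a $10 face value).
face, n, bid = 10.0, 500, 0.62
frozen = n * bid * face    # worst-case loss if the event fails: $3,100
payout = n * face          # value at expiry if the event occurs: $5,000

def expected_profit(p):
    # Expected profit of a risk-neutral bidder with subjective
    # probability p that the event occurs.
    return p * (payout - frozen) - (1 - p) * frozen
```

Setting `expected_profit(p) = 0` gives `p = bid`: the naive imputed belief equals the bid price, and any trader demanding a strictly positive expected return must believe the probability exceeds 62%.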

But this imputation will be an underestimate for at least two reasons. First, the greater the degree of risk-aversion, the more compensation a trader will demand to enter a risky position. Since these positions are indeed very risky, it is likely that many of those placing large standing bids have significantly positive expected returns, and hence believe that the probability of the event occurring exceeds by some measure the imputed value.

Second, traders placing large bids are aware that conditional on their order being met, it is likely that some news will have emerged that makes the event less likely to occur. That is, they understand that a trade against their order is more likely to occur in the event of bad news (from their perspective) than good news. Taken together, these factors imply that traders placing large bids must be considerably more optimistic about the occurrence of the event than the naive imputation of 62% would suggest.

The same reasoning applies to those taking positions on the sell side: traders placing large limit orders must believe that the event is considerably less likely to occur than a naive reading of their posted price would suggest.

What, then, can one say about the distribution of beliefs in the market? To begin with, there is considerable disagreement about the outcome. Second, this disagreement itself is public information: it persists despite the fact that it is commonly known to exist. That is, traders don't attribute differences in beliefs simply to differences in information applied rationally to a common prior. (This follows from Aumann's famous theorem which states that individuals who have common priors and are commonly known to be rational cannot agree to disagree no matter how different their private information may be.) As a result, the fact of disagreement is not itself considered to be informative, and does not lead to further belief revision. The most likely explanation for this is that traders harbor doubts about the rationality or objectivity of other market participants.

Third, there is a cluster of large buy orders at around 62, and a cluster of large sell orders in the 64-65 range. Hence there are some traders who believe quite confidently that Democrats will hold the White House with probability considerably greater than 62%. And there is another group who believe, also confidently, that the chances of this occurring are quite a bit below 64%. As things stand, the former group appear to be either more numerous or more confident in their judgments.

More generally, it is entirely possible that beliefs are distributed in a manner that is highly skewed around the price at last trade. That is, it could be the case that most traders (or the most confident traders) all fall on one side of the order book. In this case the arrival of seemingly minor pieces of information can cause a large swing in the market price. Of course, such swings may draw into the market other participants whose beliefs are not currently represented in the order book. But the bottom line is this: there is no meaningful sense in which one can interpret the price at last trade as an average or representative belief among the trading population.

Sunday, February 20, 2011

Market Ecology

The erudite and very readable RT Leuchtkafer has posted yet another comment for the Securities and Exchange Commission to digest. This one was prompted by a paper by Andrei Kirilenko, Albert Kyle, Mehrdad Samadi and Tugkan Tuzun that provides a fascinating glimpse into the kinds of trading strategies that are common in asset markets today and the manner in which they interact to determine the dynamics of asset prices.

As I have argued on a couple of earlier occasions, the stability of a market depends on the composition of trading strategies, which in turn evolves over time under pressure of differential performance. Since performance itself depends on market stability, and destabilizing strategies prosper most when they are rare, this process can give rise to switching regimes: the market alternates between periods of stability and instability, giving rise to empirical patterns such as fat tails and clustered volatility in asset returns.
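The mechanism can be caricatured in a few lines of Python. Everything here is invented for illustration (the payoff function, the adjustment speed `k`); this is not the model from those earlier posts, just the bare logic of a strategy that prospers when rare and whose population share adjusts with enough of a lag to overshoot.

```python
# A deliberate caricature of the dynamic described above: a
# destabilizing strategy earns more when rare and less when common,
# and its population share adjusts sluggishly to payoff differences.

def step(x, k=5.0):
    # payoff advantage of the destabilizing strategy: positive when
    # its share x is below 0.5, negative when above
    advantage = 1.0 - 2.0 * x
    # discrete replicator-style update; the large step size k stands
    # in for lagged, overshooting adjustment
    return x + k * x * (1.0 - x) * advantage

x = 0.2                      # initial share of the destabilizing strategy
path = [x]
for _ in range(30):
    x = step(x)
    path.append(x)

# the share settles into persistent oscillation around 0.5 rather
# than converging: alternating stable and unstable regimes
print([round(v, 2) for v in path[-6:]])
```

With a small step size the share would converge smoothly to an interior mix; it is the overshooting that generates the switching between regimes described above.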

But the underlying strategies that are at the heart of this evolutionary process are generally unobservable. Since traders have no incentive to reveal successful strategies, these can only be inferred if individual orders can be traced to specific accounts.

This is what Kirilenko and his co-authors have been able to do, on the basis of "audit-trail, transaction-level data for all regular transactions in the June 2010 E-mini S&P 500 futures contract (E-mini) during May 3-6, 2010 between 8:30 a.m. CT and 3:15 p.m. CT." While their primary concern is with the flash crash that materialized on the afternoon of the 6th, their analysis also sheds light on the composition and behavior of strategies over the period that led up to this event. Their analysis accordingly provides broader insight into the ecology of financial markets.

The authors classify accounts into six categories based on patterns exhibited in their trading behavior, such as horizon length, order size, and the willingness to accumulate significant net positions.  The categories are High Frequency Traders (HFTs), Intermediaries, Fundamental Buyers, Fundamental Sellers, Opportunistic Traders and Small Traders:
[Different] categories of traders occupy quite distinct, albeit overlapping, positions in the “ecosystem” of a liquid, fully electronic market. HFTs, while very small in number, account for a large share of total transactions and trading volume. Intermediaries leave a market footprint qualitatively similar [to], but smaller [than], that of HFTs. Opportunistic Traders at times act like Intermediaries (buying [and] selling around a given inventory target) and at other times act like Fundamental Traders (accumulating a directional position). Some Fundamental Traders accumulate directional positions by executing many small-size orders, while others execute a few larger-size orders. Fundamental Traders which accumulate net positions by executing just a few orders look like Small Traders, while Fundamental Traders who trade a lot resemble Opportunistic Traders. In fact, it is quite possible that in order not to be taken advantage of by the market, some Fundamental Traders deliberately pursue execution strategies that make them appear as though they are Small or Opportunistic Traders. In contrast, HFTs appear to play a very distinct role in the market and do not disguise their market activity.
Based on this taxonomy, the authors examine the manner in which the strategies vary with respect to trading volume, liquidity provision, directional exposure, and profitability. Although high-frequency traders constitute a minuscule proportion (about one-tenth of one percent) of total accounts, they are responsible for more than a third of aggregate trading volume in this market. They have extremely short trading horizons and maintain low levels of directional exposure. Under normal market conditions they are net providers of liquidity but their desire to avoid significant exposure means that they can become liquidity takers very quickly and on a large scale.

The extent to which different trading strategies provide liquidity to the market is assessed by the authors on the basis of a measure of order aggression. An order is said to be aggressive if it is marketable against a resting order in the limit order book (and is therefore executed immediately). The resting order with which it is matched is said to be passive:
From a liquidity standpoint, a passive order (either to buy or to sell) has provided visible liquidity to the market and an aggressive order has taken liquidity from the market. Aggressiveness ratio is the ratio of aggressive trade executions to total trade executions... weighted either by the number of transactions or trading volume... HFTs and Intermediaries have aggressiveness ratios of 45.68% and 41.62%, respectively. In contrast, Fundamental Buyers and Sellers have aggressiveness ratios of 64.09% and 61.13%, respectively.
This is consistent with a view that HFTs and Intermediaries generally provide liquidity while Fundamental Traders generally take liquidity. The aggressiveness ratio of High Frequency Traders, however, is higher than what a conventional definition of passive liquidity provision would predict.
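The ratio itself is a simple tally. The sketch below computes it for a single account, using the definition in the quoted passage; the trades are invented for illustration.

```python
# Computing an aggressiveness ratio as defined in the quoted passage:
# the share of an account's executions in which its own order was the
# aggressive (marketable, liquidity-taking) side. Trades are made up.

trades = [
    # (contracts, aggressive): aggressive is True when this account's
    # order executed against a resting order in the limit order book
    (10, True), (5, False), (20, True), (15, False), (10, False),
]

by_count = sum(1 for _, a in trades if a) / len(trades)
by_volume = sum(v for v, a in trades if a) / sum(v for v, _ in trades)

print(f"aggressive in {by_count:.0%} of transactions, {by_volume:.0%} of volume")
```

A passive market maker would show a ratio well below one half on both weightings; numbers in the mid-40s, as reported for HFTs, already sit close to the boundary between providing and taking liquidity.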
Moreover, the aggressiveness ratio of HFTs is not stable over time and can spike in times of market stress as they compete for liquidity with other market participants:
During the Flash Crash, the trading behavior of HFTs, appears to have exacerbated the downward move in prices. High Frequency Traders who initially bought contracts from Fundamental Sellers, proceeded to sell contracts and compete for liquidity with Fundamental Sellers. In addition, HFTs appeared to rapidly buy and [sell] contracts from one another many times, generating a “hot potato” effect before Opportunistic or Fundamental Buyers were attracted by the rapidly falling prices to step in and take these contracts off the market.
To my mind, the most revealing findings in the paper pertain to the profitability of the various strategies, and the ability of some traders to anticipate price movements over very short horizons (emphasis added):
High Frequency Traders effectively predict and react to price changes... [they] are consistently profitable although they never accumulate a large net position. This does not change on May 6 as they appear to have been even more successful despite the market volatility observed on that day... Intermediaries appear to be relatively less profitable than HFTs. During the Flash Crash, Intermediaries also appeared to have incurred significant losses... consistent with the notion that the relatively slower Intermediaries were unable to liquidate their position immediately, and were subsequently run over by the decrease in price...

HFTs appear to trade in the same direction as the contemporaneous price and prices of the past five seconds. In other words, they buy... if the immediate prices are rising. However, after about ten seconds, they appear to reverse the direction of their trading... possibly due to their speed advantage or superior ability to predict price changes, HFTs are able to buy right as the prices are about to increase... In marked contrast... Intermediaries buy when the prices are already falling and sell when the prices are already rising...

We consider Intermediaries and HFTs to be very short term investors. They do not hold positions over long periods of time and revert to their target inventory level quickly... HFTs very quickly reduce their inventories by submitting marketable orders. They also aggressively trade when prices are about to change. Over slightly longer time horizons, however, HFTs sometimes act as providers of liquidity. In contrast... unlike HFTs, Intermediaries provide liquidity over very short horizons and rebalance their portfolios over longer horizons.
What appears to have happened during the crash is that the fastest moving market makers with the most effective algorithms for short run price prediction were able to trade ahead of their slower and less effective brethren, imposing significant losses on the latter. In Leuchtkafer's colorful language, this was a case of interdealer panic and market maker fratricide.

But regardless of how the gains or losses were distributed in this instance, the fact remains that an overwhelming share of trading activity is based on short-run price forecasts rather than fundamental research. Under these conditions, how can one expect prices to track changes in the fundamental values of the income streams to which the assets give title?

Markets have always been based on a shifting balance between information-augmenting and information-extracting strategies, but a computational arms race coupled with changes in institutions and regulation seems to have shifted the balance markedly towards the latter. Unless the structure of incentives is altered to favor longer holding periods, I suspect that we shall continue to see major market disruptions and spikes in volatility.

This is not just a matter of academic interest. To the extent that changes in the perceived volatility of stocks give rise to changes in asset allocations by institutional and retail investors, there will be consequences for the extent and distribution of risk-bearing, and ultimately for rates of job creation and economic growth.


Update (2/21). Yves Smith has generously allowed me to crosspost freely on naked capitalism, where this entry has attracted a couple of interesting comments. Here is Peripheral Visionary:
With respect to May 6... the faster algorithms may have caused the damage, but I think they also suffered from it. From the data I reviewed, the traditional market makers had huge numbers of buys at the bottom and huge numbers of sells through the recovery, and so may have come out net positive on the day, while the faster algorithms panicked when the market moved outside the range of expected behavior, and many were shut down, effectively locking in losses. In fact, I suspect that losses for HFT algorithms would have been much larger had not the exchanges canceled so many trades, with many, even most, of the sells at the bottom being algorithm trades.
This was also my initial reaction to the crash, which is why I argued against the cancellation of trades on grounds of stability. The Kirilenko paper does not really settle the question because it focuses only on the E-mini futures market where no trades were broken.

The comment by financial matters is also worth a look; this one links to a CNBC interview with Jim McCaughan in which the exit of institutional and retail investors from the market is documented.

Saturday, February 12, 2011

Belief Heterogeneity

There was an interesting conference at Columbia yesterday (though not nearly as interesting as the momentous events unfolding elsewhere at the time). The theme was "Heterogeneous Expectations and Economic Stability" and this is how the organizers (Ricardo Reis and Mike Woodford) described the goal of the meeting:
Conventional models in both macroeconomics and finance are based on the hypothesis of rational expectations, under which all agents are assumed to have common expectations, corresponding to the probabilities implied by the economist’s model. The adequacy of this familiar hypothesis has been called into question by recent events, however, notably the instability resulting from the boom and bust in real estate prices. The purpose of this conference is to bring together researchers exploring alternative approaches to modeling the dynamics of expectations, with particular attention to applications in macroeconomics and finance. We have sought to bring together proponents of a variety of approaches, who may not frequently engage one another, in the hope of reaching conclusions about which directions are most promising at this time.
And, indeed, the collection of papers presented was methodologically diverse. Although any such classification is bound to be coarse and imperfect, there seem to be four different directions in which research on expectations is proceeding. First, there is the approach of near-rational expectations, in which intertemporal optimization and Bayesian rationality are maintained but allowance is made for heterogeneous prior beliefs. Then there is the behavioral approach, which endows agents with heuristics based on regularities identified in laboratory experiments. Third, there is the evolutionary approach, which allows for a broad range of competing forecasting rules with the population composition shifting over time under pressure of performance differentials. And finally, the empirical approach, which treats expectations as a state variable to be measured using survey or market data and explained just as one would explain output or inflation. Each of these perspectives was on prominent display at the conference.

Regular readers of this blog (if there are any left, given the recent decline in my rate of posting) will know that I am deeply skeptical of the behavioral approach to trading strategies, for the simple reason that behavior in high-stakes environments with strong selection pressures driving entry and exit is unlikely to be psychologically typical in the sense of reflecting outcomes of lab experiments with standard subject pools. What might be a common behavioral trait in the population at large could be extremely rare among traders, especially if such traits can be exploited with ease by other market participants. By the same token, behavior that is pathological in the lab could well become widespread in financial markets from time to time. As a result my favored approach to trading strategies in general and forecasting rules in particular is ecological.

Not surprisingly, then, the presentation I found most appealing was that of Blake LeBaron. Blake is a pioneer in the development of agent-based computational models of financial markets, and the paper he presented belonged to this class. A large number of different forecasting strategies, some based on fundamental information and others on technical data analysis, compete with each other and with a traditional buy-and-hold strategy in his model. The resulting trading dynamics give rise to asset price returns that exhibit both moderate short-run momentum and mean reversion over longer horizons. Moreover, the long run population of forecasting rules is ecologically diverse, with both passive and active strategies well represented.

During the panel discussion at the end of the conference, Albert Marcet observed that the conference itself was symptomatic of a revolution in economic thought that is currently underway, prompted in large measure by the global financial crisis. If methodologies such as agent-based computational economics start to be published in major journals and attract attention from the most promising graduate students, then there really will be a revolution underway. But I'm not convinced that we're there yet.

One final thought. The conference organizers described the rational expectations hypothesis as one "under which all agents are assumed to have common expectations, corresponding to the probabilities implied by the economist’s model." This is an accurate characterization as far as the contemporary implementation of the hypothesis is concerned, but it is important to note that this is not the hypothesis originally advanced by John Muth in his classic paper. In fact, Muth cited survey data exhibiting "considerable cross-sectional differences of opinion" and was quite explicit in stating that his hypothesis "does not assert... that predictions of entrepreneurs are perfect or that their expectations are all the same.'' In Muth's version of rational expectations, each individual holds beliefs that are model inconsistent, although the distribution of these diverse beliefs is unbiased relative to the data generated by the actions resulting from these expectations. It is a wisdom of crowds argument, rather than one based on individual rationality.
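A stripped-down rendering of Muth's version takes only a few lines. The numbers below are purely illustrative, and the toy deliberately leaves out the feedback from expectations to outcomes that completes Muth's argument; it shows only the wisdom-of-crowds core, in which individual forecasts disagree widely yet their distribution is unbiased around the realized value.

```python
import random
random.seed(0)

# Illustrative only: individual forecasts differ widely (Muth's
# "considerable cross-sectional differences of opinion"), yet their
# cross-sectional mean tracks the realized outcome closely.

outcome = 100.0
forecasts = [outcome + random.gauss(0, 15) for _ in range(10_000)]

disagreement = max(forecasts) - min(forecasts)      # wide dispersion
mean_forecast = sum(forecasts) / len(forecasts)     # yet nearly unbiased

print(round(disagreement, 1), round(mean_forecast, 1))
```

No individual here holds model-consistent beliefs, but the crowd as a whole is unbiased, which is precisely the distinction drawn above between Muth's hypothesis and its contemporary implementation.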

Viewed in this manner, there is a sense in which the heterogeneous prior models (with diverse beliefs centered on a model consistent mean) represent both a departure from the rational expectations hypothesis as currently understood and a return to the original rational expectations hypothesis as formulated by Muth. The history of economic thought is full of such strange twists and turns.