Last month, I attended Manifest (NYTimes Writeup), a conference on prediction markets. The many failures of prediction markets loomed large: failure to succeed as regulated businesses, failure to inform public policy, failure to out-predict expert opinion. In this regard, I felt it was an odd event: many slightly deflated hype-(mostly)-men struggling to re-hype themselves.
As a committed hype-man for prediction markets, my pitch to the uninitiated is that the primary current value of prediction markets is to surface talent, not information. In politics and economics, I see limited evidence of prediction markets currently performing better than informed experts. The signal provided by a few savants is drowned out by noise from liquidity providers, arbitrageurs, and deep-pocketed degenerates.
At this stage of prediction market development, I see them as valuable training and hiring tools: they train people to do valuable forecasting work and provide employers (hedge funds, reinsurers) with a pool of potential hires. In much the same way that the market-maker SIG uses poker to identify top trading talent, or data science teams use Kaggle to identify promising ML candidates, I think reinsurers, hedge funds, and econ teams at banks could use prediction markets like Kalshi or Polymarket to identify forecasting analysts. For Kalshi and Polymarket, a hiring platform could serve as a valuable diversifying revenue stream while they slowly build the liquidity necessary to generate reliably accurate average forecasts.
Evidence from Economics
For the last two years, the prediction market Kalshi has hosted markets on a variety of US economic stat releases: monthly core inflation, monthly headline inflation, quarterly US GDP, monthly unemployment, and monthly jobs reports. Only the inflation markets have proved reliably liquid (~100,000 bet per month). Despite Kalshi’s hype to the contrary, these markets have shown no more forecasting power than the median bank projection, as shown in the table below.1 Their advantage over the mean projection comes from excluding the worst of the Bloomberg Terminal forecasting offenders: the names that are reliably wrong and miss the obvious micro dynamics in airlines, used cars, and other small, volatile categories.
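To make the median-versus-mean point concrete, here is a toy sketch. The forecast numbers below are invented for illustration, not actual Bloomberg survey data; the point is only that a couple of reliably-wrong forecasters drag the mean away from the truth while barely moving the median:

```python
import statistics

# Hypothetical monthly core CPI forecasts (% month-over-month) from a
# panel of bank analysts; suppose the actual print came in at 0.3.
# The last two entries play the role of the reliably-wrong outliers.
forecasts = [0.3, 0.3, 0.4, 0.3, 0.2, 0.3, 0.6, 0.7]
actual = 0.3

mean_err = abs(statistics.mean(forecasts) - actual)
median_err = abs(statistics.median(forecasts) - actual)

# The outliers pull the mean well above the print; the median shrugs them off.
print(round(mean_err, 4), round(median_err, 4))
```

The same robustness is what you get implicitly by dropping the worst forecasters before averaging.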
Of course, the actual betting market price does not solely reflect the forecasts of the best market participants. I have talked with some of these people, and from what I understand of their methods, I have moderate confidence they are better than the top bank analysts; several have also demonstrated skill in other domains, like political gambling. Their signal, however, is mixed with the noise of three other types of market participants:
1. Kalshi’s liquidity-provision arm, which provides liquidity (at a loss) to aid growth.
2. People who lose money in these markets in order to arbitrage other prediction markets, or to hedge highly correlated financial assets.
3. A steady stream of degenerate gamblers who (likely) outnumber the savants.
For financial firms interested in these forecasts, identifying the top people seems much more lucrative than mining the pricing itself, which reflects a lot of noise and only a little signal.
Evidence from Politics
Several talks at Manifest highlighted the empirical failures of election prediction markets to out-forecast expert opinion (e.g. Jeremiah Johnson, Pratik Chougule). The best empirical analysis of this I have seen comes from Maxim Lott, one of the managers of electionbettingodds.com, a website that aggregates prediction market forecasts. Maxim compared the aggregated betting odds to Nate Silver’s election forecasts at 538 and concluded that they were similarly accurate. This is poor performance given that Nate’s model outputs are public; the prediction markets are adding no value on top of that free, public information.
Figure 2: Brier Scores (explanation of Brier Scores) for 538 vs. betting market averages.
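For readers unfamiliar with the metric: a Brier score is just the mean squared error between probabilistic forecasts and binary outcomes, so lower is better. A minimal sketch, using made-up probabilities rather than real 538 or betting-market numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes.

    Lower is better: a perfect forecaster scores 0.0, and always
    saying 0.5 scores 0.25 regardless of what happens.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical races: probability each forecaster assigned to the
# outcome that ultimately occurred (1) or did not (0).
model_forecasts = [0.90, 0.65, 0.55, 0.80, 0.30]
market_forecasts = [0.85, 0.60, 0.50, 0.75, 0.40]
outcomes = [1, 1, 0, 1, 0]

print(round(brier_score(model_forecasts, outcomes), 4))   # 0.113
print(round(brier_score(market_forecasts, outcomes), 4))  # 0.131
```

With scores this close, "similarly accurate" is the honest read, which is exactly the comparison in the figure above.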
However, as with economics betting, there is a group of political bettors who clearly have superb track records and cross-domain forecasting skill. The best writeup I have seen of some of these people is by Brian Golden in the Washington Monthly.
Evidence from The Weather/Mortgages
I do not mean to overstate my case; prediction markets have proven extremely accurate in some areas unrelated to core economic or political questions.
If you want to know the high temperature in New York City, for example, Kalshi’s daily high-temperature markets are easily better than the top weather sites like Weather.com. Woe to the person (like me) who tries to beat the pricing by hooking an API up to official government sources: you will be front-run by someone using more timely sources or their own weather data.
Kalshi’s annual global temperature market also clearly forecasted that 2023 would be the hottest year in recorded history, while official government webpages relied on inadequate models that did not incorporate the influence of El Niño. My Blog on Annual Temperature Markets.
For other markets, like Kalshi’s weekly mortgage rate market, people stopped participating because the pricing was so eerily accurate. I think it is likely, though not certain, that the information was leaking before the reports and someone was trading on that basis: Dormant Weekly Mortgage Market (RIP).
Prediction Markets As HR Platforms
Were I running strategy for a major prediction market, I would attempt to monetize talent identification, as Kaggle has done for machine-learning skill. Banks, hedge funds, reinsurers, and consultancies all need people with forecasting skill, and the economic value of even slightly more accurate forecasts is enormous to investors and insurers. Given the struggles prediction markets seem to face in generating organic revenue growth, this route at least provides a diversifying revenue stream.
1. I picked a cutoff of 50k total traded volume as the minimum threshold for analysis, which led to the exclusion of January–August 2022. Volume jumped substantially last September.