Comparing the 2020 US election polls & predictions

Update: October 22nd 2020

Oraclum
Oct 1, 2020

What the others are saying

In the 2016 election the accuracy of our prediction was all the more impressive given the failure of every single benchmark we compared ourselves to.

This election we will once again follow the same benchmarks and compare our predictions to theirs. We are looking at the most prominent polling aggregators, models, prediction markets, and betting odds. We will update these on a weekly basis up until Election Day (November 3rd).

If you’re interested in finding out why our methodology is superior to the regular polling models, see here.

If you want to buy our 2020 predictions to stay ahead of the curve click here (we’re offering skin-in-the-game pricing models).

Polling aggregators

The benchmarks are separated into several categories. The first includes sites that use a particular polling aggregation mechanism: Nate Silver’s FiveThirtyEight, the Princeton Election Consortium, the Real Clear Politics average of polls, PollyVote, the Upshot, and The Economist. For each site we track the probability of winning for each candidate (if given), their final electoral vote projection, and their projected vote share. The specific methodology for each of these can be found on their respective websites, with each of them putting commendable effort into the election prediction game (except RCP, which is just a simple average of polls).

As you can see from the table, each of these polling aggregators is giving Biden much higher chances of winning: 88% on average (an increase from 84% two weeks ago). Interestingly, two weeks ago these numbers were slightly higher than Clinton’s at the same point four years ago (around 79%), but now they are slightly lower, as at this point in 2016 Clinton had a 92% probability of winning. The electoral college distribution is also clearly in Biden’s favor, as is the estimated vote share.

Source: The Upshot

Last time there were more competitors. At least four of them have dropped out, while others are much more careful this time. For example, the Upshot from the New York Times is still tracking polls, but it is no longer building a prediction model as it did back in 2016. This time they are hedging well by presenting readers with three electoral maps (see the side graphic): one according to current polls (which they warn are far from perfect), one according to a three-point lead in each state, and a final one that accounts for the polls featuring the same error as they had in 2016. Biden is leading in all three, but the final map suggests a somewhat closer race.

We will compare each of these with our BASON survey on all three criteria (where applicable): chances of winning, electoral college votes, and final vote share. Our final comparison will be on a state-by-state basis, made on Election Day, and will include, where applicable, the precision of each prediction made by the benchmarks we use.
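As a rough illustration of what such a precision comparison could look like (the post does not spell out its scoring metric, so the functions and numbers below are assumptions for illustration only), one could score probabilistic calls with a Brier score and vote-share calls with absolute error:

```python
# Hypothetical scoring of a benchmark's precision on Election Day. The post
# does not specify its metric, so this sketch assumes a Brier score for win
# probabilities and absolute error for vote shares.

def brier_score(predicted_prob: float, outcome: int) -> float:
    """Squared error between a predicted win probability and the 0/1 outcome."""
    return (predicted_prob - outcome) ** 2

def vote_share_error(predicted_share: float, actual_share: float) -> float:
    """Absolute error of a predicted vote share, in percentage points."""
    return abs(predicted_share - actual_share)

# Illustrative numbers only, not actual predictions or results.
print(brier_score(0.88, 1))           # an 88% win probability, scored after a win
print(vote_share_error(53.0, 51.3))   # a predicted vs. a hypothetical actual share
```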

Models

There are two kinds of election prediction models we look at. The first group are political-analyst-based models produced by reputable non-partisan websites analyzing US elections: the Cook Political Report and Sabato’s Crystal Ball. Each is based on a coherent and sensible political analysis of elections. Here we only report the electoral college predictions, with the toss-up seats as given in their reports. These models do not give out probabilities or vote share predictions.

The second group of models are pure models based on a range of political, economic, and socio-demographic data. These are all conveniently assembled by the website PollyVote. This website does its own forecast aggregation (presented in the first group), but it also assembles a number of different prediction methods.

What we are interested in are the so-called index models, based on political science models of prospective and retrospective voting; the expectation models, which compile betting markets, expert judgment, and citizen forecasters (wisdom of crowds); and finally the so-called naive models, which invoke Occam’s razor and avoid the complexity of standard prediction models. According to these, only the naive models give Trump a slight edge. It is worth noting that most of these models don’t use any polling data.

The downside is that the pure models only predict the final outcome, i.e. who the winner will be and by what margin; they don’t offer a state-by-state prediction of electoral college votes.

Prediction markets & betting odds

Next are prediction markets. Prediction markets have historically been shown to be even better than regular polls at predicting the outcome (except in the previous election, where they gave Clinton a 75% probability of winning on average). Their success is often attributed to the fact that they use real money, so people actually “put their money where their mouth is”, meaning they are more likely to make better predictions.

Last time we followed nine such markets, but today only four are left: the ever-present Iowa Electronic Markets (IEM), PredictIt, Election betting odds, and Hypermind. Each market is given a weight based on its volume of trading, so that we can calculate and compare one single prediction across all the markets. The prediction markets, unlike the regular polls, don’t produce estimates of the total vote share, nor do they produce state-by-state predictions (at least not for all states). Instead, they offer probabilities that a given outcome will occur, so the comparison with the BASON survey will be done purely on the basis of the probabilities assigned to each outcome.
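A minimal sketch of that volume-weighted combination, assuming a simple weighted average (the volumes and probabilities below are placeholders, not the actual market figures):

```python
# Sketch of combining the four prediction markets into one volume-weighted
# probability. The volumes and probabilities are illustrative placeholders,
# not the figures behind the numbers reported in this post.

markets = {
    # name: (trading volume, probability of a Biden win)
    "IEM":                   (1_000_000, 0.78),
    "PredictIt":             (5_000_000, 0.65),
    "Election betting odds": (3_000_000, 0.70),
    "Hypermind":             (500_000, 0.75),
}

total_volume = sum(volume for volume, _ in markets.values())
combined = sum(volume * prob for volume, prob in markets.values()) / total_volume

print(f"Volume-weighted probability of a Biden win: {combined:.1%}")
```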

Interestingly, prediction markets are giving Biden lower chances of winning than they were giving Clinton this time 4 years ago. She was at 87% on average, whereas Biden is at 72%.

Betting firms are giving Biden even lower odds. Last time they were favoring Clinton at about 84%, today Biden’s probabilities are at 65% (up from 59% two weeks ago). This is the only big difference between today and 4 years ago. Back then every single benchmark was clearly in Clinton’s favor and by a large margin. Today, the polls are pulling stronger in favor of Biden, but the betting and prediction markets are more bearish. This is not surprising as these markets are purely based on expectations — and people tend to be more careful this time.

Superforecasters

Finally, we compare our method against the superforecaster crowd of the Good Judgment Project. “Superforecasters” is the colloquial term for participants in Philip Tetlock’s Good Judgment Project (GJP). The GJP was part of a wider forecasting tournament organized by the US government agency IARPA following the intelligence community’s fiasco over the WMDs in Iraq. The government wanted to find out whether there is a more reliable way of making predictions that would improve decision-making, particularly in foreign policy. The GJP crowd (all volunteers, regular people, seldom experts) significantly outperformed everyone else several years in a row. Hence the title: superforecasters (there are a number of other interesting facts about them; read more here, or buy the book). However, superforecasters are only a subset of the more than 5,000 forecasters who participate in the GJP. Since we cannot really calculate and average out the performance of the top predictors within that crowd, we take the collective consensus forecast of all the forecasters in the GJP.

Similar to the prediction markets, the GJP doesn’t ask its participants to predict the actual voting percentage, nor the total distribution of electoral college votes; it only asks them to gauge the probability of an event occurring. However, it does ask for predictions in some swing states, so we will at least report these for later comparison.

The superforecasters, too, are playing it safer this time. They are estimating about a 70% chance in favor of Biden, compared to 75% for Clinton at this point four years ago. The swing states are also interesting: only Ohio is predicted to lean more toward a Trump win. These numbers across the key swing states (FL, OH, PA) are almost identical to what they were four years ago.

Are you convinced by these numbers? If not, wait for ours to come out, and don’t forget to pre-order your copy on our website!

Let the race begin!


Oraclum

Oraclum is a data company led by a team of scientists that builds prediction models. https://www.oraclum.co.uk