
Saturday, March 13, 2021

Forecasting the 2020 election: a retrospective

What I did  

One of my hobbies is forecasting US presidential elections using opinion poll data. The election is over and Joe Biden has been sworn in, so this seems like a good time to look back on what I got right and what I got wrong. 

I built a computer model that ingests state-level opinion poll data and outputs a state-level forecast of the election results. My model aggregates polling data, using the previous election results as a starting point. It's written in Python and you can get it from my Github page. The polling data comes from the ever-wonderful 538.
(This pole works, unlike some other polls. Image source: Wikimedia Commons, License: Creative Commons, Author: Daniel FR.)
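To give a flavor of the aggregation idea, here's a minimal sketch: blend the previous election's margin (the prior) with the current polling average. The `poll_weight` parameter and all the numbers are illustrative assumptions, not the actual model on my Github page:

```python
def forecast_margin(prior_margin, poll_margins, poll_weight=0.8):
    """Blend the previous election's margin (the prior) with the
    current polling average. poll_weight is how much trust to place
    in the polls versus the prior (an illustrative value)."""
    if not poll_margins:
        return prior_margin
    poll_average = sum(poll_margins) / len(poll_margins)
    return poll_weight * poll_average + (1 - poll_weight) * prior_margin

# Hypothetical state: prior margin +3 from the last election,
# current polls showing +1, +2, and +3
print(round(forecast_margin(3.0, [1.0, 2.0, 3.0]), 2))  # 2.2
```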

What I got right

My final model correctly predicted the results of 49 out of 51 states (including Washington D.C.). 

What I got wrong

The two states my model got wrong were Florida and North Carolina, and these were big misses - beyond my confidence interval. The cause in both cases was the polling data: in both states, the polls consistently and substantially overstated Biden's vote share.

My model also overstated Biden's margin of victory in many of the states he won. This is hidden because my model forecast a Biden victory and Biden won, but in several cases, his margin of victory was less than my model predicted - and significantly so.

The cause of the problem was opinion polls overstating Biden's vote share.

The polling industry and 2020

The polling industry as a whole overstated Biden's support by several percentage points across many states. This is disguised because the polls got most states directionally correct, but it's still a wide miss.

In the aftermath of 2016, the industry did a self-examination and promised it would do better next time, but 2020 was still way off. The industry is going to do a retrospective to find out what went wrong in 2020.

I've read a number of explanations of polling misses in the press but their motivation is selling advertising, not getting to the root cause. Polling is hard and 2020 was very different from previous years; there was a pandemic and Donald Trump was a highly polarizing candidate. This led to a higher voter turnout and many, many more absentee ballots. If the cause was easy to find, we'd have found it by now.

The 2020 investigation needs to be thorough and credible, which means it will be several months at least before we hear anything. My best guess is, there will be an industry paper in six months, and several independent research papers starting in a few months. I'm looking forward to the analysis: I'm convinced I'm going to learn something new.

Where next?

There are lots of tweaks I could make to my model, but I'm not going to do any of them until the underlying polling data improves. In other words, I'm going to forget about it all for three years. In fact, I'd quite like to forget about politics for a while.


Monday, February 1, 2021

What do Presidential approval polls really tell us?

This is a technical piece about the meaning of a type of polling. It is not a political piece for or against President Trump. I will remove any political comments.

What are presidential approval polls?

Presidential approval polls are simple to grasp: do you approve or disapprove of President X? Because newspapers and TV channels can always use them for a headline or an on-air segment, they love to commission them. During President Trump's presidency, I counted 16,500 published approval polls.

But what do these polls mean and how should we interpret them? As it turns out, understanding what they're telling us is slippery. I'm going to offer you my guide for understanding what they mean.

(Image source: Wikimedia Commons. License: Public domain.)

My data comes from the ever-wonderful 538 which has a page showing the approval ratings for President Trump. Not only can you download the data from the page, you can also compare President Trump's approval ratings with many previous presidents' approval ratings.

Example approval results

On 2020-10-29, Fox News ran an approval poll for President Trump. Of the 1,246 people surveyed:

  • 46% approved of President Trump
  • 54% disapproved of President Trump
which seems fairly conclusive that the majority disapproves. But not so fast. On the same day, Rasmussen Reports/Pulse Opinion Research also ran an approval poll, this time of 1,500 people. Their results were:
  • 51% approved of President Trump
  • 48% disapproved of President Trump.
These were both fairly large surveys. How could they be so different?

Actually, it gets worse because these other surveys were taken on the same day too:
  • Gravis Marketing, 1,281 respondents, 52% approve, 47% disapprove
  • Morning Consult, 31,920 respondents, 42% approve, 53% disapprove
Let's plot out the data and see what the spread is, but as with everything with polls, this is harder than it seems.

Plotting approval and disapproval over time

Plotting out the results of approval polls seems simple: the x-axis is the day of the poll and the y-axis is the approval or disapproval percentage. But polls are typically conducted over several days, and there's uncertainty in the results.

To take a typical example, Global Marketing Research Services conducted a poll over several days, from 2020-10-23 to 2020-10-27. It's misleading to plot just the last day of the poll; we should plot the results over all the days the poll was conducted.

The actual approval or disapproval number is subject to sampling error. If we assume random sampling (I'm going to come back to this later), we can work out the uncertainty in the results; more formally, we can work out a confidence interval. Here's how this works out in practice. YouGov did a poll over three days (2020-10-25 to 2020-10-27) and recorded 42% approval and 56% disapproval from 1,365 respondents. Using some math I won't explain here, we can write these results as:
  • 2020-10-25, approval 42 ± 2.6%, disapproval 56 ± 2.6%, undecided 2 ± 0.7%
  • 2020-10-26, approval 42 ± 2.6%, disapproval 56 ± 2.6%, undecided 2 ± 0.7%
  • 2020-10-27, approval 42 ± 2.6%, disapproval 56 ± 2.6%, undecided 2 ± 0.7%
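The ± figures come from the standard normal approximation for a sampled proportion: the 95% interval half-width is 1.96 × √(p(1-p)/n). A minimal sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence interval half-width for a proportion p
    estimated from n randomly sampled respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1365  # YouGov's sample size
print(round(100 * margin_of_error(0.42, n), 1))  # approval:    2.6
print(round(100 * margin_of_error(0.56, n), 1))  # disapproval: 2.6
print(round(100 * margin_of_error(0.02, n), 1))  # undecided:   0.7
```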
We can plot this poll result like this:
Before we get to the plot of all approval ratings, let's do one last thing. If you're plotting large amounts of data, it's helpful to set a transparency level for the points you're plotting (often called alpha). There are 16,500 polls and we'll be plotting approve, disapprove, and undecided, which is a lot of data. By setting the transparency level appropriately, the plot will have the property where the more intense the color is, the more the poll results overlap. With this addition, let's see the plot of approval, disapproval, and undecided over time.
Wow. There's quite a lot going on here. It's hard to get a sense of changes over time. I've added a trend line for approval, disapproval, and undecided so you can get a better sense of the aggregate behavior of the data.
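Setting the transparency is a one-liner in matplotlib. Here's a minimal sketch, with randomly generated numbers standing in for the real poll results:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import random

random.seed(0)
# Simulated poll results: one (day, percentage) point per poll
days = [random.randint(0, 1460) for _ in range(2000)]
approve = [random.gauss(43, 2) for _ in days]
disapprove = [random.gauss(53, 2) for _ in days]
undecided = [random.gauss(3, 1) for _ in days]

fig, ax = plt.subplots()
# A low alpha means overlapping points build up into more intense color
ax.scatter(days, approve, alpha=0.1, s=10, label="approve")
ax.scatter(days, disapprove, alpha=0.1, s=10, label="disapprove")
ax.scatter(days, undecided, alpha=0.1, s=10, label="undecided")
ax.set_xlabel("days into presidency")
ax.set_ylabel("%")
ax.legend()
fig.savefig("approval.png")
```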

Variation between pollsters

There's wide variation between opinion pollsters. I've picked out just two, Rasmussen Reports/Pulse Opinion Research and Morning Consult. To see the variation more clearly, I'll just show approvals for President Trump and just show these two pollsters and the average for all polls.
To state the obvious, the difference is huge and way above random sampling error. Who's right, Rasmussen Reports or Morning Consult? How can we tell?

To understand what this chart means, we have to know a little bit more about how these polls are conducted.

How might you run an approval poll?

There are two types of approval polls.

  • One-off polls. You select your sample of subjects and ask them your questions. You only do it once.
  • Tracking polls. Technically, this is also called a longitudinal study. You select your population sample and ask them questions. You then ask the same group the same questions at a later date. The idea is, you can see how opinions change over time using the same group.
Different polling organizations use different methods for population sampling. It's almost never entirely random sampling. Bear in mind, subjects can say no to being involved, and can in principle drop out any time they choose. 

It's very, very easy to introduce bias through the people you select; slight differences in selection may give big differences in results. Let's say you're trying to measure President Trump's approval. Some people will approve of everything he does, while others will disapprove of everything he does. There's very little point in measuring how either of these groups approves or disapproves over time: if your sample includes a large proportion of either group, you're not going to see much variation. So which are you selecting for: population representation, or the ability to measure change over time?

For these reasons, the sampling error in the polls is likely to be larger than random sampling error alone and may have different characteristics.

How accurate are approval polls?

This is the big question. For polls related to voting intention, you can compare what the polls said and the election result. But there's no such moment of truth for approval polls. I might disapprove of a President, but vote for them anyway (because of party affiliations or because I hate the other candidate more), so election results are a poor indicator of success.

One measure of accuracy might be agreement among approval polls from a number of organizations, but it's possible that the other pollsters could be wrong too. There's a polling industry problem called herding which has been a big issue in UK political polls. Herding means pollsters choose methodologies similar to other pollsters to avoid being outliers, which leads to polling results from different pollsters herding together. In a couple of notorious cases in the UK, they herded together and herded wrongly. A poll's similarity to other polls does not mean it's more accurate.

What about averaging?

What about aggregating polls? Even this isn't simple. In your aggregation:
  • Do you include tracking polls or all polls?
  • Do you weight polls by their size?
  • Do you weight polls by accuracy or partisan bias?
  • Do you remove 'don't knows'?
  • If a poll took place over more than one day, do you average results over each day the poll took place?
I'm sure you could add your own factors. The bottom line is, even aggregation isn't straightforward.
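To make the point concrete, here's a sketch of an aggregator that makes two of these choices: it weights polls by sample size and spreads each poll's result over every day it was in the field. The numbers are invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Poll:
    start: date     # first field date
    end: date       # last field date
    n: int          # respondents
    approve: float  # fraction approving

def aggregate(polls):
    """Size-weighted daily average, spreading each poll
    over all the days it was conducted."""
    totals = defaultdict(lambda: [0.0, 0])
    for p in polls:
        for i in range((p.end - p.start).days + 1):
            day = p.start + timedelta(days=i)
            totals[day][0] += p.approve * p.n
            totals[day][1] += p.n
    return {day: s / n for day, (s, n) in sorted(totals.items())}

# Invented polls roughly matching the examples above
polls = [
    Poll(date(2020, 10, 25), date(2020, 10, 27), 1365, 0.42),
    Poll(date(2020, 10, 27), date(2020, 10, 27), 1500, 0.51),
]
for day, avg in aggregate(polls).items():
    print(day, round(avg, 3))
```

Every choice here (spread over field dates, weight by size, ignore partisan bias) is one of the judgment calls listed above; a different aggregator could reasonably make different ones.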

What all this means

Is Rasmussen Reports more accurate than Morning Consult? I can't say. There is no external source of truth for measuring who's more correct.

Even worse, we can see changes in the Rasmussen Reports approval numbers that don't occur in the Morning Consult data (and vice versa). Did Rasmussen Reports see something real that Morning Consult missed, or was it the other way around? I can't say.

It's not just these two pollsters. The Pew Research Center claims their data, showing a decline in President Trump's approval rating at the end of his presidency, is real. This may well be correct, but what external sources can we use to say for sure?

What can I conclude for President Trump's approval rating?

Here's my takeaway story after all this. 

President Trump had an approval rating above 50% from most polling organizations when he took office. Most, but not all, polling organizations reported a drop below 50% soon after the start of his presidency. After that, his approval ratings stayed pretty flat through his entire presidency, except for a drop at the very end. 

The remarkable story is how steady his approval ratings were. For most presidents, there are ups and downs through their presidency, but not so much for President Trump. It seems that people made their minds up very quickly and didn't change their opinions much. 

Despite the large number of approval polls, the headline for most of the last four years should have been: "President Trump's approval rating: very little change".

What about President Biden?

At a guess, the polls will start positive and decline. I'm not going to get excited about any one poll. I want to see averages, and I want to see a sustained trend over time. Only then do I think the polls might tell us something worth listening to.


Monday, October 12, 2020

Fundamentally wrong? Using economic data as an election predictor

What were you thinking?

Think back to the last time you voted. Why did you vote the way you did? Here are some popular reasons, how many apply to you?

  • The country's going in the wrong direction, we need something new.
  • My kind of people vote for party X, or my kind of people never vote for party Y.
  • I'm a lifelong party X voter.
  • Candidate X or party X is best suited to running the country right now.
  • Candidate Y or party Y will ruin the country.
  • Candidate X or party X is the best for defense/the economy/my children's education, and that's what's important to me right now.

(Ballot drop box. Image Source: Wikimedia Commons. Author: Paul Sableman. License: Creative Commons.)

Using fundamentals to forecast elections

In political science circles, there's been a movement to use economic data to forecast election results. The idea is, homo economicus is a rational being whose voting behavior depends on his or her economic conditions. If the economy is going well, then incumbents (or incumbent parties) are reelected; if things are going badly, then challengers are elected instead. If this assertion is true, then people will respond rationally and predictably to changing economic circumstances, and if we understand how the economy is changing, we can forecast who will win elections.

Building models based on fundamentals follows a straightforward process:
  1. Choose an economic indicator (e.g. inflation, unemployment, GDP) and see how well it forecasts elections.
  2. Get it wrong for an election.
  3. Add another economic indicator so the model correctly predicts the election it got wrong.
  4. Get it wrong for an election.
  5. Either re-adjust the model weights or go to 3.
These models can get very sophisticated. In the United States, some of the models include state-level data and make state-level forecasts of results.

What happens in practice

Two University of Colorado professors, Berry and Bickers, followed this approach to forecast the 2012 presidential election. They carefully analyzed elections back to 1980 using state-level economic data. Their model was detailed and thorough, and they helpfully included various statistical metrics to guide the reader in understanding the model's uncertainties. Their forecast was very clear: Romney would win 330 electoral college votes - a very strong victory. As a result, they became darlings of the Republican party.

Unfortunately for them, things didn't work out that way. The actual result was 332 electoral college votes for Obama and 206 for Romney, an almost complete reversal of their forecast.

In a subsequent follow-up (much shorter than their original paper), the professors argued in essence that although the economy had performed poorly, voters didn't blame Obama for it. In other words, the state of the economy was not a useful indicator for the 2012 election, even considering state-level effects.

This kind of failure is very common for fundamentals. While Nate Silver was at the New York Times, he published a long piece on why and how these models fail. To cut to the chase, there is no evidence voters are homo economicus when it comes to voting. All kinds of factors affect how someone votes, not just economic ones. There are cultural, social class, educational, and many other factors at work.

Why these models fail - post hoc ergo propter hoc and spurious correlations

The post hoc fallacy is to assume that because X follows Y, Y must cause X. In election terms, the fundamentalists assume that an improving or declining economy leads to certain types of election results. However, as we've said, there are many factors that affect voting. Take George W. Bush's approval rating: in the aftermath of 9/11, it peaked around 88%, and he won re-election in 2004. Factors other than the economy were clearly at work.

A related phenomenon is spurious correlations which I've blogged about before. Spurious correlations occur when two unrelated phenomena show the same trend and are correlated, but one does not cause the other. Tyler Vigen has a great website that shows many spurious correlations.

Let's imagine you're a political science researcher. You have access to large amounts of economic data, and you can direct your graduate students to find more. You can trawl through your data set to find economic or other indicators that correlate with election results. To build your model, you weight each factor differently; for example, inflation might have a weighting of 0.7 and unemployment 0.9. You could even have time-varying weights. You can then test your model against existing election results and publish your forecast for the next election cycle. This process is almost guaranteed to find spurious correlations and produce models that don't forecast very accurately.
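This trawling process is easy to simulate. Given enough candidate indicators, pure noise will "predict" past elections very well; everything below is made-up data:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
results = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # ten invented past elections

# Trawl 1,000 random "economic indicators" and keep the best fit
best = max(
    abs(pearson([random.random() for _ in results], results))
    for _ in range(1000)
)
print(f"best |r| found in pure noise: {best:.2f}")
```

With only ten past elections and a thousand candidate indicators, you will essentially always find a strong-looking correlation that means nothing.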

Forecasting using odd data happens elsewhere, but usually, more entertainingly. Paul the Octopus had a good track record of forecasting the 2010 World Cup and other football results - Wikipedia says he had an 85.7% success rate. How was he so successful? Probably dumb luck. Bear in mind, many animals have been used for forecasting and we only hear about the successful ones.


(Paul the Octopus at work. Image source: Wikimedia Commons. License: Creative Commons.)

To put it simply, models built with economic data alone are highly susceptible to error because there is no evidence voters consider economic factors in the way that proponents of these models suggest.

All models are wrong - some are useful

The statistician George Box is supposed to have said, "all models are wrong, some are useful". The idea is simple: the simplifications involved in model building often reduce their fidelity, but some models still produce useful (actionable) results. All election forecast models are just that, forecast models that may be right or wrong. The question is, how useful are they?

Let's imagine that a fundamentals model was an accurate forecaster. We would have to accept that campaigns had little or no effect on the outcome. But this is clearly at odds with reality. The polling data indicates that the 2016 US presidential election changed course in the closing weeks of the campaign. Perhaps most famously, the same thing happened in 1948. One of the key issues in the 2004 US presidential election was the 'war on terror'. This isn't an economic effect, and it's not at all clear how it could be reduced to a number.

In other words, election results depend on more than economic effects and may depend on factors that are hard to quantify.

To attempt to quantify these effects, we could turn to opinion polls. In 2004, we could have asked voters about their view of the war on terror and we could have factored that into a fundamentals model. But why not just ask them how they intend to vote?


(Paul the Octopus died and was memorialized by a statue. How many other forecasters will get statues? Image Source: Wikimedia Commons. Author: Christophe95. License: Creative Commons.)

Where I stand

I'm reluctant to throw the baby out with the bathwater. I think fundamentals may have some effect, but it's heavily moderated by other factors and what happens during the campaign. Maybe their best use might be to give politicians some idea of factors that might be important in a campaign. But as the UK Highway Code says of the green traffic light, it doesn't mean go, it means "proceed with caution".


Wednesday, October 7, 2020

Opinion polling blog posts

Why a 'greatest hits' polling blog post?

Over the past few months, I've blogged about elections and opinion polling several times. On October 8th, 2020, I gave a talk at PyData Boston on forecasting US presidential elections, and I thought I would bring these blog posts together into one convenient place so the people at the talk could more easily find them.

(Mexican bird men dancing on a pole. I subtitled my talk on opinion polls 'poll dancing' - and I'm sure I disappointed my audience as a result. Image credit: Wikimedia Commons. License: Creative Commons. Author: Juan Felipe Rios.)

Polling

Can you believe the polls? - fake polls, leading questions, and other sins of opinion polling.

President Hilary Clinton: what the polls got wrong in 2016 and why they got it wrong - why the polls said Clinton would win and why Trump did.

Poll-axed: disastrously wrong opinion polls - a brief romp through some disastrously wrong opinion poll results.

Sampling the goods: how opinion polls are made - my experiences working for an opinion polling company as a street interviewer.

Probability theory

Who will win the election? Election victory probabilities from opinion polls - a quick derivation of a key formula and an explanation of why random sampling alone underestimates the uncertainty.

US democracy

These blog posts provided some background on US presidential elections.

The Electoral College for beginners - the post explains how the electoral college works and how it came to be.

Finding electoral fraud - the democracy data deficit - the post looks at evidence (or the lack of it) for vote fraud and suggests a way citizen-analysts can contribute to American democracy.

Silkworm - lessons learned from a BI app in Python

Faster Python BI app development through code generation - how I generated the code for the Silkworm project and why I did it.

Tuesday, September 8, 2020

Can you believe the polls?

Opinion polls have known sin

Polling companies have run into trouble over the years in ways that render some poll results doubtful at best. Here are just a few of the problems:

  • Fraud allegations.
  • Leading questions.
  • Choosing not to publish results, or picking methodologies so that polls agree.

Running reliable polls is hard work that takes a lot of expertise and commitment. Sadly, companies sometimes get it wrong for several reasons:

  • Ineptitude.
  • Lack of money. 
  • Telling people what they want to hear. 
  • Fakery.

In this blog post, I'm going to look at some high-profile cases of dodgy polling and I'm going to draw some lessons from what happened.

(Are some polls real or fake? Image source: Wikimedia Commons. Image credit: Basile Morin. License: Creative Commons.)

Allegations of fraud part 1 - Research 2000

Backstory

Research 2000 started operating around 1999 and gained some solid early clients. In 2008, The Daily Kos contracted with Research 2000 for polling during the upcoming US elections. In early 2010, Nate Silver at FiveThirtyEight rated Research 2000 as an F and stopped using their polls. As a direct result, The Daily Kos terminated their contract and later took legal action to reclaim fees, alleging fraud.

Nate Silver's and others' analysis

After the 2010 Senate elections, Nate Silver analyzed polling results for 'house effects' and found a bias towards the Democratic party for Research 2000. These kinds of biases appear all the time and vary from election to election. The Research 2000 bias was large (at 4.4%), but not crazy; the Rasmussen Republican bias was larger for example. Nonetheless, for many reasons, he graded Research 2000 an F and stopped using their polling data.
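A house effect can be estimated very simply as the gap between a pollster's average margin and the all-pollster average. This sketch uses invented numbers, not Silver's data or method:

```python
from collections import defaultdict

# (pollster, Democratic margin in points) - invented illustrative data
polls = [
    ("Pollster A", 7.0), ("Pollster A", 6.0), ("Pollster A", 8.0),
    ("Pollster B", 2.0), ("Pollster B", 3.0),
    ("Pollster C", 1.0), ("Pollster C", 3.0), ("Pollster C", 2.0),
]

overall = sum(margin for _, margin in polls) / len(polls)

by_house = defaultdict(list)
for name, margin in polls:
    by_house[name].append(margin)

for name, margins in by_house.items():
    effect = sum(margins) / len(margins) - overall
    print(f"{name}: house effect {effect:+.1f}")
```

In practice, pollster ratings use far more careful adjustments (poll timing, race, sample size), but the core idea is this comparison against the field.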

In June of 2010, The Daily Kos publicly dismissed Research 2000 as their pollster based on Nate Silver's ranking and more detailed discussions with him. Three weeks later, The Daily Kos sued Research 2000 for fraud. After the legal action was public, Nate Silver blogged some more details of his misgivings about Research 2000's results, which led to a cease and desist letter from Research 2000's lawyers. Subsequent to the cease-and-desist letter, Silver published yet more details of his misgivings. To summarize his results, he was seeing data inconsistent with real polling - the distribution of the numbers was wrong. As it turned out, Research 2000 was having financial trouble around the time of the polling allegations and was negotiating low-cost or free polling with The Daily Kos in exchange for accelerated payments. 

Others were onto Research 2000 too. Three statisticians analyzed some of the polling data and found patterns inconsistent with real polling - again, real polls tend to have results distributed in certain ways and some of the Research 2000 polls did not.

The result

The lawsuit progressed with strong evidence in favor of The Daily Kos. Perhaps unsurprisingly, the parties reached a settlement, with Research 2000 agreeing to pay The Daily Kos a fee. Research 2000 effectively shut down after the agreement.

Allegations of fraud part 2 - Strategic Vision, LLC

Backstory

This story requires some care in the telling. At the time of the story, there were two companies called Strategic Vision, one company is well-respected and wholly innocent, the other not so much. The innocent and well-respected company is Strategic Vision based in San Diego. They have nothing to do with this story. The other company is Strategic Vision, LLC based in Atlanta. When I talk about Strategic Vision, LLC from now on, it will be solely about the Atlanta company.

To maintain trust in the polling industry, the American Association for Public Opinion Research (AAPOR) has guidelines and asks polling companies to disclose some details of their polling methodologies. They rarely censure companies, and their censures don't have the force of law, but public shaming is effective as we'll see. 

What happened

In 2008, the AAPOR asked 21 polling organizations for details of their 2008 pre-election polling, including polling for the New Hampshire Democratic primary. Their goal was to quality check the state of polling in the industry.

One polling company didn't respond for a year, despite repeated requests to do so. As a result, in September 2009, the AAPOR published a public censure of Strategic Vision, LLC, which you can read here.

It's very unusual for the AAPOR to issue a censure, so the story was widely reported at the time, for example in the New York Times, The Hill, and The Wall Street Journal. Strategic Vision LLC's public response to the press coverage was that they were complying but didn't have time to submit their data. They denied any wrongdoing.

Subsequent to the censure, Nate Silver looked more closely at Strategic Vision LLC's results. Initially, he asked some very pointed and blunt questions. In a subsequent post, Nate Silver used Benford's Law to investigate Strategic Vision LLC's data, and based on his analysis he stated there was a suggestion of fraud - more specifically, that the data had been made up. In a post the following day, Nate Silver offered some more analysis and a great example of using Benford's Law in practice. Again, Strategic Vision LLC vigorously denied any wrongdoing.
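Benford's Law says that in many naturally occurring data sets, the leading digit d appears with probability log10(1 + 1/d), so 1 leads about 30% of the time and 9 under 5%. Comparing observed leading-digit frequencies against that prediction is the kind of check Silver ran; here's a minimal sketch with an invented sample:

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's Law: expected frequency of leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def leading_digit_freqs(values):
    """Observed frequency of each leading digit in a list of positive numbers."""
    digits = [int(str(v).lstrip("0.")[0]) for v in values if v > 0]
    counts = Counter(digits)
    return {d: counts[d] / len(digits) for d in range(1, 10)}

# Invented sample standing in for real polling figures
sample = [123, 18, 201, 95, 1450, 32, 11, 87, 164, 29]
observed = leading_digit_freqs(sample)
for d in range(1, 10):
    print(d, f"expected {benford_expected(d):.3f}", f"observed {observed[d]:.3f}")
```

A real analysis would also need a significance test; small samples deviate from Benford's Law by chance alone.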

One of the most entertaining parts of this story is a citizenship poll conducted by Strategic Vision, LLC among high school students in Oklahoma. The poll was commissioned by the Oklahoma Council on Public Affairs, a think tank. The poll asked eight straightforward questions, for example:

  • who was the first US president? 
  • what are the two main political parties in the US?  

and so on. The results were dismal: only 23% of students answered George Washington, and only 43% knew the Democratic and Republican parties. Not one student in 1,000 got all the questions correct - which is extraordinary. These types of polls are beloved of the press; there are easy headlines to be squeezed from students doing poorly, especially on issues around citizenship. Unfortunately, the poll results looked odd at best. Nate Silver analyzed the distribution of the results and concluded that something didn't seem right - the data was not distributed as you might expect. To their great credit, when the Oklahoma Council on Public Affairs became aware of problems with the poll, they removed it from their website and put up a page explaining what happened. They subsequently terminated their relationship with Strategic Vision, LLC.

In 2010, a University of Cincinnati professor awarded Strategic Vision, LLC the "Phantom of the Soap Opera" award on the Media Ethics site. This site has a little more backstory on the odd story of Strategic Vision LLC's offices, or lack of them.

The results

Strategic Vision, LLC continued to deny any wrongdoing. They never supplied their data to the AAPOR and they stopped publishing polls in late 2009. They've disappeared from the polling scene.

Other polling companies

Nate Silver rated other pollsters an F and stopped using them. Not all of the tales are as lurid as the ones I've described here, but there are accusations of fraud and fakery in some cases, in other cases, there are methodology disputes and no suggestion of impropriety. Here's a list of pollsters Nate Silver rates an F.

Anarchy in the UK

It's time to cross the Atlantic and look at polling shenanigans in the UK. The UK hasn't seen the rise and fall of dodgy polling companies, but it has seen dodgy polling methodologies.

Herding

Let's imagine you commission a poll on who will win the UK general election. You get a result different from the other polls. Do you publish your result? Now imagine you're a polling analyst, you have a choice of methodologies for analyzing your results, do you do what everyone else does and get similar results, or do you do your own thing and maybe get different results from everyone else?

Sadly, there are many cases when contrarian polls weren't published and there is evidence that polling companies made very similar analysis choices to deliberately give similar results. This leads to the phenomenon called herding where published poll results tend to herd together. Sometimes, this is OK, but sometimes it can lead to multiple companies calling an election wrongly.

In 2015, the UK polls predicted a hung parliament, but the result was a working majority for the Conservative party. The subsequent industry poll analysis identified herding as one of the causes of the polling miss. 

This isn't the first time herding has been an issue with UK polling and it's occasionally happened in the US too.

Leading questions

The old British TV show 'Yes, Prime Minister' has a great piece of dialog neatly showing how leading questions work in surveys. 'Yes, Prime Minister' is a comedy, but UK polls have suffered from leading questions for a while.

The oldest example I've come across dates from the 1970s and the original European Economic Community membership referendum. Apparently, one poll asked the following questions of two different groups:

  • France, Germany, Italy, Holland, Belgium and Luxembourg approved their membership of the EEC by a vote of their national parliaments. Do you think Britain should do the same?
  • Ireland, Denmark and Norway are voting in a referendum to decide whether to join the EEC. Do you think Britain should do the same?

These questions are highly leading and unsurprisingly elicited the expected positive result in both (contradictory) cases.

Moving forward in time to 2012, leading questions, or artful question wording, came up again. The background is press regulation. After a series of scandals in which the press behaved shockingly badly, the UK government considered press regulation to curb abuses. Various parties were for or against various aspects of press regulation, and they commissioned polls to support their viewpoints.

The polling company YouGov published a poll, paid for by The Media Standards Trust, that showed 79% of people thought there should be an independent government-sanctioned regulator to investigate complaints against the press. Sounds comprehensive and definitive. 

But there was another poll at about the same time, this one paid for by The Sun newspaper, that found only 24% of the British public wanted a government regulator for the press - the polling company here was also YouGov!

The difference between the 79% and 24% came through careful question wording - a nuance that was lost in the subsequent press reporting of the results. You can listen to the story on the BBC's More Or Less program that gives the wording of the question used.

What does all this mean?

The quality of the polling company is everything

The established, reputable companies got that way through high-quality, reliable work over a period of years. They will make mistakes from time to time, but they learn from them. When you're considering whether or not to believe a poll, you should ask who conducted it and consider the reputation of the company behind it.

With some exceptions, the press are unreliable

None of the cases of polling impropriety were caught by the press. In fact, the press has a perverse incentive to promote the wild and outlandish, which favors results from dodgy pollsters. Be aware that a newspaper that paid for a poll is not going to criticize its own paid-for product, especially when it's getting headlines out of it.

Most press coverage of polls focuses on discussing what the poll results mean, not on how accurate they are or what their sources of bias might be. When these things are discussed at all, they're discussed in a partisan manner (disagreeing with a poll because the writer holds a different political view). I've never seen the kind of analysis Nate Silver does elsewhere - and this is to the great detriment of the press and their credibility.

Vested interests

A great way to get press coverage is by commissioning polls and publishing the results; especially if you can ask leading questions. Sometimes, the press gets very lazy and doesn't even report who commissioned a poll, even when there's plainly a vested interest.

Anytime you read a survey, ask who paid for it and what the exact questions were.

Outliers are outliers, not trends

Outlier poll results get more play than results in line with other pollsters. As I write this in early September 2020, Biden is about 7% ahead in the polls. Let's imagine two survey results coming in early September:

  • Biden ahead by 8%.
  • Trump ahead by 3%.

Which do you think would get more space in the media? Probably the shocking result, even though the dull result may be more likely. Trump-supporting journalists might start writing articles on a campaign resurgence, while Biden-supporting journalists might talk about his lead slipping and losing momentum. In reality, the 3% poll might be an anomaly and probably doesn't justify consideration until it's backed by other polls.

Bottom line: outlier polls are probably outliers and you shouldn't set too much store by them.

There's only one Nate Silver

Nate Silver seems like a one-man army, rooting out false polling and pollsters. He's stood up to various legal threats over the years. It's a good thing that he exists, but it's a bad thing that there's only one of him. It would be great if the press could take inspiration from him and take a more nuanced, skeptical, and statistical view of polls.

Can you believe the polls?

Let me close by answering my own question: yes you can believe the polls, but within limits and depending on who the pollster is.

Reading more

This blog post is one of a series of blog posts about opinion polls. 

Wednesday, August 19, 2020

President Hillary Clinton: what the polls got wrong in 2016 and why they got it wrong

What the pollsters got wrong

Had the US presidential polls been correct in 2016, Nate Silver and other forecasters would be anointed oracles and the polling companies would be viewed as soothsayers revealing fundamental truths about society. None of these things happened. Instead, forecasters were embarrassed and polling companies got a bloody nose. If we want to understand if things will go any differently in 2020, we have to understand what happened in 2016 and why.

What happened in 2016

The simple narrative is: "the polls got it wrong in 2016", but this is an oversimplification. Let's look at what actually happened.

Generally speaking, there are two types of US presidential election opinion polls: national and state. National polls are conducted across the US and are intended to give a sense of national intentions. Prediction-wise, they are most closely related to the national popular vote. State polls are conducted within a state and are meant to predict the election result in that state.

All pollsters recognize uncertainty in their measurement and most of them quote a margin of error, which is usually a 95% confidence interval. For example, I might say candidate cat has 49% and candidate dog has 51% with a 4% margin of error. This means you should read my results as cat: 49±4% and dog: 51±4%, or more simply, that I think candidate dog will get between 47% and 55% of the vote and candidate cat between 45% and 53%. If the actual results are cat 52% and dog 48%, technically, that's within the margin of error and is a successful forecast. You can also work out a probability of a candidate winning based on opinion poll data.
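To make the margin-of-error arithmetic concrete, here's a minimal sketch in Python (the language my model is written in). It assumes simple random sampling, where the 95% margin of error is about 1.96 standard errors; the sample size of 600 is an illustrative assumption, chosen because a roughly 600-person poll split near 50:50 gives the 4% margin in the cat-and-dog example above.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The cat vs. dog example: a 51% share with a 4% margin of error
# implies a sample of roughly 600 respondents.
print(round(margin_of_error(0.51, 600), 3))  # about 0.04
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the poll size halves the margin of error.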

The 2016 national polling was largely correct. Clinton won the popular vote with a 2.1% margin over Trump. Wikipedia has a list of 2016 national polls, and it's apparent that the polls conducted closer to the election gave better results than those conducted earlier (unsurprisingly) as I've shown in the chart below. Of course, the US does not elect presidents on the popular vote, so this point is of academic interest.
(Based on data from Wikipedia.)

The state polls are a different matter. First off, we have to understand that polls aren't conducted in every state. Wyoming is very, very Republican and as a result, few people would pay for a poll there - no newspaper is going to get a headline from "Republican leads in Wyoming". Obviously, the same thing applies for very, very Democratic states. Polls are conducted more often in hotly contested areas with plenty of electoral college votes. So how did the state polls actually do in 2016? To keep things simple, I'll look at the results from the poll aggregator Sam Wang and compare them to the actual results. The poll aggregation missed in these states:

| State | Election spread (Trump - Clinton) | Poll aggregator spread (Trump - Clinton) |
|---|---|---|
| Florida | 1.2% | -1.5% |
| North Carolina | 3.66% | -1% |
| Pennsylvania | 0.72% | -2.5% |
| Michigan | 0.23% | -2.5% |
| Wisconsin | 0.77% | < -5% |

Poll aggregators use different error models for calculating their aggregated margin of error, but typically they'll vary from 2-3%. A few of these results are outside of the margin of error, but more tellingly, they're all in the same direction.  A wider analysis looking at all the state results shows the same pattern. The polls were biased in favor of Clinton, but why?

Why they got it wrong

In the aftermath of the election, the American Association for Public Opinion Research created an ad-hoc commission to understand what went wrong. The AAPOR published their findings and I'm going to provide a summary here.

Quoting directly from the report, late changes in voter decisions led earlier polls to overestimate Clinton's support: 
"Real change in vote preference during the final week or so of the campaign. About 13 percent of voters in Wisconsin, Florida and Pennsylvania decided on their presidential vote choice in the final week, according to the best available data. These voters broke for Trump by near 30 points in Wisconsin and by 17 points in Florida and Pennsylvania."
The polls oversampled those with college degrees and undersampled those without: "In 2016 there was a strong correlation between education and presidential vote in key states. Voters with higher education levels were more likely to support Clinton. Furthermore, recent studies are clear that people with more formal education are significantly more likely to participate in surveys than those with less education. Many polls – especially at the state level – did not adjust their weights to correct for the over-representation of college graduates in their surveys, and the result was over-estimation of support for Clinton."

The report also suggests that the "shy Trump voter" effect may have played a part.

Others also investigated the result, and a very helpful paper by Kennedy et al provides some key supporting data. Kennedy also states that voter education was a key factor, and shows charts illustrating the connection between education and voting in 2016 and in 2012. As you might expect, there was little influence in 2012, but in 2016, education was a strong influence. In 2016, most state-level polls did not adjust for education.

Although the polls in New Hampshire called the results correctly, they predicted a much larger win for Clinton. Kennedy quotes Andrew Smith, a UNH pollster, and I'm going to repeat the quote here because it's so important: "We have not weighted by level of education in our election polling in the past and we have consistently been the most accurate poll in NH (it hasn’t made any difference and I prefer to use as few weights as possible), but we think it was a major factor this year. When we include a weight for level of education, our predictions match the final number."

Kennedy also found good evidence of a late swing to Trump that was not caught by polls conducted earlier in the campaign.

On the whole, there does seem to be agreement that two factors were important in 2016:
  • Voter education. In previous elections it didn't matter; in this one it did. State-level polls on the whole didn't control for it.
  • Late swing to Trump missed by earlier polls.

2020 and beyond

The pollsters' business depends on making accurate forecasts, and elections are the ultimate high-profile test of the predictive power of polls. There's good evidence that at least some pollsters will correct for education in this election, but what if there's some other factor that's important, for example, housing type, or diet, or something else? How will we be able to spot bias during an election campaign? The answer is, we can't. What we can do is assume the result is a lot less certain than the pollsters, or the poll aggregators, claim.

Commentary

In the run-up to the 2016 election, I created an opinion poll-aggregation model. My model was based on the work of Sam Wang and used election probabilities. I was disturbed by how quickly a small spread in favor of a candidate gave a very high probability of winning; the election results always seemed more uncertain to me. Textbook poll aggregation models reduced the uncertainty still further.

The margin of error quoted by pollsters is just the sampling error assuming random sampling. But sampling isn't wholly random and there may be house effects or election-specific effects that bias the results. Pollsters and others make the assumption that these effects are zero, which isn't the case. Of course, pollsters change their methodology with each election to avoid previous mistakes. The upshot is, it's almost impossible to assess the size of these non-random bias effects during an election. My feeling is, opinion poll results are a lot less certain than the quoted margin of error, and a 'real' margin of error may be much greater.

The lesson for poll aggregators like me is to allow for other biases and uncertainty in our models. To his great credit, Nate Silver is ahead here as he is in so many other areas.


Monday, August 17, 2020

Poll-axed: disastrously wrong opinion polls

Getting it really, really wrong

On occasions, election opinion polls have got it very, very wrong. I'm going to talk about some of their biggest blunders and analyze why they messed up so very badly. There are lessons about methodology, hubris, and humility in forecasting.

 (Image credit: secretlondon123, Source: Wikimedia Commons, License: Creative Commons)

The Literary Digest - size isn't important

The biggest, baddest, and boldest polling debacle happened in 1936, but it still has lessons for today. The Literary Digest was a mass-circulation US magazine published from 1890-1938. In 1920, it started printing presidential opinion polls, which over the years acquired a good reputation for accuracy [Squire], so much so that they boosted the magazine's subscriptions. Unfortunately, its 1936 opinion poll sank the ship.

(Source: Wikimedia Commons. License: Public Domain)

The 1936 presidential election was fought between Franklin D. Roosevelt (Democrat), running for re-election, and his challenger Alf Landon (Republican).  The backdrop was the ongoing Great Depression and the specter of war in Europe. 

The Literary Digest conducted the largest ever poll up to that time, sending surveys to 10 million people and receiving 2.3 million responses; even today, this is orders of magnitude larger than typical opinion polls. Through the Fall of 1936, they published results as their respondents returned surveys; the magazine didn't interpret or weight the surveys in any way [Squire]. After 'digesting' the responses, the Literary Digest confidently predicted that Landon would easily beat Roosevelt. Their reasoning was that the poll was so big it couldn't possibly be wrong; after all, the statistical margin of error was tiny.

Unfortunately for them, Roosevelt won handily. In fact, 'handily' is putting it mildly: he won a landslide victory (523 electoral college votes to 8).

So what went wrong? The Literary Digest sampled its own readers, people who were on lists of car owners, and people who had telephones at home. In the Great Depression, this meant their sample was not representative of the US voting population; the people they sampled were much wealthier. The poll also suffered from non-response bias; the people in favor of Landon were enthusiastic and filled in the surveys and returned them, the Roosevelt supporters less so. Unfortunately for the Literary Digest, Roosevelt's supporters weren't so lethargic on election day and turned up in force for him [Lusinchi, Squire]. No matter what the size of the Literary Digest's sample, their methodology baked in bias, so it was never going to give an accurate forecast. 

Bottom line: survey size can't make up for sampling bias.
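The Literary Digest story is easy to demonstrate with a quick simulation. The numbers below are illustrative assumptions, not historical figures: a modest unbiased sample lands close to the truth, while an enormous sample drawn from a skewed frame confidently gets the wrong answer.

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.60    # assumed 'true' Roosevelt share of the electorate (illustrative)
FRAME_SUPPORT = 0.45   # assumed Roosevelt share within a biased frame of car/phone owners

def poll(support: float, n: int) -> float:
    """Estimate support from a simple random sample of size n."""
    return sum(random.random() < support for _ in range(n)) / n

small_unbiased = poll(TRUE_SUPPORT, 1_000)   # small but representative
huge_biased = poll(FRAME_SUPPORT, 230_000)   # huge but drawn from a skewed frame

print(f"Small unbiased poll: {small_unbiased:.3f}")
print(f"Huge biased poll:    {huge_biased:.3f}")
```

The huge poll's sampling error is tiny, but it's a very precise estimate of the wrong population.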

Sampling bias is an ongoing issue for pollsters. Factors that matter a great deal in one election might not matter in another, and pollsters have to estimate what will be important for voting so they know who to select. For example, having a car or a phone might not correlate with voting intention for most elections, until for one election they do correlate very strongly. The Literary Digest's sampling method was crude, but worked fine in previous elections. Unfortunately, in 1936 the flaws in their methodology made a big difference and they called the election wrong as a result. Fast-forwarding to 2016, flaws in sampling methodology led to pollsters underestimating support for Donald Trump.

Sadly, the Literary Digest never recovered from this misstep and folded two years later. 

Dewey defeats Truman - or not

The spectacular implosion of the 1936 Literary Digest poll gave impetus to the more 'scientific' polling methods of George Gallup and others [Igo]. But even these scientific polls came undone in the 1948 US presidential election. 

The election was held not long after the end of World War II and was between the incumbent, Harry S. Truman (Democrat), and his main challenger, Thomas E. Dewey (Republican). At the start of the election campaign, Dewey was the favorite over the increasingly unpopular Truman. While Dewey ran a low-key campaign, Truman led a high-energy, high-intensity campaign.

The main opinion polling companies of the time, Gallup, Roper, and Crossley, firmly predicted a Dewey victory. The Crossley Poll of 15 October 1948 put Dewey ahead in 27 states [Topping]. In fact, their results were so strongly in favor of Dewey that some polling organizations stopped polling altogether before the election.

The election result? Truman won convincingly. 

A few newspapers were so convinced that Dewey had won that they went to press with a Dewey victory announcement, leading to one of the most famous election pictures of all time.

(Source: Truman Library)

What went wrong?

As far as I can tell, there were two main causes of the pollsters' errors:

  • Undecided voters breaking for Truman. Pollsters had assumed that undecided voters would split their votes evenly between the candidates, which wasn't true then, and probably isn't true today.
  • Voters changing their minds or deciding who to vote for later in the campaign. If you stop polling late in the campaign, you're not going to pick up last-minute electoral changes. 

Just as in 1936, there was a commercial fallout, for example, 30 newspapers threatened to cancel their contracts with Gallup. 

As a result of this fiasco, the polling industry regrouped and moved towards random sampling and polling late into the election campaign.

US presidential election 2016

For the general public, this is the best-known example of polls getting the result wrong. There's a lot to say about what happened in 2016, so much in fact, that I'm going to write a blog post on this topic alone. It's not the clear-cut case of wrongness it first appears to be.

(Image credit: Michael Vadon, Source: Wikimedia Commons, License: Creative Commons)

For now, I'll just give you some hints: as in the Literary Digest example, sampling was one of the principal causes, exacerbated by late changes in the electorate's voting decisions. White voters without college degrees voted much more heavily for Donald Trump than for Hillary Clinton, and in 2016, opinion pollsters didn't control for education, leading them to underestimate Trump's support in key states. Polling organizations are learning from this mistake and changing their methodology for 2020. Back in 2016, a significant chunk of the electorate seemed to make up their minds in the last few weeks of the campaign, a late swing that earlier polling missed.

It seems the more things change, the more they remain the same.

Anarchy in the UK?

There are several properties of the US electoral system that make it very well suited for opinion polling but other electoral systems don't have these properties. To understand why polling is harder in the UK than in the US, we have to understand the differences between a US presidential election and a UK general election.

  • The US is a national two-party system, the UK is a multi-party democracy with regional parties. In some constituencies, there are three or more parties that could win.
  • In the US, the president is elected and there are only two candidates, in the UK, the electorate vote for Members of Parliament (MPs) who select the prime minister. This means the candidates are different in each constituency and local factors can matter a great deal.
  • There are 50 states plus Washington DC, meaning 51 geographical areas. In the UK, there are currently 650 constituencies, meaning 650 geographical areas to survey.

These factors make forecasting UK elections harder than US elections, so perhaps we should be a bit more forgiving. But before we forgive, let's have a look at some of the UK's greatest election polling misses.

General elections

The 1992 UK general election was a complete disaster for the opinion polling companies in the UK [Smith]. Every poll in the run-up to the election forecast either a hung parliament (meaning, no single party has a majority) or a slim majority for the Labour party. Even the exit polls forecast a hung parliament. Unfortunately for the pollsters, the Conservative party won a comfortable working majority of seats. Bob Worcester, the best known UK pollster at the time, said the polls were more wrong "...than in any time since their invention 50 years ago" [Jowell].

Academics proposed several possible causes [Jowell, Smith]:
  • "Shy Tories". The idea here is that people were too ashamed to admit they intended to vote Conservative, so they lied or didn't respond at all. 
  • Don't knows/won't say. In any poll, some people are undecided or won't reveal their preference. To predict an election, you have to model how these people will vote, or at least have a reliable way of dealing with them, and that wasn't the case in 1992 [Lynn].
  • Voter turnout. Different groups of people actually turn out to vote at different proportions. The pollsters didn't handle differential turnout very well, leading them to overstate the proportion of Labour votes.
  • Quota sampling methods. Polling organizations use quota-based sampling to try and get a representative sample of the population. If the sampling is biased, then the results will be biased [Lynn, Smith]. 

As in the US in 1948, the pollsters re-grouped, licked their wounds, and revised their methodologies.

After the disaster of 1992, surely the UK pollsters wouldn't get it wrong again? Moving forward to 2015, the pollsters got it wrong again!

In the 2015 election, the Conservative party won a working majority. This was a complex, multi-party election with strong regional effects, all of which was well-known at the time. As in 1992, the pollsters predicted a hung parliament and their subsequent humiliation was very public. Once again, there were various inquiries into what went wrong [Sturgis]. Shockingly, the "official" post-mortem once again found that sampling was the cause of the problem. The polls over-represented Labour supporters and under-represented Conservative supporters, and the techniques used by pollsters to correct for sampling issues were inadequate [Sturgis]. The official finding was backed up by independent research which further suggested pollsters had under-represented non-voters and over-estimated support for the Liberal Democrats [Melon].

Once again, the industry had a re-think.

There was another election in 2019. This time, the pollsters got it almost exactly right.

It's nice to see the polling industry getting a big win, but part of me was hoping Lord Buckethead or Count Binface would sweep to victory in 2019.

(Count Binface. Source: https://www.countbinface.com/)

(Lord Buckethead. Source: https://twitter.com/LordBuckethead/status/1273601785094078464/photo/1. Not the hero we need, but the one we deserve.)

EU referendum

This was the other great electoral shock of 2016. The polls forecast a narrow 'Remain' victory, but the reality was a narrow 'Leave' win. Very little has been published on why the pollsters got it wrong in 2016, but what little was published suggests that the survey method may have been important. The industry didn't initiate a broad inquiry; instead, individual polling companies were asked to investigate their own processes.

Other countries

There have been a series of polling failures in other countries too.

Takeaways

In university classrooms around the world, students are taught probability theory and statistics. It's usually an antiseptic view of the world, and opinion poll examples are often presented as straightforward math problems, stripped of the complex realities of sampling. Unfortunately, this leaves students unprepared for the chaos and uncertainty of the real world.

Polling is a complex, messy issue. Sampling governs the success or failure of polls, but sampling is something of a dark art and it's hard to assess its accuracy during a campaign. In 2020, do you know the sampling methodologies used by the different polling companies? Do you know who's more accurate than who?

Every so often, the polling companies take a beating. They re-group, fix the issues, and survey again. They get more accurate, and after a while, the press forgets about the failures and talks in glowing terms about polling accuracy, and maybe even doing away with the expensive business of elections in favor of polls. Then another debacle happens. The reality is, the polls are both more accurate and less accurate than the press would have you believe. 

As Yogi Berra didn't say, "it's tough to make predictions, especially about the future".  


References

[Igo] '"A gold mine and a tool for democracy": George Gallup, Elmo Roper, and the business of scientific polling,1935-1955', Sarah Igo, J Hist Behav Sci. 2006;42(2):109-134

[Jowell] "The 1992 British Election: The Failure of the Polls", Roger Jowell, Barry Hedges, Peter Lynn, Graham Farrant and Anthony Heath, The Public Opinion Quarterly, Vol. 57, No. 2 (Summer, 1993), pp. 238-263

[Lusinchi] '“President” Landon and the 1936 Literary Digest Poll: Were Automobile and Telephone Owners to Blame?', Dominic Lusinchi, Social Science History 36:1 (Spring 2012)

[Lynn] "How Might Opinion Polls be Improved?: The Case for Probability Sampling", Peter Lynn and Roger Jowell, Journal of the Royal Statistical Society. Series A (Statistics in Society), Vol. 159, No. 1 (1996), pp. 21-28 

[Melon] "Missing Nonvoters and Misweighted Samples: Explaining the 2015 Great British Polling Miss", Jonathan Mellon, Christopher Prosser, Public Opinion Quarterly, Volume 81, Issue 3, Fall 2017, Pages 661–687

[Smith] "Public Opinion Polls: The UK General Election, 1992",  T. M. F. Smith, Journal of the Royal Statistical Society. Series A (Statistics in Society), Vol. 159, No. 3 (1996), pp. 535-545

[Squire] "Why the 1936 Literary Digest poll failed", Peverill Squire, Public Opinion Quarterly, 52, 125-133, 1988

[Sturgis] "Report of the Inquiry into the 2015 British general election opinion polls", Patrick Sturgis,  Nick Baker, Mario Callegaro, Stephen Fisher, Jane Green, Will Jennings, Jouni Kuha, Ben Lauderdale, Patten Smith

[Topping] '‘‘Never argue with the Gallup Poll’’: Thomas Dewey, Civil Rights and the Election of 1948', Simon Topping, Journal of American Studies, 38 (2004), 2, 179–198

Wednesday, August 12, 2020

Who will win the election? Election victory probabilities from opinion polls

Polls to probabilities

How likely is it that your favorite candidate will win the election? If your candidate is ahead of their opponent by 5%, are they certain to win? What about 10%? Or if they're down by 2%, are they out of the race? Victory probabilities are related to how far ahead or behind a candidate is in the polls, but the relationship isn't a simple one and has some surprising consequences as we'll see.

Opinion poll example

Let's imagine there's a hard-fought election between candidates A and B. A newspaper publishes an opinion poll a few days before the election:

  • Candidate A: 52%
  • Candidate B: 48%
  • Sample size: 1,000

Should candidate A's supporters pop the champagne and candidate B's supporters start crying?

The spread and standard error

Let's use some standard notation. From the theory of proportions, the mean and standard error for the proportion of respondents who chose A is:

\[ p_a = {n_a \over n} \] \[ \sigma_a = { \sqrt {{p_a(1-p_a)} \over n}} \]

where \( n_a \) is the number of respondents who chose A and \( n \) is the total number of respondents. If the proportion of people who answered candidate B is \(p_b\), then obviously, \( p_a + p_b = 1\).

Election probability theory usually uses the spread, \(d\), which is the difference between the candidates: \[d = p_a - p_b = 2p_a - 1 \] From statistics theory, the standard error of \( d \) is: \[\sigma_d = 2\sigma_a\] (These relationships are easy to prove but a bit tedious; if anyone asks, I'll show the proof.)

Obviously, for a candidate to win, their spread, \(d\), must be > 0.

Everything is normal

From the central limit theorem (CLT), we know \(p_a\) and \(p_b\) are normally distributed, and also from the CLT, we know \(d\) is normally distributed. The next step to probability is viewing the normal distribution for candidate A's spread. The chart below shows the normal distribution with mean \(d\) and standard error \(\sigma_d\).

As with most things with the normal distribution, it's easier if we transform everything to the standard normal using the transformation: \[z = {(x - d) \over \sigma_d}\] The chart below is the standard normal representation of the same data.

The standard normal form of this distribution is a probability density function. We want the probability that \(d>0\) which is the light green shaded area, so it's time to turn to the cumulative distribution function (CDF), and its complement, the complementary cumulative distribution function (CCDF).

CDF and CCDF

The CDF gives us the probability that we will get a result less than or equal to some value I'll label \(z_c\). We can write this as: \[P(z \leq z_c) = CDF(z_c) = \phi(z_c) \] The CCDF is defined so that: \[1 = P(z \leq z_c) + P(z > z_c)= CDF(z_c) + CCDF(z_c) = \phi(z_c) + \phi_c(z_c)\] Which is a long-winded way of saying the CCDF is: \[CCDF(z_c) = P(z \gt z_c) = \phi_c(z_c)\]

The CDF is the integral of the PDF, and from standard textbooks: \[ \phi(z_c) = {1 \over 2} \left( 1 + erf\left( {z_c \over \sqrt2} \right) \right) \] We want the CCDF,  \(P(z > z_c)\), which is simply 1 - CDF.

Our critical value occurs when the spread is zero. The transformation to the standard normal in this case is: \[z_c = {(x - d) \over \sigma_d} = {-d \over \sigma_d}\] We can write the CCDF as: \[\phi_c(z_c) = 1 - \phi(z_c) = 1- {1 \over 2} \left( 1 + erf\left( {z_c \over \sqrt2} \right) \right) \] \[= 1 - {1 \over 2} \left( 1 + erf\left( {-d \over {\sigma_d\sqrt2}} \right) \right)\] We can easily show that: \[erf(x) = -erf(-x)\] Using this relationship, we can rewrite the above equation as: \[ P(d > 0) = {1 \over 2} \left( 1 + erf\left( {d \over {\sigma_d\sqrt2}} \right) \right)\]

What we have is an equation that takes data we've derived from an opinion poll and gives us a probability of a candidate winning.
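The formula translates into a few lines of Python. This is a sketch under the same assumptions as the derivation: simple random sampling, exactly two candidates, and no undecided voters.

```python
import math

def win_probability(p_a: float, n: int) -> float:
    """P(candidate A wins) given A's poll share p_a and poll sample size n."""
    sigma_a = math.sqrt(p_a * (1 - p_a) / n)   # standard error of p_a
    d = 2 * p_a - 1                            # spread, d = p_a - p_b
    sigma_d = 2 * sigma_a                      # standard error of the spread
    return 0.5 * (1 + math.erf(d / (sigma_d * math.sqrt(2))))
```

Plugging in the example below (52% from a 1,000-person poll) gives roughly a 90% chance of victory for candidate A.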

Probabilities for our example

For candidate A:

  • \(n=1000\)
  • \( p_a = {520 \over 1000} = 0.52 \)
  • \(\sigma_a = 0.016 \)
  • \(d = {{520 - 480} \over 1000} = 0.04\)
  • \(\sigma_d = 0.032\)
  • \(P(d > 0) = 90\%\)

For candidate B:

  • \(n=1000\)
  • \( p_b = {480 \over 1000} = 0.48 \)
  • \(\sigma_b = 0.016 \)
  • \(d = {{480 - 520} \over 1000} = -0.04\)
  • \(\sigma_d = 0.032\)
  • \(P(d > 0) = 10\%\)

Obviously, the two probabilities add up to 1. But note the probability for candidate A. Did you expect a number like this? A 4-percentage-point lead in the polls giving a 90% chance of victory?

Some consequences

Because the probability is based on \( erf \), you can quite quickly get to highly probable events as I'm going to show in an example. I've plotted the probability for candidate A for various leads (spreads) in the polls. Most polls nowadays tend to have about 800 or so respondents (some are more and some are a lot less), so I've taken 800 as my poll size. Obviously, if the spread is zero, the election is 50%:50%. Note how quickly the probability of victory increases as the spread increases.

What about the size of the poll, how does that change things? Let's fix the spread to 2% and vary the size of the poll from 200 to 2,000 (the usual upper and lower bounds on poll sizes). Here's how the probability varies with poll size for a spread of 2%.
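You can reproduce the poll-size effect with a quick sweep, using the erf formula derived above (again a sketch assuming pure random sampling). Holding the spread at 2% (51% vs. 49%), the probability of victory climbs steadily with poll size:

```python
import math

def win_prob(p_a: float, n: int) -> float:
    """P(candidate A wins) from the erf formula, assuming random sampling."""
    d = 2 * p_a - 1
    sigma_d = 2 * math.sqrt(p_a * (1 - p_a) / n)
    return 0.5 * (1 + math.erf(d / (sigma_d * math.sqrt(2))))

# Fixed 2% spread at the usual lower, typical, and upper poll sizes:
for n in (200, 800, 2000):
    print(n, round(win_prob(0.51, n), 2))  # prints 0.61, 0.71, 0.81
```

The same 2% lead means very different things in a 200-person poll and a 2,000-person poll, which is why the spread alone isn't enough.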

Now imagine you're a cynical and seasoned poll analyst working on candidate A's campaign. The young and excitable intern comes rushing in, shouting to everyone that A is ahead in the polls! You ask the intern two questions, and then, like the Oracle at Delphi, you predict happiness or not. What two questions do you ask?

  • What's the spread?
  • What's the size of the poll?

What's missing

There are two elephants in the room, and I've been avoiding talking about them. Can you guess what they are?

All of this analysis assumes the only source of error is random noise. In other words, that there's no systemic bias. In the real world, that's not true. Polls aren't wholly based on random sampling, and the sampling method can introduce bias. I haven't modeled it at all in this analysis. There are at least two systemic biases:

  • Pollster house effects arising from house sampling methods
  • Election effects arising from different population groups voting in different ways compared to previous elections.

Understanding and allowing for bias is key to making a successful election forecast. This is an advanced topic for another blog post.

The other missing item is more subtle. It's undecided voters. Imagine there are two elections and two opinion polls. Both polls have 1,000 respondents.

Election 1:

  • Candidate A chosen by 20%
  • Candidate B chosen by 10%
  • Undecided voters are 70%
  • Spread is 10%

Election 2:

  • Candidate A chosen by 55%
  • Candidate B chosen by 45%
  • Undecided voters are 0%
  • Spread is 10%

In both elections, the spread from the polls is 10%, so the model gives candidate A the same chance of winning in both, but this doesn't seem right. Intuitively, we should be less certain about an election with a high number of undecided voters. Modeling undecided voters is a topic for another blog post!

Reading more

The best source of election analysis I've read is in the book "Introduction to data science" and the associated edX course "Inference and modeling", both by Rafael Irizarry. The analysis in this blog post was culled from multiple books and websites, each of which only gave part of the story.
