# How you say it is important

As a speaker, you have two instruments to influence an audience: your voice and your body. In this blog post, I'm going to talk about how you can use your voice to become a more engaging and compelling speaker.

To be successful on stage, you need to be more you, which means exaggerating who you are through your voice. It doesn't mean being manic, but it does mean being more expressive than you usually are.

Everyone knows that speaking in a monotone can put audiences to sleep, which means it's important to have meaningful variation in your voice. You can use the need for variation to your advantage: you can use your voice to grab attention (in at least two different ways), to soothe an audience, or to rile them up.

Let's look at three techniques you have at your disposal: volume, pitch, and speed.

(Margaret Thatcher was an expert in her use of voice pitch. Image source: Wikimedia Commons, Photographer: Rob Bogaerts / Anefo, License: Public domain)

# Volume

This is the most obvious method; you can vary how loudly you speak. You should do it mindfully and be aware that you can use softness and loudness for the same goal.

Obviously, speaking loudly, or very loudly, grabs an audience's attention, but it soon becomes tiring for both the audience and the speaker, and the effect wears off quickly. You should use louder speech as you might use bold text in a document - to draw attention to a point. For example, a CEO might say something like:

"...our results for 2020 were at the **lower end** of expectations; in 2021, we changed our approach and saw a **10% improvement**; in 2022, we **_must_** continue to focus on our core business."

The CEO would say the words in bold louder than the other words, and the bold-italic word louder still.

I was at a comedy club when the compere had to revive an audience after the previous comic had died on stage. He used various techniques to do it, but the most obvious one was loudness: he spoke closer to the microphone, which had the effect of amplifying his voice. At times, it was so loud it was almost painful. However, it worked; he got the audience's attention and continued his set at a more reasonable volume. If you're the first speaker on stage after lunch, you can do the same to get your audience's attention, but don't do it for too long.

You can also use a softer voice to get attention. One power move I've seen is for executives to speak softly to force people to listen more closely. Of course, this only works if the room is right and there are no background noises, but when it works, it works really well.

The ultimate in speaking softly is silence. I've used silence in my talks to powerful effect and I've seen others use it too. Mostly, it takes the form of a longer-than-usual pause in the lead-up to some crucial part of the talk. The silence works to build audience tension ahead of a reveal and serves to amplify the message. I've seen comedians use it to add power to a punchline - some comedians can get a huge laugh from weak material by effective use of silences to build anticipation. Of course, you have to use silences judiciously and not drag them out too long; extended silences can become excruciating for audiences. My suggestion is to use silences that last no longer than a count of 3 or 4.

Let's imagine you're a VP of Engineering and you want to grab your audience's attention during a talk on 2021 objectives. Here's how you might emphasize a point using a silence:

"In 2020, we had a problem that plagued several teams. It caused us to miss deadlines. It caused us additional expenses. It cost us all greater personal time and effort. The problem was..." <SILENCE FOR A COUPLE OF BEATS> "...staff retention."

Silence draws attention to your point.

Volume control is also partly dependent on your microphone technique. Very, very few speakers practice using a microphone and working with the sound team, and that's a shame. Although lapel mics are easy and very popular, a handheld mic enables you to play sound games, for example, to increase volume through moving the microphone closer to your mouth. This has the advantage of increasing volume and not distorting your voice in the way that shouting does. If you intend to use the microphone like this, for heaven's sake, practice and speak to the sound person if there is one. A sound person will immediately drop your volume if you use the microphone like this - you have to tell them what you're going to do so they don't work against you.

# Pitch (frequency)

In normal speech, you vary the pitch of your speech for different reasons:
• high pitch represents energy and excitement and high emotion
• low pitch represents calm seriousness
You also vary pitch within sentences; you use rising pitch at the end of a sentence to indicate a question.

Even business speeches have emotional content if you play them right. One company I worked for had a meaningful commitment to Corporate Social Responsibility and people spoke about their experiences in disadvantaged communities around the world. The best speakers used high pitch to give a forceful power when talking about their experiences and coupled it with lower pitch to talk about the effect it had on them. More generally, executives can use higher pitch for energetic parts of a speech, for example, talking about beating the competition or exceeding quota, and then use lower pitch for serious parts, like discussing training and staff development.

Like all abilities, your vocal range is limited, which is why certain types of speech may suit your voice better than others, but there are things you can do to extend or even change your vocal range.

There's a famous story about Margaret Thatcher and her voice training. Before she became Prime Minister, one of the criticisms she faced was that she sounded like a shrill housewife. Obviously, this was a long time ago and it's a deeply sexist comment, but even today, it's a criticism of female politicians. Margaret Thatcher dealt with it by lowering the pitch of her voice. She had voice training and performed voice exercises to speak more deeply. It worked for her - if you get time, listen to some of her speeches - they're masterclasses in the use of voice for oratory.

If you do have a higher-pitched voice, you need to be careful about your vocal range in speeches. Yes, it is deeply unfair that women are labeled shrill, but the labeling occurs. At the very least, you should be aware it will happen.

# Speed

Speed is similar to pitch:
• High speed represents energy, excitement, high emotion
• Low speed represents seriousness
In fact, speed and pitch often work together to emphasize a point.

If you need to represent urgency or energy, speak more quickly. The risk is that some of your audience might not catch every word, especially non-native speakers, but there is a way round this that can work to your advantage. Many rhetorical devices use repetition in some form or other (e.g., anaphora, epimone, epistrophe) - using one of these devices plus speed means some members of your audience can miss words but still take away the meaning. Very few audiences and speakers can maintain a high level of energy for extended periods; as a consequence, you need to use higher speed with consideration.

If you need to convey seriousness, speak more slowly. The classic case is a national leader speaking during a time of crisis; they all tend to speak more slowly and deliberately to convey gravity.

Of course, you can overdo speaking slowly. The risk is, you put your audience to sleep, so use it sparingly.

# Putting it all together

You should use these techniques like spices in cooking; use them to bring out the features of your talk but not as the main element. You want people to remember your message, not your technique. Some techniques, like silence, are very powerful and should only be used sparingly. Others, like speed, you can use to add variety and interest and to emphasize your point. To keep your audience engaged, you need variety.

To see masters of voice control at work, I suggest you listen to the speeches of Martin Luther King or Margaret Thatcher - two people with very different styles and very different politics, but both very, very effective.

There are other voice techniques that I haven't gone into here. I haven't talked about communicating emotion or using rhythm in speeches. These are more advanced topics I might write about in the future.

Like any physical skill, you need to practice to get good at using these techniques. The next time you're giving a talk, try to add volume, pitch, and speed to add variety and emphasize the important points.

# Other blog posts in the series

This post is part of a series of posts on rhetoric for managers. Here are the other posts in the series:

## Monday, August 24, 2020

### Everything stops for tea

I was told this story years ago by those who were supposedly involved. I worked for the company concerned, but I'm not sure if it makes the story truer or not. In any case, it's a fun story with a subtle moral.

There was an IT department in a big company who were installing servers in an older building that didn't have dedicated server rooms or closets. Because there was nowhere else to put it, they installed a server in an office. The server was a typical innocuous beige box.

After a few weeks, there were reports of trouble in the building. Fairly regularly, e-mail and other services would go down for five minutes at about the same time of day. The IT department investigated. They checked the server configurations, but the configurations weren't to blame. They checked the cabling, but that was just fine. They checked network cards and routers, but everything seemed to be working as expected. During the whole investigation, the network kept on going down at around 10:30am for about 5 minutes, but there seemed to be no hardware or software cause.

In desperation, the IT department posted someone to sit by the server all day to watch what happened.

At about 10:30am, a secretary filled an electric kettle with water. She walked into the office, unplugged the server, and plugged in her kettle. She made herself and her boss a nice pot of tea. When the tea was brewing, she unplugged the kettle and plugged the server back in. She then went to enjoy her break and have her cup of tea.

(Image source: Wikimedia Commons Artist: Ian Smith License: Creative Commons)

So the mystery was solved. The IT department put a notice on the plug saying not to unplug it and labeled the box as a server. The secretary found another, less convenient place to plug in her kettle, and the world moved on.

I was told this story by the IT department. In their telling, the villain of the piece was the secretary, who they thought should have known better. At the time, I accepted this and laughed with them. Now, I disagree. In my view, the villain was the IT department and the innocent party was the secretary.

No-one told the secretary that the server was important; it wasn't marked in any way. She wasn't a technical person and she had no way of knowing what the box was or what it did. Because of the age of the building, the server was in an office instead of in a server closet, so there were lots of non-technical people in the area near the server. The IT department did know what the server was, they knew that there were non-technical people around, but they chose not to mark the server or communicate to anyone what it was or how important it was to keep it plugged in.

Bottom line: don't blame people for not being psychic - it's your responsibility to communicate.

# What the pollsters got wrong

Had the US presidential polls been correct in 2016, Nate Silver and other forecasters would be anointed oracles and the polling companies would be viewed as soothsayers revealing fundamental truths about society. None of these things happened. Instead, forecasters were embarrassed and polling companies got a bloody nose. If we want to understand if things will go any differently in 2020, we have to understand what happened in 2016 and why.

# What happened in 2016

The simple narrative is: "the polls got it wrong in 2016", but this is an oversimplification. Let's look at what actually happened.

Generally speaking, there are two types of US presidential election opinion polls: national and state. National polls are conducted across the US and are intended to give a sense of national intentions; prediction-wise, they are most closely related to the national popular vote. State polls are conducted within a state and are meant to predict the election in that state.

All pollsters recognize uncertainty in their measurement and most of them quote a margin of error, which is usually a 95% confidence interval. For example, I might say candidate cat has 49% and candidate dog has 51% with a 4% margin of error. This means you should read my results as cat: 49±4% and dog: 51±4%, or more simply, that I think candidate dog will get between 47% and 55% of the vote and candidate cat between 45% and 53%. If the actual results are cat 52% and dog 48%, technically, that's within the margin of error and is a successful forecast. You can also work out a probability of a candidate winning based on opinion poll data.
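The quoted margin of error typically comes from the standard simple-random-sampling formula. Here's a minimal sketch (the sample size of 600 is my own illustrative choice, and real pollsters layer weighting and design effects on top of this):

```python
import math

# 95% margin of error for a simple random sample, assuming a proportion
# near 0.5 and ignoring weighting and design effects.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A poll of about 600 respondents gives roughly the +/-4% margin of
# error in the example above.
moe = margin_of_error(0.5, 600)
print(f"{moe:.1%}")  # prints 4.0%
```

Note that the margin shrinks only with the square root of the sample size, which is why polls rarely go much beyond a thousand or so respondents.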

The 2016 national polling was largely correct. Clinton won the popular vote with a 2.1% margin over Trump. Wikipedia has a list of 2016 national polls, and it's apparent that the polls conducted closer to the election gave better results than those conducted earlier (unsurprisingly) as I've shown in the chart below. Of course, the US does not elect presidents on the popular vote, so this point is of academic interest.
(Based on data from Wikipedia.)

The state polls are a different matter. First off, we have to understand that polls aren't conducted in every state. Wyoming is very, very Republican and as a result, few people would pay for a poll there - no newspaper is going to get a headline from "Republican leads in Wyoming". Obviously, the same thing applies for very, very Democratic states. Polls are conducted more often in hotly contested areas with plenty of electoral college votes. So how did the state polls actually do in 2016? To keep things simple, I'll look at the results from the poll aggregator Sam Wang and compare them to the actual results. The poll aggregation missed in these states:

| State | Actual result (Trump - Clinton) | Poll aggregate (Trump - Clinton) |
|---|---|---|
| Florida | 1.2% | -1.5% |
| North Carolina | 3.66% | -1% |
| Pennsylvania | 0.72% | -2.5% |
| Michigan | 0.23% | -2.5% |
| Wisconsin | 0.77% | < -5% |
Poll aggregators use different error models for calculating their aggregated margin of error, but typically it will vary from 2-3%. A few of these results are outside the margin of error, but more tellingly, they're all in the same direction. A wider analysis looking at all the state results shows the same pattern. The polls were biased in favor of Clinton, but why?
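A quick back-of-the-envelope calculation shows why the shared direction matters more than the size of the misses. Assuming, purely for the sake of argument, that unbiased poll errors are independent coin flips:

```python
# If poll errors were independent and equally likely to favor either
# candidate, the chance of all five state misses landing on the same
# side (all pro-Clinton or all pro-Trump) would be small.
p_same_side = 2 * (0.5 ** 5)
print(p_same_side)  # prints 0.0625
```

A roughly 6% chance isn't impossible, but it's suggestive of a shared, systematic bias rather than bad luck, and the wider state-level analysis bears that out.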

# Why they got it wrong

In the aftermath of the election, the American Association for Public Opinion Research created an ad-hoc commission to understand what went wrong. The AAPOR published their findings and I'm going to provide a summary here.

Quoting directly from the report, late changes in voter decisions led earlier polls to overestimate Clinton's support:
"Real change in vote preference during the final week or so of the campaign. About 13 percent of voters in Wisconsin, Florida and Pennsylvania decided on their presidential vote choice in the final week, according to the best available data. These voters broke for Trump by near 30 points in Wisconsin and by 17 points in Florida and Pennsylvania."
The polls oversampled those with college degrees and undersampled those without: "In 2016 there was a strong correlation between education and presidential vote in key states. Voters with higher education levels were more likely to support Clinton. Furthermore, recent studies are clear that people with more formal education are significantly more likely to participate in surveys than those with less education. Many polls – especially at the state level – did not adjust their weights to correct for the over-representation of college graduates in their surveys, and the result was over-estimation of support for Clinton."

The report also suggests that the "shy Trump voter" effect may have played a part.

Others also investigated the result, and a very helpful paper by Kennedy et al provides some key supporting data. Kennedy also states that voter education was a key factor, and shows charts that illustrate the connection between education and voting in 2016 and in 2012. As you might expect, education had little influence in 2012, but in 2016 it was a strong influence. In 2016, most state-level polls did not adjust for education.

Although the polls in New Hampshire called the results correctly, they predicted a much larger win for Clinton. Kennedy quotes Andrew Smith, a UNH pollster, and I'm going to repeat the quote here because it's so important: "We have not weighted by level of education in our election polling in the past and we have consistently been the most accurate poll in NH (it hasn’t made any difference and I prefer to use as few weights as possible), but we think it was a major factor this year. When we include a weight for level of education, our predictions match the final number."

Kennedy also found good evidence of a late swing to Trump that was not caught by polls conducted earlier in the campaign.

On the whole, there does seem to be agreement that two factors were important in 2016:
• Voter education. In previous elections it didn't matter; in this one, it did. State-level polls on the whole didn't control for it.
• Late swing to Trump missed by earlier polls.

# 2020 and beyond

The pollsters' business depends on making accurate forecasts, and elections are the ultimate high-profile test of the predictive power of polls. There's good evidence that at least some pollsters will correct for education in this election, but what if there's some other factor that's important, for example, housing type, or diet, or something else? How will we be able to spot bias during an election campaign? The answer is, we can't. What we can do is assume the result is a lot less certain than the pollsters, or the poll aggregators, claim.

# Commentary

In the run-up to the 2016 election, I created an opinion poll-aggregation model. My model was based on the work of Sam Wang and used election probabilities. I was disturbed by how quickly a small spread in favor of a candidate gave a very high probability of winning; the election results always seemed more uncertain to me. Textbook poll aggregation models reduced the uncertainty still further.

The margin of error quoted by pollsters is just the sampling error assuming random sampling. But sampling isn't wholly random and there may be house effects or election-specific effects that bias the results. Pollsters and others make the assumption that these effects are zero, which isn't the case. Of course, pollsters change their methodology with each election to avoid previous mistakes. The upshot is, it's almost impossible to assess the size of these non-random bias effects during an election. My feeling is, opinion poll results are a lot less certain than the quoted margin of error, and a 'real' margin of error may be much greater.
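To illustrate the point, here's a sketch of how a modest lead turns into a high win probability under a textbook normal model, and how allowing for non-random bias deflates it. The 2-point sampling error and 3-point bias allowance are my own illustrative numbers, and the normal approximation is a textbook simplification, not my actual 2016 model:

```python
import math

# Win probability under a normal model: the candidate's true lead is
# assumed normally distributed around the polled lead with std dev `se`.
def win_probability(lead, se):
    return 0.5 * (1 + math.erf(lead / (se * math.sqrt(2))))

# With only the quoted sampling error (2 points), a 3-point lead looks
# close to a sure thing.
print(f"{win_probability(3, 2):.0%}")

# Allowing for a possible systematic bias (3 points, combined in
# quadrature with the sampling error) makes the race far less settled.
se_total = math.sqrt(2**2 + 3**2)
print(f"{win_probability(3, se_total):.0%}")
```

This is exactly the effect described above: a small spread produces a near-certain forecast only if you believe the quoted margin of error is the whole story.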

The lesson for poll aggregators like me is to allow for other biases and uncertainty in our models. To his great credit, Nate Silver is ahead here as he is in so many other areas.

# Getting it really, really wrong

On occasion, election opinion polls have got it very, very wrong. I'm going to talk about some of their biggest blunders and analyze why they messed up so badly. There are lessons about methodology, hubris, and humility in forecasting.

(Image credit: secretlondon123, Source: Wikimedia Commons, License: Creative Commons)

# The Literary Digest - size isn't important

The biggest, baddest, and boldest polling debacle happened in 1936, but it still has lessons for today. The Literary Digest was a mass-circulation US magazine published from 1890-1938. In 1920, it started printing presidential opinion polls, which over the years acquired a good reputation for accuracy [Squire], so much so that they boosted the magazine's subscriptions. Unfortunately, its 1936 opinion poll sank the ship.

(Source: Wikimedia Commons. License: Public Domain)

The 1936 presidential election was fought between Franklin D. Roosevelt (Democrat), running for re-election, and his challenger Alf Landon (Republican).  The backdrop was the ongoing Great Depression and the specter of war in Europe.

The Literary Digest conducted the largest ever poll up to that time, sending surveys to 10 million people and receiving 2.3 million responses; even today, this is orders of magnitude larger than typical opinion polls. Through the Fall of 1936, they published results as their respondents returned surveys; the magazine didn't interpret or weight the surveys in any way [Squire]. After 'digesting' the responses, the Literary Digest confidently predicted that Landon would easily beat Roosevelt. Their reasoning was that the poll was so big it couldn't possibly be wrong; after all, the statistical margin of error was tiny.

Unfortunately for them, Roosevelt won handily. In fact, 'handily' is putting it mildly: he won a landslide victory (523 electoral college votes to 8).

So what went wrong? The Literary Digest sampled its own readers, people who were on lists of car owners, and people who had telephones at home. In the Great Depression, this meant their sample was not representative of the US voting population; the people they sampled were much wealthier. The poll also suffered from non-response bias; the people in favor of Landon were enthusiastic and filled in the surveys and returned them, the Roosevelt supporters less so. Unfortunately for the Literary Digest, Roosevelt's supporters weren't so lethargic on election day and turned up in force for him [Lusinchi, Squire]. No matter what the size of the Literary Digest's sample, their methodology baked in bias, so it was never going to give an accurate forecast.

Bottom line: survey size can't make up for sampling bias.

Sampling bias is an ongoing issue for pollsters. Factors that matter a great deal in one election might not matter in another, and pollsters have to estimate what will be important for voting so they know who to select. For example, having a car or a phone might not correlate with voting intention for most elections, until for one election they do correlate very strongly. The Literary Digest's sampling method was crude, but worked fine in previous elections. Unfortunately, in 1936 the flaws in their methodology made a big difference and they called the election wrong as a result. Fast-forwarding to 2016, flaws in sampling methodology led to pollsters underestimating support for Donald Trump.

Sadly, the Literary Digest never recovered from this misstep and folded two years later.

# Dewey defeats Truman - or not

The spectacular implosion of the 1936 Literary Digest poll gave impetus to the more 'scientific' polling methods of George Gallup and others [Igo]. But even these scientific polls came undone in the 1948 US presidential election.

The election was held not long after the end of World War II and was between the incumbent, Harry S. Truman (Democrat), and his main challenger, Thomas E. Dewey (Republican). At the start of the election campaign, Dewey was the favorite over the increasingly unpopular Truman. While Dewey ran a low-key campaign, Truman led a high-energy, high-intensity campaign.

The main opinion polling companies of the time (Gallup, Roper, and Crossley) firmly predicted a Dewey victory. The Crossley Poll of 15 October 1948 put Dewey ahead in 27 states [Topping]. In fact, their results were so strongly in favor of Dewey that some polling organizations stopped polling altogether before the election.

The election result? Truman won convincingly.

A few newspapers were so convinced that Dewey had won that they went to press with a Dewey victory announcement, leading to one of the most famous election pictures of all time.

(Source: Truman Library)

What went wrong?

As far as I can tell, there were two main causes of the pollsters' errors:

• Undecided voters breaking for Truman. Pollsters had assumed that undecided voters would split their votes evenly between the candidates, which wasn't true then, and probably isn't true today.
• Voters changing their minds or deciding who to vote for later in the campaign. If you stop polling late in the campaign, you're not going to pick up last-minute electoral changes.

Just as in 1936, there was a commercial fallout, for example, 30 newspapers threatened to cancel their contracts with Gallup.

As a result of this fiasco, the polling industry regrouped and moved towards random sampling and polling late into the election campaign.

# US presidential election 2016

For the general public, this is the best-known example of polls getting the result wrong. There's a lot to say about what happened in 2016, so much in fact, that I'm going to write a blog post on this topic alone. It's not the clear-cut case of wrongness it first appears to be.

For now, I'll just give you some hints: like the Literary Digest example, sampling was one of the principal causes, exacerbated by late changes in the electorate's voting decisions. White voters without college degrees voted much more heavily for Donald Trump than for Hillary Clinton, and in 2016, opinion pollsters didn't control for education, leading them to underestimate Trump's support in key states. Polling organizations are learning from this mistake and changing their methodology for 2020. Back in 2016, a significant chunk of the electorate seemed to make up their minds in the last few weeks of the campaign, and earlier polling missed this shift.

It seems the more things change, the more they remain the same.

# Anarchy in the UK?

There are several properties of the US electoral system that make it very well suited for opinion polling but other electoral systems don't have these properties. To understand why polling is harder in the UK than in the US, we have to understand the differences between a US presidential election and a UK general election.

• The US is a national two-party system, the UK is a multi-party democracy with regional parties. In some constituencies, there are three or more parties that could win.
• In the US, the president is elected and there are only two candidates, in the UK, the electorate vote for Members of Parliament (MPs) who select the prime minister. This means the candidates are different in each constituency and local factors can matter a great deal.
• There are 50 states plus Washington DC, meaning 51 geographical areas. In the UK, there are currently 650 constituencies, meaning 650 geographical areas to survey.

These factors make forecasting UK elections harder than US elections, so perhaps we should be a bit more forgiving. But before we forgive, let's have a look at some of the UK's greatest election polling misses.

## General elections

The 1992 UK general election was a complete disaster for the opinion polling companies in the UK [Smith]. Every poll in the run-up to the election forecast either a hung parliament (meaning, no single party has a majority) or a slim majority for the Labour party. Even the exit polls forecast a hung parliament. Unfortunately for the pollsters, the Conservative party won a comfortable working majority of seats. Bob Worcester, the best known UK pollster at the time, said the polls were more wrong "...than in any time since their invention 50 years ago" [Jowell].

Academics proposed several possible causes [Jowell, Smith]:
• "Shy Tories". The idea here is that people were too ashamed to admit they intended to vote Conservative, so they lied or didn't respond at all.
• Don't knows/won't say. In any poll, some people are undecided or won't reveal their preference. To predict an election, you have to model how these people will vote, or at least have a reliable way of dealing with them, and that wasn't the case in 1992 [Lynn].
• Voter turnout. Different groups of people actually turn out to vote at different proportions. The pollsters didn't handle differential turnout very well, leading them to overstate the proportion of Labour votes.
• Quota sampling methods. Polling organizations use quota-based sampling to try and get a representative sample of the population. If the sampling is biased, then the results will be biased [Lynn, Smith].

As in the US in 1948, the pollsters re-grouped, licked their wounds, and revised their methodologies.

After the disaster of 1992, surely the UK pollsters wouldn't get it wrong again? Moving forward to 2015: the pollsters got it wrong again!

In the 2015 election, the Conservative party won a working majority. This was a complex, multi-party election with strong regional effects, all of which was well-known at the time. As in 1992, the pollsters predicted a hung parliament and their subsequent humiliation was very public. Once again, there were various inquiries into what went wrong [Sturgis]. Shockingly, the "official" post-mortem once again found that sampling was the cause of the problem. The polls over-represented Labour supporters and under-represented Conservative supporters, and the techniques used by pollsters to correct for sampling issues were inadequate [Sturgis]. The official finding was backed up by independent research which further suggested pollsters had under-represented non-voters and over-estimated support for the Liberal Democrats [Melon].

Once again, the industry had a re-think.

There was another election in 2019. This time, the pollsters got it almost exactly right.

It's nice to see the polling industry getting a big win, but part of me was hoping Lord Buckethead or Count Binface would sweep to victory in 2019.

(Count Binface. Source: https://www.countbinface.com/)

## EU referendum

This was the other great electoral shock of 2016. The polls forecast a narrow 'Remain' victory, but the reality was a narrow 'Leave' win. Very little has been published on why the pollsters got it wrong, but what little there is suggests that the survey method may have been important. The industry didn't initiate a broad inquiry; instead, individual polling companies were asked to investigate their own processes.

# Other countries

There have been a series of polling failures in other countries. Here are just a few:

# Takeaways

In university classrooms around the world, students are taught probability theory and statistics. It's usually an antiseptic view of the world, and opinion poll examples are often presented as straightforward math problems, stripped of the complex realities of sampling. Unfortunately, this leaves students unprepared for the chaos and uncertainty of the real world.

Polling is a complex, messy issue. Sampling governs the success or failure of polls, but sampling is something of a dark art and it's hard to assess its accuracy during a campaign. In 2020, do you know the sampling methodologies used by the different polling companies? Do you know who's more accurate than who?

Every so often, the polling companies take a beating. They re-group, fix the issues, and survey again. They get more accurate, and after a while, the press forgets about the failures and talks in glowing terms about polling accuracy, perhaps even suggesting we do away with the expensive business of elections in favor of polls. Then another debacle happens. The reality is, the polls are both more accurate and less accurate than the press would have you believe.

As Yogi Berra didn't say, "it's tough to make predictions, especially about the future".

# References

[Igo] "'A gold mine and a tool for democracy': George Gallup, Elmo Roper, and the business of scientific polling, 1935-1955", Sarah Igo, J Hist Behav Sci. 2006;42(2):109-134

[Jowell] "The 1992 British Election: The Failure of the Polls", Roger Jowell, Barry Hedges, Peter Lynn, Graham Farrant and Anthony Heath, The Public Opinion Quarterly, Vol. 57, No. 2 (Summer, 1993), pp. 238-263

[Lusinchi] "'President' Landon and the 1936 Literary Digest Poll: Were Automobile and Telephone Owners to Blame?", Dominic Lusinchi, Social Science History 36:1 (Spring 2012)

[Lynn] "How Might Opinion Polls be Improved?: The Case for Probability Sampling", Peter Lynn and Roger Jowell, Journal of the Royal Statistical Society. Series A (Statistics in Society), Vol. 159, No. 1 (1996), pp. 21-28

[Melon] "Missing Nonvoters and Misweighted Samples: Explaining the 2015 Great British Polling Miss", Jonathan Mellon, Christopher Prosser, Public Opinion Quarterly, Volume 81, Issue 3, Fall 2017, Pages 661–687

[Smith] "Public Opinion Polls: The UK General Election, 1992", T. M. F. Smith, Journal of the Royal Statistical Society. Series A (Statistics in Society), Vol. 159, No. 3 (1996), pp. 535-545

[Squire] "Why the 1936 Literary Digest poll failed", Peverill Squire, Public Opinion Quarterly, 52, 125-133, 1988

[Sturgis] "Report of the Inquiry into the 2015 British general election opinion polls", Patrick Sturgis, Nick Baker, Mario Callegaro, Stephen Fisher, Jane Green, Will Jennings, Jouni Kuha, Ben Lauderdale, Patten Smith

[Topping] "'Never argue with the Gallup Poll': Thomas Dewey, Civil Rights and the Election of 1948", Simon Topping, Journal of American Studies, 38 (2004), 2, 179–198

# Polls to probabilities

How likely is it that your favorite candidate will win the election? If your candidate is ahead of their opponent by 5%, are they certain to win? What about 10%? Or if they're down by 2%, are they out of the race? Victory probabilities are related to how far ahead or behind a candidate is in the polls, but the relationship isn't a simple one and has some surprising consequences as we'll see.

# Opinion poll example

Let's imagine there's a hard-fought election between candidates A and B. A newspaper publishes an opinion poll a few days before the election:

• Candidate A: 52%
• Candidate B: 48%
• Sample size: 1,000

Should candidate A's supporters pop the champagne and candidate B's supporters start crying?

# The spread and standard error

Let's use some standard notation. From the theory of proportions, the mean and standard error for the proportion of respondents who chose A is:

$p_a = {n_a \over n}$ $\sigma_a = { \sqrt {{p_a(1-p_a)} \over n}}$

where $$n_a$$ is the number of respondents who chose A and $$n$$ is the total number of respondents. If the proportion of people who answered candidate B is $$p_b$$, then obviously, $$p_a + p_b = 1$$.

Election probability theory usually uses the spread, $$d$$, which is the difference between the candidates: $d = p_a - p_b = 2p_a - 1$ From statistics theory, the standard error of $$d$$ is: $\sigma_d = 2\sigma_a$ (These relationships are easy to prove but a little tedious; if anyone asks, I'll show the proof.)

Obviously, for a candidate to win, their spread, $$d$$, must be > 0.
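To make the definitions concrete, here's a minimal sketch in Python using the numbers from the example poll above (520 of 1,000 respondents chose candidate A); the variable names simply mirror the notation in the formulas.

```python
import math

# Numbers from the example poll: 520 of 1,000 respondents chose candidate A
n = 1000
n_a = 520

p_a = n_a / n                             # proportion choosing A
sigma_a = math.sqrt(p_a * (1 - p_a) / n)  # standard error of p_a
d = 2 * p_a - 1                           # spread, p_a - p_b
sigma_d = 2 * sigma_a                     # standard error of the spread

print(round(p_a, 3), round(sigma_a, 3), round(d, 3), round(sigma_d, 3))
# -> 0.52 0.016 0.04 0.032
```

These are exactly the numbers we'll need later to compute a victory probability.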

# Everything is normal

From the central limit theorem (CLT), we know $$p_a$$ and $$p_b$$ are normally distributed, and also from the CLT, we know $$d$$ is normally distributed. The next step towards a probability is to look at the normal distribution of candidate A's spread. The chart below shows the normal distribution with mean $$d$$ and standard error $$\sigma_d$$.

As with most things with the normal distribution, it's easier if we transform everything to the standard normal using the transformation: $z = {(x - d) \over \sigma_d}$ The chart below is the standard normal representation of the same data.

The standard normal form of this distribution is a probability density function. We want the probability that $$d>0$$ which is the light green shaded area, so it's time to turn to the cumulative distribution function (CDF), and its complement, the complementary cumulative distribution function (CCDF).

# CDF and CCDF

The CDF gives us the probability that we will get a result less than or equal to some value I'll label $$z_c$$. We can write this as: $P(z \leq z_c) = CDF(z_c) = \phi(z_c)$ The CCDF is defined so that: $1 = P(z \leq z_c) + P(z > z_c)= CDF(z_c) + CCDF(z_c) = \phi(z_c) + \phi_c(z_c)$ Which is a long-winded way of saying the CCDF is defined as:  $CCDF(z_c) = P(z > z_c) = \phi_c(z_c)$

The CDF is the integral of the PDF, and from standard textbooks: $\phi(z_c) = {1 \over 2} \left( 1 + erf\left( {z_c \over \sqrt2} \right) \right)$ We want the CCDF,  $$P(z > z_c)$$, which is simply 1 - CDF.

Our critical value occurs when the spread is zero. The transformation to the standard normal in this case is: $z_c = {(x - d) \over \sigma_d} = {-d \over \sigma_d}$ We can write the CCDF as: $\phi_c(z_c) = 1 - \phi(z_c) = 1- {1 \over 2} \left( 1 + erf\left( {z_c \over \sqrt2} \right) \right)$ $= 1 - {1 \over 2} \left( 1 + erf\left( {-d \over {\sigma_d\sqrt2}} \right) \right)$ We can easily show that: $erf(x) = -erf(-x)$ Using this relationship, we can rewrite the above equation as: $P(d > 0) = {1 \over 2} \left( 1 + erf\left( {d \over {\sigma_d\sqrt2}} \right) \right)$

What we have is an equation that takes data we've derived from an opinion poll and gives us a probability of a candidate winning.
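The final equation translates directly into code. Here's a minimal sketch using only the Python standard library; the function name `win_probability` is my own, not from any polling package.

```python
import math

def win_probability(d, sigma_d):
    """P(spread > 0): the probability the leading candidate wins."""
    return 0.5 * (1 + math.erf(d / (sigma_d * math.sqrt(2))))

# The example poll: p_a = 0.52, n = 1000, so d = 0.04
sigma_d = 2 * math.sqrt(0.52 * 0.48 / 1000)
print(round(win_probability(0.04, sigma_d), 2))  # -> 0.9
```

A zero spread gives exactly 0.5, as you'd expect for a dead heat.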

# Probabilities for our example

For candidate A:

• $$n=1000$$
• $$p_a = {520 \over 1000} = 0.52$$
• $$\sigma_a = 0.016$$
• $$d = {{520 - 480} \over 1000} = 0.04$$
• $$\sigma_d = 0.032$$
• $$P(d > 0) = 90\%$$

For candidate B:

• $$n=1000$$
• $$p_b = {480 \over 1000} = 0.48$$
• $$\sigma_b = 0.016$$
• $$d = {{480 - 520} \over 1000} = -0.04$$
• $$\sigma_d = 0.032$$
• $$P(d > 0) = 10\%$$

Obviously, the two probabilities add up to 1. But note the probability for candidate A. Did you expect a number like this? A 4-percentage-point lead in the polls giving a 90% chance of victory?

# Some consequences

Because the probability is based on $$erf$$, you can quite quickly get to highly probable events, as I'm going to show in an example. I've plotted the probability for candidate A for various leads (spreads) in the polls. Most polls nowadays have around 800 respondents (some have more and some a lot less), so I've taken 800 as my poll size. Obviously, if the spread is zero, each candidate has a 50% chance of victory. Note how quickly the probability of victory increases as the spread increases.

What about the size of the poll, how does that change things? Let's fix the spread to 2% and vary the size of the poll from 200 to 2,000 (the usual upper and lower bounds on poll sizes). Here's how the probability varies with poll size for a spread of 2%.
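As a rough check of the poll-size effect, a sketch like the following sweeps the sample size at a fixed 2% spread (so $$p_a = 0.51$$); the function and its name are my own illustration.

```python
import math

def win_probability_for_poll(p_a, n):
    # Standard error of the spread d = 2*p_a - 1 for a poll of size n
    sigma_d = 2 * math.sqrt(p_a * (1 - p_a) / n)
    d = 2 * p_a - 1
    return 0.5 * (1 + math.erf(d / (sigma_d * math.sqrt(2))))

# A 2% spread means p_a = 0.51; sweep typical poll sizes
for n in range(200, 2001, 600):
    print(n, round(win_probability_for_poll(0.51, n), 3))
```

Even with the spread held constant, the probability of victory climbs steadily as the poll gets bigger.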

Now imagine you're a cynical and seasoned poll analyst working on candidate A's campaign. The young and excitable intern comes rushing in, shouting to everyone that A is ahead in the polls! You ask the intern two questions, and then, like the Oracle at Delphi, you predict happiness or not. What two questions do you ask?

• What's the size of the poll?
• What's the spread?

# What's missing

There are two elephants in the room, and I've been avoiding talking about them. Can you guess what they are?

All of this analysis assumes the only source of error is random noise. In other words, that there's no systematic bias. In the real world, that's not true. Polls aren't wholly based on random sampling, and the sampling method can introduce bias. I haven't modeled it at all in this analysis. There are at least two systematic biases:

• Pollster house effects arising from house sampling methods
• Election effects arising from different population groups voting in different ways compared to previous elections.

Understanding and allowing for bias is key to making a successful election forecast. This is an advanced topic for another blog post.

The other missing item is more subtle. It's undecided voters. Imagine there are two elections and two opinion polls. Both polls have 1,000 respondents.

Election 1:

• Candidate A chosen by 20%
• Candidate B chosen by 10%
• Undecided voters: 70%

Election 2:

• Candidate A chosen by 55%
• Candidate B chosen by 45%
• Undecided voters: 0%

In both elections, the spread from the polls is 10%, so candidate A has the same computed chance of winning in each, but this doesn't seem right. Intuitively, we should be less certain about an election with a high number of undecided voters. Modeling undecided voters is a topic for another blog post!

The best source of election analysis I've read is in the book "Introduction to Data Science" and the associated edX course "Inference and Modeling", both by Rafael Irizarry. The analysis in this blog post was culled from multiple books and websites, each of which only gave part of the story.

# Serial numbers and losing business

Here's a story about how something innocuous and low-level like serial numbers can damage your reputation and lose you business. I have advice on how to avoid it too!

(Serial numbers can give away more than you think. Image source: Wikimedia Commons. License: Public Domain.)

# Numbered by design

Years ago, I worked for a specialty manufacturing company; its products were high-precision, low-volume, and expensive. The industry was cut-throat competitive, and commentary in the press was that not every manufacturer would survive; as a consequence, customer confidence was critical.

An overseas customer team came to us to design a specialty item. The company spent a week training them and helping them design what they wanted. Of course, the design was all on a CAD system with some templated and automated features. That's where the trouble started.

One of the overseas engineers spotted that a customer-based serial number was automatically included in the design. Unfortunately, the serial number was 16, implying that the overseas team was only the 16th customer (which was true). This immediately set off their alarm bells - a company with only 16 customers was probably not going to survive the coming industry shake-out. The executive team had to smooth things over, which included lying about the serial numbers. As soon as the overseas team left, the company changed their system to start counting serial numbers from some high, but believable number (something like 857).

Here's the point: customers can infer a surprising amount from your serial numbers, especially your volume of business.

# Invoices

Years later, I was in a position where I was approving vendor invoices. Some of my vendors didn't realize what serial numbers could reveal, and I ended up gaining insight into their financial state. Here are the rules I used to figure out what was going on financially, which was very helpful when it came to negotiating contract renewals.

• If the invoice is unnumbered, the vendor is very small and they're likely to have only a handful of customers. All accounting systems offer invoice generation and they all number/identify individual invoices. If the invoice doesn't have a serial number, the vendor's business is probably too small to warrant buying an accounting system, which means a very small number of customers.
• Naive vendors will start invoice numbering from 1, or from a number like 1,000. You can infer size if they do this.
• Many accounting systems increment invoice numbers by 1 by default. If you're receiving regular invoices from a vendor, you can use this to infer their size too. If this month's invoice is numbered 123456 and next month's is 123466, the vendor probably issued 10 invoices in the month, suggesting roughly 10 active customers. Track this for a while and you can spot trends in a vendor's customer base; for example, if invoices first increment by 100 a month and later by 110, the vendor may have added 10 customers.

The accounting tool suppliers are wise to this, and many tools offer options for invoice numbering that stop this kind of analysis (e.g. starting invoices from a random number, random invoice increments, etc.). But not all vendors use these features and serial number analysis works surprisingly often.
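If you want to play with the idea, the inference is nothing more than differencing consecutive months' invoice numbers; the numbers below are invented for illustration.

```python
# Invented monthly invoice numbers from one vendor, one invoice per month
monthly_invoice_numbers = [123456, 123466, 123477, 123489]

# Month-to-month jumps approximate the vendor's monthly invoice volume
volumes = [b - a for a, b in zip(monthly_invoice_numbers,
                                 monthly_invoice_numbers[1:])]
print(volumes)  # -> [10, 11, 12]: roughly 10-12 invoices a month, and growing
```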

(Destroyed German Tank. Image source: Wikimedia Commons. License: Public Domain)

# The German tank problem

Serial number analysis has been used in wartime too. In World War II, the Allied powers wanted to understand the capacity of Nazi industry to build tanks. Fortunately, German tanks were given consecutive serial numbers (a simplification, but mostly true). Allied troops were given the job of recording the serial numbers of captured or destroyed tanks, which they reported back. Statisticians were able to infer changes in Nazi tank production through serial number analysis, and after the war, their estimates were found to be mostly correct. This is known as the German tank problem, and you can read a lot more about it online.
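For the curious, the classic frequentist answer to this problem is simple to state: with $$k$$ observed serial numbers and sample maximum $$m$$, the minimum-variance unbiased estimate of the total is $$m + m/k - 1$$. Here's a minimal sketch; the serial numbers are invented for illustration.

```python
def estimate_total(serials):
    """Minimum-variance unbiased estimate of the highest serial number:
    m + m/k - 1, where m is the sample maximum and k the sample size."""
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

# Invented example: serial numbers from five captured tanks
print(estimate_total([19, 40, 42, 60, 89]))  # -> 105.8
```

The intuition: the largest observed serial number underestimates the total, and the average gap between observations tells you by roughly how much.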

# Simple things say a lot

The bottom line is simple: serial numbers can give away more about your business than you think. They can tell your customers how big your customer base is, and whether it's expanding or contracting; crucial information when it comes to renegotiating contracts. Pay attention to your serial numbers and invoices!

# How opinion polls work on the ground

I worked as a street interviewer for an opinion polling organization, so I know first-hand how opinion polls are made and executed. In this blog post, I'm going to explain how opinion polls are run on the ground, why polls can go wrong, and how difficult it is to run a valid poll. I'm also going to tell you why everything you learned about polling from statistics textbooks is wrong.

(Image Credit: Wikimedia Commons, License: Public Domain)

# Random sampling is impossible

In my experience, this is something that's almost never mentioned in statistics textbooks but is a huge issue in polling. If they talk about sampling at all, textbooks assume random sampling, but that's not what happens.

Random sampling sounds wonderful in theory, but in practice, it can be very hard; people aren't beads in an urn. How do you randomly select people on the street or on the phone - what's the selection mechanism? How do you guard against bias? Let me give you some real examples.

Imagine you're a street interviewer. Where do you stand to take your random sample? If you take your sample outside the library, you'll get a biased sample. If you take it outside the factory gates, or outside a school, or outside a large office complex, or outside a playground, you'll get another set of biases. What about time of day? The people out on the streets at 7am are different from the people at 10am and different from the people at 11pm.

Similar logic applies to phone polls. If you call landlines only, you'll get one set of biases. If you call people during working hours, your sample will be biased; is the mechanic fixing a car going to put down their power tool to talk to you? But calling outside of office hours means you might not get shift workers or parents putting their kids to bed. The list goes on.

You might be tempted to say, do all the things: sample at 10am, 3pm, and 11pm; sample outside the library, factory, and school; call on landlines and mobile phones, and so on, but what about the cost? How can you keep opinion polls affordable? How do you balance calls at 10am with calls at 3pm?

Because there are very subtle biases in "random" samples, most of the time, polling organizations don't do wholly random sampling.

# Sampling and quotas

If you can't get a random sample, you'd like your sample to be representative of the population. Here, representative means that it will behave in the same way as the population for the topics you're interested in, for example, voting in the same way or buying butter in the same way. The most obvious way of sampling is by demographics: age, gender, and so on.

Let's say you were conducting a poll in a town to find out residents' views on a tax increase. You might find out the age and gender demographics of the town and sample people in a representative way so that the demographics of your sample match the demographics of the town. In other words, the proportion of men and women in your sample matches that of the town, the age distribution matches that of the town, and so on.

(US demographics. Image credit: Wikimedia Commons. License: Public domain)

In practice, polling organizations use a number of sampling factors depending on the survey. They might sample by:

• Gender
• Age
• Ethnicity
• Income
• Social class or employment category
• Education

but more likely, by some combination of them.

In practice, interviewers may be given a sheet outlining the people they should interview, for example, so many women aged 45-50, so many people with degrees, so many people earning over \$100,000, and so on. This is often called a quota. Phone interviews might be conducted on a pre-selected list of numbers, with guidance on how many times to call back, etc.

Some groups of people can be very hard to reach, and of course, not everyone answers questions. When it comes to analysis time, the results are weighted to correct bias.  For example, if the survey could only reach 75% of its target for men aged 20-25, the results for men in this category might be weighted by 4/3.
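The weighting arithmetic is just the ratio of target to achieved interviews in each quota cell. A tiny sketch, with invented target numbers, showing where the 4/3 comes from:

```python
def cell_weight(target, reached):
    """Weight applied to a quota cell's results to correct undersampling."""
    return target / reached

# Invented example: targeted 100 interviews with men aged 20-25, completed 75
print(cell_weight(100, 75))  # -> 1.3333333333333333, the 4/3 from the text
```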

# Who do you talk to?

Let's imagine you're a street interviewer: you have your quota to fulfill and you're interviewing people on the street. Who do you talk to? Let me give you a real example from my polling days. I needed a man aged 20-25 for my quota. On the street, I saw what looked like a typical, innocuous student, but I also saw an aggressive-looking skinhead in full skinhead clothing and boots. Who would you choose to interview?

(Image credit: XxxBaloooxxx via Wikimedia Commons. License: Creative Commons.)

Most people would choose the innocuous student, but that's introducing bias. You can imagine multiple interviewers making similar decisions resulting in a heavily biased sample. To counter this problem, we were given guidance on who to select, for example, we were told to sample every seventh person or to take the first person who met our quota regardless of their appearance. This at least meant we were supposed to ask the skinhead, but of course, whether he chose to reply or not is another matter.

The rules sometimes led to absurdity. I did a survey where I was supposed to ask every 10th person who passed by. One man volunteered, but I said no because he was the 5th person. He hung around so long, eventually he became the 10th person to pass me by. Should I have interviewed him? He met the rules and he met my sampling quota.

I came across a woman who was exactly what I needed for my quota. She was a care worker who had been on a day trip with severely mentally handicapped children and was in the process of moving them from the bus to the care home. Would you take her time to interview her? What about the young parent holding his child when I knocked on the door? The apartment was clearly used for recent drug-taking. Would you interview him?

As you might expect, interviewers interpreted the rules more flexibly as the deadline approached and as it got later in the day. I once interviewed a very old man whose wife answered all the questions for him. This is against the rules, but he agreed with her answers, it was getting late, and I needed his gender/age group/employment status for my quota.

The company sent out supervisors to check our work on the streets, but of course, supervisors weren't there all the time, and they tended to vanish after 5pm anyway.

The point is, when it comes to it, there's no such thing as random sampling. Even with quotas and other guided selection methods, there are a thousand ways for bias to creep into sampling and the biases can be subtle. The sampling methodology one company uses will be different from another company's, which means their biases will not be the same.

# What does the question mean?

One of the biggest lessons I learned was the importance of clear and unambiguous questions, and the unfortunate creativity of the public. All of the surveys I worked on had clearly worded questions, and to me, they always seemed unambiguous. But once you hit the streets, it's a different world. I've had people answer questions with the most astonishing levels of interpretation and creativity - regrettably, their interpretations were almost never what the survey wanted.

What surprised me was how willing people were to answer difficult questions about salary and other topics. If the question is worded well (and I know all the techniques now!), you can get strangers to tell you all kinds of things. In almost all cases, I got people to tell me their age, and when required, I got salary levels from almost everyone.

A well-worded question led to a revelation that shocked me and shook me out of my complacency.  A candidate had unexpectedly just lost an election in the East End of London and the polling organization I worked for had been contracted to find out why. To help people answer one of the questions, I had a card with a list of reasons why the candidate lost, including the option: "The candidate was not suitable for the area." A lot of people chose that as their reason. I was naive and didn't know what it meant, but at the end of the day, I interviewed a white man in pseudo-skinhead clothes, who told me exactly what it meant. He selected it as his answer and added: "She was black, weren't she?".

The question setters weren't naive. They knew that people would hesitate before admitting racism was the cause, but by carefully wording the question and having people choose from options, they provided a socially acceptable way for people to answer the question.

Question setting requires real skill and thought.

(Oddly, there are very few technical resources on wording questions well. The best I've found is "The Art of Asking Questions" by Stanley Le Baron Payne, but the book has been out of print for a long time.)

# Order, order

Question order isn't accidental either, you can bias a survey by the order you ask questions. Of course, you have to avoid leading questions. The textbook example is survey questions on gun control. Let's imagine there were two surveys with these questions:

Survey 1:

• Do you think people should be able to protect their families?
• Do you believe in gun control?

Survey 2:

• Are you concerned about the number of weapons in society?
• Do you think all gun owners secure their weapons?
• Do you believe in gun control?

What answers do you think you might get?

As well as avoiding bias, question order is important to build trust, especially if the topic is a sensitive one. The political survey I did in the East End of London was very carefully constructed to build the respondent's trust to get to the key 'why' question. This was necessary in other surveys too. I did a survey on police recruitment, but as I'm sure you're aware, some people are very suspicious of the police. Once again, the survey was constructed so the questions that revealed it was about police recruitment came later on after the interviewer (me!) had built some trust with the respondent.

# How long is the survey?

This is my favorite story from my polling days. I was doing a survey on bus transport in London, interviewing people waiting for a bus. The goal of the survey was to find out where people were going so London could plan new or changed bus routes. For obvious reasons, the set of questions was shorter than usual, but in practice, not short enough; a big fraction of my interviews were cut short because the bus turned up! In several cases, I was asking questions as people were getting on the bus, and in a few cases, we had a shouted back-and-forth to finish the survey before the bus pulled away out of earshot.

(Image credit: David McKay via Wikimedia Commons. License: Creative Commons)

To avoid exactly this sort of problem, most polling organizations use pilot surveys. These are test surveys done on a handful of people to debug the survey. In this case, the pilot should have uncovered the fact that the survey was too long, but regrettably, it didn't.

(Some time later, I designed and executed a survey in Boston. I did a pilot survey and found that some of my questions were confusing and I could shorten the survey by using a freeform question rather than asking for people to choose from a list. In any survey of more than a handful of respondents, I strongly recommend running a pilot - especially if you don't have a background in polling.)

The general lesson for any survey is, keep it as short as possible and understand the circumstances people will be in when you're asking them questions.

# What it all means - advice for running surveys

Surveys are hard. It's hard to sample right, it's hard to write questions well, and it's hard to order questions to avoid bias.

Over the years, I've sat in meetings where someone has enthusiastically suggested a survey. The survey could be an HR survey of employees or a marketing survey of customers. Usually, the level of enthusiasm is inversely related to survey experience. The most enthusiastic people are often very resistant to advice about question phrasing and order, and most resistant of all to the idea of a pilot survey. I've seen a lot of enthusiastic people come to grief because they didn't listen.