Saturday, February 27, 2021

Simpson's paradox: a trap for the naive analyst

Simpson's paradox can mess up your business

Let's imagine you're the Chief Revenue Officer at a manufacturing company that sells tubes and cylinders. You're having trouble with European sales reps discounting, so you offer a spiff (a sales incentive): the country team that sells at the highest price gets a week-long vacation somewhere warm and sunny with free food and drink. The Italian and German sales teams are raring to go.

At the end of the quarter, you have these results [Wang]:

             Cylinders                      Tubes
Sales team   No. of sales  Average price    No. of sales  Average price
German       80            €100             20            €70
Italian      20            €120             80            €80

This looks like a clear victory for the Italians! They maintained a higher price for both cylinders and tubes! If they have a higher price for every item, then obviously, they've won. The Italians start packing their swimsuits.

Not so fast, say the Germans, let's look at the overall results.

Sales team   Average price
German       €94
Italian      €88

Despite having a lower selling price for both cylinders and tubes, the Germans have maintained a higher selling price overall!

How did this happen? It's an instance of Simpson's paradox.

Why the results reversed

Here's how this happened: the Germans sold more of the expensive cylinders and the Italians sold more of the cheaper tubes. The average price is total revenue divided by total units sold, and weighted ratios like this can behave in unintuitive ways.
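If you want to check the arithmetic, here's a minimal Python sketch using the numbers from the table above:

    # (units sold, average price in EUR) from the table above.
    sales = {
        "German":  {"cylinder": (80, 100), "tube": (20, 70)},
        "Italian": {"cylinder": (20, 120), "tube": (80, 80)},
    }

    for team, products in sales.items():
        revenue = sum(units * price for units, price in products.values())
        units = sum(units for units, _ in products.values())
        print(team, revenue / units)  # German 94.0, Italian 88.0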

Let's look at a plot of the selling prices for the Germans and Italians.

German and Italian prices

The blue circles are tubes and the orange circles are cylinders. The size of the circles represents the number of sales. The little red dot in the center of the circles is the price. 

Let's look at cylinders first. Plainly, the Italians sold them at a higher price, but cylinders are the more expensive item and the Germans sold more of them. Now look at tubes: once again, the Italians sold them at a higher price than the Germans, but tubes are cheaper than cylinders and the Italians sold more of them.

You can probably see where this is going. Because the Italians sold more of the cheaper items, their average (or pooled) price is dragged down, despite maintaining a higher price on a per-item basis. I've re-drawn the chart, but this time I've added a horizontal black line that represents the average.

The product type (cylinders or tubes) is known in statistics as a confounder because it distorts the apparent relationship between sales team and price. It's also known as a conditioning variable.

A disturbing example - does this drug work?

The sales example is simple and you can see the cause of the trouble immediately. Let's look at some data from a (pretend) clinical trial.

Imagine there's some disease that impacts men and women and that some people get better on their own without any treatment at all. Now let's imagine we have a drug that might improve patient outcomes. Here's the data [Lindley].

                    Female                              Male
                    Recovered  Not recovered  Rate      Recovered  Not recovered  Rate
Took drug           8          2              80%       12         18             40%
Did not take drug   21         9              70%       3          7              30%

Wow! The drug adds 10 percentage points to everyone's recovery rate. Surely we need to prescribe this for everyone? Let's have a look at the overall data.

Everyone
                    Recovered  Not recovered  Rate
Took drug           20         20             50%
Did not take drug   24         16             60%

What this data is saying is: the drug reduces the recovery rate by 10 percentage points.

Let me say this again. 

  • For men, the drug improves recovery by 10 percentage points.
  • For women, the drug improves recovery by 10 percentage points.
  • For everyone, the drug reduces recovery by 10 percentage points. 

If I'm a clinician and I know you have the disease: if you're a woman, I would recommend you take the drug; if you're a man, I would recommend you take the drug; but if I don't know your gender, I would advise you not to take the drug. What!!!!
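Here's the same reversal reproduced in a few lines of Python, pooling the male and female counts from the tables above:

    # (recovered, not recovered) counts from the tables above.
    drug    = {"female": (8, 2),  "male": (12, 18)}
    no_drug = {"female": (21, 9), "male": (3, 7)}

    def rate(recovered, not_recovered):
        return recovered / (recovered + not_recovered)

    for sex in ("female", "male"):
        print(sex, rate(*drug[sex]), rate(*no_drug[sex]))
    # female: 0.8 vs 0.7 -- the drug looks better
    # male:   0.4 vs 0.3 -- the drug looks better

    pooled_drug = tuple(map(sum, zip(*drug.values())))        # (20, 20)
    pooled_no_drug = tuple(map(sum, zip(*no_drug.values())))  # (24, 16)
    print(rate(*pooled_drug), rate(*pooled_no_drug))          # 0.5 vs 0.6 -- now it looks worse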

This is exactly the same math as the sales example I gave you above. The explanation is the same. The only thing different is the words I'm using and the context.

Simpson and COVID

In the United States, it's pretty well-established that black and Hispanic people have suffered disproportionately from COVID. Not only is their risk of getting COVID higher, but their health outcomes are worse too. This has been extensively covered in the press and on the TV news.

In the middle of 2020, the CDC published data that showed fatality rates by race/ethnicity. The fatality rate means the fraction of patients with COVID who die. The data showed a clear result: white people had the worst fatality rate of the racial groups they studied.

Doesn't this contradict the press stories? 

No.

There are three factors at work:

  • The fatality rate increases with age for all ethnic groups. It's much higher for older people (75+) than younger people.
  • The white population is older than the black and Hispanic populations.
  • Whites have lower fatality rates in almost all age groups.

This is exactly the same as the German and Italian sales team example I started with. As a fraction of their population, there are more old white people than old black and Hispanic people, so the fatality rates for the white population are dominated by the older age group in a way that doesn't happen for blacks and Hispanics.

In this case, the overall numbers are highly misleading and the more meaningful comparison is at the age-group level. Mathematically, we can remove the effect of different demographics to make an apples-to-apples comparison of fatality rates, and that's what the CDC has done.
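If you want to see the mechanics, here's a minimal sketch of direct age standardization: apply each group's age-specific rates to a single reference population. The rates, groups, and reference shares below are made up for illustration; they are not the CDC's numbers:

    # Hypothetical age-specific fatality rates (fraction of cases that die).
    rates = {
        "group A": {"under 65": 0.005, "65+": 0.15},
        "group B": {"under 65": 0.010, "65+": 0.20},
    }

    # Hypothetical reference population: share of cases in each age band.
    reference = {"under 65": 0.8, "65+": 0.2}

    # Apply each group's rates to the same reference population.
    for group, by_age in rates.items():
        standardized = sum(by_age[age] * share for age, share in reference.items())
        print(group, standardized)  # group A 0.034, group B 0.048 -- apples to apples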

In pictures

Wikipedia has a nice article on Simpson's paradox and I particularly like the animation that's used to accompany it, so I'm copying it here.

(Simpson's paradox animated. Image source: Wikipedia, Credit: Pace~svwiki, License: Creative Commons)

Each of the dots represents a measurement; for example, it could be price. The colors represent categories, for example, German or Italian sales teams. If we look at the results overall, the trend is negative (shown by the black dots and black line). If we look at the individual categories, the trend is positive (colors). In other words, the aggregation reverses the individual trends.

The classic example - sex discrimination at Berkeley

The Simpson's paradox example that's nearly always quoted is the Berkeley sex discrimination case [Bickel]. I'm not going to quote it here for two reasons: it's thoroughly discussed elsewhere, and the presentation of the results can be confusing. I've stuck to simpler examples to make my point.

American politics

A version of Simpson's paradox can occur in American presidential elections, and it very nicely illustrates the cause of the problem.

In 2016, Hillary Clinton won the popular vote by 48.2% to 46.1%, but Donald Trump won the electoral college by 304 votes to 227. The reason for the reversal is simple: it's the population spread among the states and the electoral college votes allocated to each state. As with the roll-up of the sales and medical data I showed you earlier, exactly how the data rolls up can reverse the result.

The question, "who won the 2016 presidential election" sounds simple, but it can have several meanings:

  • who was elected president
  • who got the most votes
  • who got the most electoral college votes

The most obvious meaning, in this case, is, "who was elected president". But when you're analyzing data, it's not always obvious what the right question really is.

The root cause of the problem

The problem occurs because we're using an imprecise language (English) to interpret mathematical results. In the sales and medical data cases, we need to define what we want. 

In the sales price example, do we mean the overall price or the price for each category? The contest was ambiguous, but to be fair to our CRO, this wasn't obvious initially. Probably, the fairest result is to take the overall price.

For the medical data case, we're probably better off taking the male and female data separately. A similar argument applies for the COVID example. The clarifying question is, what are you using the statistics for? In the drug data case, we're trying to understand the efficacy of a drug, and plainly, gender is a factor, so we should use the gendered data. In the COVID data case, if we're trying to understand the comparative impact of COVID on different races/ethnicities, we need to remove demographic differences.

If this were the 1980s, we'd be stuck. We can't use statistics alone to tell us the answer; we'd have to bring in knowledge from outside the analysis to help us [Pearl]. But this isn't the 1980s anymore, and there are techniques to show the presence of Simpson's paradox. The answer lies in using something called a directed acyclic graph, usually called a DAG. But DAGs are too complex an area for this blog post, which I'm aiming at business people.

What this means in practice

There's a very old sales joke that says, "we'll lose money on every sale but make it up in volume". It's something sales managers like to quote to their salespeople when they come asking for permission to discount beyond the rules. I laughed along too, but now I'm not so quick to laugh. Simpson's paradox has taught me to think before I speak. Things can get weird.

Interpreting large amounts of data is hard. You need training and practice to get it right, and there's a reason why seasoned data scientists are sought after. But even experienced analysts can struggle with issues like Simpson's paradox and multiple-comparison problems.

The red alert danger for businesses occurs when people who don't have the training and expertise start to interpret complex data. Let's imagine someone who didn't know about Simpson's paradox had the sales or medical data problem I've described here. Do you think they could reach the 'right' conclusion?

The bottom line is simple: you've got to know what you're doing when it comes to analysis.

References

[Bickel] Bickel, P. J., Hammel, E. A., and O'Connell, J. W. (1975). Sex bias in graduate admissions: Data from Berkeley. Science 187, 398-404.
[Lindley] Lindley, D. and Novick, M. (1981). The role of exchangeability in inference. The Annals of Statistics 9, 45-58.
[Pearl] Pearl, J. (2014). Comment: Understanding Simpson's paradox. The American Statistician 68(1), 8-13.
[Wang] Wang, B., Wu, P., Kwan, B., Tu, X. M., and Feng, C. (2018). Simpson's paradox: Examples. Shanghai Archives of Psychiatry 30(2), 139-143. doi:10.11919/j.issn.1002-0829.218026

Sunday, February 21, 2021

The amazing gamma function

It blew my mind

A long time ago, I was a pure math student sitting in a lecture theater. The lecturer derived the gamma function (\(\Gamma(x)\)) and talked about its properties. It blew my mind. I love this stuff and I want to share my enjoyment with you.

(Leonhard Euler - who discovered e and the Gamma function. Image source: Wikimedia Commons. License: Public domain)

It must be important, it has an exclamation!

Factorials are denoted by a !, for example, \(6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720\). The numbers get big very quickly, as we'll see, so the use of the ! sign seems appropriate. More generally, we can write:

\[n! = n \times (n-1) \times \dots \times 1\]
where:
\[n \in \Bbb Z^{\ge 0}\]
\(\Bbb Z^{\ge 0}\) is the non-negative integers 0, 1, 2, ...

Let's plot the function \(y(n) = n!\) so we can see how quickly it grows.


I stopped at n = 6 because the numbers got too big to show what I want to show. 
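If you'd like to reproduce a chart like this, a few lines of Python (a matplotlib sketch) will do it:

    import math
    import matplotlib.pyplot as plt

    n = list(range(7))  # 0 to 6; beyond that the y-axis swamps the early points
    plt.scatter(n, [math.factorial(i) for i in n])
    plt.xlabel("n")
    plt.ylabel("n!")
    plt.show()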

To state the obvious, \(n!\) is defined for non-negative integers only. It doesn't make sense to talk about \(-1.3!\)... or does it?

Integration is fun

Leonhard Euler is a huge figure in mathematics; the number \(e\) is named after him, as is the iconic identity \(e^{i\pi} + 1 = 0\). In my career, I've worked in a number of areas and used different forms of math, and almost everywhere I've found something Euler had a hand in. It's sad that outside of the technical world his name isn't better known.

One of the many, many things Euler did was investigate the properties of series involving \(e\). In turn, this led to the creation of the gamma function, which has a startling property related to factorials. I'm going to show you what it is, but let's start with some calculus to get us to the gamma function. 

We're going to build up a sequence of integrations; hopefully, the pattern will be obvious to you:

\[ \int_0^\infty x^0 e^{-x} dx = -e^{-x} \vert_0^\infty= 1\]
\[ \int_0^\infty x^1 e^{-x} dx = 1\]
\[ \int_0^\infty x^2 e^{-x} dx = 2\]
\[ \int_0^\infty x^3 e^{-x} dx = 6\]

Using integration by parts and induction, we can show that the general case is:

\[ \int_0^\infty x^n e^{-x} dx = n!\]

(The proof involves some calculus and some arithmetic. If I get some time, I might update this post with a full derivation, just because.)
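In the meantime, you can check the result numerically. Here's a sketch using scipy's quad routine (assuming you have scipy installed):

    import math
    from scipy.integrate import quad

    # Numerically integrate x**n * exp(-x) from 0 to infinity and compare with n!
    for n in range(6):
        value, _ = quad(lambda x: x**n * math.exp(-x), 0, math.inf)
        print(n, round(value, 6), math.factorial(n))  # the integral matches n!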

A version of this relationship became known as the gamma function, written as:

\[\Gamma(n+1) = n!\]

We have a relationship between integration and factorials. So what?

Go back and look at the integration. Where does it say in the integration that \(n\) has to be a positive integer? It's perfectly possible to evaluate \(\int_0^\infty x^{-0.356} e^{-x} dx\), for example. Can we evaluate the integral for positive real values of \(n\)? Yes, we can. What about negative numbers? The integral itself only converges for \(n > -1\), but the recurrence \(\Gamma(z+1) = z\Gamma(z)\) extends the function below that. What about complex numbers? Yes, we can handle those too.

If we redefine factorial using the gamma function, it becomes meaningful to calculate \(2.321!\) or \((-0.5)!\) or even \((1.1 + 2.2i)!\). To be clear, we now have a way of calculating factorials for real numbers and complex numbers (the negative integers excepted), so:

\[n \in \Bbb C\]

or maybe we should write

\[x! \quad \textrm{where} \quad x \in \Bbb C\]

The gamma function has a very curious property that struck me as being very cool. 

\[\Gamma \left( \frac{1}{2} \right) = \sqrt{\pi}\]

When I heard all this, my undergraduate mind was blown.

What Legendre did wrong

Euler originally defined the function as:

\[\Pi(n) = n!\]

But for various reasons, Legendre re-formatted it as:

\[\Gamma(n+1) = n!\]

Sadly, this is the form universally used now. This form is inconvenient, but like the QWERTY layout of keys on a keyboard, we're stuck with it.

What does it look like?

The chart below shows the gamma function for a range of values.  I've limited the range of the x and y values so you can see its shape around zero.

For \(n > 0\), it's now a smooth curve instead of points. Below zero, it has poles (infinities) at negative integer values. 

What use is it?

Factorials are used in probability theory and any form of math involving combinations. They're one of the bedrock ideas you need to understand to do anything useful. The gamma function is used in statistics, number theory, and quantum physics. 

One cool use of the gamma function is calculating the volume and surface area of an n-dimensional sphere:

\[V = \frac{\pi^{n/2} \, r^n}{\Gamma\left(\frac{n}{2} + 1\right)}\]
\[S = \frac{n}{r} V\]

where:

  • r is the radius
  • n is the number of dimensions

(n-dimensional spheres crop up in information theory - as you're reading this, you're using something that relies on their consequences.)
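Here's the formula as a couple of lines of Python (a sketch; math.gamma does the heavy lifting):

    import math

    def n_sphere(n, r):
        """Volume and surface area of an n-dimensional sphere of radius r."""
        volume = math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)
        surface = n * volume / r
        return volume, surface

    print(n_sphere(2, 1.0))  # (3.14159..., 6.28318...) -- a circle's area and circumference
    print(n_sphere(3, 1.0))  # (4.18879..., 12.56637...) -- the familiar 4/3*pi and 4*pi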

But frankly, I don't care about uses in the real world. It's a great function with some really cool properties, and sometimes, that's enough for me.

Programmers are mathematicians too

My high school math teacher told us our calculators would give us an error if we tried calculating the factorial of anything other than a non-negative integer. She wanted us to know why it wouldn't work. The people who built my high school calculator had a very literal definition of factorial, but it looks like the good programmers at Google are mathematicians at heart. 

Type the word 'calculator' into the Google search box and you should see something like this.

Now type in -1.5! You should get -1.32934038818. (Strictly speaking, Google parses this as \(-(1.5!)\), and \(1.5! = \Gamma(2.5) \approx 1.329\); either way, the gamma function is doing the work.) Google has implemented the factorial key using the gamma function, so it handles more than just non-negative integers. I've heard that calculators on other systems do the same thing too. This makes me unreasonably happy.
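Python's standard library is in on the joke too; math.gamma gives you the extended factorial (remember the shift: \(x! = \Gamma(x+1)\)). A quick sketch:

    import math

    def factorial(x):
        """Factorial extended to real numbers via the gamma function."""
        return math.gamma(x + 1)  # raises ValueError at the poles (negative integers)

    print(factorial(6))    # 720.0, matches the integer definition
    print(factorial(1.5))  # 1.32934..., the value Google reports for 1.5!
    print(factorial(0.5))  # 0.88622..., which is sqrt(pi)/2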

Pure math - but...

Pure math has a very odd habit of becoming essential to business. The mathematicians who developed number theory or linear algebra or calculus didn't do so to make money, they did it to understand the world. But even some very abstract math has spawned huge businesses. The most obvious example is cryptography, but wireless communications rely on a healthy dose of pure math too, as I'll show in a future post.

Monday, February 15, 2021

Management degrees - how I went from a C to an A: buzzword bingo

How to do well on a management degree

I'm having a spring clean: I'm scanning old documents and throwing away the paper copies. It's a trip down memory lane as I review old management essays and course notes. The management degree I did was part-time in the evenings, spread over several years while I held down a full-time job, so my notes built up and there's a lot to scan. Looking over it all, here's my guide to doing well on essays in a management master's degree program.

Sever Hall, Harvard
(A classroom in Sever Hall. I had several lectures in rooms just like this. Image source: Wikimedia Commons, License: Creative Commons, Author: Ario Barzan)

Why I did badly at first

I had been in the technology industry for a long time before I took management classes. I was used to coding and writing technical documentation and I'd become stuck in my ways. The thing about most technical documents is that no one reads them, and very rarely do you get feedback on your writing style. In the few years before I began the classes, I'd started to do more marketing work, and I found it challenging - for the first time, I was getting negative feedback on how I was writing, so I knew I had a problem.

My first course was accounting, which I did very well in. But of course I did well; accounting is another technical discipline. It's like coding, but with different rules and the added threat of lawsuits and jail time.

The second course I did was an HR course and we used the case study method in class. I was gung-ho for my first essay and I was convinced I was going to get a great mark for it. I got a C.

I did what every bad student does when they get a bad grade: I blamed the lecturer. Then I stopped and gave myself a talking to. I was determined to do better.

I did badly for two reasons:

  • A stilted, over-technical writing style.
  • I didn't understand what the lecturer wanted. The goal was to show that I had absorbed the terminology of HR and could appropriately apply it. The goal was not to solve the business problem. In my essay, I focused on solving the business problem and I didn't mention enough of the HR concepts we covered in class.

How I did well

The first order of business was fixing my writing style. I had a short period between essays, but fortunately, it was long enough to do some work. I did crash reading on how to write better in general and how to write better essays. Unashamedly, I went back to basics and read guides for undergraduates and even high school students.  I talked to other students online about writing. I realized I had some grammar and style issues, but I also knew I couldn't fix them all in one go, so I focused on the worst problems first. 

Next was understanding what the lecturer wanted. Once I understood that the essay was a means of checking my understanding of concepts, I had a clean way forward: buzzword bingo. Prior to beginning any essay, I made a list of all the relevant concepts we'd covered in class, and I added some that weren't covered but I'd found through reading around. My goal was to ensure that I applied every concept to the case study and make it clear I'd done so. The essays were a vehicle to show understanding of concepts.

The third step was a better essay plan. I figured out how I would apply my buzzwords to the case study and built my work into a narrative. I made sure the logical steps from one concept to another made sense, and I made sure to link ideas. Every essay has a maximum word (or page) count, so I developed a word budget for each idea, making sure the most important ideas got the most words. This also helps with a perennial student problem: spending too many words on the introduction and conclusion. The word budget idea was the biggest step forward for me; it focused my thoughts, and it always left my first drafts too long. In the editing process, I chopped down the introduction and conclusion and removed extraneous words; I also cut down on the passive voice, which is a real word hog.

My essay process

Buzzword bingo. Make a list of every concept you think is relevant to the case study, making sure to use the correct terminology. This list must cover everything mentioned in class, but it must also cover ideas that weren't mentioned; you have to go above and beyond.

Weighting buzzwords. Which concepts are more important? More important concepts get a higher word count, but you have to know what's more important.

What's the question? What precisely are the instructions for the essay? Make sure you follow the rules exactly. If necessary, make a tick list for the essay.

Word budget. You have a word count, now allocate the word count in proportion to the importance of the ideas, including the introduction and conclusion.

Link ideas. What ideas go together? If there are multiple linkages, what are the most important ones?

Essay plan. Plan the essay paragraph-by-paragraph and allocate a word budget for each paragraph.

Write the essay.

First-pass revision.  Are you under the word count? If so, you missed something. Does the written essay change your understanding of the problem? If so, re-allocate your word budget. Do you need to change the order of paragraphs or sentences for the narrative to make sense?

Rest. Leave the essay alone for a few days. You need some distance to critique it more.

Second-pass revision. Remove the passive voice as much as possible. Check for word repetition. Check the introduction and conclusion make sense and are coherent.

Rest. Leave the essay alone for a few days. You need some distance to critique it more.

Third pass revision. Have you missed any concepts? Does the essay hang together? Does it meet the instructions precisely?

Allocate plenty of time. This is a painstaking process. You can't do it at the last minute and you can't compress the timescales by doing it all in a day, you need time for reflection. You have to start work on your essay as soon as it's set. Realistically, this is at least two weeks of work.

What happened?

For the next essay, I got an A- and it went up from there. In pretty much every course I did after that, I got an A for my essays.

The degree program offered a writing module, which I took. Prior to the writing course, I read every writing book I could get my hands on, including many grammar books (most of which I didn't understand). Part of the writing course was writing an article for publication and I actually managed to get an article published in a magazine. The editor made minimal changes to my text, which was immensely satisfying. Bottom line: I fixed my writing problem.

Did my approach to essay writing help me learn? Yes, but only marginally so. It did result in a huge boost to my grades though, and that's the main thing. It taught me a lesson in humility too - just because you're an expert in one thing doesn't make you an expert in everything.

Of course, I did get my degree and I did graduate; I was on the Dean's list and I was the commencement speaker for my class. I got there partly because of a better approach to essay writing, and you can too.

Monday, February 8, 2021

Frequency hopping and the most beautiful woman in the world

Spread spectrum

Modern digital wireless systems rely on spread spectrum techniques. The story of how the most obvious of them, frequency hopping, was invented is not what you think. It involves a beautiful Hollywood actress (possibly the most beautiful ever), a music composer, and a dinner party. Let me tell you the story.

(The most beautiful girl in the world, and the inventor of modern communications.  Image source: Wikimedia Commons, License: Public Domain)

Hedy Lamarr

This woman lived an incredible life; if you get the time, read some of her life story. I'm just going to summarize it here.

Hedy was born Hedwig Eva Maria Kiesler in 1914, in Vienna. Her parents were both Jewish, which was to play a part in this story. Her father was an inventor, which was also to be important. 

She got her first film role in 1930, and her first starring role in 1932. However, her big break came in 1933 with the notorious movie Ecstasy. I've heard the movie described as soft porn and it has a number of notable cinematic firsts - even today, it's NSFW so don't look for it from your work computer. 

In 1933, Hedy married Friedrich Mandl, an arms dealer with strong connections to the Nazis and the Italian fascists. Mandl was controlling and domineering. By 1937, Hedy knew she had to escape, so she left Austria and headed for the United States via London. Of course, she headed for Hollywood.

In Hollywood, she appeared in a number of films, some very successful, others not so much. The studios labeled her 'the most beautiful girl in the world' and marketed movies based on her beauty. She also actively and successfully raised millions for the war effort.

George Antheil

George was born in Trenton, New Jersey in 1900 to German parents and grew up bilingual. As a musician, he was strongly influenced by the emerging avant-garde music coming out of Europe, in particular, 'mechanical' music. He wrote music for piano, films, and ballets.

The dinner party and the piano roll

Hedy and George met at a Hollywood dinner party. They talked about the problem of radio-controlled torpedoes. Although a good idea, the controlling radio signals could easily be intercepted and jammed, or even worse, the torpedo could be redirected. What was needed was some way of controlling a torpedo by radio that could not be jammed.

George knew about automatic piano players; Hedy knew about torpedoes from her ex-husband. Together, they came up with the idea of a radio control where the radio frequency changed very rapidly - so rapidly that anyone trying to jam the signal couldn't keep up with the frequency changes. Here's a fictitious timeline example:

  • 1.2s - transmitter transmits at 27.2 MHz, receiver receives at 27.2 MHz
  • 1.3s - transmitter transmits at 26.9 MHz, receiver receives at 26.9 MHz
  • 1.4s - transmitter transmits at 27.5 MHz, receiver receives at 27.5 MHz
  • etc.
(A piano roll for automatically playing the piano. Image source: Wikimedia Commons, License: Creative Commons, Author: Draconichiaro)

To keep the transmitter and receiver in sync, you could use the same technology that powers automatic piano players. In an automatic piano player, a perforated roll is fed through a reader, which in turn presses the appropriate key. The perforated roll is a list of which keys to press and when. 

In the torpedo case, instead of which keys to press, the piano roll could instruct the transmitter or receiver which frequency to use and when. The same piano roll would be inserted into the torpedo and controller and both roll readers would be synchronized. After the torpedo was launched, the controlling frequency would change dependent on the roll, and the transmitter and receiver would stay in sync so long as the piano roll readers stayed in sync. 

Using a mechanism like this, the controlling frequency would change, or hop, from one frequency to another, hence the name 'frequency hopping'. Frequency hopping takes up more radio spectrum than just transmitting on one frequency would, hence the more general name 'spread spectrum'.
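Modern systems swap the piano roll for a pseudo-random sequence generated from a shared seed: as long as both ends use the same seed and step through it together, they hop in unison. Here's a toy sketch of the idea (the channel frequencies and seed are made up):

    import random

    CHANNELS = [26.9, 27.2, 27.5, 27.8]  # made-up frequencies in MHz

    def hop_sequence(seed, hops):
        """The 'piano roll': a reproducible list of frequencies to hop through."""
        rng = random.Random(seed)
        return [rng.choice(CHANNELS) for _ in range(hops)]

    shared_seed = 42  # both ends agree on this before launch
    transmitter = hop_sequence(shared_seed, 5)
    receiver = hop_sequence(shared_seed, 5)
    assert transmitter == receiver  # in sync: both hop to the same frequency at each step
    print(transmitter)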

Hedy and George patented the idea and you can read their patent here.

Although Hedy and George thought of torpedoes as their application, there's no reason why you couldn't use the same idea for more secure voice communications.

What happened next

The patent sat in obscurity for years. The idea was way ahead of the technology needed to implement it, so it expired before anyone used it. Hedy and George made no money from it.

By the 1960s, the technology did exist, and it was used by the US military for both voice communications and guided munitions. Notably, they used it in the disastrous Bay of Pigs Invasion and later in Vietnam.

Moving forwards to the end of the twentieth century, the technique was used in early WiFi versions and other commercial radio standards, for example, Bluetooth.

Frequency hopping isn't the only spread spectrum technology, it's the simplest (and first) of several that are out there. Interestingly, some of them make use of pure math methods developed over a hundred years ago. In any case, spread spectrum methods are at the heart of pretty much all but the most trivial wireless communication protocols.

Hedy and George lived out their lives much as they had before.

George continued to write music and opera until his death at the age of 58.

Hedy's career had ups and downs. She had huge success in the 1940s, but by the 1950s, her star had waned considerably. She filmed her last role in 1958 and retired, spending much of the rest of her life in seclusion. She died at age 85.

When I first started to work in the radio communications industry, the Hedy Lamarr story was known, but it was considered a bit of a joke. I'm pleased that over the last few years, her contribution has been re-assessed upwards. In 2014, she was inducted into the US National Inventors Hall of Fame - it would have been nice had this been done in her lifetime, but still, better late than never.


Monday, February 1, 2021

What do Presidential approval polls really tell us?

This is a technical piece about the meaning of a type of polling; it is not a political piece for or against President Trump. I will remove any political comments.

What are presidential approval polls?

Presidential approval polls are a simple concept to grasp: do you approve or disapprove of President X? Because newspapers and TV channels can always use them for a headline or an on-air segment, they love to commission them. During President Trump's presidency, I counted 16,500 published approval polls.

But what do these polls mean and how should we interpret them? As it turns out, understanding what they're telling us is slippery. I'm going to offer you my guide for understanding what they mean.

(Image source: Wikimedia Commons. License: Public domain.)

My data comes from the ever-wonderful 538 which has a page showing the approval ratings for President Trump. Not only can you download the data from the page, but you can also compare President Trump's approval ratings with many previous presidents' approval ratings.

Example approval results

On 2020-10-29, Fox News ran an approval poll for President Trump. Of the 1,246 people surveyed:

  • 46% approved of President Trump
  • 54% disapproved of President Trump

which seems fairly conclusive that the majority disapproves. But not so fast. On the same day, Rasmussen Reports/Pulse Opinion Research also ran an approval poll, this time of 1,500 people, their results were:

  • 51% approved of President Trump
  • 48% disapproved of President Trump.

These were both fairly large surveys. How could they be so different?

Actually, it gets worse because these other surveys were taken on the same day too:

  • Gravis Marketing, 1,281 respondents, 52% approve, 47% disapprove
  • Morning Consult, 31,920 respondents, 42% approve, 53% disapprove

Let's plot out the data and see what the spread is, but as with everything with polls, this is harder than it seems.

Plotting approval and disapproval over time

Plotting out the results of approval polls seems simple, the x-axis is the day of the poll and the y-axis is the approval or disapproval percentage. But polls are typically conducted over several days and there's uncertainty in the results. 

To take a typical example, Global Marketing Research Services conducted a poll over several days (2020-10-23 to 2020-10-27). It's misleading to just plot the last day of the poll; we should plot the results over all the days the poll was conducted. 

The actual approval or disapproval number is subject to sampling error. If we assume random sampling (I'm going to come back to this later), we can work out the uncertainty in the results, more formally, we can work out a confidence interval. Here's how this works out in practice. YouGov did a poll over three days (2020-10-25 to 2020-10-27) and recorded 42% approval and 56% disapproval for 1,365 respondents. Using some math I won't explain here, we can write these results as:

  • 2020-10-25, approval 42 ± 2.6%, disapproval 56 ± 2.6%, undecided 2 ± 0.7%
  • 2020-10-26, approval 42 ± 2.6%, disapproval 56 ± 2.6%, undecided 2 ± 0.7%
  • 2020-10-27, approval 42 ± 2.6%, disapproval 56 ± 2.6%, undecided 2 ± 0.7%
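For the curious, the "math I won't explain here" is the usual normal approximation for the confidence interval of a proportion. A minimal sketch:

    import math

    def margin_of_error(p, n, z=1.96):
        """Half-width of the 95% confidence interval for a proportion p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # YouGov's 42% approve, 56% disapprove, 2% undecided from 1,365 respondents.
    for p in (0.42, 0.56, 0.02):
        print(p, round(100 * margin_of_error(p, 1365), 1))  # 2.6, 2.6, 0.7 (in percent)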

We can plot this poll result like this:

Before we get to the plot of all approval ratings, let's do one last thing. If you're plotting large amounts of data, it's helpful to set a transparency level for the points you're plotting (often called alpha). There are 16,500 polls and we'll be plotting approve, disapprove, and undecided, which is a lot of data. By setting the transparency level appropriately, the plot will have the property where the more intense the color is, the more the poll results overlap. With this addition, let's see the plot of approval, disapproval, and undecided over time.
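In matplotlib, transparency is just the alpha argument to scatter; overlapping semi-transparent points stack up into a more intense color. A sketch with dummy data standing in for the real polls:

    import random
    import matplotlib.pyplot as plt

    # Dummy data: 5,000 poll results scattered over a year.
    days = [random.randint(0, 365) for _ in range(5000)]
    approval = [random.gauss(42, 3) for _ in days]

    plt.scatter(days, approval, alpha=0.05)  # low alpha: dense regions show darker
    plt.xlabel("Day")
    plt.ylabel("Approval %")
    plt.show()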

Wow. There's quite a lot going on here. It's hard to get a sense of changes over time. I've added a trend line for approval, disapproval, and undecided so you can get a better sense of the aggregate behavior of the data.

Variation between pollsters

There's wide variation between opinion pollsters. I've picked out just two, Rasmussen Reports/Pulse Opinion Research and Morning Consult. To see the variation more clearly, I'll just show approvals for President Trump and just show these two pollsters and the average for all polls.

To state the obvious, the difference is huge and way above random sampling error. Who's right, Rasmussen Reports or Morning Consult? How can we tell?

To understand what this chart means, we have to know a little bit more about how these polls are conducted.

How might you run an approval poll?

There are two types of approval polls.

  • One-off polls. You select your sample of subjects and ask them your questions. You only do it once.
  • Tracking polls. Technically, this is also called a longitudinal study. You select your population sample and ask them questions. You then ask the same group the same questions at a later date. The idea is, you can see how opinions change over time using the same group.

Different polling organizations use different methods for population sampling. It's almost never entirely random sampling. Bear in mind, subjects can say no to being involved, and can in principle drop out any time they choose. 

It's very, very easy to introduce bias through the people you select; slight differences in selection may give big differences in results. Let's say you're trying to measure President Trump's approval. Some people will approve of everything he does while others will disapprove of everything he does. There's very little point in measuring how either of these groups approves or disapproves over time. If your sample includes a big proportion of either of these groups, you're not going to see much variation. However, are you selecting for population representation or selecting to measure change over time? 

For these reasons, the sampling error in the polls is likely to be larger than random sampling error alone and may have different characteristics.

How accurate are approval polls?

This is the big question. For polls related to voting intention, you can compare what the polls said and the election result. But there's no such moment of truth for approval polls. I might disapprove of a President, but vote for them anyway (because of party affiliations or because I hate the other candidate more), so election results are a poor indicator of success.

One measure of accuracy might be agreement among approval polls from a number of organizations, but it's possible that the other pollsters could be wrong too. There's a polling industry problem called herding which has been a big issue in UK political polls. Herding means pollsters choose methodologies similar to other pollsters to avoid being outliers, which leads to polling results from different pollsters herding together. In a couple of notorious cases in the UK, they herded together and herded wrongly. A poll's similarity to other polls does not mean it's more accurate.

What about averaging?

What about aggregating polls? Even this isn't simple. In your aggregation:

  • Do you include tracking polls or all polls?
  • Do you weight polls by their size?
  • Do you weight polls by accuracy or partisan bias?
  • Do you remove 'don't knows'?
  • If a poll took place over more than one day, do you average results over each day the poll took place?

I'm sure you could add your own factors. The bottom line is, even aggregation isn't straightforward.

What all this means

Is Rasmussen Reports more accurate than Morning Consult? I can't say. There is no external source of truth for measuring who's more correct.

Even worse, we can see changes in the Rasmussen Reports approval that don't occur in the Morning Consult data (and vice versa). Was the effect Rasmussen Reports saw real and Morning Consult missed it, or was Morning Consult correct? I can't say.

It's not just these two pollsters. The Pew Research Center claims their data, showing a decline in President Trump's approval rating at the end of his presidency, is real. This may well be correct, but what external sources can we use to say for sure?

What can I conclude for President Trump's approval rating?

Here's my takeaway story after all this. 

President Trump had an approval rating above 50% from most polling organizations when he took office. Most, but not all, polling organizations reported a drop below 50% soon after the start of his presidency. After that, his approval ratings stayed pretty flat throughout his entire presidency, except for a drop at the very end. 

The remarkable story is how steady his approval ratings were. For most presidents, there are ups and downs throughout their presidency, but not so much for President Trump. It seems that people made their minds up very quickly and didn't change their opinions much. 

Despite the large number of approval polls, the headline for most of the last four years should have been: "President Trump's approval rating: very little change".

What about President Biden?

At a guess, the polls will start positive and decline. I'm not going to get excited about any one poll. I want to see averages, and I want to see a sustained trend over time. Only then do I think the polls might tell us something worth listening to.
