Monday, November 2, 2020

The null hypothesis test

What's null hypothesis testing?

In business, as in many other fields, we have to make decisions in the face of uncertainty. Does this technology improve conversion? Is the new sales process working? Is the new machine tool improving quality? Almost never are the answers to these questions absolutely certain; there will be probabilities we have to trade off to make our decision.

(Two hypotheses battling it out for supremacy. Image source: Wikimedia Commons. Author: Pierdante Romei. License: Creative Commons.)

Null hypothesis tests are a set of techniques that enable us to reach probabilistic conclusions in an unbiased way. They provide a level playing field to decide if an effect is there or not.

Although null hypothesis tests are widely taught in statistics classes, many people who've come into data science from other disciplines aren't familiar with the core ideas. Similarly, people with business backgrounds sometimes end up evaluating A/B tests where the correct interpretation of null hypothesis tests is critical to understanding what's going on. 

I’m going to explain to you what null hypothesis testing is and some of the concepts needed to implement and understand it.

What result are you testing for?

To put it simply, a null hypothesis test is a test of whether there is an effect of a certain size present or not. The null hypothesis is that there is no effect, and the alternate hypothesis is that there is an effect. 

At its heart, the test is about probability, not certainty. We can’t say for sure whether there is an effect or not; what we can say is how likely our results would be if there were no effect. But probabilities alone aren’t enough - we have to make binary go/no-go decisions - so null hypothesis tests include the idea of probability thresholds for deciding whether an effect is there or not.

To illustrate the use of a null hypothesis test, I’m going to use a famous example, that of the lady tasting tea. 

In a research lab, there was a woman who claimed she could tell the difference between cups of tea prepared in one of two ways:

  • The milk poured into the cup first and then the tea poured in
  • The tea poured first and then the milk poured in.
(Image source: Wikimedia Commons. Artist: Ian Smith. License: Creative Commons.)

The researcher decided to do a test of her abilities by asking her to taste multiple cups of tea and state how she thought each cup had been prepared. Of course, it’s possible she could be 100% successful by chance alone. 

We can set up a null hypothesis test using these hypotheses:

  • The null hypothesis is the most conservative option. Here it’s that she can’t taste the difference. More specifically, her success rate is indistinguishable from random chance.
  • The alternative hypothesis is that she can tell the difference. More specifically, her success rate is significantly different from random chance.
Let's define some quantities:
  • \( p_T \) - the proportion of cups of tea she identified correctly
  • \( p_C \)  - the proportion of cups of tea she would be expected to get by chance alone (by guessing)
We can write the null and alternative hypotheses as:

  • \( H_0: p_T = p_C\) 
  • \( H_1: p_T \neq p_C\) 

But – the hypotheses in this form aren't enough. Will we insist she has to be correct every single time? Is there some threshold we expect her to reach before we accept her claim?

The null hypothesis is the first step in setting up a statistical test, but to make it useful, we have to go a step further and set up thresholds. To do this, we have to understand different types of errors.

Error types

To make things easy, we’ll call 'milk first' a positive and 'milk second' a negative.

For our lady tasting tea, there are four possibilities:

  • She can say ‘milk first’ when it was 'milk first' – a true positive
  • She can say ‘milk first’ when it wasn’t 'milk first' – a false positive (also known as a Type I error)
  • She can say ‘milk second’ when it was 'milk second' – a true negative
  • She can say ‘milk second’ when it wasn’t 'milk second' – a false negative (also known as a Type II error)

This is usually expressed as a table like the one below.

                                 Null hypothesis is true                 Null hypothesis is false
Fail to reject the null          True negative                           False negative
                                 Correct inference                       Type II error
                                 Probability = 1 - \( \alpha \)          Probability = \( \beta \)
Reject the null                  False positive                          True positive
                                 Type I error                            Correct inference
                                 Probability = \( \alpha \)              Probability = Power = 1 - \( \beta \)

We can assign probabilities to each of these outcomes. As you can see, there are two numbers that are important here, \(\alpha\) and \(\beta\); however, in practice, we consider \(\alpha\) and 1-\(\beta\) as the numbers of importance. \(\alpha\) is called significance, and 1-\(\beta\) is called power. We can set values for each of them prior to the test. By convention, \(\alpha\) is usually 0.05, and 1-\(\beta \geq \) 0.80.

Test results, test size, and p-values

Our lady could guess correctly by chance alone. We have to set up the test so a positive conclusion due to randomness is unlikely, hence the use of thresholds. The easiest way to do this is to set the test size correctly, i.e. set the number of cups of tea. Through some math I won't go into, we can use \(\alpha\), (1-\(\beta\)), and the effect size to set the sample size. The effect size, in this case, is her ability to detect how the cup of tea was prepared above and beyond what would be expected by chance. For example, we might run a test to see if she was 20% better than chance.
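To make this concrete, here's a minimal sketch of a test-size calculation using an exact binomial test. The 70% success rate, the one-tailed setup, and the search bound are illustrative assumptions, not values from the example above.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def cups_needed(p_chance=0.5, p_effect=0.7, alpha=0.05, power=0.8):
    """Smallest number of cups for which a one-tailed exact binomial
    test at significance alpha detects a true success rate of
    p_effect with at least the requested power."""
    for n in range(1, 500):
        # Critical score: smallest k where guessing alone scores k or
        # more with probability at most alpha.
        k = next(k for k in range(n + 2)
                 if binom_sf(k, n, p_chance) <= alpha)
        # Power: probability she clears that score if she's genuinely
        # right 70% of the time.
        if binom_sf(k, n, p_effect) >= power:
            return n

print(cups_needed())
```

Note how all three quantities - significance, power, and effect size - feed into the sample size; shrink the effect size you want to detect and the number of cups grows quickly.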

To evaluate the test, we calculate a p-value from the test results. The p-value is the probability of getting a result at least as extreme as the one we observed if the null hypothesis were true - in other words, the probability the test result could have come about by chance alone. Because this is so important, I'm going to explain it again using other words. Let's imagine the lady tasting tea was guessing. By guessing alone, she could get between 0% and 100% correct, and we know the probability of each possible score. We know it's very unlikely she'll get 100% or 0% by guesswork, but much more likely she'll get around 50%. For the score she actually got, we can work out the probability of her getting that score (or higher) through chance alone. Let's say there was a 3% chance she could have gotten her score by guessing alone. Is this proof she's not guessing?

We compare the p-value to our \( \alpha \) threshold to decide whether to reject the null hypothesis. Let’s say our p-value was 0.03 and our \( \alpha \) value was 0.05; because 0.03 < 0.05, we reject the null hypothesis. In other words, we would accept that the lady was not guessing.
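Here's a short sketch of the p-value calculation for the lady tasting tea, using a one-tailed exact binomial computation; the 10 cups and 9 correct answers are made-up numbers for illustration.

```python
from math import comb

def p_value(correct, cups, p_chance=0.5):
    """Probability of getting `correct` or more cups right out of
    `cups` by guessing alone (a one-tailed p-value)."""
    return sum(comb(cups, k) * p_chance**k * (1 - p_chance)**(cups - k)
               for k in range(correct, cups + 1))

# Hypothetical result: she identifies 9 of 10 cups correctly.
p = p_value(9, 10)
print(round(p, 4))  # → 0.0107
print(p < 0.05)     # → True, so we reject the null hypothesis
```

With 9 of 10 correct, guessing alone would produce a score this good only about 1% of the time, which clears the conventional 0.05 threshold comfortably.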

False negatives, false positives

Using \(\alpha\) and a p-value, we can work out the chance of us saying there's an effect when there is none (a false positive). But what about a false negative? We could say there's no effect when there really is one. That might be as damaging to a business as a false positive. The quantity \(\beta\) gives us the probability of a false negative. By convention, statisticians talk about the power (1-\(\beta\)) of a test which is the probability of detecting an effect of the size you think is there.

Single tail or two-tail tests

Technically, the way the null hypothesis is set up in the case of the lady tasting tea is a two-tailed test. To ‘succeed’, she has to do a lot better than chance or a lot worse. That’s appropriate in this case because we’re trying to understand whether she’s doing something other than guessing.

We could set up the test differently so that she only has to be right more often than chance suggests. This would be a one-tailed test. One-tailed tests need smaller sample sizes than two-tailed tests to detect the same effect, but they’re more limited: they can only detect an effect in one direction.

In business, we tend to do two-tailed tests rather than one-tailed tests.

Fail to reject the null or rejecting the null

Remember, we’re talking about probabilities and not certainties. Even if we gave our lady 100 cups to taste, there’s still a possibility she gets them all right due to chance alone. So we can’t say either the null or the alternate is true, all we can do is reject them at some threshold, or fail to reject them. In the case of a p-value of 0.03, a statistician wouldn’t say the alternate is true (the lady can taste the difference), but they would say ‘we reject the null hypothesis’. If the p-value was 0.1, it would be higher than the \( \alpha \) value and we would ‘fail to reject the null hypothesis’. This language is complex, but statisticians are trying to capture the idea that results are about probabilities, not certainties.

Choice of significance and power

Significance and power affect test size, so maybe we should choose them to make the test short? If you want to do a valid test, you're not free to choose any values of \(\alpha\) and (1-\(\beta\)) you like. Convention dictates that you stick to these ranges:

  • \(\alpha \leq 0.05\) - anything more than this is usually considered a junk test.
  • (1-\(\beta) \geq 0.8\) - anything less than this is not worth doing. 

The why behind these values is the subject of another blog post.

The null hypothesis test summarized

This has been a very high-level summary of what happens in a null hypothesis test. For the sake of simplicity, there are several steps I've left out and some ideas I've greatly summarized. Here's a simple summary of the steps I've discussed.

  1. Decide if the test is one-tail or two-tail.
  2. Create a null and alternate hypothesis.
  3. Set values for \(\alpha\) and (1-\(\beta\)) prior to the test.
  4. After the test, calculate a p-value.
  5. Compare the p-value to \(\alpha\) to decide whether to reject the null hypothesis.
  6. Check \(\beta\) to figure out the probability of a false negative.

I've left out topics like the z-test and the t-test and a bunch of other important ideas. 

Your takeaway should be that this process is complex and there are no shortcuts. At its heart, hypothesis testing is about deciding what's true when the data is uncertain and you need to do it without bias.


(Justice is supposed to be blind and balanced - like a null hypothesis test. Image source: Wikimedia Commons. License:  GNU Free Documentation License.)

Problems with the null hypothesis test

Mathematically, there's controversy about the fundamentals of the procedure, but frankly, the controversy is too complex to discuss here - in any case, the controversy isn't over whether the procedures work or not.

A more serious problem is baked into the approach. At its heart, null hypothesis testing is about making a binary yes/no decision based on probabilistic data. The results are never certain, but unfortunately test results are often taken as certain. For example, if we can't detect an effect in a test, it's often assumed there is no effect, but that's not true. This assumption that no detection = no effect has had tragic consequences in medical trials; there are high-profile cases where the negative side effects of a drug have been just below the threshold levels. Sadly, once the drugs were released, the negative effects became well known, with disastrous consequences - a good example being Vioxx.

You must be aware that a test failure doesn't mean there isn't an effect. It could mean there's an effect hovering just below your acceptance threshold.

Using the null hypothesis in business

This is all a bit abstract, so let's bring it back to business. What are some examples of null hypothesis tests in the business world?

A/B testing

Most of the time, we choose a two-tail test because we're interested in the possibility a change might make conversion or other metrics worse. The hypothesis test we use is usually of this form:

\(H_0 : CR_B = CR_A\)

\(H_1 : CR_B \neq CR_A\)

where CR is the conversion rate (or revenue per user, add-to-bag rate, or a similar metric, per branch).
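As a sketch of how such a test might be evaluated, here's a two-tailed, two-proportion z-test using a pooled-rate normal approximation. The traffic and conversion numbers are invented, and a real analysis would likely use a statistics library rather than hand-rolled code.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test of H0: CR_B = CR_A, using the pooled
    conversion rate for the standard error (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical A/B test: 1,000 users per branch, 100 conversions
# in branch A, 130 in branch B.
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(p < 0.05)  # True: reject the null at alpha = 0.05
```

The two-tailed p-value means we'd also have flagged the change if branch B had converted significantly worse than branch A, which is exactly why two-tailed tests suit A/B testing.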

Manufacturing defects

Typically, these tests are one-tailed because we're only interested in an improvement. Here, the test might be:

\(H_0 : DR_B = DR_A\)

\(H_1 : DR_B < DR_A\)

where DR is the defect rate.
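The one-tailed version differs only in how the p-value is taken from the normal distribution; the part counts and defect numbers below are invented for illustration.

```python
from math import sqrt, erf

def defect_rate_test(defects_a, n_a, defects_b, n_b):
    """One-tailed z-test of H1: DR_B < DR_A (the new process has a
    lower defect rate), using a pooled-rate normal approximation."""
    p_a, p_b = defects_a / n_a, defects_b / n_b
    pooled = (defects_a + defects_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-tailed p-value: probability of a z-score this low under H0.
    p_value = 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical: 5,000 parts per machine, 60 defects on the old
# machine (A), 40 on the new one (B).
z, p = defect_rate_test(60, 5000, 40, 5000)
print(p < 0.05)  # True: reject the null, B looks better
```

Note the p-value here is not doubled: all of the \(\alpha\) budget sits in one tail, which is what makes the one-tailed test more sensitive in that direction.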

Closing thoughts

If all this seems a bit complex, arbitrary, and dependent on conventions, you're not alone. As it turns out, null hypothesis techniques are based on the shotgun marriage of two separate approaches to statistics. In a future blog post, I'll delve into this some more. 

For now, here's what you should take away:
  • You should understand that you need education and training to run these kinds of tests. A good grounding in statistics is vital.
  • The results are probabilistic and not certain. A negative test doesn't mean an effect isn't there, it might just be hovering underneath the threshold of detection.

Reading more

https://www.sagepub.com/sites/default/files/upm-binaries/40007_Chapter8.pdf

Saturday, October 24, 2020

Frankenstein, vampire, and volcano: dinner at Lake Geneva

Sometimes, there are events that ripple through history and have effects hundreds of years later. I'm sure you're thinking of battles, or assassinations, or elections, or something noisy or violent. But smaller and more peaceful events can have big impacts; even something as innocuous as a single dinner party can change the world. We're approaching Halloween, so I'm going to tell you how a dinner party over two hundred years ago gave us two iconic horror legends and how brilliant people can have an impact on the world that outlives them. Let's start with who was at this dinner party.

(Frankenstein's Monster and Dracula. Image credit: Wikimedia Commons. License: public domain)

The players

Lord Byron. 'Mad, bad, and dangerous to know.' Lord Byron is regarded as one of the leading English poets and his poetry is still widely read today. To say his life was full is something of an understatement; he was at times a theater director, poet, revolutionary, political radical, and a sexual adventurer. As we'll see, some of his behavior was quite shocking, even by modern standards. By 1816, his 'relationship' escapades made England too uncomfortable for him, so he left.

Percy Bysshe Shelley. An English romantic poet, still regarded as one of the country's finest. Shelley was a political radical and didn't follow the social codes of the day. Although his behavior was considered scandalous, it wasn't at Byron's level of hedonism. By 1816, Shelley had left his wife and was in a relationship with Mary Godwin (later, Mary Shelley).

Mary Shelley. The daughter of the early feminist and radical Mary Wollstonecraft and the political philosopher William Godwin. Despite this illustrious heritage, she received little in the way of formal education. When she was 17, she fell in love with Shelley and ran off with him to Europe. In February 1815, Mary gave birth to a baby girl (Shelley's daughter), but the child died soon after birth.

Claire Clairmont. Mary Shelley's half-sister. She had a more formal education than Mary, including the ability to speak French (which she used to aid Mary and Percy Shelley's initial trip to Europe). She was pursuing an affair with Byron, which was cooling by the time of the events I'm going to describe.

John Polidori. Byron's personal physician and only 20 years old during the trip and dinner party. By all accounts, Byron treated Polidori with contempt and constantly belittled him. Polidori had large gambling debts and was secretly in the pay of Byron's publisher, who gave him a £500 advance to keep notes on what went on; they were hoping for some salacious gossip. Polidori was also interested in Mary, who was not interested in him.

The volcano and its aftermath - the 'year without a summer'

In 1815, the volcano at Mount Tambora erupted. This was the largest volcanic eruption in modern times; it was heard 2,600 km away and pumped 41 cubic km of dust 43 km high into the atmosphere.

The huge amount of atmospheric dust had a dramatic impact; it reflected sunlight which resulted in global cooling. The loss of sunlight led to people calling 1816, 'the year without a summer', but it was worse than just bad summer holidays; crop failures led to famines worldwide which in turn led to political upheaval. Atmospheric dust gave spectacular sunsets which were captured by artists at the time.

The Napoleonic wars

Europe in the early part of the 19th century had been convulsed by war. Napoleon had run hugely successful military campaigns across the continent, leaving devastation behind him. In 1815, there was a final battle for supremacy, with Napoleon on one side and a 'coalition of the willing' on the other - the famous battle of Waterloo. Napoleon was finally defeated, but at a huge cost. The people and infrastructure of continental Europe suffered the ravages of war.

Europe needed to recover, and that takes some good fortune and time. Unfortunately, the after-effects of the volcano led to crop failures and famines just at the time when good conditions were needed. 

Culturally, there was a strong feeling of the end of days; war, crop failures, wild weather, and outstanding sunsets. These were not normal times.

The Lake Geneva holiday

Our five adventurers had decided to take a European vacation together. They'd traveled across Europe and met up in Lake Geneva, Switzerland, where they rented adjoining properties. The original intent was pleasurable diversions like boating and sightseeing, but the miserable conditions meant they had to stay inside. Instead of warm, bright summer evenings, they had conditions more like winter.

Tensions were high between the five of them. Claire was still pursuing Byron, who wasn't interested except when he was. Byron was busy demeaning and belittling Polidori. Polidori was chasing Mary who wasn't interested in being chased. To add to the fun, Shelley was trying hard to impress Byron. Of course, a general dread hung heavy in the air: everyone knew about war, crop failures, and political upheaval.

As you might expect with such a group of people, the conversation was wide-ranging, varying from folklore to politics to science. Shelley and Byron were prone to flights of fancy in their discussions; once, when Byron was reading a ghost story, Shelley imagined a woman with eyes in place of nipples and ran screaming from the room. Polidori was more scientific, and with him they spoke of Galvani's experiments on frogs' legs, making them kick by applying electricity; had Galvani discovered the life force? Of course, there were also discussions of the latest political ideas and the concept of free will. All in all, a heady atmosphere.

One night, after a reading of ghost stories, they decided on a contest: who could write the best horror story? Of course, the expectation was that Byron and Shelley would win, but that's not what happened. The following morning, Byron had a so-so effort, John Polidori had something better, and the best of all came from the least experienced writer: Mary Shelley.

The birth of Frankenstein

The ideas coalesced in Mary's head: free will, animating life force, the desire for love, ghost stories, wild human behavior, and the gothic feel of central Europe. She created a scientist, Frankenstein, who ignores society's moral code to play God and create life itself. The monster he created was never named in the book, but it's telling that our sympathies are with the poor, mistreated person; a monster in appearance, but an articulate feeling being.  Shelley tells a story of the creator's neglect of his creation and its effect on the monster, of how the monster has to educate himself, and how he later comes looking for his maker to create a partner for him; the monster is looking for love. We might even say the monster wants life, liberty, and the pursuit of happiness. 

The group declared Mary the winner and encouraged her to publish her story, which she did after the Shelleys' return to England.

Mary Shelley published the novel anonymously; it became a best-seller but received mixed reviews from literary critics. Once she was known as the author, some reviewers speculated that it might have been written by Percy Shelley rather than Mary. This seems like typical nineteenth-century sexism at first, but bear in mind that Percy Shelley was a well-known writer at the time and Mary was not. It does seem likely that she had some help with editing and maybe with writing suggestions, and why not when she had world-class writing talent on tap? The consensus today is that she was indeed the author.

The birth of vampires

Polidori's story was altogether different. He imagined a vampire, but not the vampire creatures of old, which were ugly, inhuman creatures. His vampire was a man, but a man who was physically attractive, deeply manipulative, and preyed on women - and a lord as well. His role model was obviously Byron himself ('mad, bad, and dangerous to know'). This story, 'The Vampyre', is credited with creating the elements of the modern vampire legend and was one of the inspirations for Bram Stoker's 'Dracula' 70 years later.

'The Vampyre' was published without Polidori's permission and was initially credited to Byron, though both Byron and Polidori later claimed it was Polidori's work.

The aftermath

What happened after that fateful evening?

Claire Clairmont gave birth to Byron's daughter; as a single mother, she needed Byron to acknowledge and protect her child. Byron did acknowledge the child as his and took their daughter into his 'care': he had Claire hand the baby over to him. He gave his daughter into the care of nuns in Italy and ignored her for the rest of her life; he never saw her again and forbade Claire from seeing her too. The child died in Italy at the age of 5. Claire later said of Byron that he gave her a few minutes of pleasure, but a lifetime of trouble.

Byron never returned to England; he moved around Europe in search of entertainment and engagement, eventually fighting in the Greek war of independence (from the Ottoman empire). He died at age 36 from sepsis while preparing to fight for Greece.

(As an aside, Byron already had a daughter from his wife (he was still married to her during the events of 1816) - whom he also ignored. His wife wanted nothing to do with Byron's wild extravagant ways and educated their daughter in science and mathematics.  Their daughter's name was Ada Lovelace, of computing fame.)

A little after the dinner party, Byron fired Polidori, who returned to London. Polidori didn't enjoy the success he thought he deserved, and it all became too much for him. At the age of 25, he committed suicide by drinking cyanide.

Shelley continued writing poetry. In December 1816, the body of Shelley's wife was discovered floating in the Serpentine in London. Now free to marry, he married Mary just a few weeks later. At the age of 29, he went sailing on the Gulf of La Spezia, Italy and died during a storm.

After Percy Shelley's death, Mary Shelley became a professional writer to support herself and her son.  Notably, she wrote more horror fiction, including The Last Man, the first dystopian science fiction novel. She died of a brain tumor at age 53.

The echoes of history

Of course, vampires and Frankenstein's monster live on to this day. There have been numerous books, plays, comics, TV series, and movies featuring one or both of them. In a few days' time, children impersonating them will knock on my door, I'll give them candy, and I'll think of how it all started in Lake Geneva over two hundred years ago.

Monday, October 19, 2020

Stylish Pandas in the frame

The data can't be right, it's so ugly

Despite what many technical people want to believe, well-presented data is more convincing than badly presented data. Unfortunately, the default way Pandas outputs dataframes as tables is ugly. I'm going to show you how to make Pandas dataframes (tables) very pretty and hopefully more convincing.

(A very attractive panda. Image source: Wikimedia Commons. Author: Christian Mehlführer. License: Creative Commons.)

Ugly Betty

My dataset is the results of the 2019 UK general election: the number of MPs and voters per party. Here's my Pandas dataframe (I've called it parliament for some reason):

                           party  MPs     votes  MPs frac  votes frac
0                   Conservative  365  13966565  0.561538    0.452447
1                         Labour  202  10269076  0.310769    0.332667
2           Scottish Nationalist   48   1242380  0.073846    0.040247
3              Liberal Democrats   11   3696423  0.016923    0.119746
4           Democratic Unionists    8    244128  0.012308    0.007909
5                      Sinn Fein    7    181853  0.010769    0.005891
6                    Plaid Cymru    4    153265  0.006154    0.004965
7   Social Democratic and Labour    2    118737  0.003077    0.003846
8                          Green    1    835579  0.001538    0.027069
9                       Alliance    1    134115  0.001538    0.004345
10                       Speaker    1     26831  0.001538    0.000869
If we output the dataframe to HTML using parliament.to_html(), here's what we get by default. It looks amateurish. Let's make it nicer.

party MPs votes MPs frac votes frac
0 Conservative 365 13966565 0.561538 0.452447
1 Labour 202 10269076 0.310769 0.332667
2 Scottish Nationalist 48 1242380 0.073846 0.040247
3 Liberal Democrats 11 3696423 0.016923 0.119746
4 Democratic Unionists 8 244128 0.012308 0.007909
5 Sinn Fein 7 181853 0.010769 0.005891
6 Plaid Cymru 4 153265 0.006154 0.004965
7 Social Democratic and Labour 2 118737 0.003077 0.003846
8 Green 1 835579 0.001538 0.027069
9 Alliance 1 134115 0.001538 0.004345
10 Speaker 1 26831 0.001538 0.000869

Adding style

Pandas dataframes have a style property we can use to customize the appearance of the dataframe and its HTML rendering too. The style property returns a Styler object we can use to make changes to the way the data is rendered as an HTML table. I'm going to add style and show you what the rendered HTML looks like.

Precision, thousands, and hiding the index

The fraction of votes and MPs has six decimal places, which is the default for Python formatting. Let's change the fractional numbers to three decimal places, introduce thousand separators for the number of votes, and hide the index. Here's the code to do it:

parliament.style.format(
    {"MPs frac":"{:.3f}",
     "votes frac":"{:.3f}",
     "votes": "{:,}"}
    ).hide_index().render()

In this case, the style.format code takes a dict argument. The dict keys are the dataframe column names and the dict values are the formatting instructions. Most Python formatters work with this method but some don't; for example, the alignment Python formatters don't work. Here's what the rest of the code means:

  • {:.3f} rounds the floating point numbers to three decimal places
  • {:,} introduces the thousand separator
  • hide_index hides the index
  • render renders the table using HTML - it produces a string output of HTML text

The arguments to format don't have to be a dict, but using a dict makes it easier if you're changing several columns at once.

Here's the HTML output from the code above. It's a big improvement, but not quite what we want.

party MPs votes MPs frac votes frac
Conservative 365 13,966,565 0.562 0.452
Labour 202 10,269,076 0.311 0.333
Scottish Nationalist 48 1,242,380 0.074 0.040
Liberal Democrats 11 3,696,423 0.017 0.120
Democratic Unionists 8 244,128 0.012 0.008
Sinn Fein 7 181,853 0.011 0.006
Plaid Cymru 4 153,265 0.006 0.005
Social Democratic and Labour 2 118,737 0.003 0.004
Green 1 835,579 0.002 0.027
Alliance 1 134,115 0.002 0.004
Speaker 1 26,831 0.002 0.001

Column alignment and spacing

Let's right-align the columns and add a bit more spacing between columns.

(parliament.style.format(
    {"MPs frac":"{:.3f}",
     "votes frac":"{:.3f}",
     "votes": "{:,}"})
    .set_properties(**{'text-align': 'right',
                       'padding':'0 15px'})
    .hide_index().render())

The set_properties method sets the CSS properties of the HTML object, in this case, the table. 

Here's the output:


party MPs votes MPs frac votes frac
Conservative 365 13,966,565 0.562 0.452
Labour 202 10,269,076 0.311 0.333
Scottish Nationalist 48 1,242,380 0.074 0.040
Liberal Democrats 11 3,696,423 0.017 0.120
Democratic Unionists 8 244,128 0.012 0.008
Sinn Fein 7 181,853 0.011 0.006
Plaid Cymru 4 153,265 0.006 0.005
Social Democratic and Labour 2 118,737 0.003 0.004
Green 1 835,579 0.002 0.027
Alliance 1 134,115 0.002 0.004
Speaker 1 26,831 0.002 0.001

Colors

The political parties have colors, so it would be nice to show their party colors as a background to their names, meaning we should change the background colors of the party column. It might also be nice to highlight the maximum results in a light gray. Maybe we can get really clever and add a bar chart representing the number of seats won. Here's the code to do all that:

styles = [dict(selector='.col1', 
               props=[('width', '50px')])]

def colors(value):
    partymap = {'Conservative': 'lightblue',
                'Labour': 'salmon',
                'Scottish Nationalist' : 'yellow',
                'Liberal Democrats': 'orange',
                'Democratic Unionists': 'orange',
                'Sinn Fein': 'lightgreen' ,
                'Plaid Cymru': 'lightgreen',
                'Social Democratic and Labour': 'salmon',
                'Green' : 'lightgreen',
                'Alliance': 'orange',
                'Speaker': 'lightgray'}
    return """background-color: {0}""".format(
        partymap[value])

(parliament.style
           .format(
             {"MPs frac":"{:.3f}",
              "votes frac":"{:.3f}",
              "votes": "{:,}"})
           .set_properties(**{'text-align': 'right',
                              'padding':'0 15px'})
           .bar(subset=['MPs'], color='lightgray')
           .set_table_styles(styles)
           .applymap(colors,
                     subset=['party'])
           .highlight_max(color='#F8F9F9')
           .hide_index().render())

Here's what this code does:

  • bar takes the data in the column and draws a bar chart based on it. It uses the full width of the column and expands the column if necessary, hence my need to restrict the column width to get the table to fit on the Blogger page correctly.
  • The party background colors are created with the applymap method, using the colors function applied to just the party column via the subset argument.
  • Maximum values are highlighted in a very light gray using the built-in highlight_max method.
  • The set_table_styles method restricts the width of the MPs column so the page renders correctly on Blogger; it uses a CSS selector to do it, and you could use the same approach for fine-grained formatting using CSS.
  • The subset argument restricts formatting to just the specified columns.

Here's what the final results look like:

party MPs votes MPs frac votes frac
Conservative 365 13,966,565 0.562 0.452
Labour 202 10,269,076 0.311 0.333
Scottish Nationalist 48 1,242,380 0.074 0.040
Liberal Democrats 11 3,696,423 0.017 0.120
Democratic Unionists 8 244,128 0.012 0.008
Sinn Fein 7 181,853 0.011 0.006
Plaid Cymru 4 153,265 0.006 0.005
Social Democratic and Labour 2 118,737 0.003 0.004
Green 1 835,579 0.002 0.027
Alliance 1 134,115 0.002 0.004
Speaker 1 26,831 0.002 0.001

Commentary

It's nice that Pandas has this functionality, and it's nice that it's as extensive as it is, but there's a problem. The way style is implemented is inconsistent and hard to understand, for example, some but not all of the string formatters work, and there are two methods that do very similar things (set_table_styles and set_properties). In practice, it takes more time and it's harder than it needs to be to get good results. The code looks ungainly too.  It is what it is for now.

Next steps

You can do some other clever things with style, like applying heatmaps or conditional table formatting. You can really make your data output stand out, but be careful - you can go overboard! To find out more, read the Pandas dataframe style documentation.

Monday, October 12, 2020

Fundamentally wrong? Using economic data as an election predictor

What were you thinking?

Think back to the last time you voted. Why did you vote the way you did? Here are some popular reasons; how many apply to you?

  • The country's going in the wrong direction, we need something new.
  • My kind of people vote for party X, or my kind of people never vote for party Y.
  • I'm a lifelong party X voter.
  • Candidate X or party X is best suited to running the country right now.
  • Candidate Y or party Y will ruin the country.
  • Candidate X or party X is the best for defense/the economy/my children's education, and that's what's important to me right now.

(Ballot drop box. Image Source: Wikimedia Commons. Author: Paul Sableman. License: Creative Commons.)

Using fundamentals to forecast elections

In political science circles, there's been a movement to use economic data to forecast election results. The idea is that homo economicus is a rational being whose voting behavior depends on his or her economic conditions. If the economy is going well, then incumbents (or incumbent parties) are reelected; if things are going badly, then challengers are elected instead. If this assertion is true, then people will respond rationally and predictably to changing economic circumstances, and if we understand how the economy is changing, we can forecast who will win elections.

Building models based on fundamentals follows a straightforward process:

  1. Choose an economic indicator (e.g. inflation, unemployment, GDP) and see how well it forecasts past elections.
  2. Get an election wrong.
  3. Add another economic indicator so the model correctly predicts the election it got wrong.
  4. Get another election wrong.
  5. Either re-adjust the model weights or go to step 3.
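The kind of model this process produces can be sketched in a few lines. Everything here is invented for illustration (the weights come from the example later in this post, the threshold is arbitrary):

```python
# Illustrative fundamentals model: a weighted sum of economic indicators,
# with weights hand-tuned until past elections are "predicted" correctly.
weights = {"inflation": 0.7, "unemployment": 0.9}  # invented weights

def incumbent_score(indicators):
    """Higher score = worse economy = incumbent predicted to lose."""
    return sum(weights[name] * value for name, value in indicators.items())

def predict(indicators, threshold=5.0):
    """Predict 'challenger' if the weighted economic badness exceeds a threshold."""
    return "challenger" if incumbent_score(indicators) > threshold else "incumbent"

# A hypothetical election year with 3% inflation and 8% unemployment:
print(predict({"inflation": 3.0, "unemployment": 8.0}))
```

Each time the model gets an election wrong, another indicator joins the weights dictionary, which is exactly the iteration loop described above.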

These models can get very sophisticated. In the United States, some of the models include state-level data and make state-level forecasts of results.

What happens in practice

Two University of Colorado professors, Berry and Bickers, followed this approach to forecast the 2012 presidential election. They very carefully analyzed elections back to 1980 using state-level economic data. Their model was detailed and thorough, and they helpfully included various statistical metrics to guide the reader to understand the model uncertainties. Their forecast was very clear: Romney would win 330 electoral college votes, a very strong victory. As a result, they became darlings of the Republican party.

Unfortunately for them, things didn't work out that way. The actual result was 332 electoral college votes for Obama and 206 for Romney, an almost complete reversal of their forecast.

In a subsequent follow-up (much shorter than their original paper), the professors argued in essence that although the economy had performed poorly, voters didn't blame Obama for it. In other words, the state of the economy was not a useful indicator for the 2012 election, even considering state-level effects.

This kind of failure is very common for fundamentals. While Nate Silver was at the New York Times, he published a long piece on why and how these models fail. To cut to the chase, there is no evidence voters are homo economicus when it comes to voting. All kinds of factors affect how someone votes, not just economic ones. There are cultural, social class, educational, and many other factors at work.

Why these models fail - post hoc ergo propter hoc and spurious correlations

The post hoc fallacy is to assume that because X follows Y, Y must cause X. In election terms, the fundamentalists assume that an improving or declining economy leads to certain types of election results. However, as we've said, there are many factors that affect voting. Take George W. Bush's approval rating: in the aftermath of 9/11 it peaked around 88%, and he won re-election in 2004. Factors other than the economy were clearly at work.

A related phenomenon is spurious correlations, which I've blogged about before. Spurious correlations occur when two unrelated phenomena show the same trend and are correlated, but one does not cause the other. Tyler Vigen has a great website that shows many spurious correlations.

Let's imagine you're a political science researcher. You have access to large amounts of economic data and you can direct your graduate students to find more. What you can do is trawl through your data set to find economic or other indicators that correlate with election results. To build your model, you weight each factor differently; for example, inflation might have a weighting of 0.7 and unemployment 0.9. Or you could even have time-varying weights. You can then test your model against existing election results and publish your forecast for the next election cycle. This process is almost guaranteed to find spurious correlations and produce models that don't forecast very accurately.
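This trawling process is easy to simulate. In the sketch below (entirely synthetic data), one hundred pure-noise "indicators" are tested against ten past election results, and the best of them correlates strongly purely by chance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ten past elections: 1 = incumbent party won, 0 = lost (synthetic data).
results = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])

# One hundred candidate "economic indicators", all pure random noise.
indicators = rng.normal(size=(100, 10))

# Trawl for the indicator that best correlates with the results.
correlations = [abs(np.corrcoef(ind, results)[0, 1]) for ind in indicators]
best = max(correlations)

# With enough candidates, a strong correlation appears by chance alone.
print(f"best |correlation| found: {best:.2f}")
```

None of these indicators has any causal connection to the results, yet the winner of the trawl would look impressive in a published model.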

Forecasting using odd data happens elsewhere, but usually more entertainingly. Paul the Octopus had a good track record of forecasting 2010 World Cup matches and other football results; Wikipedia says he had an 85.7% success rate. How was he so successful? Probably dumb luck. Bear in mind, many animals have been used for forecasting and we only hear about the successful ones.
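That survivorship effect is easy to quantify. Assuming each pick is a 50/50 coin flip and that an 85.7% rate corresponds to 12 correct picks out of 14, one random guesser is unlikely to match Paul, but a pool of guessing animals probably contains a "Paul" (the pool size of 200 below is an invented assumption):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One random guesser matching 12-of-14 (85.7%) or better:
p_one = p_at_least(12, 14)

# Chance that at least one of 200 guessing animals does this well:
p_pool = 1 - (1 - p_one) ** 200

print(f"one animal: {p_one:.4f}, at least one of 200: {p_pool:.2f}")
```

One guesser has well under a 1% chance, but across a couple of hundred forecasting animals the odds of at least one celebrated success are better than even.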



(Paul the Octopus at work. Image source: Wikimedia Commons. License: Creative Commons.)

To put it simply, models built with economic data alone are highly susceptible to error because there is no evidence voters consider economic factors in the way that proponents of these models suggest. 

All models are wrong - some are useful

The statistician George Box is supposed to have said, "all models are wrong, some are useful". The idea is simple: the simplifications involved in model building reduce fidelity, but some models still produce useful (actionable) results. All election forecast models are just that, forecast models that may be right or wrong. The question is, how useful are they?

Let's imagine that a fundamentals model was an accurate forecaster. We would have to accept that campaigns had little or no effect on the outcome. But this is clearly at odds with reality. The polling data indicates that the 2016 US presidential election changed course in the closing weeks of the campaign. Perhaps most famously, the same thing happened in 1948. One of the key issues in the 2004 US presidential election was the 'war on terror'. This isn't an economic effect, and it's not at all clear how it could be reduced to a number.

In other words, election results depend on more than economic effects and may depend on factors that are hard to quantify.

To attempt to quantify these effects, we could turn to opinion polls. In 2004, we could have asked voters about their view of the war on terror and we could have factored that into a fundamentals model. But why not just ask them how they intend to vote?


(Paul the Octopus died and was memorialized by a statue. How many other forecasters will get statues? Image Source: Wikimedia Commons. Author: Christophe95. License: Creative Commons.)

Where I stand

I'm reluctant to throw the baby out with the bathwater. I think fundamentals may have some effect, but it's heavily moderated by other factors and what happens during the campaign. Their best use might be to give politicians some idea of which factors might matter in a campaign. But as the UK Highway Code says of the green traffic light, it doesn't mean go, it means "proceed with caution".


Wednesday, October 7, 2020

Opinion polling blog posts

Why a 'greatest hits' polling blog post?

Over the past few months, I've blogged about elections and opinion polling several times. On October 8th, 2020, I gave a talk at PyData Boston on forecasting US presidential elections, and I thought I would bring these blog posts together into one convenient place so the people at the talk could more easily find them.

(Mexican bird men dancing on a pole. I subtitled my talk on opinion polls 'poll dancing' - and I'm sure I disappointed my audience as a result. Image credit: Wikimedia Commons. License: Creative Commons. Author: Juan Felipe Rios.)

Polling

Can you believe the polls? - fake polls, leading questions, and other sins of opinion polling.

President Hillary Clinton: what the polls got wrong in 2016 and why they got it wrong - why the polls said Clinton would win and why Trump did.

Poll-axed: disastrously wrong opinion polls - a brief romp through some disastrously wrong opinion poll results.

Sampling the goods: how opinion polls are made - my experiences working for an opinion polling company as a street interviewer.

Probability theory

Who will win the election? Election victory probabilities from opinion polls - a quick derivation of a key formula and an explanation of why random sampling alone underestimates the uncertainty.

US democracy

These blog posts provided some background on US presidential elections.

The Electoral College for beginners - the post explains how the electoral college works and how it came to be.

Finding electoral fraud - the democracy data deficit - the post looks at the evidence (or the lack of it) for vote fraud and suggests a way citizen-analysts can contribute to American democracy.

Silkworm - lessons learned from a BI app in Python

Faster Python BI app development through code generation - how I generated the code for the Silkworm project and why I did it.