Tuesday, March 23, 2021

Know your rights: assignment of rights

Why assignment of rights matters

Let's imagine you're running a company that sells products or services to other companies (B2B). A big company wants to acquire you. They do their due diligence on your contracts to make sure they know what they're buying. They want all your contracts as part of the deal. What could possibly go wrong?

What could go very wrong is your contract terms, especially something called assignment of rights. In this blog post, I'm going to tell you what it is and what you should consider doing.



Obligatory disclosure

I am not a lawyer. Don't take legal advice from me. The goal of this blog entry is to inform you of a contractual term that's important for your business. Go and speak to a lawyer to find out more.

Everything I'm writing about assumes common law. Common law countries derive their legal system from the UK; they include the US, Australia, Ireland, Canada, New Zealand, and others. If you're not in a common law country, this applies to you only insofar as you do business with common law countries.

What is assignment of rights?

I'm going to simplify some things here. Let's say you're Company X and you're selling a product or service to Company Y. Who provides the service and who receives the money? In most cases, Company X provides goods or services to Y and gets money from Y in return. If something goes wrong, X and Y can sue each other. This is all very simple, and it's the basis of most contracts.

Let's look at two exceptions to this pattern:

  • Company X sub-contracts contract performance to another company, Company A. Company A could be a subsidiary of X or it could be an outsourcing company with no ownership relationship between X and A.
  • Company X is subsequently bought by Company B.

Some businesses have rules about who performs contract work; they won't allow outsourcing. In the contracts they write, they create a contract section, usually headed "Assignment of rights". This section says words to the effect of 'you can't assign the performance of this contract to another entity'. What this means is, Company X has to perform the contract, not some other legal entity.

If Company X is bought by Company B, in most cases, things are OK, but there can be exceptions that can badly hurt Company X. 

Most contracts have a section called something like "assignment of rights" that lays out the rules for who does the work and what happens in the case of a takeover.

What could go wrong?

Let's imagine the contract between X and Y states there can be no assignment of rights. X has to perform the contract.

Company X has a restructuring and wants to sell off a division to another company. Oops! It can sell off the division, but the contracts can't go with it. The new owners of the division will have to re-negotiate contracts, which could be disastrous. Customers now have the upper hand in any negotiation and can just say no. I can see some customers getting a nice discount to agree to the change.

What happens if Company X is bought? This is a change of control and could well invalidate the assignment of rights clause, depending on exactly how it's written. Customers could be within their contractual rights to terminate the contract because of a change of control. In the subsequent negotiation, they have the upper hand and could well demand a discount.

Here's another wrinkle. What if Company X is bought by a competitor to one of its customers? It's in the customers' interest to stop this from happening, so they should forbid it in the contract. In practice, this might mean specific language in the contract allowing termination in this case.

The final example is usually the simplest: bankruptcy. Most contracts have provisions that deal with the bankruptcy of one or both parties.

The consequences of not setting up assignment correctly

All the failure modes I'm talking about (and a lot more) are well-known. There's a reason lawyers specialize in this, and it's why you need a lawyer to review your contracts.

Let's say you wanted to buy Company X. One of the first things you would do in your due diligence is check out the contracts, especially the assignment of rights section. You're looking for language that allows the rights of the contract to be assigned to another entity (usually using terms like "successor entity", "change of control", etc.). A major problem is the existence of language that forbids the assignment of rights in a takeover or that requires permission from other parties. If this language exists, the acquisition costs go up and it may drive down the acquisition price.

In the case of a change of control (takeover), customers can suddenly get a windfall: they have an opportunity to negotiate their contracts downwards. To put it simply, Company X comes to them saying, "we've been bought by Company B, we need to change our contract with you", and customers can say, "we don't want to change, but we'll agree to it if you give us a 20% discount".

What should you do?

Go see a lawyer. Make sure a lawyer draws up your contract using standard contract templates. This is especially important if you think your company might be acquired.

In big deals, there's a back-and-forth on contract terms. In most cases, the bigger company gets the contract terms they want. In the excitement of the deal, sometimes companies agree to things they shouldn't. It's the end of the year, it's a marquee customer, and it's a huge deal that takes the sales team over quota. In a case like this, the temptation to agree is enormous. Don't do it, or at least, do it knowing the consequences.

In general, all contracts should be reviewed. You need to be very sure what your contracts say, and what an acquirer may find in due diligence.

Saturday, March 13, 2021

Forecasting the 2020 election: a retrospective

What I did  

One of my hobbies is forecasting US presidential elections using opinion poll data. The election is over and Joe Biden has been sworn in, so this seems like a good time to look back on what I got right and what I got wrong. 

I built a computer model that ingests state-level opinion poll data and outputs a state-level forecast of the election results. My model aggregates polling data, using the previous election results as a starting point. It's written in Python and you can get it from my GitHub page. The polling data comes from the ever-wonderful 538.
(This pole works, unlike some other polls. Image source: Wikimedia Commons, License: Creative Commons, Author: Daniel FR.)
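To give a flavor of the approach, here's a minimal sketch of how pooling state polls with a prior might look. This is not my actual model (that's in the GitHub repository); the polls, sample sizes, and prior strength below are made up for illustration.

```python
# A minimal sketch of poll aggregation, not the real model.
# All numbers here are invented for illustration.

def aggregate_state(polls, prior_share, prior_weight=1000):
    """Pool polls for one state, weighted by sample size, shrinking towards a prior.

    polls: list of (candidate_share, sample_size) tuples.
    prior_share: the candidate's share in the previous election.
    prior_weight: how strongly to weight the prior, in 'equivalent respondents'.
    """
    weighted_sum = sum(share * n for share, n in polls) + prior_share * prior_weight
    total_weight = sum(n for _, n in polls) + prior_weight
    return weighted_sum / total_weight

# Three fictional polls plus a fictional previous-election share as the prior
polls = [(0.51, 800), (0.49, 1200), (0.52, 600)]
print(f"Pooled estimate: {aggregate_state(polls, prior_share=0.47):.3f}")
```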

What I got right

My final model correctly predicted the results in 49 of the 51 contests (the 50 states plus Washington D.C.).

What I got wrong

The two states my model got wrong were Florida and North Carolina, and these were big misses - beyond my confidence interval. The cause in both cases was polling data. In both states, the polls were consistently wrong and way overstated Biden's vote share. 

My model also overstated Biden's margin of victory in many of the states he won. This is hidden because my model forecast a Biden victory and Biden won, but in several cases, his margin of victory was less than my model predicted - and significantly so.

The cause of the problem was opinion polls overstating Biden's vote share.

The polling industry and 2020

The polling industry as a whole overstated Biden's support by several percentage points across many states. This is disguised because they got most states directionally correct, but it's still a wide miss. 

In the aftermath of 2016, the industry did a self-examination and promised it would do better next time, but 2020 was still way off. The industry is going to do a retrospective to find out what went wrong in 2020.

I've read a number of explanations of polling misses in the press but their motivation is selling advertising, not getting to the root cause. Polling is hard and 2020 was very different from previous years; there was a pandemic and Donald Trump was a highly polarizing candidate. This led to a higher voter turnout and many, many more absentee ballots. If the cause was easy to find, we'd have found it by now.

The 2020 investigation needs to be thorough and credible, which means it will be several months at least before we hear anything. My best guess is, there will be an industry paper in six months, and several independent research papers starting in a few months. I'm looking forward to the analysis: I'm convinced I'm going to learn something new.

Where next?

There are lots of tweaks I could make to my model, but I'm not going to do any of them until the underlying polling data improves. In other words, I'm going to forget about it all for three years. In fact, I'd quite like to forget about politics for a while.


Monday, March 8, 2021

A masterclass in information visualization: the tube map

Going underground

The London Underground tube map is a master class in information visualization. It's been described in detail in many, many places, so I'm just going to give you a summary of why it's so special and what we can learn from it. Some of the lessons are about good visual design principles, some are about the limitations of design, and some of them are about wealth and poverty and the unintended consequences of abstraction.

(London Underground map.)

The problem

From its start in 1863, the underground train system in London grew in a haphazard fashion. With different railway companies building different lines, there was no sense of creating a coherent system.

Despite the disorder, when it was first built it was viewed as a marvel and had a cultural impact beyond just transport; Conan Doyle wove it into Sherlock Holmes stories, H.G. Wells created science fiction involving it, and Virginia Woolf and others wrote about it too.

After various financial problems, the system was unified under government control. The government authority running it wanted to promote its use to reduce street-level congestion but the problem was, there were many different lines that only served part of the capital. Making it easy to use the system was hard.

Here's an early map of the system so you can see the problem.

1908 tube map

(1908 tube map. Image source: Wikimedia Commons.)

The map's hard to read and it's hard to follow. It's visually very cluttered and there are lots of distracting details; it's not clear why some things are marked on the map at all (why is ARMY & NAVY AND AUXILLARY STORES marked so prominently?). The font is hard to read, the text orientation is inconsistent, and the contrast of station names with the background isn't high enough.

The problem gets even worse when you zoom out to look at the entire system. Bear in mind, stations in central London are close together but they get further apart as you go into the suburbs. Here's an early map of the entire system; do you think you could navigate it?

(1931 whole system tube map.)

Of course, the printing technology of the time was more limited than it is now, which made information representation harder.

Design ideas in culture

To understand how the tube map as we know it was created, we have to understand a little of the design culture of the time (the early 1930s).

Electrical engineering was starting as a discipline and engineers were creating circuit diagrams for new electrical devices. These circuit diagrams showed the connection between electrical components, not how they were laid out on a circuit board. Circuit diagrams are examples of topological maps.

(Example circuit diagram. It shows electrical connections between components, not how they're laid out on a circuit board. Image source: Wikimedia Commons, License: Public domain.)

The Bauhaus school in Germany was emphasizing art and design in mass-produced items, bringing high-quality design aesthetics into everyday goods. Ludwig Mies van der Rohe, the last director of the Bauhaus school, used a key aphorism that summarized much of their design philosophy: "less is more".

(Bauhaus kitchen design 1928 - they invented much of the modern design world. Image source: Wikimedia Commons, License: Public domain)

The modern art movement was in full swing, with the principles of abstraction coming very much to the fore. Artists were abstracting from reality in an attempt to represent an underlying truth about their subjects or about the world.

(Piet Mondrian, Composition 10. Image source: Wikimedia Commons, License: Public Domain.)

To put it simply, the early 1930s were a heyday of design that created much of our modern visual design language.

Harry Beck's solution - form follows function

In 1931, Harry Beck, a draughtsman for London Underground, proposed a new underground map. Beck's map was clearly based on circuit diagrams: it removed unnecessary detail to focus on what was needed. In Beck's view, what was necessary for the tube map was just the stations and the lines, plus a single underlying geographical detail, the river Thames.

Here's his original map. There's a lot here that's very, very different from the early geographical maps.

The design grammar of the tube map

The modern tube map is a much more complex beast, but it still retains the ideas Harry Beck created. For simplicity, I'm going to use the modern tube map to explain Beck's design innovations. There is one underlying and unifying idea behind everything I'm going to describe: consistency.

Topological not geographical. This is the key abstraction and it was key to the success of Beck's original map. On the ground, tube lines snake around and follow paths determined by geography and the urban landscape. This makes the relationship between tube lines confusing. Beck redrew the tube lines as straight lines without attempting to preserve the geographic relations of tube lines to one another. He made the stations more or less equidistant from each other, whereas, on the ground, the distance between stations varies widely. 

The two images below show the tube map and a geographical representation of the same map. Note how the tube map substantially distorts the underlying geography.

(The tube map. Image source: BBC.)

(A geographical view of the same system. Image source: Wikimedia Commons.)

Removal of almost all underlying geographical features. The only geographical feature on tube maps is the river Thames. Some versions of the tube map removed it, but the public wanted it put back in, so it's been a consistent feature for years now.

(The river Thames, in blue, is the only geographic feature on the map.)

A single consistent font.  Station names are written with the same orientation. Using the same font and the same text orientation makes reading the map easier. The tube has its own font, New Johnston, to give a sense of corporate identity.

(Same text orientation, same font everywhere.)

High contrast. This is something that's become easier with modern printing technology and good-quality white paper. But there are problems. The tube uses a system of fare zones which are often added to the map (you can see them in the first two maps in this section, they're the gray and white bands). Although this is important information if you're paying for your tube ticket, it does add visual clutter. Because of the number of stations on the system, many modern maps add a grid so you can locate stations. Gridlines are another cluttering feature.

Consistent symbols. The map uses a small set of symbols consistently. The symbol for a station is a 'tick' (for example, Goodge Street or Russell Square). The symbol for a station that connects two or more lines is a circle (for example, Warren Street or Holborn).

Graphical rules. Angles and curves are consistent throughout the map, with few exceptions - clearly, the map was constructed using a consistent set of layout rules. For example, tube lines are shown as horizontal, vertical, or 45-degree lines in almost all cases.

The challenge for the future

The demand for mass transit in London has been growing for many years, which means London Underground is likely to see more development over time (new lines, new stations). This poses challenges for map makers.

The latest underground maps are much more complicated than Harry Beck's original. Newer maps incorporate the south London tram system, some overground trains, and of course the new Elizabeth Line. At some point, a system becomes so complex that even an abstract simplification becomes too complex. Perhaps we'll need a map for the map.

A trap for the unwary

The tube map is topological, not geographical. On the map, tube stations are roughly the same distance apart, something that's very much not the case on the ground.

Let's imagine you had to go from Warren Street to Great Portland Street. How would you do it? Maybe you would get the Victoria Line southbound to Oxford Circus, change to the Bakerloo Line northbound, change again at Baker Street, and get the Circle Line eastbound to Great Portland Street. That's a lot of changes and trains. Why not just walk from Warren Street to Great Portland Street? They're less than 500m apart and you can do the walk in under 5 minutes. The tube map misleads people into doing stuff like this all the time.

Let's imagine it's a lovely spring day and you're traveling to Chesham on the Metropolitan Line. If Great Portland Street and Warren Street are only 482m apart, then it must be a nice walk between Chalfont & Latimer and Chesham, especially as they're out in the leafy suburbs. Is this a good idea? Maybe not. These stations are 6.19km apart.
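If you want to sanity-check the numbers yourself, here's a rough sketch using the haversine formula. The station coordinates are my own approximate look-ups (treat them as assumptions), and these are straight-line distances, so they come out a little shorter than the walking and track distances quoted above.

```python
# Rough straight-line distances between stations, using approximate coordinates.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # Earth radius ~6371 km

warren_st         = (51.5247, -0.1384)   # approximate
great_portland_st = (51.5240, -0.1439)   # approximate
chalfont_latimer  = (51.6679, -0.5607)   # approximate
chesham           = (51.7052, -0.6112)   # approximate

print(haversine_km(*warren_st, *great_portland_st))   # a few hundred metres
print(haversine_km(*chalfont_latimer, *chesham))      # several kilometres
```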

Abstractions are great, but you need to understand that's exactly what they are and how they can mislead you.

Using the map to represent data

The tube map is an icon, not just of the tube system, but of London itself. Because of its iconic status, researchers have used it as a vehicle to represent different data about the city.

James Cheshire of University College London mapped life expectancy data to tube stations, the idea being, you can spot health disparities between different parts of the city. He produced a great map you can visit at tubecreature.com. Here's a screenshot of part of his map.


You go from a life expectancy of 78 at Stockwell to 89 at Green Park, but the two stations are just 4 stops apart. His map shows how disparities occur across very short distances.

Mark Green of the University of Sheffield had a similar idea, but this time using a more generic deprivation score. Here's his take on deprivation and the tube map, the bigger circles representing higher deprivation.

Once again, we see the same thing, big differences in deprivation over short distances.

What the tube map hides

Let me show you a geographical layout of the modern tube system courtesy of Wikimedia. Do you spot what's odd about it?

(Geographical arrangement of tube lines. Image source: Wikimedia Commons, License: Creative Commons.)

Look at the tube system in southeast London. What tube system? There are no tube trains in southeast London. North London has lots of tube trains, southwest London has some, and southeast London has none at all. What part of London do you think is the poorest?

The tube map was never designed to indicate wealth and poverty, but it does that. It clearly shows which parts of London were wealthy enough to warrant underground construction and which were not. Of course, not every area in London has a tube station, even outside the southeast of London. Cricklewood (population 80,000) in northwest London doesn't have a tube station and is nowhere to be seen on the tube map. 

The tube map leaves off underserved areas entirely; it's as if southeast London (and Cricklewood and other places) don't exist. An abstraction meant to aid the user makes whole communities invisible.

Now look back at the previous section and the use of the tube map to indicate poverty and inequality in London. If the tube map is an iconic representation of London, what does that say about the areas that aren't even on the map? Perhaps it's a case of 'out of sight, out of mind'.

This is a clear reminder that information design is a deeply human endeavor. A value-neutral expression of information doesn't exist, and maybe we shouldn't expect it to.

Takeaways for the data scientist

As data scientists, we have to visualize data, not just for our fellow data scientists, but more importantly for the businesses we serve. We have to make it easy to understand and easy to interpret data. The London Underground tube map shows how ideas from outside science (circuit diagrams, Bauhaus, modernism) can help; information representation is, after all, a human endeavor. But the map shows the limits to abstraction and how we can be unintentionally led astray. 

The map also shows the hidden effects of wealth inequality and the power of exclusion; what we do does not exist in a cultural vacuum, and that's as true of the tube map as it is of the charts we produce.

Saturday, February 27, 2021

Simpson's paradox: a trap for the naive analyst

Simpson's paradox can mess up your business

Let's imagine you're the Chief Revenue Officer at a manufacturing company that sells tubes and cylinders. You're having trouble with European sales reps discounting, so you offer a spif: the country team that sells at the highest price gets a week-long vacation somewhere warm and sunny with free food and drink. The Italian and German sales teams are raring to go.

At the end of the quarter, you have these results [Wang]:

Sales team | Product  | Number of sales | Average price
German     | Cylinder | 80              | €100
German     | Tube     | 20              | €70
Italian    | Cylinder | 20              | €120
Italian    | Tube     | 80              | €80

This looks like a clear victory for the Italians! They maintained a higher price for both cylinders and tubes! If they have a higher price for every item, then obviously, they've won. The Italians start packing their swimsuits.

Not so fast, say the Germans, let's look at the overall results.

Sales team | Average price (all products)
German     | €94
Italian    | €88

Despite having a lower selling price for both cylinders and tubes, the Germans have maintained a higher selling price overall!

How did this happen? It's an instance of Simpson's paradox.

Why the results reversed

Here's how this happened: the Germans sold more of the expensive cylinders and the Italians sold more of the cheaper tubes. The average price is the total revenue divided by the total number of units sold. To put it very simply, ratios (prices) can behave oddly when you pool them.
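Here's a quick way to check the arithmetic yourself, a small Python sketch using the numbers from the tables above.

```python
# (units sold, average price) per product, from the tables above
german  = {"cylinder": (80, 100), "tube": (20, 70)}
italian = {"cylinder": (20, 120), "tube": (80, 80)}

def pooled_price(team):
    """Overall average price: total revenue divided by total units sold."""
    revenue = sum(n * price for n, price in team.values())
    units   = sum(n for n, _ in team.values())
    return revenue / units

print(pooled_price(german))   # 94.0 - higher overall...
print(pooled_price(italian))  # 88.0 - ...despite lower prices on every product
```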

Let's look at a plot of the selling prices for the Germans and Italians.

German and Italian prices

The blue circles are tubes and the orange circles are cylinders. The size of the circles represents the number of sales. The little red dot in the center of the circles is the price. 

Let's look at cylinders. Plainly, the Italians sold them at a higher price, but they're the most expensive item and the Germans sold more of them. Now, let's look at tubes, once again, the Italians sold them at a higher price than the Germans, but they're cheaper than cylinders and the Italians sold more of them.

You can probably see where this is going. Because the Italians sold more of the cheaper items, their average (or pooled) price is dragged down, despite maintaining a higher price on a per-item basis. I've re-drawn the chart, but this time I've added a horizontal black line that represents the average.

The product type (cylinders or tubes) is known in statistics as a confounder because it confounds the results. It's also known as a conditioning variable.

A disturbing example - does this drug work?

The sales example is simple and you can see the cause of the trouble immediately. Let's look at some data from a (pretend) clinical trial.

Imagine there's some disease that impacts men and women and that some people get better on their own without any treatment at all. Now let's imagine we have a drug that might improve patient outcomes. Here's the data [Lindley].

Group  | Treatment         | Recovered | Not recovered | Recovery rate
Female | Took drug         | 8         | 2             | 80%
Female | Did not take drug | 21        | 9             | 70%
Male   | Took drug         | 12        | 18            | 40%
Male   | Did not take drug | 3         | 7             | 30%

Wow! The drug gives everyone an added 10% on their recovery rate. Surely we need to prescribe this for everyone? Let's have a look at the overall data.

Everyone

Treatment         | Recovered | Not recovered | Recovery rate
Took drug         | 20        | 20            | 50%
Did not take drug | 24        | 16            | 60%

What this data is saying is, the drug reduces the recovery rate by 10%.

Let me say this again. 

  • For men, the drug improves recovery by 10%.
  • For women, the drug improves recovery by 10%.
  • For everyone, the drug reduces recovery by 10%. 

If I'm a clinician, and I know you have the disease, if you're a woman, I would recommend you take the drug, if you're a man I would recommend you take the drug, but if I don't know your gender, I would advise you not to take the drug. What!!!!!

This is exactly the same math as the sales example I gave you above. The explanation is the same. The only thing different is the words I'm using and the context.
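If you'd like to see the reversal happen in code, here's a sketch of the drug data; it assumes you have pandas available. A few lines of grouping are all it takes to flip the conclusion.

```python
import pandas as pd

# Recovery data from the (pretend) trial [Lindley]
df = pd.DataFrame({
    "sex":       ["F", "F", "M", "M"],
    "drug":      [True, False, True, False],
    "recovered": [8, 21, 12, 3],
    "total":     [10, 30, 30, 10],
})

# Recovery rate within each sex: the drug looks better for both
by_sex = df.groupby(["sex", "drug"]).sum(numeric_only=True)
by_sex["rate"] = by_sex["recovered"] / by_sex["total"]
print(by_sex)

# Recovery rate pooled over sex: the drug now looks worse
pooled = df.groupby("drug").sum(numeric_only=True)
pooled["rate"] = pooled["recovered"] / pooled["total"]
print(pooled)
```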

Simpson and COVID

In the United States, it's pretty well-established that black and Hispanic people have suffered disproportionately from COVID. Not only is their risk of getting COVID higher, but their health outcomes are worse too. This has been extensively covered in the press and on the TV news.

In the middle of 2020, the CDC published data that showed fatality rates by race/ethnicity. The fatality rate means the fraction of patients with COVID who die. The data showed a clear result: white people had the worst fatality rate of the racial groups they studied.

Doesn't this contradict the press stories? 

No.

There are three factors at work:

  • The fatality rate increases with age for all ethnic groups. It's much higher for older people (75+) than younger people.
  • The white population is older than the black and Hispanic populations.
  • Whites have lower fatality rates in almost all age groups.

This is exactly the same as the German and Italian sales team example I started with. As a fraction of their population, there are more old white people than old black and Hispanic people, so the fatality rates for the white population are dominated by the older age group in a way that doesn't happen for blacks and Hispanics.

In this case, the overall numbers are highly misleading and the more meaningful comparison is at the age-group level. Mathematically, we can remove the effect of different demographics to make an apples-to-apples comparison of fatality rates, and that's what the CDC has done.
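To make "removing the effect of different demographics" concrete, here's a sketch of direct age standardization. The age bands, rates, and population weights below are made up for illustration; they are not the CDC's figures.

```python
# Direct age standardization with invented numbers, to show the idea only.
age_groups          = ["0-44", "45-74", "75+"]
standard_population = [0.55, 0.35, 0.10]   # a shared reference age mix

# Hypothetical age-specific fatality rates for two groups
rates_group_a = [0.001, 0.010, 0.100]
rates_group_b = [0.002, 0.020, 0.120]

def age_standardized(rates):
    """Weight each age-specific rate by the same reference population."""
    return sum(w * r for w, r in zip(standard_population, rates))

print(age_standardized(rates_group_a))   # now an apples-to-apples comparison
print(age_standardized(rates_group_b))
```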

In pictures

Wikipedia has a nice article on Simpson's paradox and I particularly like the animation that's used to accompany it, so I'm copying it here.

(Simpson's paradox animated. Image source: Wikipedia, Credit: Pace~svwiki, License: Creative Commons)

Each of the dots represents a measurement; for example, it could be price. The colors represent categories; for example, German or Italian sales teams. If we look at the results overall, the trend is negative (shown by the black dots and black line). If we look at the individual categories (the colors), the trend is positive. In other words, the aggregation reverses the individual trends.

The classic example - sex discrimination at Berkeley

The Simpson's paradox example that's nearly always quoted is the Berkeley sex discrimination case [Bickel]. I'm not going to quote it here for two reasons: it's thoroughly discussed elsewhere, and the presentation of the results can be confusing. I've stuck to simpler examples to make my point.

American politics

A version of Simpson's paradox can occur in American presidential elections, and it very nicely illustrates the cause of the problem.

In 2016, Hillary Clinton won the popular vote by 48.2% to 46.1%, but Donald Trump won the electoral college by 304 to 227. The reason for the reversal is simple: it's the population spread among the states and the relative electoral college votes allocated to the states. As in the case of the rollup with the sales and medical data I showed you earlier, exactly how the data rolls up can reverse the result.

The question, "who won the 2016 presidential election" sounds simple, but it can have several meanings:

  • who was elected president
  • who got the most votes
  • who got the most electoral college votes

The most obvious meaning, in this case, is, "who was elected president". But when you're analyzing data, it's not always obvious what the right question really is.

The root cause of the problem

The problem occurs because we're using an imprecise language (English) to interpret mathematical results. In the sales and medical data cases, we need to define what we want. 

In the sales price example, do we mean the overall price or the price for each category? The contest was ambiguous, but to be fair to our CRO, this wasn't obvious initially. Probably, the fairest result is to take the overall price.

For the medical data case, we're probably better off taking the male and female data separately. A similar argument applies for the COVID example. The clarifying question is, what are you using the statistics for? In the drug data case, we're trying to understand the efficacy of a drug, and plainly, gender is a factor, so we should use the gendered data. In the COVID data case, if we're trying to understand the comparative impact of COVID on different races/ethnicities, we need to remove demographic differences.

If this was the 1980s, we'd be stuck. We can't use statistics alone to tell us what the answer is; we'd have to use data from outside the analysis to help us [Pearl]. But this isn't the 1980s anymore, and there are techniques to show the presence of Simpson's paradox. The answer lies in using something called a directed acyclic graph, usually called a DAG. But DAGs are a complex area - too complex for this blog post, which I'm aiming at business people.

What this means in practice

There's a very old sales joke that says, "we'll lose money on every sale but make it up in volume". It's something sales managers like to quote to their salespeople when they come asking for permission to discount beyond the rules. I laughed along too, but now I'm not so quick to laugh. Simpson's paradox has taught me to think before I speak. Things can get weird.

Interpreting large amounts of data is hard. You need training and practice to get it right and there's a reason why seasoned data scientists are sought after. But even experienced analysts can struggle with issues like Simpson's paradox and multi-comparison problems.

The red alert danger for businesses occurs when people who don't have the training and expertise start to interpret complex data. Let's imagine someone who didn't know about Simpson's paradox had the sales or medical data problem I've described here. Do you think they could reach the 'right' conclusion?

The bottom line is simple: you've got to know what you're doing when it comes to analysis.

References

[Bickel] P. J. Bickel, E. A. Hammel, J. W. O'Connell, "Sex Bias in Graduate Admissions: Data from Berkeley", Science, 187(4175):398-404, 7 February 1975.
[Lindley] D. Lindley and M. Novick, "The role of exchangeability in inference", The Annals of Statistics, 9(1):45-58, 1981.
[Pearl] Judea Pearl, "Comment: Understanding Simpson's Paradox", The American Statistician, 68(1):8-13, February 2014.
[Wang] B. Wang, P. Wu, B. Kwan, X. M. Tu, C. Feng, "Simpson's Paradox: Examples", Shanghai Archives of Psychiatry, 30(2):139-143, 2018. doi:10.11919/j.issn.1002-0829.218026

Sunday, February 21, 2021

The amazing gamma function

It blew my mind

A long time ago, I was a pure math student sitting in a lecture theater. The lecturer derived the gamma function (\(\Gamma(x)\)) and talked about its properties. It blew my mind. I love this stuff and I want to share my enjoyment with you.

(Leonhard Euler - who discovered e and the Gamma function. Image source: Wikimedia Commons. License: Public domain)

It must be important, it has an exclamation!

Factorials are denoted by a !, for example, \(6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720\). The numbers get big very quickly, as we'll see, so the use of the ! sign seems appropriate. More generally, we can write:

\[n! = n \times (n-1) \times \dots \times 1 \]
where:
\[n \in \Bbb Z*\]
\(\Bbb Z*\) is the integers 0, 1,...

Let's plot the function \(y(n) = n!\) so we can see how quickly it grows.


I stopped at n = 6 because the numbers got too big to show what I want to show. 

To state the obvious, \(n!\) is defined for non-negative integers only. It doesn't make sense to talk about -1.3!  or does it?

Integration is fun

Leonhard Euler is a huge figure in mathematics; the number \(e\) is named after him, as is the iconic identity \(e^{i\pi} + 1 = 0\). In my career, I've worked in a number of areas and used different forms of math, and in most of them I've bumped into something that Euler had a hand in. It's sad that outside of the technical world his name isn't better known.

One of the many, many things Euler did was investigate the properties of series involving \(e\). In turn, this led to the creation of the gamma function, which has a startling property related to factorials. I'm going to show you what it is, but let's start with some calculus to get us to the gamma function. 

We're going to build up a sequence of integrations; hopefully, the pattern will be obvious to you:

\[ \int_0^\infty x^0 e^{-x} dx = -e^{-x} \vert_0^\infty= 1\]
\[ \int_0^\infty x^1 e^{-x} dx = 1\]
\[ \int_0^\infty x^2 e^{-x} dx = 2\]
\[ \int_0^\infty x^3 e^{-x} dx = 6\]

With some proof by induction, we can show that the general case is:

\[ \int_0^\infty x^n e^{-x} dx = n!\]

(The proof involves some calculus and some arithmetic. If I get some time, I might update this post with a full derivation, just because.)

A version of this relationship became known as the gamma function, written as:
\[\Gamma(n+1) = n!\] We have a relationship between integration and factorials. So what?

Go back and look at the integration. Where does it say that \(n\) has to be a positive integer? It's perfectly possible to evaluate \(\int_0^\infty x^{1.356} e^{-x} dx\), for example. Can we evaluate the integral for positive real values of \(n\)? Yes, we can. What about negative and complex numbers? The integral itself only converges for \(n > -1\), but the function it defines can be extended (analytically continued) to negative and complex values too, with the exception of the negative integers.
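You don't have to take my word for it. Here's a quick numerical check of the integral, assuming NumPy and SciPy are installed.

```python
# Numerically verify that the integral matches n!, and evaluate it at a non-integer.
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gamma

for n in [0, 1, 2, 3]:
    integral, _ = quad(lambda x: x**n * np.exp(-x), 0, np.inf)
    print(n, integral, factorial(n))      # the integral matches n!

# Nothing stops us evaluating it at a non-integer exponent...
integral, _ = quad(lambda x: x**1.356 * np.exp(-x), 0, np.inf)
print(integral, gamma(2.356))             # ...and Gamma(n + 1) agrees with it
```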

If we redefine factorial using the gamma function, it becomes meaningful to calculate \(2.321!\) or \(-0.5!\) or even \((1.1 + 2.2i)!\) (the only values still off-limits are the negative integers, where the function has poles). To be clear, we now have a way of calculating factorials for real and complex numbers, so:

\[n \in \Bbb C\]

or maybe we should write

\[x! \quad \text{where} \quad x \in \Bbb C\]

The gamma function has a very curious property that struck me as being very cool. 

\[\Gamma \left( \frac{1}{2} \right) = \sqrt{\pi}\]

When I heard all this, my undergraduate mind was blown.

What Legendre did wrong

Euler defined the gamma function as:

\[\Pi(n) = n!\]

But for various reasons, Legendre re-formatted it as:

\[\Gamma(n+1) = n!\]

Sadly, this is the form universally used now. This form is inconvenient, but like the QWERTY layout of keys on a keyboard, we're stuck with it.

What does it look like?

The chart below shows the gamma function for a range of values.  I've limited the range of the x and y values so you can see its shape around zero.

For \(n > 0\), it's now a smooth curve instead of points. For \(n \le 0\), it has poles (infinities) at zero and the negative integers.
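If you'd like to reproduce a chart like this yourself, here's a sketch; it assumes matplotlib and SciPy are available.

```python
# Plot the gamma function over a small range, hiding the poles for readability.
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gamma

x = np.linspace(-4.5, 5, 5000)
y = gamma(x)
y[np.abs(y) > 10] = np.nan   # blank out values near the poles

plt.plot(x, y)
plt.ylim(-10, 10)
plt.axhline(0, color="gray", linewidth=0.5)
plt.title("The gamma function")
plt.show()
```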

What use is it?

Factorials are used in probability theory and any form of math involving combinations. They're one of the bedrock ideas you need to understand to do anything useful. The gamma function is used in statistics, number theory, and quantum physics. 

One cool use of the gamma function is calculating the volume and surface area of an n-dimensional sphere:

\[V = \frac{\pi^{n/2} \, r^n}{\Gamma \left( \frac{n}{2} + 1 \right)}\]
\[S = \frac{n}{r} V\]

where:

  • r is the radius
  • n is the number of dimensions
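As a quick illustration (a sketch assuming SciPy is installed), here's what these formulas give for a unit-radius sphere in one to seven dimensions.

```python
# Volume and surface area of an n-dimensional sphere, using the gamma function.
import numpy as np
from scipy.special import gamma

def sphere_volume(n, r=1.0):
    return np.pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

def sphere_surface(n, r=1.0):
    return (n / r) * sphere_volume(n, r)

for n in range(1, 8):
    print(n, round(sphere_volume(n), 4), round(sphere_surface(n), 4))
# For a unit radius, the volume peaks at n = 5 and then shrinks - a nice surprise.
```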

(n-dimensional spheres crop up in information theory - as you're reading this, you're using something that relies on their consequences.)

But frankly, I don't care about uses in the real world. It's a great function with some really cool properties, and sometimes, that's enough for me.

Programmers are mathematicians too

My high school math teacher told us our calculators would give us an error if we tried calculating the factorial of anything other than a non-negative integer. She wanted us to know why it wouldn't work. The people who built my high school calculator had a very literal definition of factorial, but it looks like the good programmers at Google are mathematicians at heart.

Type the word 'calculator' into the Google search box and you should see something like this.

Now type in -1.5! You should get -1.32934038818. Google has implemented factorial using the gamma function, so it works for more than just non-negative integers. I've heard that calculators on other systems do the same thing too. This makes me unreasonably happy.

Pure math - but...

Pure math has a very odd habit of becoming essential to business. The mathematicians who developed number theory or linear algebra or calculus didn't do so to make money, they did it to understand the world. But even some very abstract math has spawned huge businesses. The most obvious example is cryptography, but wireless communications rely on a healthy dose of pure math too, as I'll show in a future post.

Monday, February 15, 2021

Management degrees - how I went from a C to an A: buzzword bingo

How to do well on a management degree

I'm having a spring clean: I'm scanning old documents and throwing away the paper copies. It's a trip down memory lane as I review old management essays and course notes. The management degree I did was part-time in the evenings, over several years, alongside a full-time job, so my notes built up over time and there's a lot to scan. Looking over it all, here's my guide to doing well on essays in a management master's degree program.

Sever Hall, Harvard
(A classroom in Sever Hall. I had several lectures in rooms just like this. Image source: Wikimedia Commons, License: Creative Commons, Author: Ario Barzan)

Why I did badly at first

I had been in the technology industry for a long time before I took management classes. I was used to coding and writing technical documentation and I'd become stuck in my ways. The thing about most technical documents is that no one reads them, and very rarely do you get feedback on your writing style. In the few years before I began the classes, I'd started to do more marketing work, and I found it challenging - for the first time, I was getting negative feedback on how I was writing, so I knew I had a problem.

My first course was accounting, which I did very well in. But of course I did well: accounting is another technical discipline. It's like coding, but with different rules and the added threat of lawsuits and jail time.

The second course I did was an HR course and we used the case study method in class. I was gung-ho for my first essay and I was convinced I was going to get a great mark for it. I got a C.

I did what every bad student does when they get a bad grade: I blamed the lecturer. Then I stopped and gave myself a talking to. I was determined to do better.

I did badly for two reasons:

  • A stilted, over-technical writing style.
  • I didn't understand what the lecturer wanted. The goal was to show that I had absorbed the terminology of HR and could appropriately apply it. The goal was not to solve the business problem. In my essay, I focused on solving the business problem and I didn't mention enough of the HR concepts we covered in class.

How I did well

The first order of business was fixing my writing style. I had a short period between essays, but fortunately, it was long enough to do some work. I did crash reading on how to write better in general and how to write better essays. Unashamedly, I went back to basics and read guides for undergraduates and even high school students.  I talked to other students online about writing. I realized I had some grammar and style issues, but I also knew I couldn't fix them all in one go, so I focused on the worst problems first. 

Next was understanding what the lecturer wanted. Once I understood that the essay was a means of checking my understanding of concepts, I had a clean way forward: buzzword bingo. Prior to beginning any essay, I made a list of all the relevant concepts we'd covered in class, and I added some that weren't covered but I'd found through reading around. My goal was to ensure that I applied every concept to the case study and make it clear I'd done so. The essays were a vehicle to show understanding of concepts.

The third step was a better essay plan. I figured out how I would apply my buzzwords to the case study and built my work into a narrative. I made sure that the logical steps made sense from one concept to another and I made sure to link ideas. Every essay has a maximum word (or page) count, so I developed a word budget for each idea, making sure the most important ideas got the most words. This also helps with a perennial student problem, spending too many words on the introduction and conclusion. The word budget idea was the biggest step forward for me, it made sure I focused my thoughts and it always led to my essays being too long. In the editing process, I chopped down the introduction and conclusion and removed extraneous words, I also cut down on the use of the passive voice, which is a real word hog.

My essay process

Buzzword bingo. Make a list of every concept you think is relevant to the case study, making sure to use the correct terminology. This list must cover everything mentioned in class, but it must also cover ideas that weren't mentioned in class; you have to go above and beyond.

Weighting buzzwords. Which concepts are more important? More important concepts get a higher word count, but you have to know what's more important.

What's the question? What precisely are the instructions for the essay? Make sure you follow the rules exactly. If necessary, make a tick list for the essay.

Word budget. You have a word count, now allocate the word count in proportion to the importance of the ideas, including the introduction and conclusion.

Link ideas. What ideas go together? If there are multiple linkages, what are the most important ones?

Essay plan. Plan the essay paragraph-by-paragraph and allocate a word budget for each paragraph.

Write the essay.

First-pass revision.  Are you under the word count? If so, you missed something. Does the written essay change your understanding of the problem? If so, re-allocate your word budget. Do you need to change the order of paragraphs or sentences for the narrative to make sense?

Rest. Leave the essay alone for a few days. You need some distance to critique it more.

Second-pass revision. Remove the passive voice as much as possible. Check for word repetition. Check the introduction and conclusion make sense and are coherent.

Rest. Leave the essay alone for a few days. You need some distance to critique it more.

Third pass revision. Have you missed any concepts? Does the essay hang together? Does it meet the instructions precisely?

Allocate plenty of time. This is a painstaking process. You can't do it at the last minute, and you can't compress the timescales by doing it all in a day; you need time for reflection. You have to start work on your essay as soon as it's set. Realistically, this is at least two weeks of work.

What happened?

For the next essay, I got an A- and it went up from there. In pretty much every course I did after that, I got an A for my essays.

The degree program offered a writing module, which I took. Prior to the writing course, I read every writing book I could get my hands on, including many grammar books (most of which I didn't understand). Part of the writing course was writing an article for publication and I actually managed to get an article published in a magazine. The editor made minimal changes to my text, which was immensely satisfying. Bottom line: I fixed my writing problem.

Did my approach to essay writing help me learn? Yes, but only marginally so. It did result in a huge boost to my grades though, and that's the main thing. It taught me a lesson in humility too - just because you're an expert in one thing doesn't make you an expert in everything.

Of course, I did get my degree and I did graduate, I was on the Dean's list and I was the commencement speaker for my class. I got there partly because of a better approach to essay writing, and you can too.

Monday, February 8, 2021

Frequency hopping and the most beautiful woman in the world

Spread spectrum

Modern digital wireless systems rely on spread spectrum techniques. The story of how the most obvious of them, frequency hopping, was invented is not what you think. It involves a beautiful Hollywood actress (possibly the most beautiful ever), a music composer, and a dinner party. Let me tell you the story.

(The most beautiful girl in the world, and the inventor of modern communications.  Image source: Wikimedia Commons, License: Public Domain)

Hedy Lamarr

This woman lived an incredible life, if you get the time, read some of her life story. I'm just going to summarize it here.

Hedy was born Hedwig Eva Maria Kiesler in 1914, in Vienna. Her parents were both Jewish, which was to play a part in this story. Her father was an inventor, which was also to be important.

She got her first film role in 1930, and her first starring role in 1932. However, her big break came in 1933 with the notorious movie Ecstasy. I've heard the movie described as soft porn and it has a number of notable cinematic firsts - even today, it's NSFW so don't look for it from your work computer. 

In 1933, Hedy married Friedrich Mandl, an arms dealer with strong connections to the Nazis and the Italian fascists. Mandl was controlling and domineering. By 1937, Hedy knew she had to escape, so she left Austria and headed for the United States via London. Of course, she headed for Hollywood.

In Hollywood, she appeared in a number of films, some very successful, others not so much. The studios labeled her 'the most beautiful girl in the world' and marketed movies based on her beauty. She also actively and successfully raised millions for the war effort.

George Antheil

George was born in Trenton, New Jersey in 1900 to German parents and grew up bilingual. As a musician, he was strongly influenced by the emerging avant-garde music coming out of Europe, in particular, 'mechanical' music. He wrote music for piano, films, and ballets.

The dinner party and the piano roll

Hedy and George met at a Hollywood dinner party. They talked about the problem of radio-controlled torpedoes. Although radio control was a good idea, the controlling signals could easily be intercepted and jammed, or even worse, the torpedo could be redirected. What was needed was some way of controlling a torpedo by radio that could not be jammed.

George knew about automatic piano players, Hedy knew about torpedoes from her ex-husband. Together, they came up with the idea of a radio control where the radio frequency changed very rapidly; so rapidly, a human trying to jam the signal couldn't do it because they wouldn't be able to keep up with the frequency changes.  Here's a fictitious timeline example:

  • 1.2s - transmitter transmits at 27.2 MHz, receiver receives at 27.2 MHz
  • 1.3s - transmitter transmits at 26.9 MHz, receiver receives at 26.9 MHz
  • 1.4s - transmitter transmits at 27.5 MHz, receiver receives at 27.5 MHz
  • etc.
(A piano roll for automatically playing the piano. Image source: Wikimedia Commons, License: Creative Commons, Author: Draconichiaro)

To keep the transmitter and receiver in sync, you could use the same technology that powers automatic piano players. In an automatic piano player, a perforated roll is fed through a reader, which in turn presses the appropriate key. The perforated roll is a list of which keys to press and when. 

In the torpedo case, instead of which keys to press, the piano roll could instruct the transmitter or receiver which frequency to use and when. The same piano roll would be inserted into the torpedo and controller and both roll readers would be synchronized. After the torpedo was launched, the controlling frequency would change dependent on the roll, and the transmitter and receiver would stay in sync so long as the piano roll readers stayed in sync. 

Using a mechanism like this, the controlling frequency would change, or hop, from one frequency to another, hence the name 'frequency hopping'. Frequency hopping takes up more radio spectrum than just transmitting on one frequency would, hence the more general name 'spread spectrum'.
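Here's a toy sketch of the idea; the frequencies and schedule are invented for this example. The 'piano roll' is just a hop schedule that the transmitter and receiver share and step through in lockstep.

```python
# A toy illustration of frequency hopping - all numbers are made up.
hop_schedule = [27.2, 26.9, 27.5, 27.1, 26.8]   # MHz, one entry per time slot

def frequency_at(slot):
    """Both the transmitter and the receiver read the same 'piano roll'."""
    return hop_schedule[slot % len(hop_schedule)]

# A jammer that doesn't know the schedule can only sit on one frequency and
# hope: it only hits the slots that happen to use that frequency.
jammer_frequency = 27.2
jammed_slots = sum(frequency_at(t) == jammer_frequency for t in range(100))
print(f"Slots jammed: {jammed_slots} out of 100")
```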

Hedy and George patented the idea and you can read their patent here.

Although Hedy and George thought of torpedoes as their application, there's no reason why you couldn't use the same idea for more secure voice communications.

What happened next

The patent sat in obscurity for years. The idea was way ahead of the technology needed to implement it, so it expired before anyone used it. Hedy and George made no money from it.

By the 1960s, the technology did exist, and it was used by the US military for both voice communications and guided munitions. Notably, they used it in the disastrous Bay of Pigs Invasion and later in Vietnam.

Moving forwards to the end of the twentieth century, the technique was used in early WiFi versions and other commercial radio standards, for example, Bluetooth.

Frequency hopping isn't the only spread spectrum technology, it's the simplest (and first) of several that are out there. Interestingly, some of them make use of pure math methods developed over a hundred years ago. In any case, spread spectrum methods are at the heart of pretty much all but the most trivial wireless communication protocols.

Hedy and George lived out the rest of their lives much as they had before. George continued to write music and opera until his death at the age of 58.

Hedy's career had ups and downs. She had huge success in the 1940s, but by the 1950s, her star had waned considerably. She filmed her last role in 1958 and retired, spending much of the rest of her life in seclusion. She died at age 85.

When I first started to work in the radio communications industry, the Hedy Lamarr story was known, but it was considered a bit of a joke. I'm pleased that over the last few years, her contribution has been re-assessed upwards. In 2014, she was inducted into the US National Inventors Hall of Fame - it would have been nice had this been done in her lifetime, but still, better late than never.
