Tossing and turning

A few months ago, someone commented on one of my blog posts and asked how you work out if a coin is biased or not. I've been thinking about the problem since then. It's not a difficult one, but it does bring up some core notions in probability theory and statistics which are very relevant to understanding how A/B testing works, or indeed any kind of statistical test. I'm going to talk you through how you figure out if a coin is biased, including an explanation of some of the basic ideas of statistical tests.

The trial

A single coin toss is an example of something called a Bernoulli trial, which is any kind of binary decision you can express as a success or failure (e.g. heads or tails). For some reason, most probability texts refer to heads as a success.

We can work out what the probability is of getting different numbers of heads from a number of tosses, or more formally, what's the probability $$P(k)$$ of getting $$k$$ heads from $$n$$ tosses, where $$0 ≤ k ≤ n$$? By hand, we can do it for a few tosses:

Number of heads (k) | Combinations | Count | Probability
0 | TTT | 1 | 1/8
1 | HTT THT TTH | 3 | 3/8
2 | THH HTH HHT | 3 | 3/8
3 | HHH | 1 | 1/8

But what about 1,000 or 1,000,000 tosses - we can't do this many by hand, so what can we do? As you might expect, there's a formula you can use:
$P(k) = \frac{n!} {k!(n-k)!} p^k (1-p)^{n-k}$
$$p$$ is the probability of success in any trial, for example, getting a head. For an unbiased coin $$p=0.5$$; for a coin that's biased 70% heads $$p=0.7$$.
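The formula is easy to compute directly. Here's a sketch in Python using only the standard library (math.comb); the three-toss numbers for a fair coin match the table above.

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n tosses of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Reproduce the three-toss table for a fair coin (p = 0.5)
for k in range(4):
    print(k, binom_pmf(k, 3, 0.5))  # 0.125, 0.375, 0.375, 0.125
```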

If we plot this function for an unbiased coin ($$p=0.5$$), where $$n=100$$, and $$0 ≤ k ≤ n$$, we see this probability distribution:

This is called a binomial distribution and it looks a lot like the normal distribution for large ($$> 30$$) values of $$n$$.

I'm going to re-label the x-axis as a score equal to the fraction of heads: 0 means all tails, 0.5 means $$\frac{1}{2}$$ heads, and 1 means all heads. With this slight change, we can more easily compare the shape of the distribution for different values of $$n$$.

I've created two charts below for an unbiased coin ($$p=0.5$$), one with $$n=20$$ and one with $$n=40$$. Obviously, the $$n=40$$ chart is narrower, which is easier to see using the score as the x-axis.

As an illustration of what these charts mean, I've colored all scores 0.7 and higher as red. You can see the red area is bigger for $$n=20$$ than $$n=40$$. Bear in mind, the red area represents the probability of a score of 0.7 or higher. In other words, if you toss a fair coin 20 times, you have a 0.058 chance of seeing a score of 0.7 or more, but if you toss a fair coin 40 times, the probability of seeing a 0.7 score drops to 0.008.
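Those tail probabilities can be checked directly by summing the binomial formula over all counts of heads at or above the score threshold. A stdlib-only sketch (not from the original post):

```python
from math import comb, ceil

def tail_prob(score: float, n: int, p: float = 0.5) -> float:
    """P(fraction of heads >= score) in n tosses of a coin with P(heads) = p."""
    k_min = ceil(score * n)  # smallest number of heads with a score this high
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

print(round(tail_prob(0.7, 20), 3))  # 0.058
print(round(tail_prob(0.7, 40), 3))  # 0.008
```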

These charts tell us something useful: as we increase the number of tosses, the curve gets narrower, meaning the probability of getting results further away from $$0.5$$ gets smaller. If we saw a score of 0.7 for 20 tosses, we might not be able to say the coin was biased, but if we got a score of 0.7 after 40 tosses, we know this score is very unlikely so the coin is more likely to be biased.

Thresholds

Let me re-state some facts:

• For any coin (biased or unbiased) any score from 0 to 1 is possible for any number of tosses.
• Some results are less likely than others; e.g. for an unbiased coin and 40 tosses, there's only a 0.008 chance of seeing a score of 0.7.

We can use probability thresholds to decide between biased and unbiased coins. We're going to use a 95% threshold (usually called the confidence level; the remaining 5% is the significance level) to decide if the coin is biased or not. In the chart below, the red areas represent 5% probability, and the blue areas 95% probability.

Here's the idea for working out if the coin is biased. Set a significance threshold, usually 0.05. Toss the coin $$n$$ times, record the number of heads, and work out a score. Draw the theoretical probability chart for the number of tosses (like the one I've drawn above) and color 95% of the probability blue and 5% red. If the experimental score lands in the red zones, we'll consider the coin to be biased; if it lands in the blue zone, we'll consider it unbiased.

This is probabilistic decision-making. Using a threshold of 0.05 means we'll wrongly say a fair coin is biased 5% of the time. Can we make the threshold stricter, say 0.01? Yes, we could, but the cost is an increase in the number of trials.

As you might expect, there are shortcuts and we don't actually have to draw out the chart. In Python, you can use the binom_test function in SciPy's stats module (in recent SciPy versions it's been renamed binomtest).

To simplify, binom_test has three arguments:

• x - the number of successes
• n - the number of samples
• p - the hypothesized probability of success

It returns a p-value which we can use to make a decision.

Let's see how this works with a confidence of 0.05. Let's take the case where we have 200 coin tosses and 140 (70%) of them come up heads. We're hypothesizing that the coin is fair, so $$p=0.5$$.

from scipy import stats
print(stats.binom_test(x=140, n=200, p=0.5))
# In SciPy >= 1.12: print(stats.binomtest(k=140, n=200, p=0.5).pvalue)

The p-value we get is 1.5070615573524992e-08, which is far less than our threshold of 0.05 (we're in the red area of the chart above). We therefore reject the idea that the coin is fair.

Now let's try a less extreme result: 115 heads from 200 tosses.

from scipy import stats
print(stats.binom_test(x=115, n=200, p=0.5))

This time, the p-value is 0.10363903843786755, which is greater than our confidence threshold of 0.05 (we're in the blue area of the chart), so the result is consistent with a fair coin (we fail to reject the null).

What if my results are not significant? How many tosses?

Let's imagine you have reason to believe the coin is biased. You throw it 200 times and you see 115 heads. binom_test tells you you can't conclude the coin is biased. So what do you do next?

The answer is simple: toss the coin more times.

The formula for the sample size, $$n$$, is:

$n = \frac{p(1-p)} {\sigma^2}$

where $$\sigma$$ is the standard error you're prepared to accept in your estimate of $$p$$.

Here's how this works in practice. Let's assume we think our coin is just a little biased, to 0.55, and we want the standard error to be $$\pm 0.04$$. Here's how many tosses we would need: about 155. What if we want more certainty, say $$\pm 0.005$$? Then the number of tosses goes up to 9,900. In general, the bigger the bias, the fewer tosses we need, and the more certainty we want, the more tosses we need.
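The sample-size formula is a one-liner to evaluate. A quick sketch reproducing the numbers above:

```python
def sample_size(p: float, se: float) -> float:
    """Tosses needed so the standard error of the estimated
    proportion of heads is `se`, assuming the coin's bias is p."""
    return p * (1 - p) / se**2

print(sample_size(0.55, 0.04))          # ~154.7, so about 155 tosses
print(round(sample_size(0.55, 0.005)))  # 9900
```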

If I think my coin is biased, what's my best estimate of the bias?

Let's imagine I toss the coin 1,000 times and see 550 heads. binom_test tells me the result is significant and it's likely my coin is biased, but what's my estimate of the bias? This is simple: it's just the observed fraction of heads, 0.55. Using the statistics of proportions, I can put a 95% confidence interval around my estimate of the bias of the coin. Through math I won't show here, using the data we have, I can estimate the coin is biased 0.55 ± 0.03.
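The math I skipped is the usual normal-approximation interval for a proportion: $$\hat{p} \pm 1.96\sqrt{\hat{p}(1-\hat{p})/n}$$. A short sketch reproducing the 0.55 ± 0.03 figure:

```python
from math import sqrt

def proportion_ci(heads: int, n: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for the coin's bias."""
    p_hat = heads / n
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, margin

p_hat, margin = proportion_ci(550, 1000)
print(f"{p_hat:.2f} ± {margin:.2f}")  # 0.55 ± 0.03
```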

Is my coin biased?

This is a nice theoretical discussion, but how might you go about deciding if a coin is biased? Here's a step-by-step process.

1. Decide on the level of certainty you want in your results. 95% is a good measure.
2. Decide the minimum level of bias you want to detect. If the coin should return heads 50% of the time, what level of bias can you live with? If it's biased to 60%, is this OK? What about biased to 55% or 50.5%?
3. Calculate the number of tosses you need.
4. Toss the coin that many times and record the number of heads.
5. Use binom_test to figure out if the coin deviates significantly from 0.5.

London is an oddity

I was born and grew up in London but it was only after I left that I started to realize how much of an oddity it is; it's almost as if it's a different country from the rest of the UK. I thought other capital cities would have a similar disjointed relationship with their host countries, and I was partly right, but I was also mostly wrong. Let me explain why London is such an international oddity.

Zipf's law

Zipf's law refers to the statistical distribution of observations found in some types of data, for example, word frequency in human languages. It isn't a law in the sense of a scientific 'law', it's a distribution.

In simple terms, for measurements that follow Zipf's law, the first item is twice the second item, three times the third, and so on. For example, in English, the word 'the' is the most frequent word and it occurs twice as often as the next most common word ('of') [https://www.cs.cmu.edu/~cburch/words/top.html].

I found some readable articles on Zipf's law here: [http://www.casa.ucl.ac.uk/mike-michigan-april1/mike's%20stuff/attach/Gabaix.pdf, https://gizmodo.com/the-mysterious-law-that-governs-the-size-of-your-city-1479244159].

It turns out that a number of real-world measurements follow Zipf's law, including city sizes.

The US and elsewhere

Here's what city size looks like in the US. This is a plot of ln(Rank) vs ln(Population) with the biggest city (New York) being bottom right (ln meaning natural logarithm).

It's close to an ideal Zipf law distribution.
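As a sanity check on what "ideal" means here: if city sizes follow Zipf's law exactly (the rank-$$r$$ city has size $$C/r$$), the slope of ln(Rank) vs ln(Population) is exactly -1. A stdlib-only sketch with an invented city count and constant:

```python
from math import log

# Ideal Zipf city sizes: the rank-r city has size C / r
C = 8_000_000  # size of the biggest city (arbitrary)
ranks = range(1, 101)
sizes = [C / r for r in ranks]

# Least-squares slope of ln(size) against ln(rank)
xs = [log(r) for r in ranks]
ys = [log(s) for s in sizes]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(slope)  # -1.0 for a perfect Zipf distribution
```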

You can see the same pattern in other cities around the world [https://arxiv.org/pdf/1402.2965.pdf].

One of the interesting features of the Zipf city distribution is that it's mostly persistent over time [http://www.casa.ucl.ac.uk/mike-michigan-april1/mike's%20stuff/attach/Gabaix.pdf]. Although the relative size of a few cities may change, for most of the cities in a country, the relationship remains the same. Think about what this means for a minute; if the largest city has a population of 1,000,000 and the second largest has a population of 500,000, then if the population increases by 150,000 we would expect the largest city to increase to 1,100,000 and the second to increase to 550,000; most of the increase goes to the bigger city [https://www.janeeckhout.com/wp-content/uploads/06.pdf]. The population increase is not evenly spread.

A notable aside is how the press manages to miss the point when census data is released. If the population increases, most of the increase will go to the bigger cities. The story ought to be that bigger cities are getting bigger (and what that means). Instead, the press usually focuses on smaller cities that are growing or shrinking more than the average growth rate.

The UK and the London weighting

There's a big exception to the Zipf law relationship. London is much bigger than you would expect it to be. Here's the Zipf law relationship for UK cities with London in red.

London is twice the size you would expect it to be.

There are many theories about why London is so big. Some authors flip the question around and ask why Britain's second cities aren't larger, but that doesn't help explain why [http://spatial-economics.blogspot.com/2012/10/are-britains-second-tier-cities-too.html]. Here are some theories I've seen:

• The UK is an overly centralized country.
• London was an imperial city for a long time and that drove London's growth. The comparison group should have been imperial cities, and now the empire has gone, London is left as an oddity.
• London (was) in an economic zone that included the major cities of western Europe, so the comparison group isn't the UK, it's western Europe.

I think there's an element of truth in all of them. Certainly, UK governments (of all parties) have often prioritized spending on London; for example, there are no large-scale public construction projects anything like the Elizabeth Line elsewhere in the UK. Culture and the arts are concentrated in London too: think of any large cultural organization in the UK (British Museum, National Theatre Company, Victoria & Albert...) and guess where it will be located - and they're all government-funded. Of course, cause and effect are deeply intertwined here: London gets more spending because it's big and important, and therefore it stays big and important.

What are the implications?

London's size difference from other UK cities drives qualitative and quantitative differences. It's not the only UK city with a subway network, but its network is by far the largest (more than twice the size of the next largest). It has more airports serving it than any other UK city. Its system of governance is different. Its politics are different. The fraction of people born overseas is different. And so on. Without change, these differences will continue, and London will continue to look and feel very different from the rest of the UK: two countries in one.

As a child, I was right to pick up on the feeling that London was different; it really does feel like a different country. It's only as an adult that I've understood why. I've also realized that the UK's second-tier cities are falling behind, and that's a real problem. The UK is over-centralized and that's harming everyone who doesn't live in London.

London is considered a first-tier or world city [https://mori-m-foundation.or.jp/english/ius2/gpci2/index.shtml] and the challenge for UK governments is to bring the UK's other cities up while not dragging London down.

World cities but different geographies

In German, a world city (or "Weltstadt") is a large, sophisticated, cosmopolitan city. There are only a handful of them and the list includes New York and London. Although there are obvious differences between these two cities, there are many, many similarities; no wonder they're sister or twin cities.

I was reading a National Geographic article about geographic misconceptions and it set me thinking about some of the less obvious, but profound differences between London and New York.

Let's dive into them.

If London were in North America...

Cities north and south of New York

Let's line up some of the major North American cities in terms of miles north or south of New York. I'm going to ignore miles east or west and just focus on north and south. Here's the line on the left.

As you can see, Quebec City is 421 miles north of New York and Charlotte is 379 miles south.

On this line, where do you think London would appear? How many miles north or south of New York is London? Take a guess before scrolling down and peeking.

Here's the answer: 745 miles north.

That's right, London is way further north than Quebec City. London is actually slightly further north than Calgary. In fact, the UK as a whole is entirely north of the contiguous United States.
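You can sanity-check the 745-mile figure from latitudes alone: a degree of latitude is about 69 miles everywhere on Earth. A small sketch (the latitudes below are approximate):

```python
MILES_PER_DEGREE_LAT = 69.0  # roughly constant everywhere on Earth

latitudes = {  # approximate city latitudes in degrees north
    "New York": 40.71,
    "London": 51.51,
    "Quebec City": 46.81,
    "Calgary": 51.05,
}

def miles_north_of_ny(city: str) -> float:
    return (latitudes[city] - latitudes["New York"]) * MILES_PER_DEGREE_LAT

for city in ("Quebec City", "Calgary", "London"):
    print(f"{city}: {miles_north_of_ny(city):.0f} miles north of New York")
```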

745 miles is a long way north and it has some consequences.

Daylight saving

Let's look at sunrise and sunset times and how they vary through the year. In the chart below, I've plotted sunrise and sunset times by month, removing daylight savings time shifts.

To show the differences a bit more starkly, let's take sunrise and sunset on solstice days:

Date | City | Sunrise | Sunset | Daylight time
2022-06-21 | London | 4:43:09 AM | 9:21:41 PM | 16h 38m 32s
2022-06-21 | New York | 5:25:09 AM | 8:30:53 PM | 15h 5m 44s
2022-12-21 | London | 8:03:52 AM | 3:53:45 PM | 7h 49m 53s
2022-12-21 | New York | 7:16:52 AM | 4:32:12 PM | 9h 15m 20s
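The daylight-time column follows directly from the sunrise and sunset times. A quick stdlib check:

```python
from datetime import datetime

def daylight(sunrise: str, sunset: str) -> str:
    """Daylight duration between a sunrise and sunset on the same day."""
    fmt = "%I:%M:%S %p"
    delta = datetime.strptime(sunset, fmt) - datetime.strptime(sunrise, fmt)
    h, rem = divmod(delta.seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}h {m}m {s}s"

print(daylight("4:43:09 AM", "9:21:41 PM"))  # London, June solstice: 16h 38m 32s
print(daylight("7:16:52 AM", "4:32:12 PM"))  # New York, December solstice: 9h 15m 20s
```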

That's a big difference. In London in the summer, you can party in daylight until 9pm, by which time in New York, it's gone dark. Conversely, in London in winter, the city is dark by 4pm, while New Yorkers can still enjoy the winter sunshine as they do their Christmas shopping.

On the face of it, it would seem like it's better to spend your summers in London and your winters in New York. If London is so far north of New York, surely New York winters must be better?

Blowing hot and cold

I've plotted the average monthly high and low temperatures for London and New York. London has cooler summers but warmer winters. Is this what you expected?

In winter, Londoners might still enjoy a drink outside, but in New York, this isn't going to happen. People in London don't wear hats in winter, but in New York they do. New Yorkers know how to deal with snow, Londoners don't. In the summer, New Yorkers use A/C to cool down, but Londoners don't even know how to spell it because it rarely gets hot enough to need it.

London's climate, and in fact much of Europe's, is driven by the Gulf Stream. This keeps the UK much warmer than you would expect from its latitude. Of course, the fact the UK is a small island surrounded by lots of water helps moderate the climate too.

The moderate London climate is probably the main reason why people think London and New York are much closer on the north-south axis than they really are.

Climate as an engine of culture

On the face of it, you would think cities with different climates would have different cultures, but New York and London show that's not always the case. These two cities are hundreds of miles apart (north/south) and have noticeably different climates, but they're culturally similar, and obviously, they're both 'world cities'. Perhaps the best we can say about climate is that it drives some features of city life but not the most fundamental ones.

The post-hoc fallacy

Over my career, I’ve seen companies make avoidable business mistakes that’ve cost them significant time and money. Some of these mistakes happened because people have misunderstood “the rules of evidence” and they’ve made a classic post-hoc blunder. This error is insidious because it comes in different forms and it can seem like the error is the right thing to do.

In this blog post, I’ll show you how the post-hoc error can manifest itself in business, give you a little background on it, and show you some real-world examples. Finally, I’ll show you how you can protect yourself.

A fictitious example to get us started

Imagine you’re an engineer working for a company that makes conveyor belts used in warehouses. A conveyor belt break is both very dangerous and very costly; it can take hours to replace, during which time at least part of the warehouse is offline. Your company thoroughly instruments the belts and there’s a vast amount of data on belt temperature, tension, speed, and so on.

Your Japanese distributor tells you they’ve noticed a pattern. They’ve analyzed 319 breaks and found that in 90% of cases, there’s a temperature spike within 15 minutes of the break. They’ve sent you the individual readings which look something like the chart below.

The distributor is demanding that you institute a new control that stops the belt if a temperature spike is detected and prompts the customer to replace the belt.

Do you think the Japanese distributor has a compelling case? What should you do next?

The post-hoc fallacy

The full name of this fallacy is “post hoc ergo propter hoc”, which is thankfully usually shortened to "post-hoc". The fallacy goes like this:

• Event X happens then event Y happens
• Therefore, X caused Y.

The oldest example is the rooster crowing before dawn: “the rooster crows before dawn, therefore the rooster’s crow causes the dawn”. Obviously, this is a fallacy and it’s easy to spot.

Here's a statement using the same logic to show you that things aren’t simple: “I put fertilizer on my garden, three weeks later my garden grew, therefore the fertilizer caused my garden to grow”. Is this statement an error?

As we’ll see, statements of the form:

• Event X happens then event Y happens
• Therefore, X caused Y.

aren’t, by themselves, enough to provide proof.

Classic post-hoc errors

Hans Zinsser in his book “Rats, lice, and history” tells the story of how in medieval times, lice were considered a sign of health in humans. When lice left a person, the person became sick or died, so the obvious implication is that lice are necessary for health. In reality, of course, lice require a live body to feed on and a sick or dead person doesn’t provide a good meal.

In modern times, something similar happened with violent video games. The popular press set up a hue and cry that playing violent video games led to violent real-life behavior in teenagers, the logic being that almost every violent offender had played violent video games. In reality, a huge fraction of the teenage population has played violent video games. More careful studies showed no effect.

Perhaps the highest profile post-hoc fallacy in modern times is vaccines and autism.  The claim is that a child received a vaccine, and later on, the child was diagnosed with autism, therefore the vaccine caused autism. As we know, the original claims of a link were deeply flawed at best.

Causes of errors

Confounders

A confounder is something, other than the effect you’re studying, that could cause the results you’re seeing. The classic example is storks in Copenhagen. In the 12-year period after the second world war, the number of storks seen near Copenhagen increased sharply, as did the number of (human) babies. Do we conclude storks cause babies? No: the cause of both increases was the end of the war, so the confounder here was war recovery.

In the autism case, confounders are everywhere. Both vaccinations and autism increased at the same time, but lots of other things changed at the same time too:

• Medical diagnosis improved
• Pollution increased
• The use of chemical cleaning products in the home increased
• Household income went up (but not for everyone, some saw a decrease)
• Car ownership went up.

Without further evidence, we can’t say what caused autism. Once again, it’s not enough of itself to say “X before Y therefore X causes Y”.

Confounders can be very, very hard to find.

Biased data

The underlying data can be so biased that it renders subsequent analysis unreliable. A good example is US presidential election opinion polling in 2016 and 2020; these polls under-represented Trump’s support, either because the pollsters didn’t sample the right voters or because Trump voters refused to be included in polls. Whatever the cause, the pollsters’ sampling was biased, which meant that many polls didn't accurately forecast the result.

For our conveyor belt example, the data might be just Japanese installations, or it might be Asian installations, or it might be worldwide. It might even be just data on broken belts, which introduces a form of bias called survivor bias. We need to know how the data was collected.

Thinking correlation = causation

Years ago, I had a physics professor who tried to beat into us students the mantra “correlation is not causation” and he was right. I’ve written about correlation and causation before, so I’m not going to say too much here. For causation to exist, there must be correlation, but correlation of itself does not imply causation.

To really convince yourself that correlation != causation, head on over to the spurious correlations website where you’ll find lots of examples of correlations that plainly don’t have an X causes Y relationship. What causes the correlation? Confounders, for example, population growth will lead to increases in computer science doctorates and arcade revenue.

Protections

Given all this, how can you protect yourself against the post-hoc fallacy? There are a number of methods designed to remove the effects of confounders and other causes of error.

Counterexamples

Perhaps the easiest way of fighting against post-hoc errors is to find counterexamples. If you think the rooster crowing causes dawn, then a good test is to shoot the rooster; if the rooster doesn’t crow and the dawn still happens, then the rooster can’t cause dawn. In the human lice example, finding a population of humans who were healthy and did not have lice would disprove the link between health and lice.

Control groups

Control groups are very similar to counterexamples. The idea is that you split the population you’re studying into two groups with similar membership. One group is exposed to a treatment (the treatment group) and the other group (the control group) is not. Because the two groups are similar, any difference between the groups must be due to the treatment.

I talked earlier about a fertilizer example: “I put fertilizer on my garden, three weeks later my garden grew, therefore the fertilizer caused my garden to grow”. The way to prove the fertilizer works is to split my garden into two equivalent areas, one area gets the fertilizer (the treatment group) and the other (the control group) does not. This type of agricultural test was the predecessor of modern randomized control trials and it’s how statistical testing procedures were developed.

RCTs (A/B testing)

How do you choose membership of the control and treatment groups? Naively, you would think the best method is to carefully select membership to make the two groups the same. In practice, this is a bad idea because there’s always some key factor you’ve overlooked and you end up introducing bias. It turns out random group assignment is a much, much better way.

A randomized control trial (RCT) randomly allocates people to either a control group or a treatment group. The treatment group gets the treatment, and the control group doesn’t.
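A toy simulation shows why random allocation works: assign people at random, apply a known treatment effect, and the difference in group means recovers it, whatever hidden factors are in the baseline. All numbers here are invented:

```python
import random

random.seed(42)

TRUE_EFFECT = 2.0  # invented treatment effect
# Baseline outcomes, with individual variation we don't control
population = [random.gauss(50, 10) for _ in range(10_000)]

# Randomly allocate each person to the treatment or control group
treatment, control = [], []
for baseline in population:
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT + random.gauss(0, 1))
    else:
        control.append(baseline + random.gauss(0, 1))

# Because allocation was random, the difference in means estimates the effect
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"estimated effect: {estimate:.2f}")  # close to the true effect of 2.0
```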

Natural experiments

It’s not always possible to randomly allocate people to control and treatment groups. For example, you can’t randomly allocate people to good weather or bad weather. But in some cases, researchers can examine the impact of a change if group allocation is decided by some external event or authority. For example, a weather pattern might dump large amounts of snow on one town but pass by a similar nearby town. One US state might pass legislation while a neighboring state might not. This is called a natural experiment and there’s a great deal of literature on how to analyze them.

Matching, cohorts, and difference-in-difference

If random assignment isn’t possible, or the treatment event happened in the past, there are other analysis techniques you can use. These fall into the category of quasi-experimental methods and I’m only going to talk through one of them: difference-in-difference.

Difference-in-difference typically has four parts:

• Split the population into a treatment group (that received the treatment) and a control group (that did not).
• Split the control and treatment groups into multiple cohorts (stratification). For example, we could split by income levels, health indicators, or weight bands. Typically, we choose multiple factors to stratify the data.
• Match cohorts between the control and treatment groups.
• Observe how the system evolves over time, before and after the treatment event.

Assignment of the test population to cohorts is sometimes based on random selection from the test population if the population is big enough.

Previously, I said random assignment to groups out-performs deliberate assignment and it’s true. The stratification and cohort membership selection process in difference-in-difference is trying to make up for the fact we can’t use random selection. Quasi-experimental methods are vulnerable to confounders and bias; it’s the reason why RCTs are preferred.
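The difference-in-difference estimate itself is just arithmetic on four group means: how much more the treatment group changed than the control group did over the same period. A sketch with invented numbers:

```python
def diff_in_diff(treat_pre: float, treat_post: float,
                 ctrl_pre: float, ctrl_post: float) -> float:
    """Treatment effect estimate: the treatment group's change
    minus the control group's change over the same period."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Invented example: both groups drift upward over time, but the
# treated group rises by 8 while the control group rises by 3.
print(diff_in_diff(treat_pre=100, treat_post=108, ctrl_pre=98, ctrl_post=101))  # 5
```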

Our fictitious example revisited

The Japanese distributor hasn't found cause and effect. They’ve found an interesting relationship that might indicate cause. They’ve found the starting point for investigation, not a reason to take action. Here are some good next steps.

What data did they collect and how did they collect it? Was it all the data, or was it a sample (e.g. Japan only, breakages only, etc.)? By understanding how the data was collected or sampled, we can understand possible alternative causes of belt breaks.

Search for counterexamples. How many temperature spikes didn’t lead to breaks? They found 287 cases where there was a break after a temperature spike, but how many temperature spikes were there? If there were 293 temperature spikes, it would be strong evidence that temperature spikes were worth investigating. If there were 5,912 temperature spikes, it would suggest that temperature wasn’t a good indicator.
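The arithmetic behind that argument: what matters is the fraction of spikes followed by a break, not the fraction of breaks preceded by a spike. Using the two hypothetical spike counts from above:

```python
def spike_precision(breaks_after_spike: int, total_spikes: int) -> float:
    """Fraction of temperature spikes that were followed by a belt break."""
    return breaks_after_spike / total_spikes

print(f"{spike_precision(287, 293):.1%}")   # ~98%: spikes look highly predictive
print(f"{spike_precision(287, 5912):.1%}")  # ~4.9%: spikes are a weak signal
```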

Look for confounders. Are there other factors that could explain the result (for example, the age of the belt)?

Attempt a quasi-experimental analysis using a technique like difference-in-difference.

If this sounds like a lot of work requiring people with good statistics skills, that’s because it is. The alternative is to either ignore the Japanese distributor’s analysis (which might be true) or implement a solution (to a problem that might not exist). In either case, the cost of a mistake is likely far greater than the cost of the analysis.

Proving causality

Proving cause and effect is a fraught area. It’s a witches’ brew of difficult statistics, philosophy, and politics. The statistics are hard, meaning that few people in an organization can really understand the strength and weaknesses of an analysis. Philosophically, proving cause and effect is extremely hard and we’re left with probabilities of correctness, not the certainty businesses want. Politics is the insidious part; if the decision-makers don’t understand statistics and don’t understand the philosophy of causality, then the risk is decisions made on feelings not facts. This can lead to some very, very costly mistakes.

The post-hoc error is just one type of error you encounter in business decision-making. Regrettably, there are many other kinds of errors.

The revolution is complete but I didn't notice

The other day I realized a market segment revolution had happened and I hadn’t noticed. There’d been a fundamental shift in the underlying technology and the change was nearly complete, to the point where very few new devices are based on the old technology. It's a classic case of technology disruption.

Batteries included

I was chopping up an old tree stump with an ax when a neighbor came over with his new chainsaw and offered to help. I gratefully accepted and he sliced up my large tree stump very quickly. Afterward, we got chatting about his new chainsaw; it was battery-powered.

(Not my tree stump, but it looked like this: allen watkin from London, UK, CC BY-SA 2.0, via Wikimedia Commons)

Frankly, I was astonished that a battery-powered chainsaw could chop up a tree stump this big and I said so. He told me the battery was good for more cutting if I had other trees to cut. He also told me he used the same batteries to power his lawn mower and he could cut his whole lawn (suburban New England) on one charge. I was taken aback; the last time I looked, battery-powered devices were a joke.

No more gasoline internal combustion engines

The next time I went to Home Depot, I had a look at their lawnmowers and garden equipment. Almost all the lawnmowers were battery-powered, including ride-on mowers. Almost all the hedge cutters, trimmers, and blowers were now battery-powered too. In the last few years, garden equipment that was only ever gasoline-powered has become almost entirely battery-powered.

The benefits are obvious: no storing gasoline, no pull starts, no winter maintenance, and so on. The only drawback I could see was battery price and power, but battery prices have fallen substantially at the same time as battery capacity has gone up. We crossed a usability threshold a while back and the benefits of battery power have led manufacturers to make the switch.

Two technologies have made this change possible: brushless motors and improved batteries. Everyone knows battery technology has improved, but brushless technology gets far less attention. Brushless motors are far more energy efficient, which means longer operation and/or more usable power for the same energy cost. They’ve been around for years but they rely on electronic control circuitry to work, which made them too expensive for all but specialist applications. However, the cost of electronics has tumbled, which meant cheaper brushless motors became possible. The garden equipment I saw all uses brushless motors, as do modern power tools, lawnmowers, and even snow blowers (see next section). It’s the combination of modern batteries and brushless motors that's led to a small revolution.

For home and garden devices, the ultimate test for battery power is a snowblower. For those of you who don’t know, these are a bit bigger than a lawnmower, they’re very heavy, and they have a powerful gasoline engine. To clear a big New England snow dump, you’ll need to use a big snowblower and maybe a gallon or more of gasoline. Here’s a picture of one in use.

(Image from https://www.wnins.com/resources/personal/features/snowblowersafety.shtml)

Snowblowers consume a lot of power. Is it even possible to have a battery-powered snowblower? Astonishingly, the answer is yes. There are at least two powerful battery-powered snowblowers on the market. You can see a video of one here.

These new snowblowers are a lot lighter than their gasoline cousins, they don’t need you to store gasoline, and they don’t require a pull start or an electric starter. The bigger two-stage snow blowers (which you need in New England) use two big brushless motors and 80V batteries.

There are downsides though: batteries only last about 40 minutes clearing heavy snow and battery snowblowers are about 20-25% more expensive. This feels like an early adopter market right now, but in a few years, battery snowblowers will probably be the market standard.

The revolution will not be televised

Batteries have taken over the garden equipment world. The revolution has succeeded but no one is talking about it.

There are a couple of lessons here and some pointers for the future.

It’s not just about better batteries. This garden revolution relied on brushless motor technology too. If we think about what's next for battery power or alternative energy, we need to think about enabling technologies; for example, solar panels need inverters to feed household AC systems, so advances in inverter technology are key.

Manufacturers had an innovation pathway that made the problem more tractable. Home and garden devices have a range of power requirements. Electric screwdrivers and drills don’t need that much power, blowers and strimmers need more, lawnmowers still more, and snowblowers most of all. Manufacturers could solve the problems of lower power devices before moving up the ‘power’ chain. This is similar to Clayton Christensen’s “innovator’s dilemma” model of disruption.

Battery garden devices will put high-powered batteries in people’s homes, but they’ll be lying idle most of the time. What about using these powerful batteries to smooth out spikes in power demand or provide emergency power? What about charging the batteries at night when power is cheap and using the batteries during the day when power is more expensive? The problem is the step change needed in home electricity management, but maybe some incremental steps are possible.

Other battery uses become possible too, for example, bigger motorized children’s toys, outdoor power away from electricity supplies, or even battery-powered boats. If powerful batteries are there, innovators will find a use for them.

Perhaps the next steps in home energy technology won’t be led by battery technology imported from cars but by battery technology imported from humble garden tools.

Misunderstanding Asia

Growing up in the UK, I never really understood Asia well. I heard the usual mix of opinions; that ‘they’ had developed their economies by adopting the free market, that there was something special about Asian societies that favored prosperity, and of course, that 'they' cheated and stole intellectual property.

(Dado, Public domain, via Wikimedia Commons)

Years ago, I visited South Korea, China, Japan, and Taiwan. Immediately, I realized that what I’d read and understood was mostly wrong or at best very distorted. Even worse, the popular narratives in the west were pretty useless for understanding what I saw and heard.

Recently, I read a very illuminating book, “How Asia Works” by Joe Studwell.  Studwell provides a much better model for understanding Asia than anything I’d read before and I’m going to provide a quick overview of Studwell’s ideas here. I recommend you read his book.

How Asia Works

Studwell divides Asia into two broad groups: the successful trinity of Taiwan, South Korea, and Japan, and everyone else. China is of course a special case, but similar in many ways to the successful trinity. He immediately does away with geography and culture as factors explaining why the trinity was successful and others were not. Instead, he focuses on the development policies they followed and how they executed them.

In his view, there are three key drivers responsible for the rise of Japan, South Korea, and Taiwan; agriculture, industry, and finance. Behind these three drivers, there were crucial policies that enabled these countries to rise, but perhaps more important than the policies was the disciplined execution behind them.

To set the scene, at the start of his narrative, all the countries were relatively poor with little industry. Each of them had a large population and each had the desire to develop and improve the lives of their people.

(Studwell's book.)

Studwell’s key insight into agriculture is the difference between productivity and efficiency. We can define productivity as the human consumable output per hectare and efficiency as the human consumable output per hour of human effort. Gardens are typically much more productive than farms at the cost of being more effort-intensive (less efficient). This is because gardeners plant their crops closer together and make better use of limited space, the price of which is the substantial human effort to maintain and harvest crops. Poor countries typically have lots of people they need to feed and little foreign exchange to pay for imported food. It makes a great deal of sense therefore to use their labor in highly productive agriculture, which usually means smallholdings.

This is Studwell’s first key policy insight. Encouraging smallhold farming requires land reform. When people work for themselves and their families, they’re much more motivated to produce than when they’re tenants on someone else’s property. Japan, South Korea, Taiwan, and China all pursued land reform, which involved redistributing land to smallhold farmers. In all cases, landlords took a beating and saw little compensation for losing their lands. In each case, the countries were disciplined and prevented landlords from re-establishing control. All four countries saw agricultural productivity rise sharply. Rising agricultural productivity meant reduced food imports, agricultural exports to generate foreign exchange, and a surplus that was used to create demand for industrial output.

Other countries in Asia tried land reform but allowed landlords backdoors to rebuild their property portfolios. Although productivity rose in these countries, it wasn’t anything like the rise in Japan, Taiwan, South Korea, and China. It seems like a disciplined approach to land reform is key.

Land reform also sheds some light on why the Soviet and Chinese experimentation with collective farming was a disaster; it destroyed the incentive for people to produce more for their families. Collectivization wiped out China’s agricultural productivity gains.

It’s also the first area where Studwell’s ideas depart from standard economics. Western free-market economics stresses property rights. Forcing landlords to sell their land at low prices is very much counter to key free-market thought.

Industry is the next step. Studwell makes the same observation that everyone else does; textiles are the usual industrialization starting point because the skill set needed is relatively low. After textiles come other low-skilled products with countries working their way up the value chain to cars and semiconductors. The successful countries placed high import tariffs to protect their infant industries from more advanced foreign competition (again, deviation from free-market doctrine). They made capital available at low-interest rates to encourage company formation and growth, but crucially, they created a highly competitive internal market with companies forced to compete against each other (but not against foreign competition). The key policy was a disciplined focus on exports. South Korea tied investment capital access to foreign export targets; if your company hit its export targets you could get money, if it didn’t, you wouldn’t get money. This ensured export-led growth. Bear in mind that well-developed export markets usually have higher standards than developing domestic markets, so this policy forces manufacturers to meet higher foreign standards right from the start. He gives the example of a car produced by Malaysia’s Proton that lacked airbags and other safety features required for foreign markets, meaning the car could only be sold in Malaysia, limiting sales. Cars require imported parts, so a car produced for domestic consumption only means a hit to foreign currency reserves.

(Assembly line at Hyundai Motor Company’s car factory in Ulsan, South Korea. User: Anonyme, CC BY-SA 3.0, via Wikimedia Commons)

Studwell made a comment that hit me between the eyes and woke me up. A highly competitive domestic market coupled with disciplined export-focused finance led to companies failing. Governments didn’t step in to prop up failing companies, rather they allowed the survivors to pick apart the carcasses of the dead companies. It’s not about governments picking winners, it’s about governments culling losers, but using a version of the free market to do so. Over the years in the UK, I’ve seen various attempts to build national champions in different segments, I can remember talk of “wasteful competition”, “world beaters”, and other rhetoric. It seems the successful Asian countries had a much better Darwinian survival-of-the-fittest approach. It’s cage-match economics, but it works.

The last part of Studwell’s trinity was finance; a disciplined approach to finance is what holds the entire thing together. In the agricultural stage, the goal is to finance smallhold farmers to enable them to buy fertilizer and the equipment they need to develop their farms. In the manufacturing stage, finance was tied to exports with very few exceptions. Disciplined finance becomes an extension of government development policy; countries that didn’t follow a disciplined path did not see the same level of investment. He points out that several countries used foreign investment to finance luxury real-estate developments that promised high short-term returns. Unfortunately, these types of projects don’t generate much foreign exchange and don’t offer long-term employment. The point is simple: don’t chase the highest returns, use finance to support strategic development initiatives. Once again, this runs counter to much free-market economic thought.

Studwell’s model explains much of what I heard in Asia, for example, it explains why joint ventures are usually structured in the way they are. It also helps explain why South Korea, Japan, and Taiwan used currency controls for as long as they did. Conversely, it explains why development in other parts of Asia was so stunted.

Where next?

For me, one of the benefits of reading the book was helping me shake off the intellectual straitjacket of western free-market economics. Successful Asian countries embraced some key free market ideas (“culling losers”) but rejected “the invisible hand” laissez-faire idea; governments very actively intervened in markets. It seems that development in the real world is not about intellectual purity but about what works.

The obvious questions for me are where next for the successful countries, will they continue with activist government intervention, and conversely, will the unsuccessful countries learn lessons from the winners? It left me thinking more broadly about the west, if we accept the premise that governments should intervene in markets, how could we improve life for people in the west?

Recycling waste in the garden and on the internet

My blog is supposed to be about technical and management issues, but today I'm going to write about composting. There are obvious 'humorous' comparisons with the technical world, most obviously about recycling ideas and rotting content, but beyond the obvious, there are lessons about material on the internet.

How it works for me

I have what's called a tumbling composter. It has two chambers. The idea is you fill one chamber with material to compost and while that's decomposing, you fill the other chamber. Complete composting takes a few weeks in summertime, a little longer in spring and fall, and stops almost completely in winter. You're supposed to rotate the drum every few days to aerate the compost. Each chamber gives about a wheelbarrow load of compost and you get several loads per chamber per year.

(My garden compost tumbler.)

Garden waste: a waste of time

The first lesson I learned is that it's hard or impossible to compost garden waste. In principle, garden waste is ideal, but in practice, there's so much of it that it overwhelms the compost mix and stops the decomposition process. You need a mix of materials for successful composting and garden waste is just too much of one thing.

Of course, the first thing I tried to compost was leaves and I learned they break down extremely slowly. A friend suggested I shred them first, but even then, the rotting process is slow. Leaves just aren't good for compost and you should dispose of them separately.

Sticks and branches decompose slowly too. If you're going to put woody material into the compost heap, you need to chop it up into small pieces first. Even then, they don't tend to rot completely.

If you look at the Amazon reviews of composting bins, you'll find multiple reviews from people who've stuffed their bins full of grass clippings, leaves, or other garden waste and they're complaining that it doesn't compost. They're publicly blaming the product instead of figuring out why they made a mistake. (First internet lesson: reviews and comments from people on the internet can be wrong and/or misinformed. The customer isn't always right, especially when they're writing reviews.) To make composters work, you have to mix your content.

Greens and browns: the golden ratio

Almost all composting websites talk about greens and browns and the correct ratio. Here's what they consider browns (the list varies from source to source):

• Dried grass clippings
• Woody plant material
• Pine needles
• Oats, grains, and feedstock
• Autumn leaves
• Oak leaves
• Sawdust
• Wood chips
• Straw and hay
• Uncooked pasta
• Shredded paper
Here's what they consider greens (again the list varies):
• Grass clippings
• Coffee grounds/tea bags
• Vegetable and fruit scraps
• Trimmings from perennial and annual plants
• Annual weeds that haven't set seed
• Eggshells
• Animal manure
• Seaweed

The correct ratio is something like 3 brown to 1 green, but the ratio varies from site to site and I've even seen it stated as 1 to 1. I try to stick roughly to a 3 brown to 1 green ratio, but it's never exact.
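For the arithmetic-minded, the ratio bookkeeping is trivial. Here's a minimal Python sketch; the 3:1 target is just the figure quoted above (and I've seen anything down to 1:1), so treat the numbers as illustrative assumptions, not composting science:

```python
# Rough helper for the ~3:1 browns-to-greens compost ratio.
# "Volume" here means any consistent unit, e.g. bucket loads.

def mix_ratio(browns: float, greens: float) -> float:
    """Return the browns-to-greens ratio of the current mix."""
    if greens <= 0:
        raise ValueError("need at least some green material")
    return browns / greens

def browns_needed(greens: float, target: float = 3.0) -> float:
    """Volume of browns to add for a given volume of greens at the target ratio."""
    return greens * target

# Example: 2 buckets of kitchen scraps (greens) at a 3:1 target
print(browns_needed(2))   # 6.0 buckets of shredded paper, dry leaves, etc.
print(mix_ratio(6, 2))    # 3.0
```

In practice I just eyeball it, but the helper makes the point: the greens are the limiting ingredient, and the browns scale off them.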

Initially, I found my composter gave balls or clumps of material. This is a well-known problem with tumbling composters like mine and is caused by the mixture being too wet and/or an insufficient amount of brown material. If your mix is clumping into balls, add more shredded paper, but mix it in thoroughly.

I've visited lots of sites to find details of the mix and what I should do. Strikingly, the writers all made similar statements about greens and browns and the ratio, but they never backed their assertions with science and they never linked to other resources. After a while, I realized I was seeing the same content over and over again; even though it wasn't an exact copy, it was so close it might as well have been. Many of the sites didn't read that well and contained a lot of repetition, which leads me to think they were being SEO'd to death. It also explains the lack of links: they want to keep people on the site. Overall, I visited a lot of low-quality sites that didn't say very much. There are a couple of internet lessons here:

• Wily marketers are out-smarting search engines and getting low-quality pages to score highly.
• Content is recycled from site to site with almost nothing informative added.
• Many sites with information on the home and garden are just junk sites with low-quality copied content.
• I'll still read the low-quality content because I'm looking for advice; the marketers' tactics are working.

Am I guilty of the same thing? I hope not. I'm trying to say something new, but then this is a hobby site and I'm not making any money out of it.

Paper and kitchen waste: a working combo

The thing that works wonders for me is kitchen waste coupled with shredded paper; this gives me the best compost and it decays quickly. There are some rules though.

• No meat or dairy. Rotting meat or dairy attracts animals. No one wants rats dining in their backyard. Don't do it.
• Rules for paper:
  • Whole pages take ages to decompose, so shred paper or tear it up into small strips.
  • Shredded paper from a shredder works well, but don't add it all at once as it tends to clump.
  • Don't include paper with bright metallic inks, waxed paper, or glossy or shiny paper.
  • Kitchen paper and similar paper will compost, but you have to tear it up into small strips.
  • No pizza boxes with meat waste on them (it's the animal thing again).
  • Cardboard will decompose well if you tear it up into small strips. It helps to soak it thoroughly first for several days. Adding too much cardboard can stop the decomposition process, so be careful about your mix.
• Coffee grounds and tea bags are great, but tear tea bags to speed decomposition.
• Add kitchen waste little and often rather than a lot at once. Chop up larger pieces (e.g. broccoli stems). Banana skins rot very quickly!

Blood and maggots

I used some kitchen paper to mop up blood from meat and threw the kitchen paper into the composter. A few days later, I saw maggots eating the blood-stained paper; but only the spots where the blood was. Gross, but fascinating. Maggots usually indicate you have animal products in your compost.

Starter mix

The composting process is mainly bacterial and the bacteria have to come from somewhere. To get started, I threw in several handfuls of soil from different parts of my garden. When I empty my composter, I don't remove all the compost; I leave some in so the decomposition process for the next load can get started.

I added worms to my bins too. I hope they like the paper and cardboard I'm putting in. I don't want to be cruel, even to worms.

How much waste?

Once food and paper rot, it takes up a lot less space. I've found that a nearly full compost chamber has a lot more space after a week or two when the contents have decayed a bit. The lesson here is that even when a chamber looks full, if you leave it a while, you can fit more waste in.

Wasps and rats

I'd heard blood-curdling stories of wasps setting up home inside rotating compost bins. In practice, that didn't happen to me, maybe because I rotated the bins every few days during the warmer weather. I can see if you left the bins alone for a week or so, it might be an attractive place for wasps to nest, after all, it's warm and dry. The moral is: don't neglect your compost!

Because I don't compost any animal products, I've never had a rat or raccoon problem.

Winter is coming - even for the compost heap

I found that decomposition stops in winter. Once my chambers were full up in late November, that was it until March. The advice I read was not to rotate the drum once the weather gets cold, the idea is that rotating the drum causes the compost to lose heat; if you keep the drum still, decomposition can go on a bit longer. Of course, once winter really set in, the chamber contents froze solid and after a while, the sliding chamber cover froze in place so I couldn't view the chamber contents anyway.

To keep my recycling going during winter, I filled up cardboard boxes with food and paper waste and waited for the spring to restart composting; of course, I composted the cardboard box too. Because I didn't throw out meat products, I didn't have any problems with animals.

The secret composting benefit: garbage reduction

I bought the composter to get rid of garden waste and found out that it wasn't good for that. What I found in practice was it was great for disposing of kitchen waste and paper. Using my composter, I've managed to reduce the amount of waste I throw out by several trash bags per year. Of course, I also get several wheelbarrow loads of compost per year. Overall, composting is both better for the planet and better for my garden.

Sadly, I found that it wasn't just my composter that was full of recycled material; it turns out that a lot of internet sites are too. Who knew.

Handwriting is the new typing

After many years of terrible handwriting (think spiders on LSD), I recently decided to improve it. I bought a book on handwriting and practiced, practiced, practiced. Along the way, I learned something about the writing experience; the choice of pen and ink matters. I'm going to share what I learned with you.

This post is all about ball pens within a reasonable price range; fountain pens are just too advanced for me, and I'm cheap.

What makes a good handwriting experience?

Early on, I discovered that the pen and ink you use make a big difference, not only to the quality of the result (legible handwriting) but also to the tactile pleasure of writing. I found the smoothness of the pen moving across the paper was important; some pens just glide across the page and are wonderful to use, while others skip and drag like taking a pet to the vet. Some otherwise great pens gave smooth and thick lines that bled through to the other side of the paper, while other pens gave precise narrowness at the expense of scratchiness. After some experimentation, I concluded that the thrill of the writing experience is governed by two things: the pen barrel and the refill.

For the pen barrel, its weight and the feel of the pen in my hand were the most important factors. As we'll see later, the weight of pens varies by almost an order of magnitude and I had very different writing experiences at either end of the scale. After many trials, I found I like heavier pens. The feel of the pen is harder to describe; I like pens with some form of special 'grip' or finger guide, but my favorite pen is all metal and smooth (I'm obviously not consistent). In the picture below, only the Pilot G-2 (2nd from top) and the Zebra Sarasa (3rd from top) have guides.

(Muji 0.38mm, Pilot G-2 0.7mm, Zebra Sarasa 0.7mm, AliExpress 0.5mm)

Refills for ball pens come in two general types: ballpoint ink and gel ink. Ballpoint ink is thicker and heavier but lasts longer, while gel ink is smoother on the paper but doesn't last as long. For a better writing experience, the choice for me is clear: gel ink. As a bonus, gel ink pens come in a rainbow of colors.

Gel ink refills (and pen refills in general) are like dogs: they come in a range of different sizes. There are international standards, but even within standards, the variation is great. The image below shows some refills which are all about the same length (110mm) and all about the same width (6mm). As you've probably guessed, some of these refills fit some pens and not others. Is there any way of knowing what size refill a pen will take? No. You just have to guess or buy the same refill that went into your pen.

The size of the ball on the refill is hugely important. Typically, gel refills have the following ball sizes:

• 1.0 mm bold
• 0.7 mm medium
• 0.5 mm fine
• 0.4 mm extra fine

The thicker the ball, the better the pen glides across the paper, but the cost is thicker lines which may lead to ink bleeding through to the other side of the paper. Thinner balls give more writing precision but can feel a bit scratchy and you have to be careful about the angle you use to write.

The other obvious factor to consider is the manufacturer. I tried M&G, Zebra, Muji, and Pilot. I found I liked the Muji 0.38mm refill for precision at the cost of a little scratchiness. Sadly, all of the Muji refills died partway through and I couldn't revive them (see below). I ended up using the Zebra and M&G refills, but I'll probably move to Zebra permanently soon (see below for why).

A few times, I've had the experience where a new refill stops working partway through. There are two closely related symptoms: it either stops writing altogether or it only writes in one direction. I've tried cleaning the tip with alcohol, putting the refill in hot water, and removing the nib to clean the insides with alcohol. Nothing worked. On the internet, I've heard stories of people using heat guns or naked flames to heat the refill nib; however, I've also heard stories of refills exploding when people do this kind of thing, so perhaps it's not a good idea. It's annoying, but refills typically cost around $1, so I just buy another refill and move on.

Different weights

I thought I liked heavier pens, but I wanted to be sure, and what better way for a nerd to be sure than to weigh his pens? I weighed all my pens without their refills to avoid differences due to the refills themselves. Here are the results.

• Muji Gel Ink Ball Point Pen: 6g
• Pilot G-2: 8g
• Zebra Sarasa: 23g
• AliExpress solid brass pen #1: 42g
• AliExpress solid brass pen #2: 43g

There's a 7x weight difference between the Muji and the AliExpress pens. I knew the Muji was light, but I didn't think it was that light.

Interchangeable refills - or not

My favorite pen was my $2 solid brass AliExpress pen, which takes M&G refills. M&G is a Chinese brand and unfortunately, it's recently become harder to get their refills in the US. I wondered if I could use the Zebra refills in my AliExpress pen. Sadly not. The M&G refills are slightly narrower than the Zebra refill and have a slightly different end. These differences are small, less than 1mm, but pens are precision instruments, and when something won't fit, something won't fit. I couldn't find a non-M&G refill that fit, so when I finish my last M&G refill, my $2 brass pen becomes a $2 brass stick.

But all is not lost. I actually bought two seemingly identical brass pens from AliExpress a few weeks apart. It turns out, the second one is ever so slightly different. Different enough that the Zebra refill fits.

I'm lost

Before the pandemic, I mislaid my $2 (actually$1.99) AliExpress brass pen at work. The office manager asked me what I was looking for and I told her "My one ninety-nine pen". She dropped everything to help me find it, which we did after a thorough search. Once we'd found it, she said it didn't look expensive and I said it was $1.99, not$199. She gave me a look that said "you're an idiot" and of course, she was right.

Different name, same thing

I've just read a book that's both inspiring from a business perspective and at the same time, deeply worrying from a society perspective. It's about public relations and propaganda. The kicker is that the book was published in 1928.

(Propaganda, Edward Bernays, 1928)

The author was Edward Bernays, who's generally regarded as the father of public relations and was and is a controversial figure. He was born in Vienna in 1891 and was Sigmund Freud's nephew - another example of the huge influence of the Freud family. In the 1890s, the family moved to the US, where he lived for the rest of his long life; he died in 1995 at the age of 103. During the First World War, he worked for a US government propaganda unit, where he learned most of the tools of his trade. In 1929, he successfully promoted smoking to women, and in the 1950s, he worked with the United Fruit Company and the CIA to topple the democratically elected government of Guatemala.

His 1928 book, Propaganda, outlines the theory behind public relations and gives details of how successful PR campaigns work. Although Bernays draws a distinction between propaganda and public relations, the line is very, very thin (if it's there at all). The book provides a psychological and sociological background for how PR works and even suggests that it's morally necessary for society to function. He then dives into the use of PR for commerce, politics, education, and so on, providing examples of successful campaigns and how they were orchestrated. He very clearly explains, in terms of psychology and sociology, why some influence approaches work and some don't. What's striking is how politicians and companies are still using these techniques today; it helps explain why some of our media are the way they are.

The book isn't an easy read. In my view, it's repetitive, overwritten, and lacks detail in many places. Bernays' moral justification for propaganda feels paper thin. But despite this, I recommend reading it, or at least reading a more recent book on propaganda, it's eye-opening.

The highlights

I'm not going to review the book in detail, instead, I'm going to give you some key quotes so you get a sense of what it says. You can decide for yourself if it's worth a trip to the library (or a click to download).

"In theory, every citizen makes up his mind on public questions and matters of private conduct. In practice, if all men had to study for themselves the abstruse economic, political, and ethical data involved in every question, they would find it impossible to come to a conclusion about anything."

In other words, people need PR to understand the world and form opinions about things.

"It has been found possible so to mold the mind of the masses that they will throw their newly gained strength in the desired direction. In the present structure of society, this practice is inevitable. Whatever of social importance is done to-day, whether in politics, finance, manufacture, agriculture, charity, education, or other fields, must be done with the help of propaganda. Propaganda is the executive arm of the invisible government."

Bernays talks a lot about the invisible government, these are the people who shape the thoughts and opinions of the masses.

"The mechanism by which ideas are disseminated on a large scale is propaganda, in the broad sense of an organized effort to spread a particular belief or doctrine."

"Small groups of persons can, and do, make the rest of us think what they please about a given subject."

"There are invisible rulers who control the destinies of millions. It is not generally realized to what extent the words and actions of our most influential public men are dictated by shrewd persons operating behind the scenes."

"The invisible government tends to be concentrated in the hands of the few because of the expense of manipulating the social machinery which controls the opinions and habits of the masses."

"Trotter and Le Bon concluded that the group mind does not think in the strict sense of the word. In place of thoughts it has impulses, habits and emotions. In making up its mind its first impulse is usually to follow the example of a trusted leader."

"The newer salesmanship, understanding the group structure of society and the principles of mass psychology, would first ask: "Who is it that influences the eating habits of the public?" The answer, obviously, is: "The physicians." The new salesman will then suggest to physicians to say publicly that it is wholesome to eat bacon. He knows as a mathematical certainty, that large numbers of persons will follow the advice of their doctors, because he understands the psychological relation of dependence of men upon their physicians."

"This point is most important in successful propaganda work. The leaders who lend their authority to any propaganda campaign will do so only if it can be made to touch their own interests. There must be a disinterested aspect of the propagandist's activities. In other words, it is one of the functions of the public relations counsel to discover at what points his client's interests coincide with those of other individuals or groups."

"Just as the production manager must be familiar with every element and detail concerning the materials with which he is working, so the man in charge of a firm's public relations must be familiar with the structure, the prejudices, and the whims of the general public, and must handle his problems with the utmost care. The public has its own standards and demands and habits. You may modify them, but you dare not run counter to them."

"The public is not an amorphous mass which can be molded at will, or dictated to. Both business and the public have their own personalities which must somehow be brought into friendly agreement."

"A sound public relations policy will not attempt to stampede the public with exaggerated claims and false pretenses, but to interpret the individual business vividly and truly through every avenue that leads to public opinion"

"Continuous interpretation is achieved by trying to control every approach to the public mind in such a manner that the public receives the desired impression, often without being conscious of it. High-spotting, on the other hand, vividly seizes the attention of the public and fixes it upon some detail or aspect which is typical of the entire enterprise."

"Present-day politics places emphasis on personality. An entire party, a platform, an international policy is sold to the public, or is not sold, on the basis of the intangible element of personality. A charming candidate is the alchemist's secret that can transmute a prosaic platform into the gold of votes."

"Propaganda will never die out. Intelligent men must realize that propaganda is the modern instrument by which they can fight for productive ends and help to bring order out of chaos."

Final thoughts

I can clearly see companies pursuing Bernays' PR strategies even today, and what's more, I can see why they're doing it and why they're successful. I can see the role of newspapers and magazines in shaping public preferences, and I can see how organizations are using social media in the same way. The same goes for politics.

It's nice to be idealistic about the future, but reading Bernays' book, I get the feeling people have been trying to manipulate me my entire life and that it's not going to stop.

Imitation is not the sincerest form of flattery

Prior to the pandemic, I wrote a thought piece on data science. It compared the work of data science to building Lego models and called back to some of my childhood memories of building Lego models with my brothers. I deliberately wrote it to have a slightly dreamy and nostalgic quality. I was very pleased with the finished piece and I referenced it from my LinkedIn profile. You can read it here: https://www.truefit.com/blog/Data-is-the-New-Lego.

The other day, I was thinking about this piece and did a Google search on it. I found someone had plagiarized it. They'd taken the whole article and replaced a few sentences with their 'own' work. They'd even used the same type of images I did. It was pretty much a word-for-word copy (to be clear: it's blindingly obvious this is a direct copy of my work). Of course, they didn't acknowledge my piece at all. What was truly galling was a comment someone had left calling the piece insightful. The plagiarist replied, saying they were glad the commenter liked it.

(Hariadhi, CC BY-SA 3.0, via Wikimedia Commons)

The plagiarist has several other pieces on Medium. I have no idea if they copied the other pieces too. They're studying data science and on their profile, they say they want to tell stories with data. Perhaps the biggest story they're telling is that they cheat and take credit for other people's work.

The borders of originality

In this case, the copying was a blatant lift of my work, but other cases are more difficult. There's a nuanced question of what counts as plagiarism and what doesn't. For example, many people have written stories about time machines after H.G. Wells; are they all guilty of plagiarism?

For me, the line is the story arc and ideas. If you're telling the same story as someone else and using the same ideas, you're on very thin ice. If you're using the same metaphors, similes, or allegories, then you've crossed the line. If you must tell the same story as someone else (and you really shouldn't), at least use your own imagery.

What have I done?

On the person's Medium post, I have called out their plagiarism and I've reported the piece as violating Medium's terms and conditions. It was posted in the "Towards Data Science" publication, so I complained to them too. The Towards Data Science team removed the author from their publication and reported the plagiarism to Medium; I filed a second report with Medium myself.

It also set me thinking about the interview process. I've looked at people's GitHub pages and their portfolios. Until now, it hadn't occurred to me that people might blatantly cheat. After this experience, I'm going to up my checks.

Small things reveal deeper truths

I was reading an old story on the internet and it struck me that there's something I could learn from it about diagnosing company culture. I'll tell you the story and show you how small things can be very revealing.

The Van Halen story

Here's a quote from David Lee Roth's autobiography, Crazy from the Heat, that tells the story.

"Van Halen was the first band to take huge productions into tertiary, third-level markets. We’d pull up with nine eighteen-wheeler trucks, full of gear, where the standard was three trucks, max. And there were many, many technical errors — whether it was the girders couldn’t support the weight, or the flooring would sink in, or the doors weren’t big enough to move the gear through. The contract rider read like a version of the Chinese Yellow Pages because there was so much equipment, and so many human beings to make it function. So just as a little test, in the technical aspect of the rider, it would say “Article 148: There will be fifteen amperage voltage sockets at twenty-foot spaces, evenly, providing nineteen amperes . . .” This kind of thing. And article number 126, in the middle of nowhere, was: “There will be no brown M&M’s in the backstage area, upon pain of forfeiture of the show, with full compensation.”

So, when I would walk backstage, if I saw a brown M&M in that bowl . . . well, line-check the entire production. Guaranteed you’re going to arrive at a technical error. They didn’t read the contract. Guaranteed you’d run into a problem. Sometimes it would threaten to just destroy the whole show. Something like, literally, life-threatening."

In other words, the no brown M&Ms clause was a simple compliance check that the venue had read the contract and taken it seriously. It was an easy test of much deeper problems.

(This would fail the test - there are brown M&Ms! Evan-Amos, Public domain, via Wikimedia Commons)

Tells

The brown M&Ms story shows that something simple can be used to uncover a fundamental and harder-to-check problem. The same idea appears in poker too: it's the idea that players have "tells" that reveal something about their hands. It occurred to me that over the years, I'd seen something similar in business. I've seen cases where companies have made sweeping statements about culture but small actions have given the game away. Unlike the Van Halen story, the tells are usually unintentional, but nonetheless, they're there. Here are some examples.

Our onboarding is the best, but we won't pay you

Years ago, I worked for a company that made a big deal of how great its onboarding was; the CEO and other executives claimed it was "industry-leading" and praised the process.

When I was onboarded, the company messed up its payroll and didn't pay me for some time, well past the legal deadline. When I asked when it would be resolved, I was told I should "manage my finances better". I later learned this was a common experience: many new employees weren't paid on time, and "manage your finances better" was the stock response. In one extreme case, I know someone who wasn't paid for over two months.

As it turned out, this was a brown M&Ms case. It indicated profound issues at the company and in particular with the executive team; they were too remote from what was going on and they really weren't interested in hearing anything except praise. It took me and others a long time to discover these issues. The brown M&Ms should have warned us very early that something was quite broken.

I'm too important to talk to you

At another company, a new C-level executive joined the organization, and there was a long announcement about how great they were and how they exhibited the company values, one of which was being people-centric. I reported into the new person's organization.

One day, early on in their tenure, the new C-level person visited the office I was working at. They walked straight by me and my team without stopping to say hello. During the week they were with us, they didn't meet or talk with any of us. They even managed to avoid being in the break room at the same time as the little people (and people tried very hard to meet the new executive). On that visit, the new C-level person didn't meet or say hello to anyone below vice-president level. Later on, they gave a talk to their organization that included a discussion of the necessity of connecting with people and how it was important to them.

I didn't see many of their other actions, but this was very definitely a brown M&M moment for me. I saw trouble ahead and left the company not long after, and I wasn't the only one.

Candies: going, going, gone

My last example is actually about candy.

I worked for a company that provided candy and snacks. It was very proud that what it provided was top quality, and I agreed; it really did provide great treats. The company presented top-quality candy and snacks as a way of showing how much it valued its employees; we were told that we got the best because we were valued.

You can probably guess what happened next. The snacks and candy went from well-known brands to own-label brands, while the company insisted that nothing had changed. After a few months of own-label brands, the candy and snacks stopped altogether, and the company never said a word. A number of other things happened too, including worse terms and conditions for new employees (less leave, etc.), more restrictions on travel, and fewer corporate lunches, but these were harder to see. The company had started valuing its employees less, and the treats and candies were only the most visible of several actions that took place at the same time; they were the canary in the coal mine.

What can you do?

Small issues can give you a clue that things are deeply broken in hard-to-detect ways. You should be on the lookout for brown M&M moments that give you advance warning of problems.

As an employee, these moments provide insight into what the company really is. If the brown M&M moment is serious enough, it's time to think about employment elsewhere, even if you've just started.

As an executive, you need to be aware that you're treated differently from other people. You might not experience the brown M&M moment yourself, but people in your organization might. Listen to people carefully and hear these moments; use them to diagnose deeper issues in your organization and fix the root cause. Be aware that this is one of the few moments in your life you might get to be like David Lee Roth.