Black Swans, Normal Distributions and Supply Chain Risk

This is the latest in a monthly series of articles addressing supply chain risk issues, in conjunction with our sponsors riskmethods, who provide technology that helps firms monitor supply chain risk, minimise its impact and take proactive action to mitigate it.

The expression “Black Swan” may feel as though it has been around forever, but its current meaning comes from economist, trader, author and academic Nassim Nicholas Taleb, who popularised it in his 2007 book of the same name: an unexpected event that few predict (but one that is often rationalised later and seems much more explicable with hindsight).

He used it to describe certain events in the world of economics and finance that have had a major impact on the global political or economic situation; events that virtually none of the eminent forecasters, academics and professionals saw coming.

In the book, Taleb not only explains some of the psychology behind black swan events, but also describes some of the mathematics that underpins his thinking. One problem he identifies is that experts often make assumptions about the underlying world they are trying to explain. One way they do this is by assuming that observations come from a particular statistical distribution – which is then used to predict future events.

So the normal distribution – often called the “bell curve” – is probably the most commonly used probability distribution. But if you assume that it is a good explanation for what you are observing, you are making certain inherent assumptions about the probability of future events.

If I play golf three times a week, after a year I have a set of observations (my scores for each round) that will almost certainly follow a normal distribution pattern. The most frequent scores may well be around 100 strokes for the round, with the vast majority falling between 95 and 105. One or two might be under 90, and a handful over 110 unfortunately!

We could then use those past scores to predict the future pretty successfully. We might say, for instance, that the chances of me shooting under 80 or over 120 on any given day are virtually nil, which would be an accurate assumption. Indeed, using analytical methods, we could work out the probability of my next round falling between any two given numbers – 100 and 105 strokes, perhaps.
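For the curious, the sums behind that claim are easy to run. Here is a minimal sketch in Python – with an assumed mean of 100 strokes and standard deviation of 5, rather than my real scorecard – of how probabilities fall out of a fitted bell curve:

```python
# Sketch only: the mean and standard deviation are illustrative
# assumptions, not real golf data.
from scipy.stats import norm

scores = norm(loc=100, scale=5)   # fitted bell curve: mean 100, sd 5

# Probability the next round falls between 100 and 105 strokes
p_between = scores.cdf(105) - scores.cdf(100)

# Probability of an "extreme" round: under 80 or over 120 (4 sigma away)
p_extreme = scores.cdf(80) + scores.sf(120)

print(f"P(100 <= score <= 105) = {p_between:.3f}")    # about 0.341
print(f"P(score < 80 or > 120) = {p_extreme:.6f}")    # about 0.000063
```

Under those assumptions, roughly a third of rounds land between 100 and 105 strokes, while the chance of a sub-80 or 120-plus round really is virtually nil.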

But take another personal example. If I drive into London every day for a couple of months, and note the time taken, the average journey might be 100 minutes. A few days might be as low as 90 minutes and on a bad day it might be 110 or even a little more.

Now if we made the same assumption about the underlying probability distribution as in our golf example, our model would suggest that the chances of the journey taking over 120 minutes are virtually nil. Yet we know intuitively that this is not true. It only takes a breakdown on the Chiswick flyover, and the journey could easily be 120 minutes or considerably more.
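A quick simulation shows the size of that gap. This is a sketch built on assumed numbers – typical days averaging 100 minutes with a standard deviation of 7, plus a 2% chance of a major incident adding 30 to 90 minutes – not a real traffic model:

```python
# Sketch only: all parameters here are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Model 1: a pure bell curve fitted to typical journeys
p_normal = norm(loc=100, scale=7).sf(120)

# Model 2: the same typical days, plus a 2% chance of a breakdown
# ahead adding 30-90 minutes to the journey
incident = rng.random(n) < 0.02
times = rng.normal(100, 7, n) + incident * rng.uniform(30, 90, n)
p_mixture = (times > 120).mean()

print(f"P(journey > 120 min), normal model  : {p_normal:.4f}")   # ~0.002
print(f"P(journey > 120 min), with incidents: {p_mixture:.4f}")  # ~0.02
```

The bell curve puts a two-hour journey at around one day in 500; allow for the occasional breakdown and it is more like one day in 50.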

As Taleb pointed out, other real-world examples show observations that don’t follow the normal distribution (bell curve) when we might expect that they would. The distribution of the earnings of authors or rock stars, for instance, has more observations at the high end than you would expect, and those numbers can be way off the expected scale. If you estimated the shape of the underlying probability distribution by examining the earnings of five thousand authors, and J.K. Rowling then happened to be number five thousand and one, she would mess up your model completely!

It is true that many natural and human-made phenomena follow a normal distribution. That has led forecasters and others who need to consider the future to use it, almost by default, as a model for what they are observing. But that can lead to some big mistakes. For example, with a normal distribution the chances of extreme events are very, very low. In our driving-time example above, the chance of the extreme outcome looks very small based on the model, when in fact it is not.

Another important point about the true normal distribution is that once you have a reasonable number of observations, making one more does not change the assessment of the underlying distribution significantly. But that is not true for other distributions – as per our Rowling example above. As Taleb says, almost all social matters are from what he calls “Extremistan” – this world where “inequalities are such that one single observation can disproportionally impact the aggregate or the total”.
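The contrast is easy to demonstrate with made-up numbers. In the sketch below, the heights stand in for “Mediocristan” and the heavy-tailed earnings – plus one assumed £100m, Rowling-scale figure – for “Extremistan”:

```python
# Sketch only: heights, earnings and the £100m figure are all
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

heights = rng.normal(175, 10, 5000)        # cm: "Mediocristan"
earnings = rng.lognormal(9, 1.5, 5000)     # pounds: heavy-tailed

# One extra observation barely moves a normal-world average...
print(f"Mean height  : {heights.mean():.2f} -> "
      f"{np.append(heights, 210).mean():.2f} cm")

# ...but a single Rowling-scale observation transforms the earnings mean
print(f"Mean earnings: {earnings.mean():,.0f} -> "
      f"{np.append(earnings, 100_000_000).mean():,.0f}")
```

One very tall person among five thousand shifts the average height by hundredths of a centimetre; one mega-bestselling author can nearly double the average earnings.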

Or it may be that things have changed, and we are working from old observations. We might still be looking at a normal distribution, but its shape might have changed and that has not been understood. On British TV recently, we saw an interview with a lady from the north of England. “When my house got flooded in 2014, they told me it was a once-in-100-years event. Then it happened again in 2015!” she complained.

That sounds like changes in weather patterns have blown that old probability model out of the window – and into the floods! Perhaps climate change means the “norm” in terms of volume or severity of rain has changed, and our victim should expect floods once every ten years or even more often. The model is out of date.
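To put rough numbers on that, here is a toy calculation – every rainfall figure in it is an assumption – showing what a modest shift in the underlying distribution does to a “once in 100 years” flood:

```python
# Sketch only: both rainfall models are illustrative assumptions.
from scipy.stats import norm

old = norm(loc=100, scale=20)    # assumed old annual-peak-rainfall model
threshold = old.ppf(0.99)        # flood level with a 1% annual probability

new = norm(loc=115, scale=25)    # assumed modest shift in mean and spread
p_new = new.sf(threshold)

print(f"Flood threshold        : {threshold:.1f}")
print("Old annual probability : 1.0% (once in 100 years)")
print(f"New annual probability : {p_new:.1%} (once in {1 / p_new:.0f} years)")
```

A shift of less than one old standard deviation in the mean turns a once-a-century flood into roughly a once-a-decade one – exactly the pattern our flooded householder experienced.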

Why does any of this matter? Because it helps explain why “experts” often fail to predict major risk events, or estimate their probability wildly inaccurately. The financial crash and crisis of 2008 stemmed in part from very clever people in financial institutions assessing the probability of a particular combination of events as vanishingly small, when it turned out it wasn’t.

So, understand that estimates based on underlying probability distributions are only as good as the assumptions made about that distribution. Reading “The Black Swan” (highly recommended – not the easiest read, but engrossing nonetheless) also suggests we have a tendency to underestimate the chances of rare events.

That applies to supply chain disruption too, whether through volcanoes or labour disputes, as much as it does to the journey time to work or the earnings of authors. Events that seem unlikely, surprising, or virtually impossible do happen, more often than we expect, and our risk analysis, mitigation approaches and management actions should bear this in mind.

First Voice

  1. RJ:

    As someone who is in the process of having the flood defences behind their home upgraded, I can relate to this!

    It also needs to be read in conjunction with Daniel Kahneman’s “Thinking, Fast and Slow”, which explores how, in general terms, we seem to have a natural tendency towards risk aversion, especially in relation to extreme events. In both, though, the key issue for business professionals is to understand the true probability of risks occurring and the impact that will be caused – far too often we go with “gut feel” rather than taking the time and effort to question and analyse the data.
