# Micro-Case #4: All Risks are Relevant – What Are You Inadvertently Tolerating?

### Background

In a recent Micro-Case we showed how Monte Carlo simulations can be used to more effectively understand the risks and exposures in decision-making processes. Since then, we have applied the concept to Risk Matrix Frameworks, which has proven enlightening: the approach keeps the risk matrix's simple inputs while illustrating the true level of underlying risk.

### Risk Matrix Modelling

It is common to see risk management methodologies and frameworks
(e.g. CIS RAM, COSO) rely on categorical measures (e.g. High, Medium, Low)
to classify the Risk and/or Impact of an unfavorable *Event*. The
appeal is that simple language and math can help organizations define the appropriate
risk/impact levels, but the trade-off for this input simplicity is that the
output is complex and the true risk profile is lost in the noise.

The terminology and subtleties may change, but essentially, a
risk profile is usually made up of **x** risk items, classified
across **y** likelihood categories and **z** impact
levels. However, the number of unique combinations grows quickly and becomes
impossible to evaluate clearly. For x = 5, y = 3 and z = 3, there are 1,287
possible result-sets (unordered assignments of the items across the nine
likelihood-impact cells) from just five risk items.
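The count of result-sets is a combinations-with-repetition calculation: x items distributed over y × z likelihood-by-impact cells. A quick check of both figures used in this article (the function name is ours):

```python
from math import comb

def result_sets(x, y, z):
    """Unordered assignments of x risk items to y*z likelihood-by-impact
    cells: C(x + y*z - 1, x), i.e. combinations with repetition."""
    return comb(x + y * z - 1, x)

print(result_sets(5, 3, 3))    # 1287
print(result_sets(100, 3, 3))  # 352025629371 -- the "352 billion" below
```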

So, in search of an efficient and intuitive way to evaluate the profile, it is not uncommon to focus on the high-risk / high-impact combinations rather than try to parse a risk parameter space with 1,287 or more points. However, assessing only the “larger” risks does not adequately account for the lower (but very real) levels of risk.

In this Micro-Case, we show how a better understanding of **all** inherent risks can be obtained with some of the methods and visualizations that we are currently deploying to better articulate risk.

### Beyond Traditional Risk Frameworks

For this article, we assume a framework with 100 risk items, 3 risk levels and 3 impact levels (therefore 352 billion possible result-sets). We want to demonstrate a “Medium Risk” result, so we randomly generate a dataset with roughly a 50% chance of *Low* (Level 1), a 40% chance of *Medium* (Level 2) and a 10% chance of *High* (Level 3) for each of Risk and Impact. Our (artificial) 100 risk items are as follows:

Our data has a mean of 1.54 for Risk and 1.61 for Impact. The heatmap (with jitter to slightly separate identical Risk/Impact points) is shown below and represents a fairly common, medium-health profile, with 22 items scoring (Risk × Impact) 4 or more, of which 8 score greater than 4.
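One way to generate and score such a dataset is sketched below. The seed is arbitrary, so this draw will not reproduce the article's exact means or counts, only the same distribution:

```python
import numpy as np

rng = np.random.default_rng(4)  # arbitrary seed; results will vary

levels = [1, 2, 3]          # Low, Medium, High
probs  = [0.5, 0.4, 0.1]    # ~50% / 40% / 10%, per the article

risk   = rng.choice(levels, size=100, p=probs)
impact = rng.choice(levels, size=100, p=probs)

score = risk * impact       # Risk x Impact, ranges from 1 to 9
print(f"mean Risk {risk.mean():.2f}, mean Impact {impact.mean():.2f}")
print(f"{(score >= 4).sum()} items score 4 or more, "
      f"{(score > 4).sum()} score more than 4")
```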

Let’s say we consider *Low* risk to be less than a 5% chance of an *Event* happening in the next 12 months, *Medium* to be 5–25%, and *High* to be anything over 25%. We can now assign probabilities to the Risk categories above and simulate a possible year with our defined risk/impact parameters. Naturally this is hypothetical, but it is also very plausible given the risk/impacts, and we end up with a future 12 months that might see a series of *Events* as per below.
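A sketch of this step follows. The seed, the uniform within-band draws, and the 50% upper bound for *High* are our assumptions (the article only defines the band edges); each item is assigned an annual *Event* probability from its band, and one year is simulated:

```python
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed for reproducibility

# Annual Event-probability bands per Risk level; the 0.50 cap on
# High is our assumption (the article only says "over 25%").
bands = {1: (0.00, 0.05), 2: (0.05, 0.25), 3: (0.25, 0.50)}

risk   = rng.choice([1, 2, 3], size=100, p=[0.5, 0.4, 0.1])
impact = rng.choice([1, 2, 3], size=100, p=[0.5, 0.4, 0.1])

# Each item draws an annual probability uniformly within its band.
p_annual = np.array([rng.uniform(*bands[r]) for r in risk])

# One simulated year: which items' Events occur, and on which day.
occurred = rng.random(100) < p_annual
days     = rng.integers(1, 366, size=100)  # day of year, if it occurs

for day, lvl, p in sorted(zip(days[occurred], impact[occurred],
                              p_annual[occurred])):
    print(f"Day {day:3d}: Impact Level {lvl} (annual probability {p:.1%})")
```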

Our simulated year, with our defined risk/impact parameters, experienced 10 *Events*, with one of Level 3 impact around day 47. The boxes show the probability assigned to each risk item, based on its Risk Level. Although most risks were considered “Low” (53/100), we still see a Level 2 impact *Event* on day 161 that had only a 0.7% chance of occurring. Thus, our simulation has shown what a year based on our risk profile *might* look like. Clearly the organization is exposed to more impactful *Events* than is probably acceptable, and almost certainly more than was anticipated given the “Medium” expectation we had when creating the dataset. This is a considerably more interpretable representation of risk than a 100-line risk matrix.

### Event Probability Distributions

However, this is just one randomly sampled year, and the above count of 10 *Events* is itself subject to random variation. To address this, we run 10,000 one-year simulations and draw conclusions from the full distribution.

Below, we create a probability distribution by counting the simulated number of *Events* (including zero) for each year, at each Impact level (shown across the x-axis). We then compute the proportion of the 10,000 simulated years in which each *Event*-count occurs, and the resulting probability is shown on the y-axis.
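A sketch of the full experiment, vectorized across the 10,000 one-year simulations (the probability bands, seed, and within-band uniform draws are our assumptions, as before):

```python
import numpy as np

rng = np.random.default_rng(11)  # arbitrary seed
n_sims, n_items = 10_000, 100

bands  = {1: (0.00, 0.05), 2: (0.05, 0.25), 3: (0.25, 0.50)}
risk   = rng.choice([1, 2, 3], size=n_items, p=[0.5, 0.4, 0.1])
impact = rng.choice([1, 2, 3], size=n_items, p=[0.5, 0.4, 0.1])
p_annual = np.array([rng.uniform(*bands[r]) for r in risk])

# occurred[s, i] -- did item i's Event happen in simulated year s?
occurred = rng.random((n_sims, n_items)) < p_annual

for lvl in (1, 2, 3):
    counts = (occurred & (impact == lvl)).sum(axis=1)  # Events/year at lvl
    values, freq = np.unique(counts, return_counts=True)
    mode = values[freq.argmax()]                       # most common count
    print(f"Impact Level {lvl}: most common yearly count {mode}, "
          f"99th percentile {np.percentile(counts, 99):.0f}")
```

Dividing each `freq` by `n_sims` gives the y-axis probabilities of the distribution described above.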

As can be seen, given the categorization of risks in our Risk Matrix, we should realistically expect a handful of *Events* each year, with the most common count at each Impact level being: **Level 1:** 3, **Level 2:** 2 and **Level 3:** 1.

Even more enlightening is the possibility of a right-tail year, where the number of *Events* at each Impact level could be: **Level 1:** 11, **Level 2:** 9 and **Level 3:** 5.

Thus informed, we can make some educated decisions around what is acceptable or not. What we thought was a “Medium” Enterprise Risk level looks to have been considerably understated.

### Conclusion

It may come as a surprise that our perceived *Medium* level of risk is in fact riskier than envisioned. This is because a large number of risk items, even at small individual probabilities, will produce a non-zero number of *Events*. Traditional approaches to evaluating the billions of combinations inherent in a risk matrix tend to ignore small (or even medium) probability risks and focus on high-probability risks, because those are easily identified and understood. It is in this way that companies are frequently tolerating more risk than they believe. By generating a probability distribution of *Events*, the true risk can be better communicated and understood.
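The arithmetic behind the "many small probabilities" point is simple. Even if, purely for illustration, every one of the 100 items sat at an assumed 2.5% annual probability (the midpoint of our *Low* band), an *Event*-free year would be rare:

```python
n, p = 100, 0.025   # 100 items, illustrative 2.5% annual probability each

expected_events = n * p          # mean yearly Event count
p_quiet_year = (1 - p) ** n      # chance of a year with zero Events

print(expected_events)           # 2.5 Events expected per year
print(round(p_quiet_year, 3))    # 0.08 -- only ~8% chance of no Events
```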

Consumers of risk models understandably find it difficult to grasp their real exposure when presented with a risk matrix and pie/column charts. By applying a probability distribution to each categorical risk level, and then simulating outcomes based on the risk profile, we clarify the real risk exposures. And while a simulation can only ever be an approximation of the underlying risk, it shines a glaring light on an organization's real exposure.

Questions on this topic? Contact Paul Newton, Director of Business Analytics at pnewton@cviewllc.com.