Superforecasting: The Quest for Hyper Precision in Cyber Risk Assessment (Part III)

This is the third in a series of three articles on cybersecurity risk assessment challenges and solutions.

In our first and second installments, we looked at the challenges associated with traditional cyber risk assessment methods, as well as some alternative approaches that we feel are better suited for today’s hypercharged cybersecurity risk environment.

Now, let’s take a look at what business leaders, in concert with their chief information security officers, should be doing to prepare board members for modernizing risk assessment and giving them a more precise and actionable cybersecurity game plan.

Throw Out Those Heat Maps

As a best practice, it’s time to jettison the heat map as the primary tool for demonstrating and illustrating risk to senior leadership. In the first article in this series, we discussed the bad science underlying the construction of heat maps, so we naturally believe retiring them is a sensible thing to do. But what should replace the heat map when your CISO presents to the board?

We propose that risk officers use “loss exceedance curves” to frame risk decisions for senior leaders in an easily consumable manner. In his essay “Communicating Risk,” Jay Jacobs defines loss exceedance curves as graphs that visually display the probability that loss will exceed some amount within some period of time. These graphs use three lines—the Inherent Risk Curve, the Risk Tolerance Curve and the Residual Risk Curve—to show the board the point at which existing risk exceeds what the organization feels it can tolerate.
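To make the idea concrete, here is a minimal sketch of how a loss exceedance curve can be computed from simulated annual losses. The lognormal distribution and its parameters are purely illustrative assumptions on our part, not a recommendation; in practice the loss data would come from your own risk model:

```python
import random

random.seed(7)

# Hypothetical simulated annual losses in dollars. A lognormal
# distribution is a common choice for heavy-tailed loss data;
# the parameters here are purely illustrative.
losses = [random.lognormvariate(13.0, 1.5) for _ in range(100_000)]

def exceedance_probability(losses, threshold):
    """Estimate P(annual loss >= threshold) from simulated outcomes."""
    return sum(1 for x in losses if x >= threshold) / len(losses)

# Evaluate the loss exceedance curve at a few thresholds.
for t in (1_000_000, 10_000_000, 40_000_000):
    print(f"P(loss >= ${t // 1_000_000}M) = {exceedance_probability(losses, t):.1%}")
```

Plotting that probability for every threshold yields the Inherent Risk Curve; the same construction applies to the residual curve once mitigations are modeled.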

Of course, anyone reading this article is familiar with the notion of surfacing “material” findings, whether you’re talking about things that impact financial projections, cyber threats or any other outcome that has strong potential to create a meaningful (material) impact on the organization’s performance. Defining what “material” means for your organization is essential in plotting the risk appetite for the board.

For instance, suppose your loss exceedance curve shows a 100% chance of losing $1 million or more over the next three years. Clearly, anything under a $1 million loss is not material to the organization; it is just another cost, like paying a salary or purchasing raw materials. Now suppose the graph shows that the board is willing to accept a 10% chance that the organization loses $10 million or more in the same time frame. Ten million dollars is a lot to any organization, but at those odds, it is an acceptable risk. Finally, there’s the scenario where the board is not willing to accept any risk of losing $40 million or more, regardless of the probability. In this example, we can see that the board views any loss above $40 million as material to the business.

In fact, any place on the graph where the Inherent Risk Curve rises above the Risk Tolerance Curve represents a potential material loss to the organization. Using the loss exceedance curve, that will be demonstrated as clear as day for the board.
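As a sketch of that comparison, the board’s tolerance points from the example above can be checked against a hypothetical inherent-risk curve: any threshold where inherent risk exceeds tolerance flags a material gap. The inherent-risk numbers below are illustrative assumptions, not data from any real assessment:

```python
# The board's tolerance points from the example above: (loss threshold,
# maximum acceptable probability of losing that much or more).
risk_tolerance = {1_000_000: 1.00, 10_000_000: 0.10, 40_000_000: 0.00}

# A hypothetical inherent-risk curve at the same thresholds, e.g. read
# off a simulation (values are illustrative).
inherent_risk = {1_000_000: 1.00, 10_000_000: 0.18, 40_000_000: 0.02}

for threshold, tolerated in risk_tolerance.items():
    actual = inherent_risk[threshold]
    verdict = "MATERIAL GAP" if actual > tolerated else "within appetite"
    print(f"${threshold // 1_000_000}M+: inherent {actual:.0%} "
          f"vs tolerance {tolerated:.0%} -> {verdict}")
```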

The Power and Beauty of Algorithms

Throughout this series of articles, we’ve sought to demonstrate that quantitative risk assessment methods are far superior to the qualitative methods traditionally used to evaluate risk. As business leaders and risk officers prepare their board presentations, how can they apply a quantitative risk philosophy to their own environment to present a more accurate picture of risk to the board?

We believe you can—and should—use a custom algorithm to give the board a more complete and accurate portrayal of risk. In building this algorithm, we believe you need to follow a specific sequence:

  1. Determine the current cost of the existing security programs.
  2. Estimate the cost after the “boom” (the breach event).
  3. Build the Inherent Risk Curve.
  4. Build the Risk Tolerance Curve.
  5. Overlay those curves.
  6. If inherent risk is less than risk tolerance, do nothing (for now).
  7. If inherent risk is greater than risk tolerance, develop your strategy for reducing risk.
  8. Build the Residual Risk Curve.
  9. Calculate the cost of the new strategy.
  10. If the cost of the residual risk plan is greater than the risk tolerance curve, start over. (If you’re building a solution that costs more than the organization’s risk tolerance, you need to rethink it.)
  11. For each board meeting, repeat.
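The decision logic in steps 5 through 10 can be sketched as a simple loop. The curve representation (a mapping from loss threshold to exceedance probability) and the `build_mitigation` callable are our own illustrative assumptions, not a prescribed implementation:

```python
def exceeds_tolerance(risk_curve, tolerance_curve):
    """True if risk exceeds the board's tolerance at any loss threshold.
    Both curves map a loss threshold to P(loss >= threshold)."""
    return any(risk_curve[t] > p for t, p in tolerance_curve.items())

def board_cycle(inherent, tolerance, build_mitigation):
    """One pass through the sequence: overlay the curves, and if inherent
    risk exceeds tolerance, evaluate a mitigation plan's residual risk."""
    if not exceeds_tolerance(inherent, tolerance):
        return "do nothing (for now)"
    residual, plan_cost = build_mitigation(inherent)
    if exceeds_tolerance(residual, tolerance):
        return "start over: residual risk still exceeds tolerance"
    return f"adopt plan costing ${plan_cost:,.0f}"

# Illustrative curves: P(loss >= threshold) over the planning horizon.
tolerance = {1e6: 1.00, 10e6: 0.10, 40e6: 0.00}
inherent  = {1e6: 1.00, 10e6: 0.25, 40e6: 0.05}

# A hypothetical mitigation strategy: its estimated residual curve
# and its cost. Numbers are for illustration only.
plan = lambda curve: ({1e6: 0.40, 10e6: 0.05, 40e6: 0.00}, 2_000_000)
print(board_cycle(inherent, tolerance, plan))  # -> adopt plan costing $2,000,000
```

The cost comparison in step 10 would plug in where `plan_cost` is returned, once the organization has translated its tolerance into a budget figure.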

After 25 years of trying to convey technical risk to senior leaders, it is obvious to us that the network defender community has been doing it all wrong.

Reams of research show that using traditional qualitative risk matrices—the infamous heat map—to convey business risk to the board is just bad science.

It is time for network defenders—supported, backed and promoted by business leadership—to turn to better, more scientific methods that have proven to work in other areas. Where problems seem intractable and a large set of established outcomes does not yet exist, new methods are needed, and we have detailed some of them.

Specifically, statistical sampling and Monte Carlo simulations can—and must—replace heat maps with a more representative, statistically based set of loss exceedance curves that easily illustrate the inherent risk to the organization, compared to the board’s risk appetite.
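As one hedged illustration of what such a simulation might look like, the sketch below draws a Poisson number of incidents per year with lognormal dollar severities—a common frequency/severity structure—and estimates exceedance probabilities from the simulated years. Every parameter here is an assumption chosen for illustration; in practice they would come from calibrated expert estimates and historical data:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Poisson-distributed draw via Knuth's algorithm."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def simulate_annual_loss(freq=2.0, sev_mu=12.5, sev_sigma=1.2):
    """One simulated year: a Poisson count of incidents, each with a
    lognormal dollar severity. All parameters are illustrative."""
    return sum(random.lognormvariate(sev_mu, sev_sigma)
               for _ in range(poisson(freq)))

# Monte Carlo: simulate many years, then read off exceedance probabilities.
years = [simulate_annual_loss() for _ in range(20_000)]
for threshold in (1_000_000, 10_000_000):
    p = sum(1 for y in years if y >= threshold) / len(years)
    print(f"P(annual loss >= ${threshold // 1_000_000}M) = {p:.1%}")
```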

Editor’s note: This article was adapted from a technical paper presented by the authors at the 2019 RSA Conference.

Rick Howard is chief security officer at Palo Alto Networks; David Caswell is head of the computer and cyber sciences department at the U.S. Air Force Academy; and Richard Seiersen is co-author of “How to Measure Anything in Cybersecurity Risk.”
