Superforecasting: The Quest for Hyper Precision in Cyber Risk Assessment (Part I)

This is the first in a series of three articles on cybersecurity risk assessment challenges and solutions.

Since the commercial sector hired its first chief information security officer (CISO) back in 1994, network defenders have built a poor track record of translating technical risk into business risk for senior executives and board members. We have been good at identifying problem areas in our infrastructure and processes that bad actors might exploit, but from the very beginning we have struggled, badly at times, to turn those technical issues into something the board could understand and act on with appropriate urgency.

It wasn’t for a lack of effort or commitment. We certainly tried. But we’ve been using the wrong tools for the wrong reasons for the wrong problems, so we shouldn’t be surprised that our risk assessments have been imprecise. To paraphrase author/statistician/pundit Nate Silver, we’ve delivered too much noise and not enough signal.

Network defenders have long used tools like heat maps and qualitative risk matrices when talking to executives and the board. In hindsight, we should never have been comfortable with the precision these tools appeared to offer, and our organizations often paid the price for it. These and other tools were handy for organizing information, and they provided some very nice eye candy for board members who were either too trusting of our knowledge or not savvy enough to ask the right questions.

And, as a cybersecurity community, we did ourselves no favors by insisting that cyber risk is somehow more special than the other risks that senior leaders make decisions about every day. By cloaking cyber risk in the language and actions of fear, uncertainty and doubt, we contributed to senior executives’ failure to fully embrace the CISO’s role as a true C-level peer.

After 25 years of wrestling with these issues, it’s clear that we need a better way. But before we jump ahead to the solution, let’s talk some more about the problems and challenges of measuring risk using the wrong tools and techniques.

The Quest for Hyper Precision

Several recent books—including “How to Measure Anything in Cybersecurity Risk,” co-written by Richard Seiersen, one of the co-authors of this article—shed important light on why our legacy approaches to cybersecurity risk assessment were flat-out wrong, and what the implications were for ensuring good cybersecurity health.

The bottom line to all of these books is that network defenders can convey the threat and impact of cyber risk with far more precision than we have done in the past. And, even more importantly, we can do so in a language that senior leaders can understand and act on, in much the same way they evaluate any other organizational risk.

Imagine you’re in an executive staff meeting, and your VP of sales is asked about revenue projections for the coming quarter. Now, imagine he or she replies with something like, “Well, it’s pretty good, although we’re iffy about hitting the goal.” Or if your CFO is asked about anticipated earnings per share, and they tell you, “Uh, it could be a dollar a share… or possibly off a bit.”

You wouldn’t stand for this. In fact, you’d be inclined to fire that executive on the spot.

That’s what we’ve been doing in making risk assessments on cybersecurity threats and conveying the risks and implications to our business leaders. And that’s because we’ve often used tools that are, by definition, far too imprecise.

Catchy, but Imprecise Measuring Tools  

Many risk officers use heat maps or matrices to convey risk to senior leaders, identifying potential threats to the business and the likelihood that those events will occur. In our own careers, the authors of this article have gotten away with listing the 150-plus technical things that could seriously harm the organization if something went wrong. We would diligently label each entry as high, medium or low, both for the likelihood of the event happening and for its relative impact on the organization if it did occur. The result was an eye-catching, colorful depiction of relative safety or danger, which we all used to convince leadership to spend money to reduce the temperature displayed on the heat map.
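To make the imprecision concrete, here is a minimal sketch of the kind of ordinal scoring a heat map encodes. The labels, color grid and example entries are hypothetical illustrations, not taken from any real risk register:

```python
# A toy qualitative risk matrix of the kind described above.
# Labels and the 3x3 color grid are illustrative assumptions.

LIKELIHOOD = {"low": 0, "medium": 1, "high": 2}
IMPACT = {"low": 0, "medium": 1, "high": 2}

# COLORS[likelihood][impact] -> heat-map cell color
COLORS = [
    ["green",  "green",  "yellow"],  # low likelihood
    ["green",  "yellow", "red"],     # medium likelihood
    ["yellow", "red",    "red"],     # high likelihood
]

def heat_map_color(likelihood: str, impact: str) -> str:
    """Return the heat-map color for a qualitatively rated risk."""
    return COLORS[LIKELIHOOD[likelihood]][IMPACT[impact]]

# Two hypothetical register entries land in the same red cell, even though
# their real probabilities and dollar losses could differ by orders of
# magnitude -- the ordinal labels discard that information entirely.
print(heat_map_color("high", "medium"))  # red
print(heat_map_color("medium", "high"))  # red
```

Note what the ordinal scale cannot express: a "high" likelihood might mean 30 percent to one analyst and 90 percent to another, and two "red" cells give leadership no way to rank the risks against each other or against a budget.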

If anybody challenged our rating scheme or our assumptions in assessing risk and impact, we would have been hard-pressed to provide a satisfactory answer. If somebody wanted us to explain why one entry was red compared to yellow, we might have said something like, “Blah, blah, blah…25 years of experience…trust us…give us the money.”

Now, however, the books we mentioned earlier and other important bodies of evidence and research have made it clear that qualitative heat maps are imprecise tools to convey technical risk to business leaders.

And it’s not just our measurement tools that are askew. Many of our long-standing and deeply held assumptions about probability, statistical sampling and forecasting in general need to be rethought. If we don’t rethink them, and if we don’t move away rapidly from imprecise risk assessment tools like heat maps, we will not only fail to help our organizations reduce risk, we will actually increase it, because our assumptions will be far off base.

But what do we do instead? That’s the subject for our second article on cybersecurity risk assessment.

Editor’s note: This article was adapted from a technical paper presented by the authors at the 2019 RSA Conference. 

Rick Howard is chief security officer at Palo Alto Networks; David Caswell is head of the computer and cyber sciences department at the U.S. Air Force Academy; and Richard Seiersen is co-author of “How to Measure Anything in Cybersecurity Risk.”
