Superforecasting: The Quest for Hyper Precision in Cyber Risk Assessment (Part II)

By Rick Howard, David Caswell and Richard Seiersen

This is the second in a series of three articles on cybersecurity risk assessment challenges and solutions.

In the first installment of this series, we peeled back the covers on the problem of imprecise cyber risk assessment, a problem rooted largely in reliance on ineffective and incomplete risk assessment tools and techniques. Heat maps and other forms of qualitative risk assessment can no longer adequately prepare organizations for the mounting wave of cyber threats, threats that pose ever greater peril to our organizations, employees, customers, partners, industries and communities.

Instead of continuing to rely on bad science when advising senior executives and board members on risk, network defenders should consider more data-driven approaches that dramatically improve risk assessment accuracy and executive confidence. These include techniques such as Bayesian probabilistic algorithms and Monte Carlo simulations, which can be combined to build latency curves.

Bayes? Monte Carlo? Latency curves? Don’t worry, we’ll make those issues clearer for you in a moment.

Superforecasting Improves Risk Assessment Precision

One of the pioneers of statistical sampling and probability analysis was Thomas Bayes, who in the 1740s worked out the formula now known as Bayes’ theorem, well before the era of integrated circuits, local-area networking and the Internet. Bayes developed a mathematical relationship that allows a researcher to weigh the probability of new evidence against previous knowledge; in Bayesian terms, the “likelihood” of the evidence updates the “prior.”
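In modern notation, Bayes’ theorem captures that relationship in one line; this is standard textbook notation, where H is a hypothesis (say, “we will suffer a material breach this year”) and E is newly observed evidence:

    % Bayes' theorem: updated ("posterior") belief in hypothesis H given evidence E
    % P(H)     -- the prior: what we believed before seeing the evidence
    % P(E | H) -- the likelihood: how probable the evidence is if H is true
    \[
      P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
    \]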

Risk assessments are never easy, but they are a lot easier when you have tons of data available to analyze. In the real world, however, data is often scant, and leaders worry about potential events that have never happened. Bayesians began experimenting with simulations to answer questions for which data did not exist. That is very helpful in the cybersecurity space, where new threat vectors pop up every day and zero-day attacks are increasingly common and problematic.
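To make the updating step concrete, here is a minimal Bayesian sketch in Python using a Beta-Binomial model. Every number in it is an illustrative assumption, not real incident data:

    # Minimal Beta-Binomial sketch of a Bayesian update.
    # All figures are illustrative assumptions, not real incident data.

    # Prior belief about the chance of a "bad month" (a month with at least
    # one successful intrusion): roughly 1 in 10, encoded as Beta(1, 9).
    prior_alpha, prior_beta = 1.0, 9.0

    # New evidence: 2 bad months observed over the last 12 (hypothetical).
    months, bad_months = 12, 2

    # Updating a Beta prior with Binomial evidence is just addition:
    # alpha accumulates the "hits," beta accumulates the "misses."
    post_alpha = prior_alpha + bad_months
    post_beta = prior_beta + (months - bad_months)

    prior_mean = prior_alpha / (prior_alpha + prior_beta)
    post_mean = post_alpha / (post_alpha + post_beta)

    print(f"prior estimate of a bad month:      {prior_mean:.1%}")  # 10.0%
    print(f"posterior after a year of evidence: {post_mean:.1%}")   # 13.6%

As evidence accumulates, the posterior drifts away from the subjective starting point toward what the data actually shows.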

Fast forward to a more modern era: World War II. Dr. Stanislaw Ulam, working with his legendary colleague Dr. John von Neumann, drew on the card game solitaire for a key insight: when a probability is too hard to compute directly, estimate it by playing the game many times and counting the outcomes. They called their technique, based on the idea of using simulation to confirm theory in modern statistics, the Monte Carlo method.
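Here is that idea in a minimal Python sketch, applied to a cyber question rather than a card game: instead of deriving the probability that a year’s losses blow past a budget, simulate many hypothetical years and count. The per-month incident probability and the loss distribution are illustrative assumptions:

    import random

    # Monte Carlo in miniature: play the "game" many times and count outcomes.
    # All parameters below are illustrative assumptions.
    random.seed(7)
    TRIALS = 50_000
    BUDGET = 1_000_000  # annual loss budget, in dollars

    over_budget = 0
    for _ in range(TRIALS):
        # One simulated year: each month carries an assumed 30% chance of an
        # incident, and each incident costs a lognormally distributed amount.
        incidents = sum(random.random() < 0.30 for _ in range(12))
        annual_loss = sum(random.lognormvariate(11, 1) for _ in range(incidents))
        if annual_loss > BUDGET:
            over_budget += 1

    print(f"Estimated P(annual loss > ${BUDGET:,}) = {over_budget / TRIALS:.1%}")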

Using the combination of Bayes’ concepts and the Monte Carlo method, researchers and risk experts can more precisely determine the probability of something like a cyber attack happening by adding new data as it becomes available. And if we know anything about cybersecurity, it’s that new data is constantly becoming available.

It is this combination of Bayesian statistical sampling and Monte Carlo simulation that lets security professionals and risk assessors replace imprecise, subjectively biased heat maps with a more representative set of latency curves: graphs that plot the time between when an event occurs and when a relevant action is taken. In short, these more precise and quantifiable statistical approaches provide far more accurate and actionable risk assessments because they are continually reshaped by the inclusion of new, real-time data in the probability analysis.
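A hedged sketch of how the two techniques combine to trace such a curve, again with made-up numbers: draw the daily detection probability from a Bayesian posterior rather than fixing it, simulate the days until an intrusion is detected, and tabulate how often that latency exceeds each threshold:

    import math
    import random

    # Sketch: Bayes supplies an uncertain parameter, Monte Carlo propagates it.
    # The Beta(3, 12) posterior below is an illustrative stand-in for a real
    # posterior over the daily chance of detecting an active intrusion.
    random.seed(11)
    TRIALS = 50_000

    latencies = []
    for _ in range(TRIALS):
        p = random.betavariate(3.0, 12.0)  # Bayes: sample the posterior
        u = 1.0 - random.random()          # uniform draw in (0, 1]
        # Monte Carlo: days until first detection follows a geometric law in p.
        days = max(1, math.ceil(math.log(u) / math.log(1.0 - p)))
        latencies.append(days)

    # Points on the latency curve: probability detection takes longer than t days.
    for t in (1, 7, 14, 30, 60):
        prob = sum(d > t for d in latencies) / TRIALS
        print(f"P(detection latency > {t:>2} days) = {prob:.1%}")

Each threshold-probability pair is one point on the curve, and because the posterior is resampled on every run, the curve shifts automatically as new incident data updates the underlying beliefs.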

And there’s even more good news. Not only do these “superforecasting” methods yield more precise and actionable data, they also give executives a single chart that displays the inherent risk to the organization and whether it exceeds the board’s risk appetite.

Risk Reduction Strategies

There are many tactical things risk officers can do to reduce risk, but four main strategic approaches stand out:

  1. Avoid the risk. You might decide that, given the probability and impact of the cyber risk, executing the current plan is not worth the potential damage to the business. Just don’t do it.
  2. Transfer the risk. In the 2000s, hackers targeted retail stores because retailers stored customer credit card information on site. To reduce that risk, retailers started using third-party payment processors to store customer data.
  3. Mitigate the risk. People, processes or technology can be used to reduce the chances that a hacker will successfully penetrate security defenses. You can hire more and smarter people, establish better security practices, install new technology to thwart cyber adversaries or do some combination of the three.
  4. Accept the risk. If you decide the risk is not material, you can move the Risk Tolerance Curve (the risk your organization is willing to take over some timeframe) under the Inherent Risk Curve (the actual risk your organization is exposed to); a sketch of that comparison follows this list. You might want to consider buying cybersecurity insurance as part of your risk acceptance, but at least you’ve done your homework modeling the relationship between risk tolerance and actual risk.
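Picking up the comparison promised in item 4, here is a minimal Python sketch in which both curves are reduced to probability-of-exceedance figures at a few loss thresholds. Every number is an illustrative placeholder, not output from any real model:

    # Compare an Inherent Risk Curve against a Risk Tolerance Curve at a few
    # loss thresholds. All figures are illustrative placeholders.

    # P(annual loss exceeds threshold), e.g., output of a simulation like
    # the ones sketched above.
    inherent_risk = {100_000: 0.60, 500_000: 0.25, 1_000_000: 0.10, 5_000_000: 0.02}

    # The board's stated appetite: the exceedance probability it will accept.
    risk_tolerance = {100_000: 0.50, 500_000: 0.30, 1_000_000: 0.05, 5_000_000: 0.01}

    for loss, actual in inherent_risk.items():
        tolerated = risk_tolerance[loss]
        verdict = "within appetite" if actual <= tolerated else "OVER APPETITE"
        print(f"loss > ${loss:>9,}: inherent {actual:.0%} vs. tolerance {tolerated:.0%} -> {verdict}")

Wherever the inherent figure exceeds the tolerated one, the organization is carrying more risk than the board has agreed to accept.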

Risk officers should constantly reevaluate their risk strategies against various factors, including alignment with management’s and the board’s risk appetite. Which strategy is selected will depend on factors such as the cost of implementing a new strategy, the potential impact of new threats and a combination of regulatory, governance, competitive and customer issues.

As this suggests, conveying potential risk, and the alternative mitigations that could reduce it, is not entirely straightforward. It isn’t as if you can feed a program raw data and the computer will spit out an answer, at least not yet. There is a bit of an art to it. But we can use algorithms, based on Bayes and Monte Carlo, that will help keep things straight.

What’s Next? Get Ready for the Board Meeting

So, now that we’ve tackled the shortcomings of legacy risk assessment techniques and how we should modernize our approach, what do we do with this more precise view of risk? And how do we prepare for the discussion with board members? That’s the subject of our third article on cybersecurity risk assessment.

Editor’s note: This article was adapted from a technical paper presented by the authors at the 2019 RSA Conference. 

Rick Howard is chief security officer at Palo Alto Networks; David Caswell is head of the computer and cyber sciences department at the U.S. Air Force Academy; and Richard Seiersen is co-author of “How to Measure Anything in Cybersecurity Risk.”

End Points

  • Qualitative risk assessment practices like heat maps should be replaced by more data-driven models such as Bayesian algorithms and Monte Carlo simulation.
  • Using these approaches, network defenders can provide executives and board members with a “superforecasting” model that is easily viewed and understood in a single graph.
  • Risk officers need to continuously adapt their risk strategies to align with new threat data, new market conditions and shifting executive appetites for risk.
