Is Machine Learning Part of Your Security Strategy?

In the world of security perimeter defenses, more is not necessarily better. This is particularly true of threat detection, where a tool that surfaces 90 million possible threats a week is no more helpful than one that surfaces 9 million. Indeed, from a signal-to-noise perspective, those additional discoveries may actively work against security by making those 2,000 actual attack attempts harder to find. This is the much-dreaded alert-fatigue dilemma.

This is the problem that machine learning, and unsupervised machine learning in particular, aimed to solve. The premise was that unsupervised ML would quickly learn a network's normal patterns and, thereafter, instantly recognize a true threat and distinguish it from the ever-present background noise of a large corporate network.
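To make the premise concrete, here is a minimal sketch of that unsupervised approach using scikit-learn's IsolationForest: the model is fitted only on unlabeled "normal" traffic and then flags statistical outliers. The feature choices (payload size, request rate, distinct ports) and all traffic values are illustrative assumptions, not drawn from any real dataset or product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Unlabeled baseline traffic, one row per observation:
# [payload_kb, requests_per_min, distinct_ports] -- illustrative features
normal_traffic = np.column_stack([
    rng.normal(120, 15, 500),   # typical payload sizes
    rng.normal(30, 5, 500),     # typical request rates
    rng.integers(1, 4, 500),    # a few ports per host
])

# Fit with no labels; contamination is the assumed outlier share
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A port-scan-like observation: tiny payloads, high rate, many ports
suspicious = np.array([[2.0, 400.0, 60.0]])

# predict() returns -1 for outliers and 1 for inliers
print(model.predict(suspicious))
print((model.predict(normal_traffic) == 1).mean())
```

In this sketch the anomalous observation is scored as an outlier while almost all of the baseline traffic passes as normal, which is the behavior the premise describes: separating the rare true threat from routine noise without labeled attack data.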

The hiccup with this theory is that unsupervised ML perimeters share a weakness with many antivirus systems: to learn the pattern of a serious attack, the system generally must be successfully victimized by that attack method at least once. But attack methods evolve, so as long as cyber criminals keep developing new techniques, ML defenses will never be absolute.

Still, can ML be more effective than manual human alternatives? Often, the answer is “yes.”

But first, CISOs and CSOs must understand where ML works best and where it doesn’t.