As part of my day job, I get to talk with a wide range of organizations across many different industries, and annual budget season is just kicking off. This year I’ve seen two intersecting trends: first, a growing willingness (resignation?) among business owners to pay much more attention to security, and to pay more for it; and second, a demand for some way to measure the effectiveness of that spend. As one business leader asked, how do you tell the difference between an effective program and just plain luck?
It’s a tough question for a number of reasons. After all, we have to be good 100% of the time, while our adversaries only need one solid success to undo all our work. Teams are reporting raw counts of attacks thwarted, malware remediated, time to discover, time to remediate, records lost, and so forth. Those are all important, but they only show activity, not effectiveness, and none of them really gets to the cyber-economic case for the security investment.
The formal answer is that we should spend no more than the annualized loss expectancy for the asset involved: ALE = Single Loss Expectancy * Annual Rate of Occurrence. Sounds great, and you’re all set to pass the CISSP exam, but is that possible in the real world? At the moment, I’d argue no. While we do have good data for some factors – the Ponemon Cost of a Data Breach study just came out (though the Anthem breach settlement will skew the next one), and we have a pretty good handle on the daily grind of malware, phishing, and compromised accounts – the worst incidents don’t fall into those categories.
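To make the formula concrete, here’s a minimal back-of-the-envelope sketch. The asset value, exposure factor, and occurrence rate are entirely hypothetical numbers chosen for illustration, not figures from any breach-cost study.

```python
# Textbook ALE calculation: ALE = SLE * ARO, where SLE = asset value * exposure factor.
# All figures below are hypothetical placeholders, not real breach-cost data.

def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """Return the annualized loss expectancy for a single asset/threat pair."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical example: a customer database valued at $2M, where a breach is
# assumed to wipe out 40% of that value, expected roughly once every four years.
ale = annualized_loss_expectancy(
    asset_value=2_000_000,
    exposure_factor=0.40,
    annual_rate_of_occurrence=0.25,
)
print(f"ALE: ${ale:,.0f}")  # ALE: $200,000 – a rough ceiling on annual spend for this asset
```

The arithmetic is trivial; the problem, as the next paragraph argues, is that the annual rate of occurrence is essentially unknowable for the events that matter most.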
In both severity and frequency, we are lurching from one black swan event to another. You know, those company-jeopardizing, class-action-attorney-enriching breaches. I gave a keynote on cognitive security recently and attendance was down because NotPetya hit that morning. No one predicted it, and there’s no way to predict when, what, or how the next one will hit. Our best estimate is that things are getting worse, not better – more sophisticated, less frequent, more impactful – but as the SEC is fond of reminding us, past performance is no guarantee of future success, or failure. We simply can’t predict the future. Anyone who says differently should quit their security job and go work in the stock market.
All of that makes the CISO’s life painful through the budgeting process – do you get more money if you were hit by Petya/WannaCry/NotPetya because you had insufficient capability, or do you get fired for blowing last year’s budget on an ineffective program? That’s one of the reasons most CISOs are focusing heavily on incident response, not just detection and prevention. They’re also starting to step back and ask themselves whether they’re getting good value for the investment – particularly from niche tools, or ones focused on yesterday’s threat (like signature-based antivirus). As for the budget itself, what I see most organizations using is a combination of baseline no-brainer capabilities, regulatory requirements, and peer best practices to find the sweet spot for the ‘commercially reasonable measures’ budget target. Right now, keeping up with the Joneses is a common target. Or, to put it another way, we don’t have to be faster than the bear – we just have to be faster than the next guy running.