It’s a dangerous world, and yet there’s a spate of recent studies showing that people and companies continue to take risks with their cybersecurity. The truth is that we all have to take risks – there is no such thing as complete safety. The question is, how do we decide what risk to take?
All too often the answer is that we take risks by default: without any explicit decision, we accept the status quo through simple inaction. As individuals, folks fail to use password managers and click on links in email. Corporations fail to implement good security instrumentation and analytics. And both individuals and companies fail to apply patches in a timely manner. Most often that’s due to a lack of awareness or budget, or an unwillingness to make tradeoffs between usability, cost, and security. There’s a joke among fire protection vendors that the best time to talk to a prospective customer is right after the building across the street burns down. From a cybersecurity perspective, while it’s not quite Dresden after the Allied bombing, big chunks of the city are in ashes, and yet the inertia continues.
I recently wrote a post suggesting that folks ask ‘why’ when their toaster asks for internet access. That’s a plea to assess the risk and actually make a decision rather than accept it by default. It requires backing our desire to manage risk with good supporting processes: creating a culture of risk awareness and authority, establishing a clear risk workflow, and, most importantly, building a security program that responds with ‘how’ instead of ‘no’.
Risk awareness begins with identifying data owners for the critical information: someone who has business responsibility (legal, regulatory, ethical) for the assets in our organization. A Chief Risk Officer can help identify those owners and collaborate with them on decisions, but often doesn’t have the business acumen necessary to fully evaluate the tradeoffs. From there, risk awareness has to permeate the organization, down to individual staff and developers. To paraphrase Ian Malcolm in Michael Crichton’s Jurassic Park, everyone needs to stop focusing on whether we could, and start asking whether we should. That’s a mentality shift our security professionals can help folks make, if we provide proper support.
Key to that support is an established process for identifying stakeholders and owners, weighing risk against benefit, and making the decisions. That process needs to accommodate situations with no clear owner or line of authority; I’ve seen inertia reassert itself the moment it becomes difficult to figure out whom to ask. We need a default path, and an overall process that returns answers promptly. Getting risk answers must be easy.
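To make that concrete, here’s a minimal sketch of what a default path might look like, assuming a simple in-house intake tool. The owner registry, asset names, and five-day SLA are all illustrative, not a prescription.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative registry mapping critical data to its business owners.
OWNERS = {
    "customer-data": "VP Customer Success",
    "financials": "CFO",
}

DEFAULT_OWNER = "Chief Risk Officer"  # the default path when no owner is known
ANSWER_SLA = timedelta(days=5)        # answers have to come back promptly

@dataclass
class RiskRequest:
    asset: str      # what the requester wants to touch
    proposal: str   # e.g. "share quarterly reports with outside auditors"
    requester: str
    opened: date = field(default_factory=date.today)

def route(request: RiskRequest) -> tuple[str, date]:
    """Return the owner and the answer-due date. Unknown assets still get
    an owner, so inertia never wins by making it hard to know whom to ask."""
    owner = OWNERS.get(request.asset, DEFAULT_OWNER)
    return owner, request.opened + ANSWER_SLA

# No registered owner for this asset, so it falls to the default path.
request = RiskRequest("marketing-lists", "sync contacts to a new CRM", "j.doe")
owner, due = route(request)
print(f"Route to {owner}; answer due by {due}")
```

The details don’t matter; what matters is that every request gets an owner and a deadline, even when nobody is sure who should decide.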
Those answers should very rarely be ‘no’. This is something that security folks, particularly those from a compliance background, really struggle with. We have to avoid making snap judgments based on our innate low risk tolerance or on assumptions about budgets (and the willingness to spend them). Here’s a great example: users need to share files securely outside the company. The security team says ‘no’ to Box, Dropbox, or OneDrive, either because of a lack of perceived control or because they assume the business won’t fund a corporate account. So folks ignore the policy, and we end up with hundreds or thousands of users on Box, Dropbox, Google Docs, or OneDrive anyway, because they have to be able to run the business. The better response is ‘yes, and here’s how to do it safely, and how much it costs’. Better yet, we should present several options and their costs for the users to choose from (including the cost of a breach!). Business stakeholders generally make smart decisions when presented with reasonable options, and they are much better at following, and finding money to support, policies they helped craft.
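Here’s a rough sketch of what ‘several options and costs, including the cost of a breach’ can look like on paper. The breach exposure uses the classic annualized loss expectancy formula (ALE = ARO × SLE); every number below is invented for illustration.

```python
# Annual rate of occurrence (aro) times single loss expectancy (sle)
# gives the annualized breach cost; add the license fee for the total.
OPTIONS = {
    "unsanctioned sharing": {"license": 0,      "aro": 0.30, "sle": 500_000},
    "corporate Box":        {"license": 60_000, "aro": 0.05, "sle": 500_000},
    "corporate OneDrive":   {"license": 45_000, "aro": 0.08, "sle": 500_000},
}

def expected_annual_cost(option: dict) -> float:
    """License cost plus annualized loss expectancy (ARO x SLE)."""
    return option["license"] + option["aro"] * option["sle"]

# Present every option with its full expected cost; the business picks one.
for name, option in sorted(OPTIONS.items(), key=lambda kv: expected_annual_cost(kv[1])):
    print(f"{name:22s} ${expected_annual_cost(option):>10,.0f} expected per year")
```

Framed this way, even the ‘do nothing’ option has a visible price tag, which is exactly the point.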
So that’s our duty: create a risk culture, surface and expose the risks, then present options and facts. At that point the business, not security, makes the decision. For a profession full of control freaks, that can be hard to do (it took me years to really learn it). It’s even harder when they make what we believe is a bad choice, because we know we’ll still get blamed if there’s an incident, or at least have to clean it up (I sometimes think the CISSP logo should be a mucking shovel). As an aside, that’s why a key part of the process is good, written documentation with formal signoffs.
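For illustration, here’s a minimal sketch of what such a record might capture; the field names and example values are mine, not any standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    risk: str                     # what was accepted, in plain language
    options_presented: list[str]  # the choices and costs security offered
    decision: str                 # what the business chose
    decided_by: str               # the accountable business or data owner
    signed_off: date              # formal signoff date
    review_by: date               # acceptances should expire, not linger

record = RiskAcceptance(
    risk="External file sharing for customer deliverables",
    options_presented=["corporate Box", "corporate OneDrive", "no external sharing"],
    decision="Corporate Box with audit logging enabled",
    decided_by="VP Customer Success",
    signed_off=date(2017, 6, 1),
    review_by=date(2018, 6, 1),
)
```

When the choice is written down and signed, the ‘fully informed’ part stops being a matter of memory.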
As long as we make sure it’s a fully informed choice, we can go home and sleep well at night. After all, we don’t own risk. We own risk awareness.