Doug Lhotka

Technical Storyteller

Business stakeholders need the full story

August 16, 2018 By Doug


There’s a lot of talk about aligning security programs with business or functional goals, but in practice that’s much easier “powerpointed” than done. The business consequences of security decisions, and the security consequences of business decisions, are all too often missed or ignored in the broader context, sometimes even deliberately. As Obi-Wan said to Luke, “What I told you was true, from a certain point of view”.

Let me share a couple of examples to frame this conversation.

Security ignoring functionality.  The TSA is studying reducing security at smaller airports to refocus the spend at larger facilities.  The plan would be to do minimal screening initially, then rescreen passengers when they arrive at a larger airport.  Critics and defenders jumped into the fray: critics arguing that it reduces security for part of the system and that attackers would simply target those facilities, defenders arguing that it’s a reasonable cost tradeoff given limited resources.

The problem is that both camps are viewing this through a narrow, security-only lens and miss the broader impact: it would require massive infrastructure investment at airports and break the business model of most of the major airlines.  Rescreening passengers from feeder airports would force every connection to stretch by another hour, raising operating costs, and airports would have to be reconfigured to add internal screening checkpoints.  The total economic cost would far exceed the projected $115M in TSA budget savings.

Functionality ignoring security.  Let’s look at autonomous vehicles.  Don’t get me wrong, the folks developing those systems do have an awareness of some of the security risks, but they’re, again, focused within the system (preventing the vehicle from being hacked).  Yet they ignore the risks of the vehicles being used exactly as intended.  Just one example: a terrorist loads explosives on a vehicle, programs it to drive a route with a GPS trigger that sets off the bomb, and has already flown out of the country by the time it detonates.  That’s not a hack; it’s building a smart bomb with the self-driving software as the navigation unit.  There’s no security measure in the autonomous vehicle that can prevent that misuse case.

In both cases, this is due to the scope of vision.  Within each individual team the approach and decisions are valid, but taken in the larger context they no longer are.  That’s driven by cultural and budget divisions: the TSA doesn’t own a budget for the entire air transit system, and the autonomous vehicle company doesn’t own the societal impact of its invention.  Risk-adjusted total economic cost is something that entrenched interests rarely address, because doing so with intellectual honesty requires facing answers that are at odds with their worldview.

To be fair, those are both extreme examples to illustrate the point, yet the same thing occurs within our organizations on a smaller scale.  I’ve written before that the business stakeholder is the only one who can make the final tradeoff decision between security and functionality.  In most cases, neither the reporting structure nor the culture supports a true peer conversation.  If the CISO (security) reports to the CIO (functionality), are you getting the full, uncolored view of both sides?  That’s why I’m seeing a growing trend to move the CISO out from under IT and into either a full peer role or a spot under the CRO (Chief Risk Officer), so the tradeoff decisions are presented to stakeholders by equal peers.

Culture is much harder to change, and we’re always going to have bias in these decisions.  The TSA has a culture (understandably) of being unwilling to step back from current measures for fear of blame if something later happens.  Autonomous vehicle developers are unwilling to slow down for fear that a competitor will get there first.  Apple appears unwilling to admit that sometimes thicker, heavier, and having ports and buttons is more secure and more usable, for fear of…well, I’m not sure what (losing dongle profits?), but you get the point.

Right now, we can at least get the organizational structure out of the way and give both risk and function equal voices so our business stakeholders can make fully informed decisions.

Filed Under: Security Tagged With: autonomous car, business stakeholder, CISO, risk, security, tsa

Adopting an industrial mindset: Cyber Safety

November 2, 2017 By Doug

We’ve always said that there are two kinds of organizations: those that have been hacked, and those that don’t know they’ve been hacked.  Yet security teams still have trouble getting resources and attention from our business stakeholders, particularly in industrial companies that consider IT and technology a back-office problem.

Over my career I’ve worked in manufacturing, energy, utilities, oil and gas, and other similar industries.  One thing they all have in common is a focus on accident avoidance and safety – that is, on how to fail gracefully.  That’s why they hold a safety briefing before every meeting on where to evacuate in case of a fire, or a safety minute with a thought for the day, or post those ubiquitous signs about ‘100 days since our last injury’.  The constant focus on safety has had amazing results: business can now do dangerous things with much lower risk.  Yet many CISOs in those industries struggle to make cyber security a high priority.

Often the OT folks won’t let IT touch the environment, which is unfortunate, because it’s often riddled with insecure IoT devices, outdated and unpatched machines, and even modems still hanging off industrial equipment running pcAnywhere for dial-up maintenance by third-party providers.  Discussions of hacking and cyber risk just don’t resonate much with someone running an offshore platform or a manufacturing line.  So how do we get their attention?  Change our vocabulary.

We need to talk not about cyber security, but about cyber safety – to speak the industrial language and frame risk not as ransomware or data exfiltration, but as plant downtime, risk to life and safety, generator outages, line stoppages, and so forth.  It’s getting traction, and in the process we’re learning from our peers.  For example, we were talking with a line operator about the (theoretical) risk of someone hacking in and changing the computer to speed up the line in an attempt to crash it.  He shared that there are multiple control points (aka defense in depth) against that, including a purely mechanical control that rate-governs the equipment to give an operator time to intervene manually.

Then he turned and asked me why we didn’t have a rate governor around our critical data (e.g. on the database itself), so if someone does hack in, they can’t get the information out all at once…to give the SOC time to intervene.

Hmmmm.  He’s on the cutting edge with that – there’s some early-stage architecture work being done, but it’s hardly widespread (a rough sketch of the idea follows below).  Yet to him, it’s pretty obvious.

Because a system isn’t safe unless it can fail gracefully.  That’s just one example of where the safety mindset can help our security programs, as much as we can help theirs.   We just need to start speaking the same language.  Cyber Safety has a nice ring to it.
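To make his idea concrete, here’s a minimal sketch of what a “rate governor” in front of critical data might look like: a per-client cap on rows read per minute, with an alert to the SOC when the cap trips. This isn’t any particular product or the architecture work mentioned above; the class, names, and thresholds are purely illustrative.

```python
import time
from collections import defaultdict

class DataRateGovernor:
    """Illustrative data-egress governor: caps rows read per client per minute."""

    def __init__(self, max_rows_per_minute=5_000, alert_fn=print):
        self.max_rows = max_rows_per_minute
        self.alert_fn = alert_fn
        # client_id -> [window_start_timestamp, rows_read_in_window]
        self.windows = defaultdict(lambda: [0.0, 0])

    def allow(self, client_id, rows_requested):
        """Return True if this read stays under the per-minute cap for the client."""
        now = time.time()
        window_start, rows_used = self.windows[client_id]
        if now - window_start >= 60:  # start a fresh one-minute window
            window_start, rows_used = now, 0
        if rows_used + rows_requested > self.max_rows:
            # Over the cap: refuse the read and notify the SOC so a human gets time to intervene.
            self.alert_fn(f"Rate governor tripped for {client_id}: "
                          f"{rows_used + rows_requested} rows requested within one minute")
            self.windows[client_id] = [window_start, rows_used]
            return False
        self.windows[client_id] = [window_start, rows_used + rows_requested]
        return True


# Usage: gate every bulk read behind the governor.
governor = DataRateGovernor(max_rows_per_minute=5_000)
if governor.allow("reporting-service", rows_requested=200):
    pass  # run the query as normal
else:
    pass  # block, throttle, or require manual approval before releasing the data
```

In practice a control like this would sit in the data access layer or a database proxy, but the point is the same as the mechanical governor on the line: a simple, independent limit that buys a human time to intervene.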

Filed Under: Security Tagged With: business, CISO, industrial, response, risk, safety, security
