Doug Lhotka

Technical Storyteller


Opinions and commentary are mine, and do not reflect those of my employer.

© Copyright 2019-2023 Doug Lhotka. All Rights Reserved.
Use of text, images, or other content on this website in generative AI or other machine learning is prohibited.

2019 Security Program Horizons

December 11, 2018 By Doug

One of the things I love most about my job is the opportunity to collaborate with hundreds of security leaders across many industries and geographies.  There are certainly industry-specific focuses, as well as some geographic trends, yet the overarching themes are common across the security landscape.  Following the usual year-end tradition, here’s what I see on the horizon for our programs, some things that aren’t on the radar but probably should be, and, as a bonus, one that is on the radar but probably shouldn’t be.

The overarching theme again in 2019 will be staffing and resources.  I separate those intentionally, both because people are more than just a resource, and because the staffing challenges are deeper than the budget challenges.  We’ve all heard the varying statistics about millions of unfilled cybersecurity jobs in the next few years, yet as damaging as unfilled positions are, the churn occurring within the existing staff is worse.

One CISO, at a medium-sized company, has given up trying to retain most of his staff – he views himself as a farm team for the big companies.  So he’s trying to maintain a core of well-compensated people and live with the churn at the lower levels of the organization.  Many CISOs have complained that their HR pay bands/scales/ranges are based on IT rather than security, and are both low and far too static.  Yet even when they are able to maintain market compensation, the mind-numbing tedium of repetitive tasks causes job frustration and churn.

Those staffing challenges are driving the two big technical trends for 2019:  widespread adoption of machine learning in the SOC for incident discovery, and automation/orchestration for remediation. There’s (rightly) a lot of skepticism about machine learning and AI right now, yet real implementations and applications are having significant success in reducing the grunt work of low-level incident identification and analysis.   User and entity behavioral analytics are still in the early stages, though we’ll see wider adoption.  While some organizations will attempt to build their own security analytics data lakes using base ML technologies, as we’ve seen this past year, those efforts often fail, and I don’t expect widespread traction in that area.

Once incidents are identified, automation of routine remediation will explode next year.  That’ll be split about evenly between human-in-the-loop and hands-off automation, depending on culture and the severity of the incident.  One CISO has a policy that every time an incident is manually remediated, the next step is to automate it – the program goal is that manual remediation only occurs once.  That’s improving staff morale and retention by allowing his highly skilled people to move up the value chain, and the approach will see widespread adoption next year, particularly for commodity incidents.
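
To make that “automate the known” policy concrete, here’s a minimal Python sketch of a dispatcher that runs hands-off playbooks for known incident types and routes anything new to a human exactly once.  The incident types, playbook actions, and backlog are hypothetical illustrations, not any particular vendor’s API.

    from dataclasses import dataclass

    @dataclass
    class Incident:
        incident_id: str
        incident_type: str  # e.g. "phishing-click", "malware-beacon"

    # Playbooks for incident types that have already been seen and automated.
    PLAYBOOKS = {
        "phishing-click": lambda i: print(f"[auto] reimage host, reset creds for {i.incident_id}"),
        "malware-beacon": lambda i: print(f"[auto] isolate host for {i.incident_id}"),
    }

    automation_backlog = []  # types handled manually once; a playbook comes next

    def dispatch(incident: Incident) -> None:
        playbook = PLAYBOOKS.get(incident.incident_type)
        if playbook:
            playbook(incident)  # hands-off remediation of a known type
        else:
            print(f"[manual] analyst handles {incident.incident_id}")
            automation_backlog.append(incident.incident_type)  # automate before it recurs

    dispatch(Incident("INC-1001", "phishing-click"))   # automated
    dispatch(Incident("INC-1002", "dns-tunneling"))    # manual once, then backlogged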

Another trend we’ll see, particularly among small and medium-sized organizations, is a move toward managed security services, at least for Tier 1, and often a hybrid model for Tiers 2 and 3.  We’ll continue to see some dissatisfaction with MSS providers, and churn among those customers.  An aside on that: the best practice is to own the analytics infrastructure and data, so that when the MSS provider changes, history isn’t lost.  The root cause of the dissatisfaction is that MSS contracts are written like IT outsourcing contracts, with very precise specifications of what will be done.  That’s understandable from a liability standpoint, but ineffective in a fast-moving, cyber-hostile world.  I’m starting to see some MSS providers working toward more flexible contract language, but that’s slow going.  Still, given the staffing shortage, particularly for off-hours support, MSS will be a core feature of a growing number of programs in 2019.

The flip side to MSS and its challenges is the cloud.  In this case, I’m talking mostly about security from the cloud.  Right now, on-prem solutions require care and feeding, and often it’s the security professionals who are managing the tools.  Moving those solutions off-prem frees up staff to actually do security.  I saw the corner turn in 2018, with even risk-averse organizations embracing the cloud for select portions of their infrastructure.  In 2019 that’ll accelerate, particularly for analytics and identity.  Related to that is the emerging trend of cloud providers offering security solutions themselves.  Right now those offerings are rudimentary at best, and only cover environments directly on their cloud.  I don’t expect major improvements in 2019 – but let’s revisit for 2020.

An honorable mention goes out to companies with large IoT deployments, particularly in critical infrastructure: securing those environments will be the major program driver in 2019.  That’ll begin with security analytics – just being able to understand what’s happening in the OT network is the largest challenge.  The volume of events and data produced, as well as the unique characteristics of the environment, will require custom machine learning models to properly detect anomalies.  Rule-based analytics are likely to remain problematic for IoT data sources due to the high variance between implementations.
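
As a sketch of what a “custom model” can mean at the low end, here’s a minimal unsupervised anomaly detector over OT telemetry using scikit-learn.  The features and numbers are invented for illustration; a real deployment needs a far richer feature pipeline tuned to the environment.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Hypothetical per-device features: [messages/min, distinct peers, mean payload bytes]
    normal = rng.normal(loc=[120.0, 3.0, 64.0], scale=[10.0, 0.5, 4.0], size=(500, 3))
    model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

    # A device suddenly talking to many peers with much larger payloads
    suspect = np.array([[480.0, 40.0, 900.0]])
    print(model.predict(suspect))  # -1 flags an anomaly, 1 means normal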

The next honorable mention is SSL decryption.  This has just started to emerge as a major concern over the past few months – I had three conversations about it in the past two weeks alone.  Upwards of 60% of traffic is now encrypted, including the vast majority of command-and-control traffic and data exfiltration.  If the 2019 budget didn’t include SSL decryption funding, that’s likely to be an incremental ask.

The last honorable mention goes to our business stakeholders, who are now facing the reality that they need more than just technical means of addressing cyber risk.  First, there’s been a growing trend to move the CISO out from under the CIO or CTO and into a risk, compliance, general counsel, or direct COO/CEO reporting structure, and I expect that to become much more common in 2019.  Second, as the threat of a black swan event becomes real, business executives are growing concerned about having good crisis communication plans in place.  What looks like a good idea in the heat of battle often turns out to be a really bad decision, so a few forward-looking teams are building those comms plans in advance.  Part of that includes being prepared for a question on an earnings call asking whether you’ve ever experienced a breach.  The proliferation of privacy regulations makes answering that very touchy, as terms like ‘breach’, ‘incident’, and ‘disclosure’ may all carry specific legal meanings.  A few more big breaches, and this could be a major trend in 2019.

And that leads me to the things that should be major trends, but aren’t.  Those privacy regulations are largely known, but I’m not seeing significant efforts to address them programmatically.  Companies that had to comply with GDPR are assuming those efforts will be sufficient for the upcoming California law and the now-in-effect Colorado law, and they’re probably not too far off (assuming they adopted the GDPR controls worldwide).  For organizations that didn’t have GDPR requirements, I’m not seeing widespread interest in a data classification and discovery effort.  It’s hard and tedious, but if you don’t know where the data is, what it is, or who owns it, complying with disclosure regulations is essentially impossible.  If we get a national pre-emptive law (highly unlikely), those teams will be caught short.
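
For teams starting that discovery effort from zero, even a crude scan beats nothing.  Here’s a minimal Python sketch that walks a file share and flags files containing likely PII patterns; the patterns, file scope, and path are hypothetical, and real classification takes far more than regexes.

    import re
    from pathlib import Path

    PATTERNS = {
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def scan(root: str) -> None:
        for path in Path(root).rglob("*.txt"):  # hypothetical scope: text files only
            text = path.read_text(errors="ignore")
            hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
            if hits:
                # candidate for classification and ownership review
                print(f"{path}: {', '.join(hits)}")

    scan("/data/shares")  # hypothetical share mount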

That’s a good example of the big piece that’s missing from the hot trends: basic blocking and tackling.  In addition to data governance, many organizations, including those looking at AI and machine learning, still don’t have positive control over what’s on their network or how it’s configured – in many cases, they lack even formal policies governing the environment.  Identity remains problematic, with no centralized authority or integration with the employee life cycle, let alone SSO.  Gaps in that basic infrastructure will prevent the ‘hot trend’ initiatives from realizing full value.  It’s hard to do UBA without endpoint or identity management!

Now for the bonus: I hear a lot of interest in threat hunting.  It commonly comes up in conversation, though honestly, the vast majority of organizations aren’t ready to really tackle it, at least not beyond the vanity title.  Let’s leave that for another blog post – and probably a 2020 trend.

In closing, I had a CISO, pretty worn out from a long year, wistfully hope for a ‘Christmas Truce’.  I suspect that desire is the most widely shared trend of all, so here’s hoping for a Silent Night this season.

Merry Christmas to you and yours!

Filed Under: Security Tagged With: 2019, AI, automation, Christmas Truce, CISO, machine learning, managed services, orchestration, security, security program, ssl decryption, staffing, threat hunting

Business stakeholders need the full story

August 16, 2018 By Doug

© Depositphotos / @efks

There’s a lot of talk about aligning security programs with business or functional goals, but in practice, that’s much easier “powerpointed” than done.  The business consequences of security decisions, and the security consequences of business decisions, are all too often missed or ignored in the broader context – sometimes even deliberately.  As Obi-Wan said to Luke, “What I told you was true, from a certain point of view”.

Let me share a couple of examples to frame this conversation.

Security ignoring functionality.  The TSA is studying reducing security at smaller airports to refocus the spend at larger facilities.  The plan would be to do minimal screening initially, then rescreen passengers when they arrive at a larger airport.  Critics and defenders jumped into the fray – critics arguing that it reduces security for part of the system and that attackers would simply target those facilities, defenders that it’s a reasonable cost tradeoff given limited resources.

The problem is that both sides are viewing this through a narrow, security-only lens and missing the broader impact: it would require massive infrastructure investment at airports and would break the business model of most of the major airlines.  Rescreening passengers from feeder airports would require all connections to extend by another hour, raising operating costs, and the airports would have to be reconfigured to add internal screening checkpoints.  The total economic cost would far exceed the projected $115M in TSA budget savings.
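
To see why the math doesn’t work, here’s a rough back-of-envelope in Python.  Every figure below is a hypothetical illustration made up for the sketch, not a sourced number – the point is the order of magnitude, not the decimals.

    # Assumed: 20M passengers/year connect from feeder airports, each connection
    # padded by one hour for rescreening, passenger time valued at $35/hour.
    connecting_pax_per_year = 20_000_000
    extra_hours_per_pax = 1.0
    value_of_time_per_hour = 35.0

    passenger_time_cost = connecting_pax_per_year * extra_hours_per_pax * value_of_time_per_hour
    print(f"passenger time alone: ${passenger_time_cost / 1e9:.1f}B vs $0.115B saved")
    # ~$0.7B/year before airline operating costs or terminal reconstruction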

Functionality ignoring security.  Let’s look at autonomous vehicles.  Don’t get me wrong – the folks developing those systems do have an awareness of some of the security risks, but they’re, again, focused within the system (preventing the vehicle from being hacked).  Yet they ignore the risks of the vehicles being used exactly as intended.  Just one example: a terrorist loads explosives on a vehicle, then programs it to drive a route, with a GPS trigger that sets off the bomb while they’ve already flown out of the country.  That’s not a hack; it’s building a smart bomb with the self-driving software as the navigation unit.  There’s no security measure in the autonomous vehicle that can prevent that misuse case from happening.

In both cases, this is due to the scope of vision.  Within each individual team, the approach and decisions are valid, but taken in the larger context, they no longer are.  That’s driven by cultural and budget divisions: the TSA doesn’t own a budget for the entire air transit system, and the autonomous vehicle company doesn’t own the societal impact of its invention.  Risk-adjusted total economic cost is something entrenched interests rarely address, because doing so with intellectual honesty requires facing answers that are at odds with their worldview.

To be fair, those are both extreme examples to illustrate the point, yet the same thing occurs within our organizations on a smaller scale.  I’ve written before that the business stakeholder is the only one who can make the final tradeoff decision between security and functionality.  In most cases, neither the reporting structure nor the culture supports a true peer conversation.  If the CISO (security) reports to the CIO (functionality), are you getting the full, uncolored view of both sides?  That’s why I’m seeing a growing trend to move the CISO out from IT and into either a full peer role or under the CRO (Chief Risk Officer), so the tradeoff decisions are presented to stakeholders by equal peers.

Culture is much harder to change, and we’re always going to have bias in these decisions.  The TSA has a culture (understandably) of being unwilling to step back from current measures for fear of blame if something later happens.  Autonomous vehicle developers are unwilling to slow down for fear that a competitor will get there first.  Apple appears unwilling to admit that sometimes thicker, heavier, and having ports and buttons is more secure and more usable, for fear of… well, I’m not sure what (losing dongle profits?), but you get the point.

Right now, we can at least get the organizational structure out of the way and give both risk and function equal voices so our business stakeholders can make fully informed decisions.

Filed Under: Security Tagged With: autonomous car, business stakeholder, CISO, risk, security, tsa

A CISO, an AI, and a bot walk into a bar….

June 27, 2018 By Doug

© Depositphotos / Johan Swanepoel

Over the past few weeks, I’ve been facilitating sessions at Evanta CISO events.  If you’re not aware, these are discussions for CISOs, by CISOs, held around the country and well worth the time.  The topic for my sessions was AI and orchestration in cybersecurity, with more than 60 CISOs participating across five cities.  While each venue had a slightly different emphasis, a number of broad trends held across the country.

The first is that there’s great interest, great skepticism, and great caution about these technologies.  Interest because, as one CISO put it, they have to reduce the time and resources required between “mole and whack”.  Skepticism because, as another put it, we’re at ‘peak hype’.  And caution because of the new skills required to take advantage of them – skills most teams don’t yet have.

The consensus was that orchestration is ready for prime time, and about 18 months ahead of AI, so let’s take that one first.  While the topic was orchestration, the majority of CISOs spoke of automation, both human-initiated and fully autonomous.  There’s a tremendous amount of grunt work in the SOC – having to remediate the 437th instance of someone clicking on a flying toaster video is tedious and frustrating, so ‘automating the known’ is a high priority.  About a third of the orgs are well along the way to having automation triggered directly from an event in the SIEM rather than from within an incident response workflow.  About a third are using a human-in-the-loop approach for all orchestration, and about a third are just starting the journey.  For all of them, the goal is to move staff up the value chain to more interesting work, which increases job satisfaction and reduces turnover.  Sounds great, right?

One challenge is that you have to trust the model or system that’s generating incidents, to ensure that you don’t remediate inappropriately.  One CISO said that automation is like gun control – it means hitting what you aim at.  Another organization has an explicit workflow for new incidents that requires automation (or a triage explaining why it can’t be automated).  Nearly all expressed frustration at how much time and energy they spend gluing systems – both security and IT – together.  One remarked that it’s like the late 1990s again: everything is hub and spoke – where’s the middleware for security?  A number were very interested in consuming security as microservices delivered from the cloud.  Those with managed security services are planning to put orchestration and automation into their next renewal contracts.

And they plan to do the same for AI, assuming the technology matures in time.  Right now, only about a third of the orgs are using AI, and the vast majority of those are implementations of basic machine learning.  Several were using cognitive tools to handle unstructured information, and only a handful were starting to explore deep learning.  There were a number of common goals and obstacles.

Almost universally, the desire is to use AI to remove the noise and get to the signal.  SIEM was acknowledged as the ante to get into the game, though a number of CISOs are wondering whether an advanced AI can replace a SIEM (and whether the regulators will allow that).  The ‘black box’ of deep learning is expected to be a problem in gaining approval from auditors.
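
Even before AI enters the picture, ‘noise to signal’ often starts with plain aggregation.  Here’s a minimal Python sketch that collapses a repetitive alert stream into per-source counts and surfaces only the outliers – a simple stand-in for the ML-driven triage the group described, with hypothetical alert fields and threshold.

    from collections import Counter

    alerts = [
        {"src": "10.0.0.5", "sig": "port-scan"},
        {"src": "10.0.0.5", "sig": "port-scan"},
        {"src": "10.0.0.9", "sig": "failed-login"},
    ] * 50  # a noisy, repetitive stream

    counts = Counter((a["src"], a["sig"]) for a in alerts)
    THRESHOLD = 75  # hypothetical baseline for "worth a human's time"

    for (src, sig), n in counts.most_common():
        if n >= THRESHOLD:
            print(f"signal: {sig} from {src} ({n} events)")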

A surprisingly large minority either had built, or were looking to build, a security data lake from scratch, and then implement their own incident discovery capabilities within the lake.  The majority are looking for vendors to build COTS products along those lines, again as a replacement for a traditional SIEM.  As a number agreed, those projects are expensive and high-risk – and they can’t afford mistakes on a limited budget.  AI, and especially machine learning, within individual segments like application source code analysis or user behavior analytics, was viewed as much more ‘real’ today than building a system to discover a rock in a data lake.  I made a comment about many of those turning into data swamps, to a lot of knowing laughs around the table.

That triggered a good conversation about the need to ‘move security to the left’.  For source code analysis, there was wide agreement that security needs to get involved in the DevOps – or DevSecOps – process.  For UBA, a plurality spoke to cultural barriers to monitoring all the people all the time.  Big Brother is something they want to avoid, and one described an internal marketing effort to rebrand it ‘Big Mother’ instead: not watching over you, but watching out for you to keep you safe.

And that brought the group to the longer-term goal: they want an AI consuming inbound security information (books, articles, alerts, threat feeds, news stories, and so forth), then consolidating it down into proactive actions and recommendations, including an initial, automated survey for IOCs in the environment – a push to hunters, instead of a pull from responders.  Getting there requires both trust in the AI and skills that don’t exist yet.
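
The first, least ambitious slice of that vision – sweeping the environment for indicators from an inbound feed and handing the hits to hunters – doesn’t need an AI at all.  Here’s a minimal Python sketch; the indicators, log format, and path are all hypothetical.

    iocs = {"evil.example.net", "203.0.113.77"}  # hypothetical indicators from a feed

    def sweep(log_path: str) -> list[str]:
        """Return log lines matching any known indicator - leads for the hunt team."""
        leads = []
        with open(log_path) as fh:
            for line in fh:
                if any(ioc in line for ioc in iocs):
                    leads.append(line.strip())
        return leads

    for lead in sweep("/var/log/proxy/access.log"):  # hypothetical log location
        print("hunt lead:", lead)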

Skills were woven throughout the entire conversation, and were a point of concern.  Orchestration and automation frameworks require programming skills that don’t currently exist on the security team.  For AI, particularly for those building their own data lakes, the largest obstacle is finding data scientists with security backgrounds.  A number of orgs are going out to colleges – and even high schools – to recruit cybersecurity students for internships and steer their electives toward programming and data science skills.  There was a strong consensus that existing programs need to evolve and add those to the core curriculum.  They may not be needed today, but they will be.

In the end, there was universal agreement that our security programs need to change posture – to move from responding on our heels to hunting on our toes, and that AI and automation will make that possible.

Filed Under: Security Tagged With: AI, artificial intelligence, automation, CISO, Evanta, Gartner, machine learning, ML, orchestration, security

