
A CISO, an AI, and a bot walk into a bar….

June 27, 2018 By Doug

© Depositphotos / Johan Swanepoel

Over the past few weeks, I've been facilitating sessions at Evanta CISO events. If you're not aware, these are discussions for CISOs, by CISOs, held around the country and well worth the time. The topic for my sessions was AI & orchestration in cybersecurity, with more than 60 CISOs participating across five cities. While each venue had a slightly different emphasis, a number of broad trends emerged across the country.

The first is that there's great interest, great skepticism, and great caution about these technologies. Interest because, as one CISO put it, they have to reduce the time and resources required between "mole and whack". Skepticism because, as another put it, we're at 'peak hype'. And caution because of the new skills required to take advantage of them, which most teams don't yet have.

The consensus was that orchestration is ready for prime time, and about 18 months ahead of AI, so let's take that one first. While the topic was orchestration, the majority of CISOs spoke of automation, both human-initiated and fully autonomous processes. There's a tremendous amount of grunt work in the SOC – having to remediate the 437th instance of someone clicking on a flying toaster video is tedious and frustrating, so 'automating the known' is a high priority. About a third of the orgs are well along the way to having automation triggered directly from an event in the SIEM rather than from within an incident response workflow. About a third are using a human-in-the-loop approach for all orchestration, and about a third are just starting the journey. For all of them, the goal is to move their staff up the value chain to more interesting work, which increases job satisfaction and reduces turnover. Sounds great, right?
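To make that split concrete, here's a minimal sketch of 'automating the known' with a human-in-the-loop fallback. Everything in it (the event types, playbook names, and confidence field) is a hypothetical stand-in rather than any particular SOAR product's API:

```python
"""Sketch: auto-remediate well-understood SIEM event types, queue the
rest for an analyst. All names here are hypothetical stand-ins."""
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    event_type: str    # e.g. "phishing_click", as labeled by the SIEM
    host: str
    confidence: float  # detection confidence reported by the SIEM, 0..1

def quarantine_host(event: Event) -> None:
    print(f"[auto] isolating {event.host} pending reimage")

def reset_credentials(event: Event) -> None:
    print(f"[auto] forcing credential reset tied to {event.host}")

# 'The known': event types trusted enough to remediate without a human.
PLAYBOOKS: dict[str, Callable[[Event], None]] = {
    "phishing_click": quarantine_host,
    "credential_stuffing": reset_credentials,
}

REVIEW_QUEUE: list[Event] = []

def dispatch(event: Event, min_confidence: float = 0.9) -> None:
    playbook = PLAYBOOKS.get(event.event_type)
    if playbook and event.confidence >= min_confidence:
        playbook(event)             # fully autonomous path
    else:
        REVIEW_QUEUE.append(event)  # human-in-the-loop path

dispatch(Event("phishing_click", "ws-0437", 0.97))  # auto-remediated
dispatch(Event("lateral_movement", "db-02", 0.80))  # queued for an analyst
```

The interesting design question is the one the CISOs raised themselves: which event types earn a spot in that playbook table, and how much you trust the confidence score feeding the threshold.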

One challenge is that you have to trust the model or system that's generating incidents, to ensure that you don't remediate inappropriately. One CISO said that automation is like gun control – it means hitting what you aim at. Another organization has an explicit workflow requiring that every new incident type be automated (or a documented triage of why it can't be). Nearly all expressed frustration at how much time and energy they spend gluing systems – both security and IT – together. One remarked that it's like the late 1990s again: everything is hub and spoke, and asked where the middleware for security is. A number were very interested in consuming security as microservices delivered from the cloud. Those with managed security services are planning to put orchestration and automation into their next renewal contracts.

And they plan to do the same for AI, assuming that the technology matures in time.  Right now, only about a third of the orgs are using AI, and the vast majority of those are implementations of basic machine learning.   Several were using cognitive to handle unstructured information, and only a handful were starting to explore using deep learning.  There were a number of common goals and obstacles.

Almost universally, the desire is to use AI to remove the noise and get to the signal. SIEM was acknowledged as the ante to get into the game, though a number of CISOs are wondering if an advanced AI can replace a SIEM (and if the regulators will allow that). The 'black box' of deep learning is expected to be a problem for gaining approval from auditors.

A surprisingly large minority had either built, or were looking to build, a security data lake from scratch, and then implement their own incident discovery capabilities within the lake. The majority are looking for vendors to build COTS products along those lines, again as a replacement for a traditional SIEM. As a number agreed, those projects are expensive and high-risk – and they can't afford mistakes with a limited budget. AI, and especially machine learning, within individual segments like application source code analysis or user behavior analytics, was viewed as much more 'real' today than building a system to discover a rock in a data lake. I made a comment about many of those turning into data swamps, to a lot of knowing laughs around the table.

That triggered a good conversation about the need to 'move security to the left'. For source code analysis, there was wide agreement that security needs to get involved in the DevOps – or DevSecOps – process. For UBA, a plurality spoke to cultural barriers around monitoring all the people all the time. Big Brother is something they want to avoid, and one described an internal marketing effort to rebrand it 'Big Mother' instead: not watching over you, but watching out for you to keep you safe.

And that brought the group to the longer-term goal: they want an AI consuming inbound security information (books, articles, alerts, threat feeds, news stories, and so forth), then consolidating it into proactive actions and recommendations, including an initial, automated survey for IOCs in the environment – a push to hunters, instead of a pull from responders. Getting there requires both trust in the AI and skills that don't exist yet.
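A 'push to hunters' loop could be as simple in shape as this sketch: pull the latest IOCs from a feed, pre-survey local telemetry for matches, and hand any hits to the hunt team before a human ever opens a case. The feed contents, log source, and field names below are all illustrative assumptions:

```python
"""Sketch of a push-to-hunters IOC survey. The feed and log data are
hard-coded stand-ins for a threat-feed client and a SIEM/lake query."""

def load_feed_iocs() -> set[str]:
    # Stand-in for a real feed client (STIX/TAXII, vendor API, etc.)
    return {"203.0.113.66", "evil-updates.example.net"}

def recent_connections() -> list[dict]:
    # Stand-in for querying firewall/proxy logs out of the SIEM or lake
    return [
        {"host": "ws-0117", "dest": "203.0.113.66"},
        {"host": "ws-0042", "dest": "files.example.org"},
    ]

def survey(iocs: set[str]) -> list[dict]:
    """Return matches worth pushing to the hunt team proactively,
    instead of waiting for an alert to pull a responder in."""
    return [c for c in recent_connections() if c["dest"] in iocs]

for hit in survey(load_feed_iocs()):
    print(f"[hunt lead] {hit['host']} talked to known-bad {hit['dest']}")
```

The hard parts the group identified sit outside this loop: trusting the AI that consolidates and ranks the inbound information, and staffing people who can act on the leads.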

Skills were woven throughout the entire conversation, and were a point of concern. Orchestration/automation frameworks require programming skills that don't currently exist on the security team. For AI, particularly for those building their own data lakes, the largest obstacle is finding data scientists with security backgrounds. A number of orgs are going out into the colleges – and even high schools – to recruit cybersecurity students for internships, and to steer their electives towards programming and data science skills. There was a strong consensus that existing programs need to evolve and add those to the core curriculum. They may not be needed today, but they will be.

In the end, there was universal agreement that our security programs need to change posture – to move from responding on our heels to hunting on our toes, and that AI and automation will make that possible.

Filed Under: Security Tagged With: AI, artificial intelligence, automation, CISO, Evanta, Gartner, machine learning, ML, orchestration, security

Cognitive Fuzziness – Getting the definition right

June 23, 2017 By Doug

(c) www.123rf.com / Benjamin Haas

There’s a ton of hype about cognitive security in the marketplace these days, and the marketing departments are operating in full force.  So beyond the hand waving, cheerleading and me-too-ing, what do we actually mean by cognitive?

Cognitive involves three things: the ability to mine data for information, the ability to recognize patterns in that data, and the ability to understand natural language. The key component across all of these is the ability to reason and infer on a probabilistic basis from the context of the information. But it's not Lt. Commander Data of Star Trek fame – cognitive isn't artificial intelligence. It's more like the library computer in the original series: a machine that can answer questions put to it. Cognitive is a foundational technology for AI, but we're a long way from real AI – 2001 came and went without HAL, and so will 2017.

Machine learning, which is often confused with cognitive (sometimes deliberately), has been around for years, and while it's an enabling technology, there's no magic there. It can be extremely useful, but there are also some limitations to keep in mind. The models created are only as good as the data inputs and variables selected. Poor input data yields models that may appear to work but diverge over time, and you'd best hope that the data isn't already compromised when the model is built. Even when you have a good baseline, continuously updated models can be either spoofed (resetting the 'normal' baseline over time) or destabilized by a persistent and patient attacker. There are techniques to combat these attacks, so it's worth asking which ones are used.
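The baseline-spoofing risk is easy to see in a toy model. Below, a detector that continuously re-learns 'normal' as an exponentially weighted average gets walked upward by a patient attacker who always stays just under the alert threshold. All numbers are invented for illustration:

```python
"""Toy baseline-spoofing demo: a self-updating anomaly detector is
slowly re-trained to accept ever-higher 'normal' traffic."""

def alert(value: float, baseline: float, tolerance: float = 3.0) -> bool:
    return value > baseline + tolerance

baseline = 10.0  # learned 'normal', say outbound MB/hour
alpha = 0.1      # how quickly the model re-learns what normal is

# A patient attacker exfiltrates a bit more each day, under the bar.
for day in range(30):
    traffic = baseline + 2.9  # always just below baseline + tolerance
    assert not alert(traffic, baseline)                  # never fires
    baseline = (1 - alpha) * baseline + alpha * traffic  # model adapts

print(f"baseline drifted from 10.0 to {baseline:.1f} with zero alerts")
```

After a month the model has quietly accepted nearly double the original volume as normal, without a single alert – exactly the kind of slow destabilization those counter-techniques need to catch.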

Cognitive uses machine learning as a training tool when it’s being taught to understand a particular set of vocabulary and grammar – cybersecurity for example.   Traditional unstructured information systems simply operate on keywords and often metadata, but cognitive systems understand the context of the information components in relation to each other.  For example, if I talked about Apple’s CEO eating an apple while negotiating a contract with Apple, most engines would return the document based on a keyword – Apple, or potentially from tags or metadata a human added to the document.  A cognitive engine with a large corpus might return that document for questions about computer companies, fruit that grows on trees, and the Beatles’ record company, depending on how the question was worded.
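Here's a toy contrast between the two approaches. A real cognitive engine infers sense from a large corpus; this sketch fakes that with hand-picked context vocabularies, purely to show why keyword matching collapses the three Apples into one:

```python
"""Toy keyword vs. context matching for the 'Apple' example.
The sense vocabularies are hand-picked stand-ins for a learned corpus."""

DOC = "Apple's CEO was eating an apple while negotiating a contract with Apple Records."

SENSES = {
    "computer company": {"ceo", "iphone", "contract", "cupertino"},
    "fruit": {"eating", "tree", "orchard", "pie"},
    "record label": {"records", "beatles", "album", "contract"},
}

def keyword_match(doc: str, term: str) -> bool:
    # All three senses collapse into one undifferentiated hit.
    return term.lower() in doc.lower()

def senses_present(doc: str) -> list[str]:
    # Crude sense tagging: which context vocabularies co-occur?
    words = set(doc.lower().replace(".", "").replace("'s", "").split())
    return [sense for sense, cues in SENSES.items() if words & cues]

print(keyword_match(DOC, "apple"))  # True, but which Apple?
print(senses_present(DOC))  # ['computer company', 'fruit', 'record label']
```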

So when using terms like machine learning, cognitive, or artificial intelligence applied to cyber security, it’s important to be crisp about which one is used, and what it implies. We’re not quite in snake oil territory here, but there is a lot of both intentional fuzziness and casual laziness in the press and marketing.  Regardless of which term though, remember that there’s no silver bullet that will solve your security challenges.  Cognitive is a force multiplier, but not a magic army.

Filed Under: Security Tagged With: artificial intelligence, cognitive, cybersecurity, machine learning, natural language, snake oil
