Doug Lhotka

Technical Storyteller


Opinions and commentary are mine, and do not reflect those of my employer.

(C) Copyright 2019-2023
Doug Lhotka.
All Rights Reserved.
Use of text, images, or other content on this website in generative AI or other machine learning is prohibited.

A CISO, an AI, and a bot walk into a bar….

June 27, 2018 By Doug

© Depositphotos / Johan Swanepoel

Over the past few weeks, I’ve been facilitating sessions at Evanta CISO events.  If you’re not aware, these are discussions for CISOs, by CISOs, held around the country and well worth the time.  The topic for my sessions was AI & orchestration in cybersecurity, with more than 60 CISOs participating in five cities.  While each venue had a slightly different emphasis, there were a number of broad trends across the country.

The first is that there’s great interest, great skepticism, and great caution about these technologies. Interest because, as one CISO put it, they have to reduce the time and resources required between “mole and whack”.  Skepticism because, as another put it, we’re at ‘peak hype’, and caution because the new skills required to take advantage of these technologies don’t yet exist on most teams.

The consensus was that orchestration is ready for prime time, and about 18 months ahead of AI, so let’s take that one first.  While the topic was orchestration, the majority of CISOs spoke of automation, both human-initiated and fully autonomous processes.  There’s a tremendous amount of grunt work in the SOC – having to remediate the 437th instance of someone clicking on a flying toaster video is tedious and frustrating, so ‘automating the known’ is a high priority. About a third of the orgs are well along the way to having automation triggered directly from an event in the SIEM rather than within an incident response workflow.  About a third are using a human-in-the-loop approach for all orchestration, and about a third are just starting the journey.  For all, the goal is to move their staff up the value chain to more interesting work, which increases job satisfaction and reduces turnover.  Sounds great, right?
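To make ‘automating the known’ concrete, here’s a minimal sketch of the routing decision: a known, high-confidence event fires a playbook automatically, while anything novel falls back to a human in the loop. Every name, event type, and threshold here is illustrative – this isn’t any vendor’s actual API.

```python
# Hypothetical playbooks for well-understood incidents ('the known').
KNOWN_PLAYBOOKS = {
    "malware_click": ["isolate_host", "open_reimage_ticket", "notify_user"],
    "phishing_report": ["quarantine_message", "block_sender"],
}

def handle_siem_event(event):
    """Route a SIEM event: auto-remediate known patterns, else queue for triage."""
    playbook = KNOWN_PLAYBOOKS.get(event["type"])
    if playbook and event.get("confidence", 0.0) >= 0.9:
        # Known pattern with high confidence: run the playbook directly.
        return {"mode": "automated", "actions": playbook}
    # Novel or low-confidence events go to an analyst instead.
    return {"mode": "human_in_the_loop", "actions": ["open_incident"]}
```

The interesting design choice is the confidence gate: it’s what separates the orgs triggering automation straight from the SIEM from those keeping a human in the loop for everything.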

One challenge is that you have to trust the model/system that’s generating incidents, to ensure that you don’t remediate inappropriately.  One CISO said that automation is like gun control – it means hitting what you aim at.  Another organization has an explicit workflow requiring every new incident type to be automated (or triaged as to why it can’t be).  Nearly all expressed frustration at how much time and energy they are spending gluing systems – both security and IT – together.  One remarked that it’s like the late 1990’s again: everything is hub and spoke, and asked: where’s the middleware for security?  A number were very interested in being able to consume security as microservices delivered from the cloud.  For those with managed security services, they’re planning to put orchestration and automation into their next renewal contracts.

And they plan to do the same for AI, assuming that the technology matures in time.  Right now, only about a third of the orgs are using AI, and the vast majority of those are implementations of basic machine learning.  Several were using cognitive computing to handle unstructured information, and only a handful were starting to explore deep learning.  There were a number of common goals and obstacles.

Almost universally the desire is to use AI to remove the noise and get to the signal.  SIEM was acknowledged as the ante to get into the game, though a number of CISOs are wondering if an advanced AI can replace a SIEM (and if the regulators will allow that).  The ‘black box’ of deep learning is expected to be a problem with gaining approval from auditors.

A surprisingly large minority had either built, or were looking to build, a security data lake from scratch, and then implement their own incident discovery capabilities within the lake.  The majority are looking for vendors to build COTS products along those lines, again as a replacement for a traditional SIEM. As a number agreed, those projects are expensive and high-risk – and they can’t afford mistakes with a limited budget.  AI, and especially machine learning, within individual segments, like application source code analysis or user behavior analytics, was viewed as much more ‘real’ today than building a system to discover a rock in a data lake.  I made a comment about many of those turning into data swamps, to a lot of knowing laughs around the table.

That triggered a good conversation about the need to ‘move security to the left’.    For source code analysis there was wide agreement that security needs to get involved in the DevOps – or DevSecOps – process.  For UBA a plurality spoke to cultural barriers to monitoring all the people all the time.  Big Brother is something they want to avoid, and one described an internal marketing effort to rebrand it ‘Big Mother’ instead:  Not watching over you, but watching out for you to keep you safe.

And that brought the group to the longer term goal:   They want to have an AI consuming inbound security information (books, articles, alerts, threat feeds, news stories, and so forth), then consolidate it down into proactive actions and recommendations, including an initial, automated survey for IOCs in the environment – a push to hunters, instead of a pull from responders. Getting there requires both trust in the AI, and skills that don’t exist yet.
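The ‘push to hunters’ idea above could start as something quite simple: extract indicators of compromise from inbound intel and fan them out as proactive hunt tasks, rather than waiting for an alert to fire. This sketch uses only regex extraction as a stand-in for the AI; the feed format, query syntax, and function names are all assumptions for illustration.

```python
import re

# Two common IOC shapes; a real system would handle many more.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-f0-9]{64}\b"),
}

def extract_iocs(text):
    """Pull deduplicated IOCs of each known type out of raw intel text."""
    return {kind: sorted(set(p.findall(text))) for kind, p in IOC_PATTERNS.items()}

def hunt_tasks(feed_items):
    """Turn inbound intel items into proactive hunt tasks, one per IOC."""
    tasks = []
    for item in feed_items:
        for kind, values in extract_iocs(item).items():
            for value in values:
                tasks.append({"query": f"search {kind}={value}", "source": "threat_feed"})
    return tasks
```

The point of the sketch is the direction of flow: intel arrives, and queries are pushed out to hunters automatically – the opposite of responders pulling context after an alert.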

Skills were woven throughout the entire conversation, and were a point of concern.  Orchestration/automation frameworks require programming skills that don’t currently exist on the security team.  For AI, particularly for those building their own data lakes, the largest obstacle is finding data scientists with security backgrounds.  A number of orgs are going out into the colleges – and even high schools – to recruit cybersecurity students for internships, and steer their electives towards programming and data science skills.  There was a strong consensus that existing programs need to evolve and add those to the core curriculum.  They may not be needed today, but they will be.

In the end, there was universal agreement that our security programs need to change posture – to move from responding on our heels to hunting on our toes, and that AI and automation will make that possible.

Filed Under: Security Tagged With: AI, artificial intelligence, automation, CISO, Evanta, Gartner, machine learning, ML, orchestration, security

Securing your Dessert

June 7, 2018 By Doug

(C) Depositphotos / belchonock

I have a joy/frustration relationship with Apple.  Their products are amazing and have changed my life, and at the same time some of their design decisions and choices are user hostile (dongles).  Their software usually just works, but when it doesn’t, well, you get Siri. On one point though, their heart – and code – is in the right place, and that’s with security and privacy, so kudos Apple: Mojave certainly isn’t a barren desert – it’s a good dessert when it comes to security and privacy.

In their upcoming releases, Apple is doing a number of things to dramatically improve security and privacy.  Safari will now take steps to prevent ‘fingerprinting’ by returning only generic configuration information, and by blocking the tracking embedded in comments and social media buttons.  They’re also removing social media account integration into the OS.  Both those are big changes that provide passive protection against invasive tracking.

Other changes include a really nice password API for tools like 1Password (my password manager of choice and recommendation).  The built-in tools are OK, but I’d rather have a purpose-built solution, and Apple’s now putting that choice into our hands. There are camera and microphone warnings, end-to-end FaceTime encryption, and a lot of other small refinements too.

One of the more controversial changes is that they will now block USB data access starting one hour after a passcode was last entered.  That renders GrayKey and similar devices useless – it’s a class-protection feature, rather than whacking the specific bug currently exploited.  Without getting into the policy issue of law enforcement back doors – after all, math is hard and unforgiving (that’s why gravity is not just a good idea, it’s the law) – this is a protection that we all want. Why?  Because it’s only a matter of time before a GrayKey is stolen and reverse engineered.  Then we get a dark-web service: ‘Send us the stolen device, and we’ll send you the data back’.  No thanks.

What else would I like to see?  An option to change my DNS servers, after initial connections (e.g. to a captive portal), to Quad 9 or 1.1.1.1 for additional tracking and malware protection (I recommend Quad 9, by the way).  Split DNS would be even better – use the network-provided server for local traffic, but a standard one for all other queries.  While we can do that on home routers, it’s a real problem when on cellular data.
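The split-DNS logic I’m wishing for is simple to state: if the name belongs to a local domain, ask the network’s resolver; otherwise, ask a privacy-focused public one. A toy sketch (the addresses and function name are mine, purely for illustration – Quad 9 really is 9.9.9.9, but the local resolver address is an assumption):

```python
def pick_resolver(hostname, local_domains, network_dns="192.168.1.1"):
    """Split DNS sketch: local names go to the network's resolver,
    everything else to a privacy-focused public resolver (Quad 9)."""
    for domain in local_domains:
        # Match the domain itself or any name underneath it.
        if hostname == domain or hostname.endswith("." + domain):
            return network_dns
    return "9.9.9.9"
```

That one branch is the whole feature: local traffic keeps working behind captive portals and home routers, while everything else gets the tracking and malware filtering of the public resolver.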

I’d also like to see an iOS application outbound firewall.  I really don’t want my games sending data back, and while I can block it on cellular, I can’t on wifi.  That’s been an outstanding request in their queue for years.

A bigger stretch (because the content providers would probably freak out), as a separate paid service, would be an iCloud-based VPN that ‘just works’ to protect against ISP eavesdropping, tracking, and HTML injection.  The ultimate would be an Apple search engine that doesn’t monetize search data.  Just please don’t have Siri do it, or we’re likely to get a Beatles album when we’re looking for information on apple pie recipes.

Seriously, Apple’s gone a long way to making security consumable by everyone, not just those who have the time and inclination to follow (or build) their own recipe.  Kudos to the company, and particularly to Tim Cook for building a business model of serving customers instead of exploiting consumers. That’s a big reason why I recommend Apple  products to my family and friends – secure apple pie makes a great dessert.

Filed Under: Security Tagged With: 1password, apple, dessert, FaceTime, password, privacy, quad9, security

Principle of Least Data

May 23, 2018 By Doug

(c) Depositphotos / Photojog

GDPR goes live this week, which is the first real salvo against corporate data creep (as in both expansion, and creepy).  Companies these days tend to keep every bit of information that they can, because they might possibly need (or be able to exploit) it someday.  Is it worth the risk?

A lot of companies make their money using and exploiting user data, with varying degrees of disclosure. One large social media company recently has been doing a lot of tapdancing around their business model, and the fact that the more privacy controls they provide their users, the less information they have to sell to their customers.  They care about user privacy only to the extent that it jeopardizes people’s willingness to provide the data they need to sell their services. It’s a bit disingenuous to pretend otherwise.

But this is more insidious than overt, intentional data collection – many companies capture and retain data just because it’s there.  Here’s one example:

You’ve done everything right.  You’ve disabled the default password on your router.  You’ve created a new SSID.  You’ve set a good wifi passphrase.  Then you discover your cable provider has captured that information and stored it on an insecure site.  That happened this week: Comcast Bug Made it Shockingly Easy to Steal Wifi Passwords.  My first question after reading it was why did Comcast even capture that information to begin with?  Even if it was to make it easier for customers to configure their networks (doing it via a web portal rather than on the device), that’s no reason to retain the information, let alone put it on a publicly accessible server.  That’s just sloppy and lazy.

More importantly, it violates something I call the Principle of Least Data.  Organizations should only collect information for a current business, legal, or regulatory purpose.  They should only retain that information as long as required to complete those purposes, and should actively dispose of data as soon as permitted.  If it’s not there, it can’t be leaked or stolen.
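In code, the principle reduces to a retention policy keyed by purpose, enforced on a schedule: if a record has no recognized purpose, or its window has lapsed, it goes. The purposes and windows below are made up for illustration; real retention periods come from your legal and regulatory requirements.

```python
from datetime import datetime, timedelta

# Hypothetical per-purpose retention windows.
RETENTION = {
    "billing": timedelta(days=7 * 365),   # e.g. a legal/regulatory requirement
    "support_chat": timedelta(days=90),   # current business purpose only
}

def purge_expired(records, now):
    """Keep only records whose purpose still justifies retention.
    Unknown purposes ('we might need it someday') are dropped outright."""
    return [
        r for r in records
        if r["purpose"] in RETENTION
        and now - r["collected"] <= RETENTION[r["purpose"]]
    ]
```

Note the default: anything without a named purpose is disposed of, which inverts the usual keep-everything posture.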

GDPR changes the risk equation, which is why Facebook just moved all non-EU citizen profiles outside the zone.  Many other organizations are doing something similar, and adopting two-tier policies for EU and non-EU data collection. Some are making life easy and just adopting GDPR worldwide.  But the truth is that we will see breaches and disclosures of GDPR-subject information. That’s inevitable.  The stakes are just much higher now.  So the best defense is simply to get rid of the data you don’t need anymore (or never really needed to begin with).

So when you launch a new project, make sure someone asks the question ‘why are we collecting it, and when do we get rid of it?’  “Because we might need it someday” isn’t a valid option – keep the least data you need to do your business.

P.S. The story above is why I strongly recommend that people purchase their own cable modems and wifi-routers – and not use any of the hardware provided by an ISP.  These should be two separate devices (not an all in one), as the carrier controls the modem, while you have full control over your network.  Bonus: You’ll save a ton of money over the life of the device.  Just make sure you change that default password and update the firmware on a regular basis!

Filed Under: Security Tagged With: security

