
Technical Storytelling – Keeping your Audience Awake

December 4, 2018 By Doug

(C) Copyright Depositphotos / @luislouro

When people ask me what I really do for a living, I tell them I’m a storyteller: I listen to people tell me how things are, apply my experience and insight to the situation, and then tell a story about how we can make the future better. After a recent keynote, I was flattered when several people came up and asked me to share some thoughts and ideas on technical storytelling. That’s a wide-ranging topic, and while I’ve been studying it for years, I don’t pretend to be an expert on all of it. So here’s a short introduction to modern technical storytelling.

We are all wired for story, and have been since the first tale told around the campfire – stories, and the sharing of them, are one of the defining characteristics that make us human. Those stories have things in common. They have a beginning, a middle, and an end, and often have heroes and villains, trials and tribulations, and a twist or surprise. Stories connect us with each other, and transform us from individuals – or vendor and customer – into a team working together towards a better outcome. Use cases are stories when built properly. Product demos are most effective when they’re stories – hint: ‘this tab does this, and this button does that’ is not a story!

Stories can be visual, verbal, or written. I’ve found they work best either as visual plus verbal, or as writing on its own (though some pictures can help). In this post I’ll talk mostly about formal presentations, but whiteboards fall into the same category – you just draw the picture as you go. The worst combination is written plus verbal. Case in point: the infamous slideument, which tries to multi-purpose a presentation as a document. It does neither job well, and usually sucks at both. By this measure, about 99% of presentations, well, suck. Seriously, if you’re going to leave your slides behind – or heaven forbid, read them – you don’t need to show up in the first place. That’s because of what Guy Kawasaki calls the Bozo Effect. It goes something like this:

If you need to put eight-point or ten-point fonts up there, it’s because you do not know your material.  If you start reading your material because you do not know your material, the audience is very quickly going to think that you are a bozo.  They are going to say to themselves ‘This bozo is reading his slides.’  I can read faster than this bozo can speak.  I will just read ahead.

And if you’re thinking, ‘well, I don’t read them, I just talk to them’, then you’re missing the point. If there’s text on the screen, the audience will read it, wonder what you’re skipping, and read instead of listening to you. That’s especially bad at a keynote, where they have to strain to read the tiny eight-point font with the wrong copyright date in the footer. A few words, a strong visual, and a compelling story are all you need – and far more effective, as books like Made to Stick, Brain Rules, and A Whole New Mind, which explore the science behind all this, make clear.

The best stories are always personal, which is why it’s critical to design and customize your presentation for the audience. Bespoke trumps off-the-rack and sends a message that you are fully invested in the experience. That’s the opposite of skipping slides in a ‘canned’ deck, which sends a clear message that the audience wasn’t worth the effort to prepare properly. Likewise, running long at all, or short by a lot, sends a message that the audience wasn’t worth practicing for. Having a stump speech is fine (I have several), but they’re always tailored to the audience. What’s nice about the Zen/TED style is that since the bulk of the content is verbal, a lot of the tailoring can be done in the speaker notes.

More substantial tailoring is really like building a presentation from scratch. That begins with a brainstorming session. I like going analog – one thought per post-it – clustering, rearranging, organizing, and cutting(!) on a whiteboard before settling in to build the content and map visuals to the concepts. For visuals, it’s worth investing in high-quality images or graphics. My style is photographic, so I either get my images at Depositphotos.com or shoot my own. Yours may be more graphical, but please, no cheesy clip art. And keep IP rights in mind – only use images that are appropriately licensed.

Fonts are important – especially using them consistently and placing them properly. One pixel out of alignment, or a slightly different size, causes a cognitive lurch and distracts the audience. Speaking of size, a good rule of thumb: if you can’t read the font on your laptop screen from across the room, it’s too small. I built a custom template with fonts pre-set and gridlines to ensure that everything is in place. And speaking of templates, don’t use one with a standard header and footer. If the audience doesn’t know who you are and why you’re there, you’re doing something wrong. And since you don’t ever (seriously, never…no really, never!) share your deck – it means nothing without your narrative, after all – you don’t need to worry about a copyright notice on every page. I highly recommend slide:ology and Presentation Zen as guides to building better decks; I re-read them regularly, especially before building a major presentation.

Last: practice, practice, practice. Rehearse out loud, with the remote control, sitting or standing as you will actually deliver it. If you really want good feedback, record yourself (on video!) and watch the session. Then do it again. Repeat until you know what you’re going to say on the next slide and can seamlessly hit the remote button in the middle of a sentence – no pause needed between slides. Trust me, watching yourself is probably the most painful experience you can inflict on yourself, but it pays off. If you want to see a master presenter at work, go watch any ‘SteveNote’ – a Steve Jobs Apple keynote.

All told, it takes between 10 and 30 hours to build and practice a new one-hour presentation, which usually runs between 60 and 120 slides and images. For the first few run-throughs, speaker notes with thoughts about what I want to say are hugely helpful, but after enough practice, all you really need on the speaker display is the current and next slide. That’s when you know you’re ready.

Now all this is about how I build and deliver technical stories. A lot of these techniques are universal (e.g. lose the slideument), but others, like the photographic presentation style, are my own creative choices. Bergman, Lucas, Spielberg, Eastwood, Roddenberry, Pournelle, Niven, and Rand are all incredibly effective storytellers, each with a completely different style. I’m not including myself among their ranks, just illustrating that the key to truly effective storytelling is finding your own unique creative voice. We are all storytellers! Every one of us told stories as kids – made believe we were superheroes or heroines, animals, cops and robbers, or simply made something up about how that window got broken. It’s part of being human. We just have to remember, and practice, how to do it well.

It’s worth the effort. Your audiences will thank you.

Filed Under: Security Tagged With: security, storytelling

Beyond SIEM – Next Generation Security Analytics

November 14, 2018 By Doug

(C) Depositphotos / @ooGleb

I’ve written before that security is fundamentally an information management problem. It’s about having good sensors and instrumentation in the environment, having that information flow to a central repository where anomalies can be identified, and then being able to take action back in the environment. That’s traditionally been done through a SIEM solution, and while SIEMs have provided significant advances in our security posture, we need to look ahead to more sophisticated defenses – to move beyond signatures and rules, to behavior.

Endpoint antimalware has undergone a similar transition. We all started out running signature-based antivirus, which was pretty effective – in the early 2000s – at protecting us against known threats. Within a short time, most of the large vendors had about the same hit rate, so it became an arms race to see which vendor could update their signatures fastest as the competitive differentiator. That’s why many programs I work with are moving towards the ‘free’ solutions bundled with the operating system, particularly on the latest OS releases, and redeploying that spend elsewhere.

Elsewhere means modern antimalware solutions focused on the behavior of the system. Attacks have patterns of behavior, so these tools use rules that aren’t based on a simple file hash, but are still common across all users. A web browser opening a new window minimized, which then generates constant traffic, is a pretty good indication that something is up – even if the destination IP address hasn’t shown up as bad on a threat feed.
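To make that concrete, here’s a minimal sketch, in Python, of the shape such a behavior rule might take. The event schema, field names, and thresholds are all invented for illustration – real EDR telemetry is far richer – but note that nothing in it depends on a known-bad hash or IP:

```python
from dataclasses import dataclass

@dataclass
class NetSample:
    """One hypothetical telemetry sample for a single process."""
    process: str       # e.g. "browser.exe"
    window_state: str  # "minimized", "foreground", ...
    bytes_out: int     # bytes sent during this sample interval

def suspicious_minimized_traffic(samples, min_intervals=10, min_bytes=1024):
    """Flag a process whose window stays minimized while it keeps
    sending traffic - no signature or IP reputation involved."""
    streak = 0
    for s in samples:
        if s.window_state == "minimized" and s.bytes_out >= min_bytes:
            streak += 1
            if streak >= min_intervals:
                return True
        else:
            streak = 0
    return False
```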

Security analytics systems have followed a similar path. Initially they focused on catching signatures – IPs, domains, URLs, and hashes that were known IOCs. Later, they started aggregating information across multiple sources to cross-correlate activity and alert on it: downloading information from the payroll system followed by Dropbox activity to a non-approved account, for example. Data activity monitoring or DLP, plus CASB, brought together in a SIEM, catches those kinds of attacks. Don’t get me wrong – this is a huge advance in capability, and catches a large number of attacks early in the kill chain. Yet it falls short when we’re trying to defend against advanced, unknown attack vectors.
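That kind of cross-correlation is straightforward to sketch. Assuming a normalized event feed (the field names, the sanctioned-account list, and the 30-minute window here are all made up for illustration), the logic looks roughly like this:

```python
from datetime import timedelta

APPROVED_SHARES = {"corp-dropbox"}  # hypothetical sanctioned account

def correlate_exfil(events, window=timedelta(minutes=30)):
    """Alert when a payroll download is followed, within `window`, by an
    upload to a non-approved file-sharing account by the same user.
    `events` is a list of dicts sorted by timestamp."""
    alerts = []
    last_download = {}  # user -> time of most recent payroll download
    for e in events:
        if e["type"] == "download" and e["source"] == "payroll":
            last_download[e["user"]] = e["time"]
        elif e["type"] == "upload" and e["account"] not in APPROVED_SHARES:
            t = last_download.get(e["user"])
            if t is not None and e["time"] - t <= window:
                alerts.append(e)
    return alerts
```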

Most modern analytics platforms have started to use AI and machine learning to create individual user profiles. The behavior in the example above might be appropriate for one role or individual, but not for others. These user-level capabilities let us assign risk scores to accounts, but the models are still fairly limited. Most rely on behavior over the previous several weeks or a few months. They’re also largely focused on human users, and ignore entities like bots, servers, or containers…let alone televisions, toasters, and other IoT devices.

That’s the first analytics advance, and the one closest to mainstream: moving beyond user behavior analytics towards true entity analytics for everything in the environment. A lot of that can be automated – platforms can build and maintain behavior models based on past (assumed good) behavior, and then detect deviations from that norm. Yet those too have limits, since they’re generic for a class of entity across multiple organizations.
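A minimal sketch of that idea – learn a per-entity baseline from past (assumed good) behavior, then score deviations – might look like the following. Real platforms model far more than one metric; this just shows the shape of the approach:

```python
import statistics

def build_baseline(history):
    """`history`: daily values of one metric (say, GB sent) for one
    entity - a user, server, container, bot, or IoT device."""
    return statistics.mean(history), statistics.stdev(history)

def anomaly_score(value, baseline):
    """Z-score: how many standard deviations off this entity's norm."""
    mean, stdev = baseline
    return 0.0 if stdev == 0 else abs(value - mean) / stdev

# An entity that normally sends about 1 GB/day suddenly sends 9 GB:
baseline = build_baseline([1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1])
print(anomaly_score(9.0, baseline))  # large score -> worth a look
```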

That’s the second advance: inserting business-cycle knowledge into behavior models. An online retailer probably has a good idea of what their container workload profile looks like. If they see a CPU or traffic spike, they can infer that something has gone wrong (either a security incident or an IT issue) and take the container down – ideally automatically. But if that happened on Black Friday, it wouldn’t be an anomaly.

That’s an obvious example, but consider others. We see a flurry of box.net activity right before the end of a quarter. Alert? If it’s coming from the folks doing our M&A work, maybe. But if it’s a sales rep communicating new pricing to a customer using the customer’s file-sharing solution (rather than ours), probably not. In fact, if our team blocks that, the VP of sales is probably going to have words with our CISO. I could go on, but you see my point: we’re going to need to bake business knowledge into our models. At a minimum, that probably means a 12-month behavior window for most entities, with the business calendar as one of the factors in the model, as sketched below.
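One simple way to encode that is to key the baseline on the business calendar instead of treating every day alike. A sketch, with an entirely invented calendar and numbers:

```python
from datetime import date

PEAK_DAYS = {date(2018, 11, 23)}  # e.g. Black Friday

def calendar_bucket(day):
    """Map a date onto the business cycle (crude proxies, for illustration)."""
    if day in PEAK_DAYS:
        return "peak"
    if day.month in (3, 6, 9, 12) and day.day >= 25:
        return "quarter-end"
    return "normal"

# (mean, stdev) baselines learned per bucket over a 12-month window, so
# Black Friday traffic is compared against prior peak days, not against
# an ordinary Tuesday.
BASELINES = {"normal": (1.0, 0.1), "quarter-end": (2.5, 0.4), "peak": (9.0, 1.5)}

def is_anomalous(value, day, threshold=3.0):
    mean, stdev = BASELINES[calendar_bucket(day)]
    return abs(value - mean) / stdev > threshold

print(is_anomalous(8.5, date(2018, 11, 23)))  # peak day: False
print(is_anomalous(8.5, date(2018, 11, 6)))   # ordinary day: True
```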

The last advance on the horizon is putting the ability to build machine learning models directly into the hands of defenders. For most organizations, a data scientist who understands security use cases and can build a model is out of reach – a very expensive purple squirrel, as one recruiter described it. So we need to make it easy for a defender to build and deploy a model customized to their environment – to merge AI/ML with the existing rule capabilities in our analytics platforms, and alert on events specific to our critical assets and their unique behavior across the annual business cycle.
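To gauge the level of effort we should be aiming for, here’s a rough sketch using scikit-learn’s IsolationForest on a handful of made-up per-entity daily features (traffic, logins, and the calendar bucket from above). A production model would need real features and far more history; the point is that the skeleton is only a few lines:

```python
from sklearn.ensemble import IsolationForest

# Hypothetical daily features for one entity over its history:
# [GB_out, login_count, calendar_bucket_id (0=normal, 1=quarter-end, 2=peak)]
training = [
    [1.0, 12, 0], [1.1, 10, 0], [0.9, 14, 0], [1.2, 11, 0],
    [2.4, 15, 1], [2.6, 13, 1],   # quarter-end days
    [9.2, 40, 2],                 # last year's Black Friday
]

model = IsolationForest(random_state=0).fit(training)

# score_samples: higher = more normal for *this* entity. The same traffic
# volume should look far more anomalous on an ordinary day than on a
# peak day.
print(model.score_samples([[9.0, 38, 2],    # peak day
                           [9.0, 38, 0]]))  # ordinary day
```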

And that’s the evolution of SIEM we’re headed towards: not just a security platform, but a business security analytics platform. Yes, we’re still going to need signatures and rules, as well as automatic, generic behavior analytics – that’s where most of our threats are. For the true APTs, though, we need far more dynamic, flexible, and mass-customized business-aware AI and ML to improve our chances of detecting them before the boom happens.

Filed Under: Security Tagged With: AI, artificial intelligence, behavior, machine learning, ML, security, security analytics, siem, uba, user behavior

Entering the era of pervasive security

November 7, 2018 By Doug

(C) Depositphotos / @katacarix

I often open a keynote presentation by noting that organizations are undergoing a fundamental shift in security strategy – moving from a compliance-focused to a risk-based approach. That shift is still ongoing; even in large and sophisticated organizations there is still a gravitational pull towards ‘doing it for the audit’ rather than ‘doing it because there’s risk’. Yet there’s another transformation on the horizon that most businesses are ill-prepared to address: we’re headed towards an era of pervasive security.

Compliance provides a floor for a security program – it’s the bare minimum that needs to be in place to pass an audit, but it does not mean you’re secure. Nearly every organization behind a large breach in the past few years was compliant and had passed its audits. Yet they were all breached. That realization is one reason most security programs have been moving towards business risk – to provide a more effective program that reflects the real-world threats facing them.

Now that sounds great, and it accurately reflects reality; after all, we can’t secure everything. Limited resources – people, money, time, technology – mean that we have to prioritize and focus our efforts on those portions of the program with the greatest return.  And the bad guys know it.

That’s the fundamental difference between security incidents and IT failures or natural disasters. Parallels are often drawn between them, and programs and plans get written around system outages or tornadoes. That’s fine to a point, but we have active adversaries working against us – attacking our systems, looking for weak points to gain a foothold. One CISO recently said that what keeps him up at night are the low-risk systems: there’s little security around them, and they talk to his high-risk systems.

That’s why we need to enter the era of pervasive security. The good news is that pervasive security, for most organizations today, begins with basic blocking and tackling. Patching systems, scanning for vulnerabilities, threat feeds, encryption, securing identities (especially privileged users), and having good visibility into what’s happening on systems and across the network all contribute to building that platform. But then there’s the largest challenge of all, and ironically it’s IT and product development: we’re still building insecure products and systems.

Security needs to be a mindset baked into our DevOps workflow (DevSecOps!). Engineers, developers, business analysts, UI designers, and DBAs all need to think securely – thinking about how things break isn’t enough; the whole organization needs to think about how things can be broken. Pervasive security by design – hardening systems against attack, making them resilient when they are attacked, and recoverable when they are compromised – will require a fundamental shift in how we build and deploy systems, and funding to go along with it. That’s not an easy shift, and it will take both willpower and investment from the CEO and board level down.

Compliance isn’t going away – especially since, if there is a breach, not being compliant is brand-damaging (even if it wasn’t related to the breach itself). Risk focus won’t go away either – it will help us prioritize where we deploy resources, and will continue to be the language we use with business stakeholders and the board. But those conversations will change; pervasive security will be the new normal for successful businesses in the next decade.

Filed Under: Security Tagged With: business, compliance, pervasive, program, risk, security
