Doug Lhotka

Technical Storyteller


Opinions and commentary are mine, and do not reflect those of my employer.

(C) Copyright 2019-2023
Doug Lhotka.
All Rights Reserved.
Use of text, images, or other content on this website in generative AI or other machine learning is prohibited.

Sly like a (Fire)Fox

June 25, 2020 By Doug

Mozilla has been overriding network settings for DNS in the browser for a while now, motivated by privacy concerns, but a recent move to default to an ISP’s DNS service raises questions and seems inconsistent with that rationale.

DNS over HTTPS is an attempt to block eavesdropping on DNS requests, which is great in theory, but it causes a number of problems, especially with the current design in Firefox.  From an architectural standpoint, having any application use its own DNS resolver rather than the network stack’s configuration is poor design.
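For context, DNS over HTTPS (RFC 8484) simply wraps an ordinary wire-format DNS query inside an HTTPS request, which is exactly why it slips past the operating system’s resolver settings. A minimal sketch of the payload a DoH client would POST – the `build_dns_query` helper is illustrative, not a real library call:

```python
import struct

def build_dns_query(hostname, qtype=1, query_id=0x1234):
    """Build a wire-format DNS query -- the body a DoH client POSTs
    with Content-Type: application/dns-message (RFC 8484)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN)
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

query = build_dns_query("example.com")
print(len(query))  # 12-byte header + 13-byte QNAME + 4 bytes type/class = 29
```

Because that query travels as the body of an ordinary HTTPS POST to the resolver, nothing on the local network – including a Pi-Hole – ever sees it as DNS traffic.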

First, network administrators have real requirements to monitor DNS requests.  For more formal networks, this is often used to block malicious links, track malware, and prevent access to prohibited content.

In my own case, I run a Pi-Hole to block advertising and tracking across all of the devices on my network, and in turn have that pointed to Quad9 (secure DNS of course) to leverage their block list, with CloudFlare as a backup.  Other home users will have parental control software active.  In both cases, Firefox overrides those settings and bypasses any local blockers.  By default.  With no notification or consent.

Their original argument for making this opt-out was that most users won’t turn it on if it’s opt-in.  I can sort of get that, and previously the default DNS servers were pretty benign.  However, with the recent announcement, Firefox will now default to an ISP’s ‘secure’ server if you’re on their network.  Mozilla claims that making this change is OK because users can opt out, but again, that’s contradictory to their reasoning for opting users in by default in the first place.  In any case, this doesn’t seem exactly in line with their previous position of providing secure DNS to avoid ‘ISP eavesdropping’, does it?

I’m not mentioning the specific ISP, because I expect it’s just the first of several that are going to go down this path.  And while it’s possible that they’ll actually provide secure, private, non-tracked, non-filtered DNS lookups, there are loopholes in the Mozilla DOH Resolver Policy. And when it comes to ISPs, let’s just say that past practices are cause for reasonable concern.

What Firefox should do instead is pop a configuration screen that allows the user to opt-in both to DNS over HTTPS, and then select the server they’d like to use.  No default.  No automatic enablement.  When new server options are added, just pop that screen up again and ask if they’d like to change.   Empower the users to make a choice based on their own priorities and interests.
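The flow proposed above is easy enough to sketch – everything here (`DoHPreferences`, `needs_prompt`, the server list) is hypothetical illustration, not Firefox internals:

```python
# Hypothetical opt-in flow: no default, no automatic enablement,
# and a re-prompt whenever a new resolver option appears.
KNOWN_SERVERS = ["Cloudflare", "Quad9", "NextDNS"]

class DoHPreferences:
    def __init__(self):
        self.enabled = None        # None = user has never been asked
        self.server = None
        self.seen_servers = set()

    def needs_prompt(self, available_servers):
        # Ask on first run, and again whenever a new server is added
        return self.enabled is None or not set(available_servers) <= self.seen_servers

    def apply_choice(self, available_servers, enabled, server=None):
        self.enabled = enabled
        self.server = server if enabled else None
        self.seen_servers = set(available_servers)

prefs = DoHPreferences()
assert prefs.needs_prompt(KNOWN_SERVERS)               # first run: ask
prefs.apply_choice(KNOWN_SERVERS, enabled=True, server="Quad9")
assert not prefs.needs_prompt(KNOWN_SERVERS)           # choice made: stay quiet
assert prefs.needs_prompt(KNOWN_SERVERS + ["NewISP"])  # new option: ask again
```

The key design point is that `enabled` starts as `None` rather than `False` – the browser can distinguish “never asked” from “asked and declined,” so the user is prompted exactly when a decision is needed and never silently defaulted.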

That’s how you support user privacy.

Filed Under: Security Tagged With: cybersecurity, dns, firefox, mozilla, security

Viability scanning: discovering the security risk of technical debt

July 31, 2019 By Doug

(c) Depositphotos / @stockasso

Flash was one of the great technologies of the 1990s, bringing rich content to the (largely) text-based Web of the time, but it evolved in an era before widespread security risks and is no longer fit for purpose.  Adobe announced its end of life two years ago this month, and yet there are sites that either use it by default or require it to function at all.  Windows 7 is much closer to end of life, but many companies won’t be migrated in time.  Those are just two examples where technical debt will become a security risk, yet we never seem to learn.

If you’ve read my articles, you know I have a business focus.  I get business cycles, budgets, and risk tradeoffs.  In some cases, delayed migration might be the right answer – the classic is a Windows XP system that runs a multi-million dollar piece of industrial equipment.  That’s fine to a point, but it requires compensating controls, which may work in isolated instances but don’t scale.  For ones like Flash, the only real option is replacement.  We know our adversaries stockpile vulnerabilities in systems with announced end of life, to deploy once the last patches have shipped.  When that happens, companies are going to be faced with either going dark, emergency replacements, or accepting significant risk and liability for what, at that point, will border on negligence.

Looking forward, companies need to move towards life-cycle budgeting for new technology acquisitions.  That involves assessing the real lifespan of a particular platform, and planning ahead when the original system is acquired.  That will drive better decisions during the development and acquisition process, avoiding vendor (and version) lock-in that ties solutions to dead-end platforms.  We also need to do the same thing for existing systems: take inventory and do a realistic assessment of the lifespan of the assets.  Call it ‘viability scanning’ to go along with ‘vulnerability scanning’.
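A viability scan can start out embarrassingly simple: an inventory with end-of-life dates and a classification rule. A sketch, with made-up assets and thresholds:

```python
from datetime import date

# Illustrative "viability scan" -- the assets, dates, and thresholds
# are examples, not a real inventory format or standard.
inventory = [
    {"asset": "Flash runtime",      "eol": date(2020, 12, 31)},
    {"asset": "Windows 7 fleet",    "eol": date(2020, 1, 14)},
    {"asset": "XP mill controller", "eol": date(2014, 4, 8)},
]

def viability(asset, today):
    """Classify an asset by days remaining before vendor end of life."""
    days_left = (asset["eol"] - today).days
    if days_left < 0:
        return "past EOL: replace, or isolate behind compensating controls"
    if days_left < 365:
        return "EOL within a year: fund the migration now"
    return "on the clock: plan replacement into the multi-year budget"

today = date(2019, 7, 31)  # the date of this post
for item in inventory:
    print(f"{item['asset']}: {viability(item, today)}")
```

Unlike a vulnerability scan, none of this requires probing anything on the network – the hard part is collecting honest EOL dates, which is exactly the vendor-transparency problem discussed below.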

This analysis is a convergence of enterprise architecture, IT governance, program management, and cybersecurity.  On the technology side, it needs to include end of life dates, vendor history, vendor viability, product viability and openness.  On the skills side it includes the ability to hire quality staff to maintain, enhance and if necessary, port the application going forward.  On the business side, it needs to be baked into the overall business plan and multi-year budget cycle.  Technical debt always costs more at the end of the loan than the beginning – just ask the victim of any recent breach where the patches had already been released.

Equally important is the vendor side of the equation.  Vendors need to announce end of life dates, rather than letting people assume from past behavior.  Better yet, they need to have a policy for announcing those dates – a promise against short-notice end-of-life surprises.  For example, on the good side is Microsoft, who has both pieces in place – no one’s surprised when they drop support.  The counter-example is Apple, who has neither – we never really know when an OS version is no longer supported…at best we’re left to guess.

In the end though, it’s up to the business owners to make the decision.  Technical debt only gets worse, and I’ve yet to see an organization that does well with deficit spending.

Filed Under: Security Tagged With: cybersecurity, enterprise architecture, flash, it governance, security, technical debt

GDPR Fines: So now we know

July 10, 2019 By Doug

Copyright © 2016 Alexey Novikov

Over the past few years, as companies I work with have been getting ready for GDPR, everyone knew the potential size of the fines, but no one really knew if regulators would impose them at that scale.  Now we know.

In the past few days, Marriott (https://thehackernews.com/2019/07/marriott-data-breach-gdpr.html) and BA (https://thehackernews.com/2019/07/british-airways-breach-gdpr-fine.html) were both hit with $100M+ fines for breaches.  While both are going to appeal, the benchmark has been set, and we now know that the regulators are serious about enforcement.  One interesting fact – if the reports are accurate, Marriott is being fined under the GDPR, while the breach occurred before it went into effect.   That certainly changes the risk equation, as retroactive security is, alas, still beyond our ability today.  I suspect we’ll see a similar seriousness with CCPA (the new California regulation), though those costs will include consumer litigation as well.

So what’s an organization to do?  Protecting against breaches is still critical, but they’re still going to happen.  I’d argue that early detection and quick response are table stakes in the current regulatory environment.  Recent studies show that lateral movement begins within 18 minutes of the initial breach, and data exfiltration can quickly follow.  Now to be fair, you’re probably not going to lose 90 million records overnight, but for a smaller firm, losing hundreds of thousands is still enough to close the doors.

Which leads to the big gap I see, particularly at smaller companies – business-hour security isn’t enough anymore.  At a minimum, that means after-hours tier-1 coverage with a rapid on-call escalation process; for larger organizations it may include 7×24 investigation as well.  On top of that, I’ve seen a lot of teams focusing on improving detection over the past couple of years, to good effect, but without the ability to act on alerts very rapidly to stop a breach in progress, improved detection quickly reaches a limit.  Automation for remediation is critical – both to ensure that it occurs rapidly, and that it happens in a controlled and planned manner during a stressful situation.  Running through the data center yanking cables to stop ransomware isn’t exactly an ideal response approach.
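The ‘controlled and planned’ part boils down to pre-approved playbooks: decide the response actions before the incident, then let automation execute them. A hypothetical sketch – the alert fields and action names are illustrative, not any real SOAR product’s API:

```python
# Pre-approved playbook: (alert type, severity) -> ordered response actions.
# Anything not covered escalates to a human rather than doing nothing.
PLAYBOOK = {
    ("lateral_movement", "high"): ["isolate_host", "disable_account", "page_oncall"],
    ("exfiltration", "high"):     ["isolate_host", "block_egress", "page_oncall"],
    ("malware", "medium"):        ["quarantine_file", "open_ticket"],
}

def respond(alert):
    """Map an alert to its pre-approved actions; unknown alerts escalate."""
    key = (alert["type"], alert["severity"])
    return PLAYBOOK.get(key, ["escalate_to_analyst"])

print(respond({"type": "lateral_movement", "severity": "high"}))
```

The point of the table isn’t sophistication – it’s that the decisions were made calmly in advance, so the 2 a.m. response is consistent instead of improvised.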

In one of my keynote talks, I note that security is an information management problem – without good information you don’t know what’s going on, and without taking action on that information, there’s little point in knowing.  The answer to GDPR, CCPA and other regulatory risk is to ensure that your security information lifecycle is complete, active, and effective.

Filed Under: Security Tagged With: automation, fine, GDPR, security, security information

