Doug Lhotka

Technical Storyteller



WannaCry – Who’s to blame?

May 23, 2017 By Doug

(C) 2009 Andrew Lewis / istockphoto.com

The latest strain of ransomware has been in the news, accompanied by somewhat sensationalistic coverage.  Yes, it’s a big deal, but not unexpected – ransomware is only going to get worse.  Right now it’s focused on availability; next it’ll be integrity (more on that in the next post).  One question that’s just starting to be asked is, whose fault is it?  I’m looking beyond the cyber criminals who released it, and towards the IT ecosystem that enables this to happen.

The NSA is a target for a lot of pundits.  From media reports, there was an internal debate about disclosing the vulnerability to Microsoft, but ultimately the agency decided against it.  It’s easy to take an absolute position on this – either we should hoard vulnerabilities for intelligence purposes, or we should always disclose.  Unfortunately, we live in a grey world, and such black and white absolutes are hard to come by.  Once the agency realized that the tools had been exposed, they privately notified Microsoft, who quickly issued a patch, and they’re to be commended for that.  They’re in a tough position, and it’s not an easy answer.  I’ll have more thoughts on policy options in the future.

End users certainly share some of the responsibility.  Patching is often cited as the first and most important defense against attacks.  I’d argue that running a supported operating system is even more important.  Folks still running XP need to either isolate the machine (physically offline) or upgrade.  That may mean spending money to update industrial systems, or changing procedures to run them disconnected from other networks.  There are no other viable options.  Consumers need to turn on automatic updates on their personal machines, and apply them regularly.  What they don’t need is to spend more money on consumer antivirus.
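For the curious, here’s a minimal Python sketch of the “are you on a supported OS” check.  The end-of-support dates are the publicly announced ones; the detection logic is a simplification for illustration, not how a real asset-management tool works.

```python
import platform
from datetime import date

# Publicly announced end-of-extended-support dates for older Windows releases.
END_OF_SUPPORT = {
    "XP": date(2014, 4, 8),
    "Vista": date(2017, 4, 11),
    "7": date(2020, 1, 14),
}

release = platform.release()          # e.g. "XP", "7", "10" on Windows
eol = END_OF_SUPPORT.get(release)

if platform.system() != "Windows":
    print("Not a Windows machine - check your own platform's support policy.")
elif eol is None:
    print(f"Windows {release}: no end-of-life date in this table - likely still supported.")
elif date.today() >= eol:
    print(f"Windows {release} left support on {eol}: isolate it or upgrade.")
else:
    print(f"Windows {release} is supported until {eol} - keep automatic updates on.")
```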

Businesses are in a tougher position – they may have thousands of machines that need updating, including both servers and laptops/desktops.  There are tools available (for example, IBM BigFix) that make this straightforward, but often it’s not the actual patching that’s the issue – it’s compatibility with enterprise systems.  Corporate development needs to remove as many platform dependencies as it can, to make applying patches less risky.  But we can’t even get rid of Flash, Silverlight, and Java, so OS linkages are likely to take even longer to fix.  Businesses also need to build processes to test and apply security patches quickly – it’s just hygiene, but it needs a higher priority than it currently gets.
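As a sketch of what that hygiene looks like in practice, here’s a toy patch-compliance sweep in Python.  The inventory format and the KB identifiers are hypothetical stand-ins for whatever your endpoint management tool (BigFix, WSUS, and the like) actually exports; the point is the workflow – compare what’s installed against what’s required, and surface the gap.

```python
# Hypothetical inventory: hostname -> set of installed KB identifiers,
# as exported from an endpoint-management tool (example data only).
inventory = {
    "web-01":  {"KB4012212", "KB4015549"},
    "db-02":   {"KB4015549"},
    "hr-lap3": set(),
}

# Example list of security patches the organization has declared mandatory.
required = {"KB4012212", "KB4015549"}

def compliance_report(inventory, required):
    """Return {hostname: missing_patches} for machines that are out of policy."""
    return {
        host: sorted(required - installed)
        for host, installed in inventory.items()
        if required - installed
    }

for host, missing in compliance_report(inventory, required).items():
    print(f"{host}: missing {', '.join(missing)}")
```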

Which brings us to Microsoft.  They have been making this harder by changing how patches are provided (combining security and feature patches, and drastically reducing the information about what’s in a patch).  Both of those need to change to make it easier to assess and test updates.  On Windows 10, they also force-download and install patches – something that’s controversial.  That’s hit me with high mobile data usage, but it probably keeps the vast majority of people far safer than the Windows 7 approach.  Are they responsible for the bug?  Sure, but I can’t beat them up over it – all software has bugs, and Windows 10 is a major improvement over previous editions.  By comparison, their major competitor has a growing problem with defects.

So who’s to blame?  At some level, we all are: security professionals who make easy-to-say statements like ‘upgrade and patch immediately’ without regard to system stability or upgrade cost, pundits who say ‘disclose all vulnerabilities’ without regard to legitimate national intelligence needs, vendors who focus on rapid release of features at the expense of system stability, businesses who fail to invest in keeping their IT infrastructure current, and end users who blindly assume that all the others will take care of it for them.

Not an easy fix.

Filed Under: Security

Laptops & Airplanes: Security vs. Safety

May 20, 2017 By Doug

© Paul65516 | Dreamstime Stock Photos & Stock Free Images

Airplanes and laptop bans have been in the news a lot recently.  As someone who flies a fair bit, I’ve been watching the circus with both horror and amusement.  I have a suspicion as to what the real motivation may be, but we’ll get to that in a bit.

There’s allegedly intelligence information that terrorists are building laptop bombs to take down commercial flights, like what was done recently in Somalia.  Let’s accept that at face value for the moment.  In the Somalia case, airport workers were involved, which is one of the largest security risks in aviation today – a corrupt trusted insider.  I suspect, but don’t know, that EU and US airport security is somewhat better than Mogadishu’s, but let’s set that aside too.

All this is driving a potential ban on laptops and tablets in the passenger compartment of aircraft – but not in the cargo hold.  While baggage in the cargo hold is screened, the asserted motivation is that laptop bombs would have to be manually actuated.  Evidently timers are beyond the capability of terrorists?  Strikes me as a bit unrealistic.

What is real, though, is the risk of fire from a few hundred lithium-ion batteries improperly packed, on devices that have been slept (not turned off), in soft-sided suitcases tossed around in the baggage handling process.  As Samsung found out with the Galaxy Note 7, this is not an academic issue – if one of those devices had caught fire in a cargo hold, the runaway fire would have been impossible to extinguish, and would likely have brought the aircraft down.  If we can’t allow an e-cigarette in the cargo hold, why would we allow a laptop with 10-100x the thermal energy?  That’s exactly what the airlines are pushing back on.
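A quick back-of-envelope check on that 10-100x figure, with assumed but typical battery capacities (an e-cigarette cell around 650 mAh at 3.7 V, laptop packs from roughly 40 Wh up to the usual 100 Wh carry-on limit):

```python
# Rough stored-energy comparison between an e-cigarette cell and laptop packs.
# Capacities are assumptions in the typical range, not measurements.
ecig_wh = 0.65 * 3.7          # ~650 mAh at 3.7 V  ->  ~2.4 Wh

laptop_packs_wh = {
    "small ultrabook": 40,
    "typical laptop": 60,
    "large workstation": 99,   # near the usual 100 Wh carry-on limit
}

for name, wh in laptop_packs_wh.items():
    print(f"{name}: {wh} Wh, roughly {wh / ecig_wh:.0f}x an e-cigarette cell")
```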

Fire is absolutely a far greater risk than a terrorist incident.  65 million people fly between the US and Europe annually.  Between tablets and laptops (some people have both, most have at least one), that’s a massive increase in fire risk – to the point where I’d have serious second thoughts about flying.

If there’s a real risk of a ‘laptop bomb’, then require additional screening – power them on & swab them down.  That’s far more effective than sticking them in the cargo hold.  It’ll avoid the economic damage from reduced passenger traffic, lost/damaged/stolen devices, and the inevitable fires.  What it will do is increase security costs, add additional screening at the boarding areas for international flights (or at the main checkpoint if they do this for all flights), and generally create more passenger friction.  That could be mitigated by exempting Global Entry and/or TSA PreCheck passengers from the additional screening, but it’ll be an impact regardless.

And here’s where my suspicious, cynical nature comes in.  What if that screening is the real end goal, and all this chatter about a flat-out ban is designed to manipulate public opinion, and get us ready to accept the ‘lesser of evils’?   “It’s a pain, but it could have been worse – they could have banned them completely.”

Hmmmmm.

Filed Under: Security

3D Facial Authentication on iPhone 8?

February 23, 2017 By Doug

(c) Depositphotos / @ adogslifephoto

MacRumors has an interesting article on the iPhone 8 with a rumor that it’ll forgo the fingerprint reader in favor of a 3D facial scanner.  It’s an interesting idea that could be very convenient, but would it be secure?

The obvious first question is, can it be spoofed?  It’s relatively straightforward to capture a 3D model of someone’s face, including visual coloration.  That can then be split into a texture, which is digitally unwrapped, printed, and transferred to a flexible skin.  The 3D model can be printed on a consumer 3D printer, and then recombined with the printed skin to form a reasonably accurate 3D model of someone’s head.

Will it be good enough to spoof the sensor?  If it includes IR sensors that look for non-uniform thermal images, it’d be more reliable, but if it’s just image and morphology recognition, it should be possible.  A lot will depend on the tolerance built in, and most facial recognition systems have a crossover problem – tune them tighter against spoofing and they reject the legitimate owner more often.
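To make the crossover problem concrete, here’s a minimal sketch with made-up score distributions: loosen the match threshold and more spoofs get through (false accepts); tighten it and the real owner gets locked out more often (false rejects).  The point where the two error rates meet is the crossover, or equal error rate.

```python
import numpy as np

# Hypothetical match scores: higher = more confident the face matches.
rng = np.random.default_rng(0)
genuine = rng.normal(0.75, 0.10, 5000)   # same person
impostor = rng.normal(0.45, 0.10, 5000)  # spoof / different person

thresholds = np.linspace(0, 1, 501)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate

# Crossover ("equal error rate"): the threshold where FAR and FRR meet.
i = np.argmin(np.abs(far - frr))
print(f"threshold={thresholds[i]:.2f}  FAR={far[i]:.3f}  FRR={frr[i]:.3f}")
```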

Assuming Apple releases a phone that has this, and allows charging and headphones at the same time, without looking like (homage to Bruce here) a bleached squid dangling from my shirt, I’ll give it a try and let you know.

Next we have the issue of compelled unlocking.  This is a murky area of law, and we don’t have clear direction.  Forcing someone to type in a password is probably not going to survive a court challenge.  Requiring someone to press a finger to a sensor is currently winding its way through the courts, and that outcome is definitely in the grey area.

I suspect that requiring someone to hold still while a phone is held up in front of their face is likely to be permitted.

Last, these systems have real challenges with false positives and negatives – they range from nearly a joke (hold up a picture) to annoying (high failure rate).

Apple’s managed to do some interesting things with usable, user-friendly security, so if anyone can get the tradeoffs right, it’s probably them.  I just hope it’s not the sole option on a flagship product.


Filed Under: Security

