Doug Lhotka

Technical Storyteller


Opinions and commentary are mine, and do not reflect those of my employer.

(C) Copyright 2019-2023
Doug Lhotka.
All Rights Reserved.
Use of text, images, or other content on this website in generative AI or other machine learning is prohibited.

Voting Fraud – Back to the Future

June 16, 2017 By Doug

(c) www.depositphotos.com / roibu

We’ve forgotten that things like stuffing ballot boxes, buying votes with alcohol, missing and broken voting machines, and all other manner of manipulation occurred in the past (some more recent than others).  I’d argue that on balance, our elections today are the most fraud-free that they’ve ever been, but with the advent of more and more electronic voting equipment (and, heaven forbid, internet voting), the risk may be again growing.

Many of the current systems use a single machine where votes are both registered and counted.  Most have a paper audit trail, though at least one model uses thermal paper, which is not remotely archival.  Those audit trails are rarely machine-readable, often cram a large number of races and issues into a tiny font, or worse, scroll part of the ballot off the tape because it’s too long.

Combining vote selection and counting into a single system makes those machines a single point of failure, whether through fraud, hacking, or simple malfunction. I had a friend who worked for an IV&V company certifying voting equipment.  She’d worked in aerospace, and the procedures were as good as any I’ve ever seen, but even with all that, issues still got through – new vulnerabilities, failure to patch machines, or simply insecure design are all problems that still plague us today.  There’s simply no way to ‘prove’ that the machines are secure, even through formal validation methods.  There are too many machines, too many locations, too many opportunities for tampering, and too much code to test.

So what to do?  I believe that looking to the past provides the answer: paper ballots.  But let me explain the nuance here.  Build one machine on which the voter makes their choices, and have it print out a paper ballot using good old-fashioned pigment ink or toner so it’s archival.  Having the machine print the ballot, rather than a human marking the form, eliminates the ‘hanging chad’ and the improperly marked ballot.

Once printed, the voter can validate that the printed votes match their intent, then deposit the ballot into a separate machine that tabulates the votes.  The paper ballot remains the legal record, and can be preserved for hand counts as long as needed.  Splitting the system into two separate machines puts the most complex code – the user interface for casting votes – at a lower risk level, because the voter can verify its output.

The counting machine does need integrity checks and a higher level of validation, but we can mitigate that risk by using two different machines from different vendors and comparing the results.  Of course, that only works if there’s a durable, voter-validated paper ballot as the legal record, and manual transit between the machines.
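The cross-check between two vendors’ counting machines amounts to a simple tally comparison.  Here’s a minimal sketch – the function name and the per-candidate tally format are my own illustrative assumptions, not anything a real voting system exposes:

```python
# Minimal sketch: cross-check tallies from two independent counting machines.
# Tally format (candidate -> vote count) is an illustrative assumption.

def compare_tallies(machine_a: dict, machine_b: dict) -> list:
    """Return (candidate, count_a, count_b) for every mismatch."""
    discrepancies = []
    for candidate in sorted(set(machine_a) | set(machine_b)):
        a = machine_a.get(candidate, 0)
        b = machine_b.get(candidate, 0)
        if a != b:
            discrepancies.append((candidate, a, b))
    return discrepancies

vendor_one = {"Smith": 1042, "Jones": 987}
vendor_two = {"Smith": 1042, "Jones": 986}  # one ballot misread

# Any mismatch triggers a hand count of the paper ballots - the legal record.
print(compare_tallies(vendor_one, vendor_two))  # [('Jones', 987, 986)]
```

The point isn’t the code – it’s that the comparison is trivial precisely because the paper ballots exist to resolve any discrepancy by hand.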

It won’t help with mail ballots (which have a whole separate set of risks), and it isn’t exactly cool and modern.  But it’ll work.  One last thing: let’s keep voting off the internet.  That is, as we say in the business, a bad idea.

Filed Under: Security

Apple, Security, Threat Models and a Tightening Sandbox

June 6, 2017 By Doug

Apple and the Apple logo are registered trademarks of Apple, Inc.

I watched Apple’s iOS and MacOS keynote with a lot of interest.  Security, privacy, encryption, and two-factor all got some attention, either in the updates or on the main stage – it’s really cool to see a company build a product strategy around those capabilities.

At the same time, they’re removing granular decisions about how that security is implemented.  This dumbing down, forcing people into a very narrow configuration, is getting annoying, and it’s becoming pervasive across their product line.  So when does it become a security risk?  When Apple’s threat model doesn’t match yours.

Let me share a few examples – like what is and isn’t synced to the cloud.  I ran into an annoying “feature” when reconfiguring my home network over the weekend: if you sync anything to iCloud Keychain (to use HomeKit, for example), you sync everything (which is why I don’t use it for passwords).  It’s no longer possible, for example, to have a different set of wifi networks on each device.

Another example of this is the fingerprint reader – you can use the fingerprint, or a PIN/passcode, but not both.  On a phone that’s probably ok, but on a Mac?  It’d be nice to see an option to require both a simple PIN and a fingerprint, but Apple’s decided that the risk of fingerprint forgery is small.  Is that your threat model?  Maybe, and maybe not.

We can control application data access on cellular data, but not on wi-fi?  Apple’s threat model is about data usage.  Mine’s about monitoring and tracking (and, to be fair, data usage too).  Evidently two-factor will be forced for Apple ID logins in iOS 11.  That’s generally good, but I can come up with situations where you’d want to turn it off.  Will that be allowed?  Not sure.

They’re now going to store and sync all your messages via iCloud, not just device to device.  Sure, it’s encrypted, but what if I want some data left on one device, but not on others?  Again, it’s not hard to come up with use cases where you’d want more granular control (and yet they still don’t have a “delete all chats” option – go figure).

They push their streaming content hard, to the point that the TV app doesn’t work reliably in airplane mode (I’ve had a case open with executive relations for months on this one), which they don’t view as a risk.  I do – to Availability, and I’ve suffered through multiple flights without media as a result.  I’ve been sorely tempted to buy an Android tablet just to have movies when I’m delayed for four hours during a thunderstorm.

Hopefully Siri will get a brain transplant and not just a face lift as HomePod comes out, but the idea of an always-on speaker listening in my house is, well, creepy.  And one with a camera?  I was amused recently when I saw someone with a sticker over their laptop camera…right next to an Echo Look.  No thank you.

Apple Pay person to person is interesting, and I’ll be very curious to see how they deal with fraud – or fake allegations thereof.  The QR code integration into the camera is interesting, and I can see fun ways to leverage it – like taking someone to a malware site by posting one on a sign next to a scenic overlook, and titling it ‘Photographic Tips’.

I could go on, but I think I’ve made my point.  Apple’s a remarkable company, and I use many of their products, but their view of users, threat models, and use cases is growing steadily narrower.  It’s still the most secure computing and mobile platform for consumers, but let’s not kid ourselves – there are tradeoffs to be had.

Filed Under: Security Tagged With: apple, iCloud, privacy, threat model, user security

The Cell Phone Wiping Conundrum

June 2, 2017 By Doug

(C) www.depositphotos.com / @ baloon111

A colleague of mine recently lost her cell phone while it was in airplane mode.  She triggered the remote wipe function, figuring that if it was turned on, the wipe would trigger and erase the device.  She uses a password manager, so she figured all her data was safe.  But she didn’t call the cell phone carrier to disable the SIM, because that would prevent the wipe from working.

A month later, a cell phone bill for several hundred dollars showed up – the person who had found the phone didn’t try to resell it or steal the data; they simply extracted the SIM card, put it into another phone, and made a bunch of overseas calls.  We saw the same thing years ago with stolen calling card numbers and conference call passcodes, so it’s just a modern version of an old scam.

But she had a good point – the smartphone manufacturers highlight the ‘wipe’ function as a key security feature, but if the device is offline, it doesn’t work.  Even the cell carriers’ own sites will tell you to send the wipe command, without acknowledging that reporting the device lost or stolen will disable data connectivity.  Don’t get me wrong, it’s still worth sending the command (especially if the phone wasn’t in airplane mode), but you can’t rely on remote wipe.

Not a good tradeoff, but there are some things we can do to help.  Let’s start with steps to take before the device is lost.

  • Make sure you have the device set to wipe on 10 failed attempts, and lock out any biometrics after only a few (iPhone does the latter automatically)
  • Only use robust biometrics – post on that coming soon. Tons of snake oil out there – avoid iris and photo recognition at the moment.  Bonus points for fingerprint readers if you don’t use your index fingers or thumbs.
  • Only use airplane mode when required – otherwise leave cellular service on*
  • Be cautious about what you enable on the lock screen
  • Limit how much email you leave on your server. IMAP or Exchange via a real email client lets you move all your mail to a trusted local data store (drag from INBOX to On My Mac folders, for example).  This replicates and removes email from your device, so if someone does gain access to it, there’s limited email history available to them.  A good reason to avoid cloud-only email services, by the way.
  • Put a sticker on the back of the phone with contact info, in case the device is found by an honest person and returned.
  • Back up the device regularly. I prefer local backups to the cloud because of the restore times (more on consumer cloud and its risks another time).
  • Have a quick-response plan ready to go in case it’s lost, with the steps below

And when it is lost:

  • Call the device – if you’re lucky, an honest soul will answer and you can get it back. Also try ‘find my device’ to see if it’s locatable.  If not, move on to the next steps:
  • Immediately send wipe command / put in lost mode
  • Immediately change your email passwords. If the device is somehow unlocked, email is how other password resets are validated.
  • Notify the cell phone carrier the device has been lost.  This prevents the phone from being used for SMS or voice multi-factor authentication.
  • Buy a new phone.

The ecosystem is training people not to disable the SIM card, and the cell phone carriers aren’t doing any form of device authentication when SIM cards are moved.  I understand the convenience factor, but at the very least a SIM move should trigger a higher level of fraud detection.  New device, no notification from the customer, and suddenly a bunch of calls to Europe?  Maybe the carrier should do some level of validation that it’s legit traffic.  Fraud detection rules like that are relatively straightforward to put in place.
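The rule described above – SIM in an unknown device, no word from the customer, sudden international calling – is easy to express.  A minimal sketch, where the field names and the three-call threshold are my own illustrative assumptions rather than any carrier’s actual system:

```python
# Minimal sketch of the SIM-move fraud rule described above.
# Field names and the 3-call threshold are illustrative assumptions.

def should_flag(calls: list, known_imei: str, customer_notified: bool) -> bool:
    """Flag traffic when the SIM shows up in an unfamiliar device and
    starts making international calls, with no notice from the customer."""
    if customer_notified:
        return False  # customer told us about a new device; nothing unusual
    intl_from_new_device = sum(
        1 for c in calls
        if c["imei"] != known_imei and c["international"]
    )
    return intl_from_new_device >= 3  # a few intl calls from a strange handset

recent_calls = [
    {"imei": "NEW-DEVICE", "international": True},
    {"imei": "NEW-DEVICE", "international": True},
    {"imei": "NEW-DEVICE", "international": True},
]
print(should_flag(recent_calls, known_imei="ORIG-DEVICE",
                  customer_notified=False))  # True
```

A real system would weigh more signals (geography, call velocity, account history), but even this crude check would have caught the several-hundred-dollar bill in the story above.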

Right now we’re given a choice between preventing calling fraud and providing a window for wiping the device.  Tough situation.

 

* Note:  This is why ‘Find My Mac’-type functionality is essentially useless – it has to connect to a known, trusted network before activating.

Filed Under: Security Tagged With: fraud, phone, remote wipe, theft

