
So FYI, the picture doesn’t have anything to do with this post.  It’s just getting close to Halloween, so I figured I’d roll with that.  There are quite a few things in the news today, and a few stories I wanted to comment on.  I was planning on doing a “flyover,” but then I started noticing a theme and decided to comment on that instead.  I’ll go through the items; let’s see if you can tell where I’m going with these as we go through.

First up, the story about the mysteriously deleted Georgia election server.  There’s a pretty good writeup from Ars Technica from yesterday, but the situation might be summarized as:

  • Voting system server in Georgia has pretty severe security problems
  • Lawsuit was filed asking that the system be decommissioned and the results annulled
  • Four days later, the data was scrubbed from the server
  • Lawsuit moves to federal court; that same day the backups were degaussed (three times)
  • Apparently, at some point along the chain, a litigation hold notice was provided per an update from Ars

Seems pretty shady, right?  Like, destroying the data just days after the lawsuit was filed?  And then, the same day the suit escalates, they decide to degauss the backups multiple times?  I heard that and, truthfully, I was like “holy crap, that’s some ‘The Americans’-level spy shit right there.”  But then I went on to read through large chunks of the email thread that came out via a FOIA request.  It takes a while to read – but I found some interesting takeaways.

First, I went in thinking something had to be super dirty.  Now I’m not so sure.  From the read-through, the folks at KSU strike me as a fairly diligent security team.  You can get a pretty firm handle on their incident response process, including the measures they took in response to the initial notification, their correspondence with law enforcement (who seemed less on the ball, frankly), and their pretty thorough after-incident reports.  You can get a good flavor for how that (fairly small) team runs just from these emails.  Likewise, you can see them proactively reaching out to internal counsel – before deleting anything – about how long to retain records, to which they did not get (IMHO) a very germane or applicable reply.  Are they the best security team ever?  Well, they’re small and have a lot on their plate.  But I’ve certainly seen much worse.

The second story is the IOActive report about SATCOM – specifically, the AmosConnect shipboard communications product.  IOActive put out a report about issues with the system (fairly standard stuff), and in response the vendor issued a patch.  Nothing to see here, right?  There wouldn’t be, except the vendor is now pushing back, saying the research is overblown.  The research itself is pretty straightforward: SQLi in the login form and a built-in backdoor account.  It happens.  Likewise, the report acknowledges that in the normative case these are not “worst case scenario” issues in most production deployments, because of the network segmentation used in a field deployment.  In fact, the report says directly, “Vessel networks are typically segmented and isolated from each other, in part for security reasons… While the vulnerabilities discussed in this blog post may only be exploited by an attacker with access to the IT systems network, it’s important to note that within certain vessel configurations some networks might not be segmented, or AmosConnect might be exposed to one or more of these networks.”
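For anyone who hasn’t seen login-form SQLi in the wild, here’s a minimal sketch of the general class of bug – this is an illustrative toy (SQLite, made-up table and credentials), not the actual AmosConnect code or the specific flaw IOActive found:

```python
import sqlite3

def setup_db():
    # Toy in-memory user table with one hypothetical account.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")
    return conn

def login_vulnerable(conn, username, password):
    # Builds the query by string formatting, so attacker-controlled
    # input becomes part of the SQL itself -- the classic mistake.
    query = ("SELECT COUNT(*) FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(conn, username, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT COUNT(*) FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone()[0] > 0

conn = setup_db()
payload = "' OR '1'='1"
print(login_vulnerable(conn, payload, payload))  # True -- auth bypassed
print(login_safe(conn, payload, payload))        # False -- rejected
```

The `' OR '1'='1` payload rewrites the vulnerable query’s WHERE clause into something always true, so the “login” succeeds with no valid credentials; the parameterized version treats the same payload as a literal (and very unlikely) password.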

The position of the vendor seems to be: 1) the product is old, so while it is still in deployment, it’s scheduled for termination; 2) it’s hard to exploit; 3) there are compensating controls.  OK.  I’m all about that.  But don’t those same arguments apply in equal measure to something like, for example, SMBv1?  Here’s the deal: I’ve commented on this before, but if you’re a product vendor, never, ever dispute the researcher.  The PR is terrible.  Even if you think the research is completely bogus, don’t fight it.  What is particularly upsetting about this one is that the company did the right thing initially: they patched their legacy, soon-to-be-decommissioned, likely-not-that-vulnerable product.  They already did the right thing.  Now they have undermined those efforts, lost any good PR value, and are going down the “theoretical vulnerability” route (they do, in fact, use that parlance).  Oldsters out there will remember the L0pht’s tagline, “making the theoretical practical…”  Do you remember why they said that?  I do – and, if you read this, you’ll know too.  The moral of the story is that calling it “theoretical” is bait – and most security pros will remember the truly epic “told you so” that happened in that case.

In both of these cases, decent security efforts were undermined by something else going on somewhere else.  In the case of the election server, the response team took what appear to be reasonable measures.  Now, though, the optics are legit terrible – for reasons that I suspect will turn out to be no fault of theirs.  Likewise, the SATCOM vendor (Inmarsat) did the right thing in response to a (probably hard to exploit, in the normative case) vulnerability.  Now their workmanlike (and responsible) efforts to address the issue have been undermined – for example, the only reason I even know about the story in the first place is that they’re already taking the bad PR hit in the press.  The lesson, I guess, is that it behooves the organization as a whole to make sure security is addressed holistically.  Making one specific team accountable is fine (and a good idea), but that doesn’t mean the rest of the organization can just “do whatevs” and expect the outcome to be a hardened enterprise.