This morning the always-exceptional Mike Mimoso over at Threatpost posted this article about research being presented at Black Hat today. The research highlights vulnerabilities in various radiation monitoring equipment – you know, the kind used at power plants (for safety), border crossings, container terminals, and the like.
On a personal note, allow me to say that from experience, I’m glad these protections exist and are in place. In my consulting days I did various assignments germane to the use of these devices… they help offset some pretty nasty risk scenarios.
Anyway, the backstory is that a vulnerability researcher at IOActive found some issues in that radiation monitoring equipment. Generally, the devices can be attacked. Specifically, he found a hardcoded username/password in one (yikes), a hardcoded encryption key in another (well hey now), and a third susceptible to radio-frequency attacks (also not good).
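For those who haven’t run into it, a hard-coded credential is exactly what it sounds like: a username and password compiled right into the firmware. Purely as an illustration – this is a hypothetical sketch in C, not anything from the actual products – the anti-pattern looks something like this:

```c
#include <string.h>

/* Hypothetical example only -- not taken from any real product.
 * Credentials baked into the source end up as plain strings in the
 * compiled firmware image, so anyone with a copy of the binary can
 * recover them (e.g., by running `strings` on the image). */
static const char *ADMIN_USER = "admin";
static const char *ADMIN_PASS = "factory-default-1234";

int check_login(const char *user, const char *pass)
{
    /* Every unit ever shipped accepts this same login, and there is
     * no way for an operator to change or revoke it in the field. */
    return strcmp(user, ADMIN_USER) == 0 &&
           strcmp(pass, ADMIN_PASS) == 0;
}
```

The problem is twofold: the secret is identical in every unit shipped, and it typically can’t be changed or revoked in the field. Anyone who recovers it from one firmware image has a key to every deployed device.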
A researcher finding issues like these isn’t particularly surprising. And while the issues do point to some pretty loose software development controls at these manufacturers, bugs happen. I think we’re all pretty prepared to forgive that. What is noteworthy is the vendors’ responses. Specifically, three different vendors were impacted – and each apparently offered a different excuse for why the issue isn’t worth addressing:
- One vendor alleged that their monitoring devices are only installed in secure locations, so the hard-coded username and password in the device source aren’t an issue.
- Another argued that a patch would break a critical communications protocol the devices use, so they really can’t address the radio-frequency vulnerabilities.
- The last one said they don’t consider the hard-coded encryption key that protects the firmware from tampering to be a security problem (see the sketch below for why that’s a stretch).
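To make that last excuse concrete: if the key protecting firmware updates is a constant baked into the code, then extracting it from any single unit lets an attacker decrypt – and potentially forge – firmware for every unit in the field. Here’s a hypothetical sketch of the anti-pattern (again in C, and again invented for illustration; the real products’ internals weren’t published):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical anti-pattern sketch -- not any vendor's actual code.
 * The "protection" on the firmware image is a cipher keyed with a
 * constant that ships inside every binary. Pull the key out of one
 * unit and you can decrypt -- or craft -- updates for all of them. */
static const uint8_t FIRMWARE_KEY[16] = {
    0x13, 0x37, 0xca, 0xfe, 0xde, 0xad, 0xbe, 0xef,
    0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08
};

void decrypt_firmware(uint8_t *image, size_t len)
{
    /* Toy XOR stream standing in for a real cipher; the key reuse
     * is the bug, not the choice of algorithm. */
    for (size_t i = 0; i < len; i++)
        image[i] ^= FIRMWARE_KEY[i % sizeof(FIRMWARE_KEY)];
}
```

Swap the toy XOR for AES and nothing improves: the structural flaw is the constant, shared key, and no choice of algorithm fixes that.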
Each of these responses is terrible in its own special way. In fact, taken together, this is like a playbook for exactly what not to do when responding to a vulnerability researcher who has found an issue in your product. Why? A few reasons:
- It’s terrible PR – it makes the organization look completely uninformed about security in general and about hardening its own product in particular. If you’re in the business of providing a security tool, looking like you’re completely ignorant on the topic isn’t a great marketing strategy.
- Negligence arguments – it sets you up for future claims of negligence should anything go wrong. Here’s the deal: if you know about some issue and you declare that you’re not going to fix it, you’d better be right. If you didn’t know about it, to err is human. If you know about it and plan to fix it, you’re taking action, however glacially slow you might be in doing so. If you know about it and do nothing? Well, IANAL, but it seems to me someone could claim that’s negligent.
- It’s a challenge to attackers – ’nuff said. People who disagree with your “threat assessment” will likely want to prove you wrong… spectacularly and publicly.
Look, here’s the deal. Even if you’re evil, the right answer is still to say you plan to fix the issue. For example, let’s say that you’re Lucifer himself: you have absolutely no plan to address the issue and would prefer that the vulnerability researcher go pound sand. What do you do? Even in this case, the right answer is to acknowledge the issue, thank the researcher and make nicey-nice, and then (once the pressure is off) take a million years to actually implement a fix. Or never do it at all. For example, maybe you decide that the issue will be addressed in a “future firmware update” that keeps getting delayed until the issue is OBE (overtaken by events). You get all the benefit of not fixing it, without hurting your business or marketing, and without setting yourself up for future armchair quarterbacking.
Let me be clear: I’m not recommending anybody do that. I’m just saying that there’s a difference between “smart evil” and “blundering evil.” Not fixing it and instead arguing that the problem doesn’t exist in the first place? Never a good idea.