Here’s an article about how medical device manufacturers continue to fall short in securing what they produce. It references a few data points, including one from a Ponemon survey showing that while concern is high, action taken remains relatively low. I’ve been writing about this for years: not only the near-continuous stream of research highlighting issues in implantable biomed (pacemakers, pumps, defibrillators, etc.), which this article focuses on, but also the biomed you see in institutional care settings (pharmaceutical dispensing systems, patient monitors, imaging systems, etc.). Security on these devices is terrible. Like, when I can routinely bring down an HL7 interface engine by port-scanning it, that’s a problem; and when your MRI machine uses default passwords for the OS it runs on, that’s a problem too.
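To be clear about how low the bar is here: the “port scan” that knocks one of these engines over is nothing exotic. It’s roughly the kind of plain TCP connect scan sketched below (the target address and port range are placeholders, not any real system):

```python
#!/usr/bin/env python3
"""Minimal TCP connect scan -- the sort of benign traffic that should never
take down a production system. Target and port range are placeholders."""

import socket

TARGET = "192.0.2.10"    # placeholder address (TEST-NET), not a real device
PORTS = range(1, 1025)   # well-known ports only
TIMEOUT = 0.5            # seconds per connection attempt

open_ports = []
for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(TIMEOUT)
        # connect_ex() returns 0 on success instead of raising on failure
        if s.connect_ex((TARGET, port)) == 0:
            open_ports.append(port)

print(f"Open ports on {TARGET}: {open_ports}")
```

If a loop of ordinary connection attempts like that is enough to take a clinical interface offline, basic robustness clearly wasn’t part of the requirements.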
I’m not usually a “doom and gloom” person. In fact, those of you who know me know that I explicitly hate that: FUD generally causes more problems than it solves. But on the other hand, I get frustrated with the interplay between human nature and certain types of risk. Specifically, people generally tend to ignore a threat (or, if that seems unfair, let’s say “downplay” it) until some actual problem occurs. And then, when it does, it’s “hair on fire” time: everyone runs around trying to react, people demand action, everybody and their brother wants to make statements to the press, and there’s generally a big brouhaha. That lasts until people lose interest and everyone can go back to safely ignoring the issue again.
Don’t believe me? See WannaCry. A CVSS 9.3 issue in SMB is discovered and publicly disclosed? Let’s hit the “sleep” button on that. A patch is published by Microsoft and marked critical? Pipe that one to /dev/null. But then some malware gets released that actually leverages that situation to do some nastiness? Panic-button time. Then, once WannaCry winds down, we go back to the status quo until some variant emerges and starts the panic cycle all over again.
WannaCry was an issue, but what really gets my dander up about the biomed problem is that the consequences are human lives, or at least human health. I prophesy (and I hope I’m wrong) that this issue is eventually going to lead to loss of life, or at least to making sick people worse. Probably on a small scale, but it could be larger depending on what’s exploited and how (see Therac-25). I predicted back in 2006 that all it would take is a motivated attacker using the right biomedical device as an attack vector, and under the right conditions they could assassinate someone. That’s still true now. In fact, it’s arguably worse.
Why does it have to be the worst case before people address this stuff? /RANT