It would be an understatement to say that people were upset about EternalBlue.  Microsoft was apparently already upset about it before WannaCry, and even more upset afterwards, calling for a “Digital Geneva Convention.”

If you haven’t been following the story, here’s the background:

  • EternalBlue is an exploitation vector (CVE-2017-0144, CVSS 9.3) that impacts SMB (file sharing) implementations in Microsoft Windows; a quick way to spot hosts that still expose SMB appears just after this list
  • Microsoft addressed that issue in MS17-010, published in March of 2017
  • A group generally referred to as the “Equation Group” (very likely a state-run cyberwarfare outfit) apparently knew about the issue since 2013
  • The issue was not disclosed to the community at large or to Microsoft upon discovery
  • The exploit was incorporated into an Equation Group toolkit released (for reasons unknown) by another group calling itself (arguably more interestingly because of the Mass Effect reference) the Shadow Brokers
  • WannaCry exploited this and wreaked havoc on the world
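
Incidentally, if you want a concrete feel for the exposure, here is a minimal sketch in Python (the addresses are made-up RFC 5737 examples; point it only at hosts you administer) that checks whether machines accept connections on TCP 445, the SMB port that EternalBlue-style exploits reach over the network.  An open port by itself says nothing about whether SMBv1 is enabled or whether MS17-010 is installed; it just flags machines worth a closer look.

    import socket

    # Illustrative only: hypothetical RFC 5737 addresses, replace with hosts you administer.
    HOSTS = ["192.0.2.10", "192.0.2.11"]
    SMB_PORT = 445  # SMB over TCP

    def smb_reachable(host, timeout=2.0):
        """Return True if the host accepts a TCP connection on port 445.

        Reachability alone does not mean the host runs SMBv1 or is missing
        MS17-010; it only flags machines worth a closer look.
        """
        try:
            with socket.create_connection((host, SMB_PORT), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        state = "open" if smb_reachable(host) else "closed/filtered"
        print(f"{host}: tcp/445 {state}")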

This has a lot of people fired up.  Why?  Because if it is the case that the “Equation Group” is the NSA [note: has this been confirmed?  operating under the assumption that it hasn’t been, but the PATCH Act certainly seems to imply that at least US lawmakers think it is], that would mean that a government entity knew about – and left undisclosed – a critical issue (among others, by the way) that literally put people’s lives at risk.  Had the issue been disclosed to Microsoft (and therefore patched) when it was initially discovered, there would likely have been no WannaCry (or, if there were, one with drastically reduced impact), and many of the consequences would have been avoided.

Of course, there is another side to the story.  Had the issue been patched when it was discovered (let’s say in 2012), its utility as a cyberwarfare tool would have ended once the patch was out – certainly by 2013 and onward.  In fact, one might argue that, were a state-sponsored outfit like the Equation Group to follow a true responsible disclosure paradigm, there would be little reason for it to have a vulnerability research capability at all.  Why do it if you can’t use what you find for very long?  Unlike researchers in the private sector, the purpose isn’t “marketing” but actual offensive capability – capability that goes away once the issue is patchable.

So how do you balance these two things?  A thorny problem.

Some lawmakers, in an attempt to put some parameters around this, have recently proposed the Protecting Our Ability to Counter Hacking (PATCH) Act.  I’d suggest you go read it, but the TLDR is that it establishes a “review board” to systematically review and make recommendations about criteria for when vulnerabilities should be disclosed.  Who’s on the board?  Heads of various federal entities (e.g. DHS, FBI, NSA, CIA, etc.) as well as “ad hoc” members (including heads of other stuff – Treasury, FTC, other members of the National Security Council, etc.).  It also establishes a disclosure mechanism and a reporting framework that includes (among other things) civil liberties and procedural oversight.

So I’m not sure how I feel about this.  Frankly, before WannaCry, I was encouraged when I initially saw the Equation Group toolset.  I worked (briefly) in the federal space back in the day.  I won’t mention specifically which groups I worked with while I was there, but my general takeaway was that the US offensive cyber capability “stinks on ice”.  Meaning, there wasn’t a high degree of competence that I witnessed firsthand during my experience.  So when I saw the relative sophistication of the Equation Group toolset, it suggested that at least one group had some real skill – that it was better than I thought.  Not only was this toolset on par with those of other state-sponsored groups (which, Stuxnet excepted, I hadn’t seen evidence of), but it was actually leading the pack!  Since I’m sure they’ve advanced since 2013, they likely still are.

I do have some criticisms of the PATCH Act specifics, though.  The language suggests to me that these folks aren’t just establishing the criteria for when a vulnerability should be disclosed; they are actually reviewing specific issues.  For example, the bill says, “The head of each Federal agency shall, upon obtaining information about a vulnerability that is not publicly known…”  So a group of folks with little-to-no understanding of technology is making decisions about what to disclose?  One of two things must be true: either they do it in an informed way (and thereby need some serious, technically astute support staff to make that happen) or they do it in ignorance (which would be a cluster).  I’m assuming this will be like other government activities and it’ll be up to staffers – in which case, the necessary apparatus is likely to be fairly large.  Lots of folks to review and make these decisions – with the potential for beans-spilling.

Other than that, I guess we’ll see if this passes – and, if it does, what procedures and policy they come up with.  Either way, it’s interesting to me that the full disclosure discussion is now being conducted in Congress.  I don’t think I could have seen that coming.