If you read between the lines of this Dark Reading article, it tells us something interesting about our industry: the vulnerability management folks have done a remarkably good job of convincing everyone that chasing vulnerabilities is a good use of time.
It isn’t.
We’ve known for years that there is no correlation between the total number of vulnerabilities and the number of vulnerabilities actually exploited by attackers (see the image below, from a four-year-old Gartner study).
While the total number of reported vulnerabilities goes up year-on-year, the number of vulnerabilities exploited is actually going down. I haven’t seen the data for 2022, but I suspect it shows the same trend…and even if it doesn’t, effective vulnerability management has always been about connecting vulnerability data not just to attacker behaviour, but to exploitability. And I don’t mean a generic assessment of exploitability; I mean the assessment of a particular vulnerability’s exploitability in your environment specifically. Without that, the data is simply too abstract to be useful for prioritization.
Most security tooling still has a painfully unsophisticated relationship with risk. “Attackability”, as referenced in the Dark Reading article, is just sexier terminology for what should happen in any basic risk identification process: a vulnerability that cannot be exploited by a given threat is not a risk. Making that determination is the entire point of the risk assessment phase of the overall risk management process, and it is highly specific to your environment, not generic to everyone’s. That’s why, without this information, closing off vulnerabilities is a complete shot in the dark…the connection between a vulnerability, known attacker behaviour, and potential exploitability has no grounding in your own infrastructure. So you end up chasing vulnerabilities because of their CVSS score, or news stories about attacks, and (to the point of the Dark Reading article) 97% of them are not even exploitable by attackers. That means 97% of the effort your security team are expending to chase down vulnerabilities is a complete waste of their time.
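To make that concrete, here’s a minimal sketch (in Python) of what environment-aware prioritization looks like compared to sorting by CVSS alone. The field names (`exploit_observed`, `reachable_in_our_env`) and the placeholder third CVE are my own assumptions, standing in for whatever your asset inventory and threat intelligence actually provide:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                  # generic severity score
    exploit_observed: bool       # known attacker behaviour (threat intel / KEV-style data)
    reachable_in_our_env: bool   # can the relevant threat actually reach this instance here?

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Keep only findings that are exploitable *in this environment*,
    then order by severity. Everything else is noise for this cycle."""
    actionable = [
        f for f in findings
        if f.exploit_observed and f.reachable_in_our_env
    ]
    return sorted(actionable, key=lambda f: f.cvss, reverse=True)

if __name__ == "__main__":
    findings = [
        Finding("CVE-2021-44228", 10.0, True, False),  # log4j, but only reachable internally
        Finding("CVE-2019-0708", 9.8, True, True),     # RDP exposed to the internet
        Finding("CVE-0000-0000", 7.5, False, True),    # hypothetical: reachable, no known exploitation
    ]
    for f in prioritize(findings):
        print(f.cve_id, f.cvss)
```

The interesting part isn’t the sorting, it’s the filter: a CVSS 10.0 that the relevant threat can’t reach drops out of this cycle entirely, while the internet-facing RDP issue stays at the top.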
What can we learn from this?
- It’s not enough for your vulnerability tool to prioritize based on CVSS or known attacker behaviour. You must determine whether, and to what degree, you are susceptible to those attacks. This is one reason why there have been fewer attacks against log4j than people expected…in many cases, the vulnerable instances of log4j can only be attacked from the inside. That still has a potentially high impact, but a much lower likelihood (and therefore exploitability) than someone attacking servers exposing their RDP ports to the internet (there’s a rough sketch of this after the list).
- It’s still the basics that matter most. The fact that the number of vulnerabilities exploited is quite small (and potentially still decreasing) relative to the total means that attackers are just exploiting the same dumb stuff over and over again. They don’t need more elaborate exploits when the same stuff they’ve been doing for the last 10 years still works.
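As a back-of-the-envelope illustration of the impact-versus-likelihood point in the first bullet (the numbers below are entirely made up), even the classic impact × likelihood product tells you which of the two deserves attention first:

```python
# Illustrative (made-up) scores on a 0-1 scale; likelihood is discounted
# for assets that are only reachable from inside the network.

IMPACT = {
    "log4j_internal_only": 0.9,  # RCE-class impact if it lands
    "rdp_internet_facing": 0.9,  # also RCE-class impact
}

LIKELIHOOD = {
    "log4j_internal_only": 0.1,  # attacker must already be inside the network
    "rdp_internet_facing": 0.8,  # exposed to constant internet-wide scanning
}

def risk(asset: str) -> float:
    """Classic qualitative risk product: impact x likelihood."""
    return IMPACT[asset] * LIKELIHOOD[asset]

for asset in IMPACT:
    print(f"{asset}: {risk(asset):.2f}")

# Despite identical impact, the internet-facing RDP server carries far more
# risk this cycle than the internally-reachable log4j instance.
```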