A few months ago, Jeff Jones at CSO Online blogged about “Scrubbing the Source Data”, discussing the challenges of using vulnerability data for analysis. Part 1 examined the National Vulnerability Database (NVD), showing that you can’t blindly rely on data from vulnerability databases (VDBs). His examples suggest that the data is probably fairly accurate for examining Windows, less so for Apple, and essentially unusable for Ubuntu Linux. Unfortunately, despite what the title and introduction imply, there isn’t a part two to the series (yet). Jones concludes the post:
Given these accuracy levels for vulnerabilities after the vendor has acknowledged it and provided a fix, it doesn’t seem like too much of a stretch to also conclude that using this data to analyze unpatched data would be equally challenging. Finally, I think this exercise helps demonstrate that anyone leveraging public data sources needs to have a good understanding of both the strengths and the weaknesses that any given data source may have, with respect to what one is trying to analyze or measure, and include steps in their methodology that accommodate accordingly.