Category Archives: Vulnerability Statistics

Response to Kenna Security’s Explanation of the DBIR Vulnerability Mess

Earlier this week, Michael Roytman of Kenna Security wrote a blog with more details about the vulnerability section of the Verizon DBIR report, partially in response to my last blog here questioning how some of the data was generated and the conclusions put forth. The one real criticism I will note is that Roytman’s blog does not acknowledge or warn that the list of CVE IDs included in the DBIR had typos, causing the wrong IDs to be included. In the world of vulnerability databases, that unique ID is designed specifically to avoid such confusion. Carrying the wrong IDs undermines the integrity of the data being presented.

In addition to my comments, Roytman had a long call with Adrian Sanabria, which led to generating a new set of data with a different scope. From the Kenna blog:

We had an excellent offline discussion in which he dove deeply into the assumptions of my work, asked thoughtful, deep questions in private, and together, we came up with a better metric for generating a top 10 vulnerabilities list. To address these issues, I scaled the total successful exploitation count for every vulnerability in 2015 by the number of observed occurrences of that vulnerability in Kenna’s aggregate dataset. Sifting through 265 million vulnerabilities gives us a top 10 list perhaps more in line with what was expected – but equally unexpected! The takeaway here is that datasets like the one explored in the DBIR might be noisy, might have false positives and the like, but carefully applied to your enterprise the additional context successful exploitation data lends to vulnerability management is priceless.

I won’t go into much detail in this blog, but will say that I disagree with the statement that severely flawed data can produce takeaways that are “priceless”. Organizations acting on these top 10 lists may be spending time and resources chasing vulnerabilities that do not impact them, or that pose very little risk compared to other threats they face. That action most certainly has a price; one that can be enumerated to some degree due to the cost of the employee time required. With that in mind, let’s look at the methodology, which is spelled out in more detail than in the DBIR, before we consider the new top 10 list Roytman generated. First, his notes on the data examined:

The first is a convenience sample that includes 2,442,792 assets (defined as: workstations, servers, databases, ips, mobile devices, etc) and 264,912,235 vulnerabilities associated to those assets. The vulnerabilities are generated by 8 different scanners, they are: Beyond Security, Tripwire, McAfee VM, Qualys, Tenable, BeyondTrust, Rapid7, and OpenVAS . This dataset is used in determining remediation rates and the normalized open rate of vulnerabilities.

I am curious if it is normal practice to consider an IP address an asset, when the system answering at that IP is what is largely considered the asset. Moving past that, one point that sticks in my mind is the tools that generate the data. From the list above, consider that Beyond Security claims to have “what is arguably the world’s most complete database of security vulnerabilities.” Click around their site and you see that “the AVDS database includes over 10,000 known vulnerabilities and the updates include discoveries by our own team and those discovered by corporate and private security teams around the world.” That is less than 25% of what CVE has, and less than 10% of what VulnDB has. They even show you that they only cover 200 CVE IDs for 2016, as compared to the 1,474 open 2016 CVEs.

They don’t specify which Tripwire product, they include McAfee Vulnerability Manager, which was declared End of Life in October of last year, and they don’t specify which Qualys product. So it is a start as far as explaining what tools generate the data, but it still leaves a lot of guesswork.

Roytman describes the second data set used as:

The second is a convenience sample that includes 3,615,706,022 successful exploitation events which all take place in 2015 which come from Dell Secureworks’ Counter-Threat Unit and Alienvault’s Open Threat Exchange.

The third qualification, describing the methodology, is perhaps the most important, and was lacking in the DBIR:

Please note the methodology of data collection: Successful Exploitation is defined as one successful technical exploitation of a vulnerability on one machine at a particular timestamp. The event is defined as: 1. An asset has a known CVE open. 2. An attack come in that matches the signature for that CVE on that asset and 3. One or more IOCs are detected/correlated post attack. It is not necessarily a loss of a data, or even root on a machine. It is just the successful use of whatever condition is outlined in a CVE.

I have reached out to Michael and requested a sampling of data for two of the CVE IDs on his new list, and he has agreed to provide it. In the meantime, I had a discussion with several people more familiar with IDS than myself and asked how they would detect attacks for CVE-2013-0229 and CVE-2001-0540, as examples. Sure, detecting a specific type of packet meant to trigger these is one thing, but what is the threshold for the IDS to say it is an attack, when exploitation requires “a large number of malformed Remote Desktop Protocol (RDP) requests“? Is there a specific number of packets required for it to flag an attack in progress? If set too low, it may be prone to a high number of false positives. If set too high, it may not detect a successful exploitation of the issue. That leads into the second part of the methodology, comparing the attack with “one or more IOCs [that] are detected/correlated post attack”. In this case, the indicator would presumably be the targeted service not responding, which could be detected a number of ways (e.g. probing the port, seeing specific errors in the logs, noticing a given process not running). Once the service is down, is the IDS, which isn’t aware of the host state, still cataloging a single attack? Or is it generating alerts every X minutes that it detects the attack ongoing? Hopefully the data Michael sends will help me better understand how that correlation is being made, as it represents a source of incredible bias for the resulting data analysis.
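To make the concern concrete, here is a minimal sketch of the kind of rate-threshold logic an IDS might apply to CVE-2001-0540-style traffic. The window, threshold, and re-alert values are invented for illustration, as are the function and variable names; nothing here reflects how any of the vendors above actually implement detection.

```python
from collections import defaultdict, deque

# Illustrative knobs only; not values from any real IDS. The point is that what
# counts as "an attack" (and how many alerts one flood generates) depends
# entirely on where these are set.
WINDOW_SECONDS = 60          # sliding window for counting malformed requests
MALFORMED_THRESHOLD = 500    # how many malformed RDP packets constitute "an attack"
REALERT_SECONDS = 300        # how often to re-fire the alert while the flood continues

recent = defaultdict(deque)  # source IP -> timestamps of malformed packets seen
last_alert = {}              # source IP -> timestamp of the last alert raised

def observe_malformed_rdp(src_ip, ts):
    """Record one malformed RDP request and decide whether to raise an alert."""
    q = recent[src_ip]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECONDS:   # drop packets that fell out of the window
        q.popleft()

    if len(q) < MALFORMED_THRESHOLD:
        # No alert. Too low a threshold floods analysts with false positives;
        # too high and a real exploitation never gets flagged.
        return None

    # Above threshold: is this one attack, or a fresh alert every REALERT_SECONDS?
    if ts - last_alert.get(src_ip, float("-inf")) >= REALERT_SECONDS:
        last_alert[src_ip] = ts
        return f"possible CVE-2001-0540 DoS from {src_ip} ({len(q)} malformed requests/{WINDOW_SECONDS}s)"
    return None
```

Whether the 3,615,706,022 “successful exploitation events” count each re-fired alert, each flood, or each post-attack IOC correlation is exactly the kind of detail the methodology needs to spell out.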

Moving on to Roytman’s new list using the above data and methodology, here is the top 10 list he sees in the data:

  1. 2015-03-05 – 2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – 2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2014-04-07 – 2014-0160 – OpenSSL TLS Heartbeat Extension Packets Handling Out-of-bounds Read Remote Memory Disclosure (Heartbleed)
  4. 2012-03-13 – 2012-0152 – Microsoft Windows Remote Desktop Protocol Terminal Server RDP Packet Parsing Remote DoS
  5. 2009-05-16 – 2013-0229 – MiniUPnPd SSDP Handler minissdp.c ProcessSSDPRequest Function Malformed Input Handling Remote DoS
  6. 2002-02-12 – 2002-0012 – Multiple Vendor Malformed SNMP Trap Handling DoS
  7. 2002-02-12 – 2002-0013 – Multiple Vendor Malformed SNMP Message-Handling Remote DoS
  8. 2001-12-20 – 2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-12-20 – 2001-0877 – Microsoft Windows Universal Plug and Play NOTIFY Request Remote DoS
  10. 2001-07-25 – 2001-0540 – Microsoft Windows Terminal Server RDP Request Handling Memory Exhaustion Remote DoS

Roytman prefaces that list with the comment that it is a “top 10 list perhaps more in line with what was expected – but equally unexpected!” Indeed, that is certainly true. Expected? A high-profile vulnerability disclosed in 2014 that was seen being widely exploited then, and in subsequent years, something sorely missing from the DBIR list even after the corrected CVE IDs were factored in. This is the type of vulnerability almost everyone expected to top the list, due to how easy it was to exploit and knowing that it had been used heavily. This speaks to the original point of, and reason for, the first blog; all the data ‘science’ in the world can still produce highly questionable results, and those results should not be taken as gospel. Even if the methodology was sound, it doesn’t mean the data being used was.

Unfortunately, Roytman’s revised list deviates quickly into the “equally unexpected“. The first DBIR list had a single denial of service (DoS) issue on it, which stood out as odd. The revised list bumped that number to two, which was a bit odder. The most recent list expands that to six DoS attacks, which is highly questionable one way or another. First, you can question the data set and methodology leading to this conclusion, but let’s say those pass muster solely for the sake of argument. Second, you can question why so many denial of service attacks are on a top 10 most-exploited vulnerabilities list, framed in a context that uses the word ‘compromise’, riding on the back of a report centered around data breaches. These attacks do not let attackers gain privileges or steal data. They are a nuisance most of the time, or potentially used in serious DoS attacks other times. It makes one question why DoS attacks weren’t dropped from the results completely, and disclaimed as such! Or, generate two lists: one with results based on the raw data, DoS and everything, and a second that focuses on vulnerabilities that may allow privileges to be gained and a real compromise to happen.
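Producing that second list is mechanically trivial once each entry is tagged with its impact. A minimal sketch, with the impact labels transcribed from the titles in the list above and the variable names my own:

```python
# Impact labels taken from the titles in Roytman's list above;
# "dos" marks the six denial of service entries.
top10 = [
    ("CVE-2015-1637", "mitm"), ("CVE-2015-0204", "mitm"),
    ("CVE-2014-0160", "info_leak"), ("CVE-2012-0152", "dos"),
    ("CVE-2013-0229", "dos"), ("CVE-2002-0012", "dos"),
    ("CVE-2002-0013", "dos"), ("CVE-2001-0876", "code_exec"),
    ("CVE-2001-0877", "dos"), ("CVE-2001-0540", "dos"),
]

raw_list = [cve for cve, _ in top10]                         # list one: everything, DoS included
non_dos = [cve for cve, impact in top10 if impact != "dos"]  # list two: issues that can actually yield privileges or data

print(raw_list)
print(non_dos)  # ['CVE-2015-1637', 'CVE-2015-0204', 'CVE-2014-0160', 'CVE-2001-0876']
```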

I cannot stress this enough. Using a term like ‘Indicator of Compromise’ (IOC) in the context of DoS attacks is disingenuous and misleading. Going back to Roytman’s introduction into this section where he makes a comment about seeing the trees (referring to the classic metaphor), I find it ironic as that sums up the purpose of my blog. The DBIR was written as if they could only see the trees (data points), and not the forest (bigger picture), which is what many people took issue with.

One point that I overlooked on the original list, and that still appears on the new list, is the presence of FREAK (twice, even). Fortunately for me, Thomas Ptacek does a great job explaining why FREAK is likely on the list but absolutely should not be. Using Roytman’s blog and data, he calculates that attackers would have spent $332,183,325 using Amazon EC2 to exploit FREAK. He continues by citing one of the researchers who discovered FREAK, explaining one way a high number of false positives is generated for that particular vulnerability. He goes on to drive the point home, quoting the researcher and noting that it likely has not been exploited in the wild by an average attacker.

[Image: tweet from Thomas Ptacek (@tqbf) on FREAK exploitation]

Roytman essentially dismisses all of this in his blog post while saying that I am correct, that “IDS alerts generate a ton of false positives, vulnerability scanners often don’t revisit signatures, CVE is not a complete list of vulnerability definitions. But those are just the trees, and we’ll get to them later.” Unfortunately, he doesn’t get to it later in a way that provides meaningful insight into the questions about the data and conclusions. Dan Guido wrote an excellent summary of why the DBIR vulnerability section has issues, factoring in Roytman’s latest blog and breaking it all down in a manner that highlights the flaws. Even with the revised list, it is still missing the US-CERT top 30 previously cited, the Microsoft data, and the recently disclosed ‘top PoC exploits distributed on social media‘. At some point, one would logically conclude these lists should have more overlap. One thing I would love to see from Verizon and Kenna is a detailed explanation of their methodology as it relates to detecting client-side exploits, which appear to be the de facto standard for infecting tens, maybe hundreds of thousands, of hosts every year to create botnets.

I want to look at this from one more perspective, because I think it beautifully highlights how vulnerability analysis is a moving target, but in this case for all the wrong reasons. While most vulnerability aggregators and analysts are constantly adapting to new variations of vulnerabilities, new sources of vulnerability information, and new players in the game with wildly different styles of disclosure, those who come along after the data is generated and then do the analysis frequently seem, in my experience, to lose perspective. I believe this is such a case, and it is best illustrated by what the DBIR top 10 looks like over three revisions in less than two weeks. Yes, I know Roytman’s list isn’t officially the DBIR list, but he generated the initial data and then opted to do a different form of analysis, putting it forth as something more representative because his analysis is applied to a more limited dataset that he presumably trusts more (i.e. the Kenna aggregate dataset). One has to wonder if this was brought up to Verizon as a better way to approach the list, and if so, why it was rejected.

The following table shows the evolution of the CVEs that appear on the top 10. The original post used strikeout to denote the typos between the original DBIR and Gabe’s clarification (which is reflected in current downloads), underlining to mark denial of service issues, and bold to mark the new CVE IDs appearing in Roytman’s reworking; since that formatting does not carry over here, those distinctions are annotated inline instead.

DBIR             – DBIR Revised     – Roytman Blog
2015-1637        – 2015-1637        – 2015-1637
2015-0204        – 2015-0204        – 2015-0204
2012-1054 (typo) – 2002-1054        – 2014-0160 (new)
2011-0877 (typo) – 2001-0877 (DoS)  – 2012-0152 (new, DoS)
2003-0818        – 2003-0818        – 2013-0229 (new, DoS)
2002-0126        – 2002-0126        – 2002-0012 (new, DoS)
2002-0953        – 2002-0953        – 2002-0013 (new, DoS)
2001-0876        – 2001-0876        – 2001-0876
2001-0680        – 2001-0680        – 2001-0877 (DoS)
1999-1058 (DoS)  – 1999-1058 (DoS)  – 2001-0540 (new, DoS)

In my mail to Roytman asking for a sample data set, I suggested that it would be interesting to see him generate a list using his methodology, but remove any DoS attack (six of his ten), so the top list only included exploits that could achieve remote privileges of some sort. He replied to me with:

… and again, awesome idea. One of those all-too-simple in retrospect, damnit why didn’t I think of it earlier things.

I found this interesting thinking back to his use of the forest and the trees metaphor.

A Note on the Verizon DBIR 2016 Vulnerabilities Claims

[Updated 4/28/2016]

Verizon released their yearly Data Breach Investigations Report (DBIR), and it wasn’t too long before I started getting asked about their “Vulnerabilities” section (page 13). After I brought up some highly questionable points about the vulnerability claims in last year’s report, several people felt that the report did not stand up to scrutiny. With a few questions leveled at me, I was curious whether Verizon and partners had learned from last year.

This year’s vulnerability data was provided by Kenna Security (formerly Risk I/O), and Verizon “also utilized vulnerability scan data provided by Beyond Trust, Qualys and Tripwire in support of this section.” So the data isn’t from a single vendor, but from at least four vendors, giving the impression that the data should be well-rounded and prompt fewer questions than last year.

From the report:

Secondly, attackers automate certain weaponized vulnerabilities and spray and pray them across the internet, sometimes yielding incredible success. The distribution is very similar to last year, with the top 10 vulnerabilities accounting for 85% of successful exploit traffic. While being aware of and fixing these mega-vulns is a solid first step, don’t forget that the other 15% consists of over 900 CVEs, which are also being actively exploited in the wild.

This is not encouraging, as they have 10 vulnerabilities that account for an incredible amount of traffic, and the footnoted list of CVE IDs suggests the same problems as last year. And just like last year, the report does not explain the methodology for detecting the vulnerabilities, does not include details about the generation of the statistics, and provides a loose definition of what “successfully exploited” means. Without more detail it is impossible for others to reproduce their results, and extremely difficult to explain or disclaim them as a third party reading the report. Going to the Kenna Security page about this report doesn’t really yield much clarity, but does highlight another potential flaw in the methodology:

Kenna’s Chief Data Scientist Michael Roytman was the primary author of this year’s “Vulnerabilities” chapter, analyzing a correlated threat data set that spans 200M+ successful exploitations across 500+ common vulnerabilities and exposures from over 20,000 enterprises in more than 150 countries.

It’s subtle, but notice they went through a data set that spans exploitations across “500+ common vulnerabilities and exposures”, also known as CVE. If the data only looks for CVEs, then there is an incredibly large bias at play from the start, since they are missing at least half of the disclosed vulnerabilities. More importantly, this becomes a game of fractions that the industry is keen to overlook at every opportunity (a rough sketch of how these fractions compound follows the list):

  • CVE represents approximately half of the disclosed vulnerabilities.
  • Vulnerability scanners and IPS/IDS don’t have signatures for all CVE IDs, so they look for some fraction of CVE.
  • Detection signatures are often flawed, leading to false positives and false negatives, meaning they are actually detecting a fraction of the CVE IDs they intend to.
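Rough numbers make the compounding obvious. The fractions below are illustrative assumptions in the spirit of the bullets above (CVE at roughly half of public disclosures, a signature for only some share of CVE, signatures that fire correctly only some of the time); none of them are measured values.

```python
# Illustrative fractions only; none of these are measured values.
disclosed_share_with_cve = 0.50  # CVE covers roughly half of public disclosures
cve_share_with_signature = 0.60  # scanners/IDS only have signatures for a subset of CVE (assumed)
signature_accuracy = 0.80        # share of those signatures that detect what they intend to (assumed)

effective_coverage = (disclosed_share_with_cve
                      * cve_share_with_signature
                      * signature_accuracy)

print(f"share of disclosed vulnerabilities this data set can even see: {effective_coverage:.0%}")
# -> 24%: even generous assumptions leave most of the disclosed landscape invisible.
```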

Another crucial factor in how this data is generated is the detection of the exploits. Of the four companies contributing data, one was founded in 2009 (Risk I/O / Kenna) and another in 2006 (Beyond Trust, in the context of this discussion). That leaves Qualys (founded 1999) and Tripwire (founded 1997), who are likely the sources of the signatures that detected the vulnerabilities. For those around in the late 90s, the vulnerability landscape was very different than today, and signature-based security products from back then are in some ways rudimentary compared to today’s. Over time, most security products do not revisit older signatures to improve them unless they have to, often due to customer demand. Newly formed companies basically never go back and write signatures for vulnerabilities from 1999. So it stands to reason that the detection of these issues is based on Qualys and/or Tripwire’s detection capabilities, and that the signatures detecting these old vulnerabilities are likely outdated and not as well constructed as their more recent signatures.

That leads us to ask, how many vulnerabilities are these companies really looking for? Where did the detection signatures originate and how accurate are they? While the DBIR does disclaim that the data used is a sample, they also admit “bias undoubtedly exists”. However, they don’t warn the reader of these extremely limiting caveats that put the entire data set into a perspective clearly showing strong bias. This, combined with the lack of detailed methodology for how these vulnerabilities are detected and correlated to measure ‘success’, ultimately means this data has little value other than for inclusion in pedestrian reports on vulnerabilities.

With that in mind, I can only go by what information is available. We’ll start with the concise list of the top 10 CVE IDs these four vulnerability intelligence providers say are being exploited the most, and Verizon labels as “successfully exploited”:

  1. 2015-03-05 – CVE-2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – CVE-2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2012-02-25 – CVE-2012-1054 – Puppet k5login File Symlink File Overwrite Local Privilege Escalation
  4. 2011-07-19 – CVE-2011-0877 – Oracle Enterprise Manager Grid Control Instance Management Unspecified Remote Issue (2011-0877)
  5. 2004-02-10 – CVE-2003-0818 – Microsoft Windows ASN.1 Library (MSASN1.DLL) BER Encoding Handling Remote Integer Overflows
  6. 2002-01-15 – CVE-2002-0126 – BlackMoon FTP Server Multiple Command Remote Overflow
  7. 2001-12-26 – CVE-2002-0953 – PHPAddress globals.php LangCookie Parameter Remote File Inclusion
  8. 2001-12-20 – CVE-2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-04-13 – CVE-2001-0680 – QVT/Net / Term FTP Server LIST Command Traversal Remote File Access
  10. 1999-11-22 – CVE-1999-1058 – Vermillion FTPD Long CWD Command Handling Remote Overflow DoS

This list should raise serious red flags for anyone even passingly familiar with vulnerabilities. Not only do we have very odd ‘top 10’ lists from last year and this year, there is also little overlap between them. How does 2015 show a top 10 list exploiting eight vulnerabilities with CVE identifiers between 1999 and 2002 (meaning they were still being exploited heavily as many as thirteen years later), only to see them all drop off the list this year, replaced by a new set of 15+ year old vulnerabilities? In addition to this oddity, there are more considerations, leading to my own top 10 list of questions about their list:

  1. How does a local vulnerability based on a symlink overwrite flaw (CVE-2012-1054) make it into a top 10 list of “85% of successful exploit traffic“?
  2. How does a local vulnerability in Puppet rank #3 on this list, given the install base of Puppet as compared to Adobe or Java?
  3. If they are detecting exploits on the wire, shouldn’t we see Java, Adobe Reader, and Adobe Flash somewhere on the list? The “Slow and steady—but how slow?” section even talks about time-to-exploit for Adobe.
  4. Why doesn’t this list remotely match US-CERT’s “Top 30 Targeted High Risk Vulnerabilities” that includes vulnerabilities back to 2006, but not a single one listed above?
  5. How does a vulnerability that by all accounts is so vague that it has to be distinguished by the vendor-issued CVE ID (CVE-2011-0877) have a signature and get exploited so much?
  6. How does a vulnerability in Oracle Enterprise Manager Grid Control show up as #4, when no Oracle Database vulnerabilities appear?
  7. How do you distinguish an FTP LIST command exploit from one vendor to another? (e.g. CVE-2005-2726, CVE-2002-0558, CVE-2001-0933, CVE-2001-0680) According to the one-liner methodology, this is done via pairing SIEM data, suggesting that BlackMoon and Vermillion are that popular today.
  8. Yet, how does a remote DoS in a Windows-based FTP program that doesn’t appear to have been distributed for a decade make it on this list? Are people really conducting targeted DoS attacks against this software?
  9. Is BlackMoon FTP Server really that prevalent to be exploited so often?
  10. Or is there a problem in generating this data, one more easily attributed to loose signatures detecting FTP attacks regardless of vendor? (See the signature sketch after this list.)
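Question 10 is easy to picture. A signature loose enough to fire on any oversized or traversal-laden FTP LIST command cannot tell BlackMoon, Vermillion, and QVT/Net apart. The pattern below is a made-up example of such a rule, not one taken from any real IDS ruleset:

```python
import re

# A deliberately loose, invented pattern: "any LIST command that is very long
# or contains directory traversal". Nothing vendor-specific about it.
LOOSE_FTP_LIST_SIG = re.compile(rb"^LIST\s+(\S{200,}|.*\.\./)", re.IGNORECASE)

samples = {
    "CVE-2001-0680 (QVT/Net traversal)": b"LIST ../../../../winnt/",
    "CVE-2002-0126 (BlackMoon overflow)": b"LIST " + b"A" * 400,
    "benign-but-long listing request": b"LIST " + b"B" * 250,
}

for label, payload in samples.items():
    print(label, "->", bool(LOOSE_FTP_LIST_SIG.match(payload)))
# All three match the same rule; attributing a hit to one vendor's CVE,
# rather than to "some FTP server somewhere", is guesswork without more context.
```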

Figure 12 in the report, described as “Count of CVEs exploited in 2015 by CVE publication date”, is a curious thing to include, as the CVE publication date is very distinct from the vulnerability disclosure date. While a large percentage of CVE publication dates fall within seven days of disclosure, many do not (e.g. CVE-2015-8852, disclosed 2015-03-23 with CVE publication on 2016-04-26), enough to make this chart questionable as far as the insight it provides. Taking the data as presented, are they really saying that only ~73 vulnerabilities with a 2015 ID were successfully exploited in 2015 across “millions of successful real-world exploitations“? Given that 40 vulnerabilities were discovered in the wild, 33 of which have 2015 CVE IDs, that would mean only ~40 other 2015 vulnerabilities were successfully exploited. If that is the takeaway, how is the security industry unable to stop the increasing wave of data breaches, the same kind that led to this report? Something doesn’t add up here.
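The skew is easy to quantify for the example just given, using the two dates the paragraph above already cites for CVE-2015-8852:

```python
from datetime import date

disclosed = date(2015, 3, 23)   # public disclosure of CVE-2015-8852
published = date(2016, 4, 26)   # publication date of the CVE entry itself

print((published - disclosed).days)
# 400 days: binned by CVE publication date, any 2015 exploitation of this issue
# would be charted as if the vulnerability belonged to 2016.
```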

While people are cheering about the DBIR disclaiming that there is sample bias (without really enumerating it), they ignore the measurement bias, don’t speak to publication bias, don’t explain the attrition bias between 2015 and 2016, and don’t address potential chaining bias. As usual, the media is happy to glom onto such reports without asking any of these questions or providing critical analysis. As an industry, we need to keep challenging metrics and statistics to ensure they are not only accurate, but provide meaning that benefits us.


4/28/2016: According to Gabe (@gdbassett), the list of CVEs in the DBIR is incorrect. He posted a new list of CVEs (mostly the same) via Twitter in a reply to Andreas Lindh, who was surprised at the top ten list of vulnerabilities as well. Gabe also confirmed that afterwards they “compared the figure CVEs (listed above) against the raw data. After removing non-confirmed breaches, they match.” He went on to link to another source showing “data” about one of the CVEs, which really doesn’t mean anything without more context. Meanwhile, Michael Roytman, who did the vulnerability section of the report, confirmed that he/Kenna would be responding to this blog with one of their own.

I hate to harp on simple transposition-style mistakes in a report, but given the importance of numeric identifiers for vulnerabilities, it seems like that list should have been double- and triple-checked. Even then, I don’t understand how someone familiar with vulnerabilities could see either list and not ask many of the same questions I did, or provide more information in the report to back the claims. That said, let’s look at Gabe’s new list of CVEs, with the two entries that changed marked as new:

  1. 2015-03-05 – 2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – 2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2004-02-10 – 2003-0818 – Microsoft Windows ASN.1 Library (MSASN1.DLL) BER Encoding Handling Remote Integer Overflows
  4. 2002-07-22 – 2002-1054 – Pablo FTP Server LIST Command Arbitrary Directory Listing Remote Information Disclosure (new)
  5. 2002-01-15 – 2002-0126 – BlackMoon FTP Server Multiple Command Remote Overflow
  6. 2001-12-26 – 2002-0953 – PHPAddress globals.php LangCookie Parameter Remote File Inclusion
  7. 2001-12-20 – 2001-0877 – Microsoft Windows Universal Plug and Play NOTIFY Request Remote DoS (new)
  8. 2001-12-20 – 2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-04-13 – 2001-0680 – QVT/Net / Term FTP Server LIST Command Traversal Remote File Access
  10. 1999-11-22 – 1999-1058 – Vermillion FTPD Long CWD Command Handling Remote Overflow DoS

The list gained another FTP server issue that doesn’t necessarily lead to privileges, and another remote denial of service issue, while losing the Puppet symlink issue (CVE-2012-1054) and the vague Oracle Enterprise Manager issue (CVE-2011-0877). All said and done, the list is just as confusing as before, perhaps more so. That gives us four FTP vulnerabilities, only one of which leads to remote code execution, and two denial of service attacks that gain no real privileges for an attacker. As Andreas Lindh points out (something I failed to highlight), having a man-in-the-middle vulnerability occupy two spots on this list is also baffling in the context of the volume of attacks stated. Also note that with the addition of CVE-2002-1054 (Pablo FTP), there are now two vulnerabilities that appear on both the DBIR 2015 and DBIR 2016 top ten CVE lists.

Hopefully the forthcoming blog from Michael Roytman will shed some light on these issues.

2015, A Record Year For Vulnerabilities

Risk Based Security released the VulnDB QuickView report that shows 2015 broke the previous all-time record for the highest number of reported vulnerabilities. The 14,185 vulnerabilities cataloged during 2015 by Risk Based Security eclipsed the total covered by the National Vulnerability Database (NVD) and CVE by over 6,000.

Risk Based Security’s newly released 2015 Year End VulnDB QuickView report shows that 20.5% of reported vulnerabilities received CVSS scores between 9.0 and 10.0, and that both the number of vulnerabilities and the CVSS scores have been trending higher over the last four years.

It comes as no real surprise that Web-related vulnerabilities account for nearly 60% of the total reported in 2015, with cross-site scripting (XSS) making up 39% of those.

The VulnDB QuickView report also revealed that vulnerabilities disclosed in a coordinated fashion with the vendor rose to 42% in 2015, compared to 28% in 2014, the previous record. Another interesting fact in the report is that third-party bug bounty programs outpaced vendor-managed bounty programs 4:1 in 2015, when details were made available to the public.

A Note on the Verizon DBIR 2015, “Incident Counting”, and VDBs

Recently, the Verizon 2015 Data Breach Investigations Report (DBIR) was released to much fanfare as usual, prompting a variety of media outlets to analyze the analysis. A few days after the release, I caught a Tweet linking to a blog from @raesene (Rory McCune) that challenged one aspect of the report. On page 16 of the report, Verizon lists the top 10 CVE IDs exploited.

[Image: DBIR 2015 table of the top 10 exploited CVEs]

The bottom of the table says “Top 10 CVEs Exploited”, which can be interpreted many ways. The paragraph above qualifies it as the “ten CVEs account for almost 97% of the exploits observed in 2014“. This is problematic because neither fully explains what that means. Is this the top 10 detected from sensors around the world, meaning the exploits were launched and detected, with no indication of whether they were successful? Or does this mean these are the top ten exploits that were launched and resulted in a successful compromise of some form?

This gets more confusing when you read McCune’s blog, which lists out what those ten CVEs correspond to. I’ll get to the one that drew his attention, but will start with a different one. CVE-1999-0517 is an identifier for “an SNMP community name is the default (e.g. public), null, or missing.” This is a very curious CVE identifier to see related to SNMP, as it is specific to a default string. Any device that has a default community name, but does not allow remote manipulation, amounts to a rather trivial information disclosure vulnerability. This begins to speak to what the chart means, as “exploit” is being used in a broad fashion that does not track with the usage many in our industry associate with the word. McCune goes on to point out another on their list, CVE-2001-0540, which is an identifier for a “memory leak in Terminal servers in Windows NT and Windows 2000 allows remote attackers to cause a denial of service (memory exhaustion) via a large number of malformed Remote Desktop Protocol (RDP) requests to port 3389“. This is considerably more specific, giving us affected operating systems and the ultimate impact: a denial of service. This brings us back to why this section is in the report at all, when part (or all) of it has nothing to do with actual breaches. After these two, McCune points out numerous other issues that directly challenge how this data was generated.

Since Verizon does not explain their methodology in generating these numbers, we’re left with our best guesses and a small attempt at an explanation via Twitter, that leads to as many questions as answers. You can read the full Twitter chat between @raesene, @vzdbir, and @mroytman (RiskIO Data Scientist who provided the data to generate this section of the report). The two pages of ‘methodology’ Verizon provides in the report (pages 59-60) are too high-level to be useful for the section mentioned above. Even after the conversation, the ‘top 10 exploited’ is still highly suspicious and does not seem accurate. The final bit from Roytman may make sense in RiskIO’s world, but doesn’t to others in the vulnerability world.

That brings me to the first vulnerability McCune pointed out, that started a four hour rabbit hole journey for me last night. From his blog:

The best example of this problem is CVE-2002-1931 which gets listed at number nine. This is a Cross-Site Scripting issue in version 2.1.1 and 1.1.3 of a product called PHP Arena and specifically the pafileDB area of that product. Now I struggled to find out too much about that product because the site that used to host it http://www.phparena.net is now a gambling site (I presume that the domain name lapsed and was picked up as it got decent traffic). Searching via google for information, most of the results seemed to be from vulnerability databases(!) and using a google dork of inurl:pafiledb shows a total of 156 results, which seems low for one of the most exploited issues on the Internet.

This prompted me to look at CVE-2002-1931 and the corresponding OSVDB entry. Then I looked at the ‘pafiledb.php’ cross-site scripting issues in general, and immediately noticed problems:

[Image: OSVDB entries for the paFileDB ‘pafiledb.php’ XSS issues, before cleanup]

This is a good testament to how far vulnerability databases (VDBs) have come over the years, and how our data from earlier on is sometimes not so hot. Regardless of a vulnerability’s age, I want database completeness and accuracy, so I set out to fix it. What I thought would be a simple 15-minute fix took much longer. After reading the original disclosures of a few early paFileDB XSS issues, it became clear that the one visible script being tested was actually more complex, calling additional PHP files. It also became quite clear that several databases, ours included, had mixed up references and done a poor job abstracting. Next step: take extensive notes on every disclosure, including dates, exploit strings, versions affected, solution if available, and more. Here are the rough notes for some, but not all, of the issues, to give a feel for what this entails:

[Image: rough research notes on the paFileDB disclosures]

Going back to this script being ‘more complex’, and to try to answer some questions about the disclosure, the next trick was to find a copy of paFileDB 3.1 or earlier to see what it entailed. This is harder than it sounds, given the age of the software and the fact that the vendor site has been gone for seven years or more. With the archive finally in hand, it became clearer what the various ‘action’ parameters really meant.

[Image: contents of the paFileDB 3.1 archive]

Going through each disclosure and following links to links, it also became obvious that every VDB missed the inclusion of some disclosures, and/or did not properly abstract them. To be fair, many VDBs have modified their abstraction rules over the years, including us. So using today’s standards along with yesterday’s data, we get a very different picture of those same XSS vulnerabilities:

[Image: OSVDB entries for the paFileDB XSS issues, after cleanup]

With this in mind, reconsider the Verizon report that says CVE-2002-1931 is a top 10 exploited vulnerability in 2014. That CVE is very specific, based on a 2002-10-20 disclosure that starts out referring to a vulnerability we don’t have details on. Either it was publicly disclosed and the VDBs missed it, it was mentioned on the vendor site somewhere (and doesn’t appear to be there now, even with the great help of archive.org), or it was mentioned in private or restricted channels but known to some. We’re left with what that October 2002 post said:

Some of you may be familiar with Pafiledb provided by PHP arena. Well they just released a new version that fixed a problem with their counting of files. Along with that they said they fixed a possible security bug involving using Javascript as a search string. I checked it on my old version and it is infact there, so I updated to the new version so the bugs can be fixed and I checked it and it no longer works.

We know it is “pafiledb.php” only because that is the base script that in turn calls others. In reality, based on the other digging, the vulnerable code is very likely reached via a call to includes/search.php. Without performing a code-level analysis, we can only include a technical note with our submission and move on. Continuing the train of thought, what data collection methods are being performed that assign that CVE to an attack that does not appear to be publicly disclosed? The vulnerability scanners from around that time, many of which are still used today, do not specifically test for and exploit that issue. Rather, they look for the three additional XSS disclosed further down in the same post: three “pafiledb.php” exploits, each calling a specific action that corresponds to /includes/rate.php, /includes/download.php, or /includes/email.php. It is logical that intrusion detection systems and vulnerability scanners would be looking for those three issues (likely lumped into one ID), but not the vague “search string” issue.
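The dispatch pattern described above is simple enough to sketch. This is an illustrative Python analog of paFileDB’s front controller, not the actual PHP source; the mapping mirrors the actions and include files named in the paragraph above.

```python
# Illustrative analog of paFileDB's front controller, not the real PHP code.
# Every exploit URL hits pafiledb.php; only the 'action' parameter differs,
# and the flaw itself lives in the include that the action maps to.
ACTION_TO_INCLUDE = {
    "rate": "includes/rate.php",          # XSS with a concrete exploit string (Oct 2002 post)
    "download": "includes/download.php",  # XSS with a concrete exploit string (Oct 2002 post)
    "email": "includes/email.php",        # XSS with a concrete exploit string (Oct 2002 post)
    "search": "includes/search.php",      # the vague "search string" issue behind CVE-2002-1931
}

def handled_by(action: str) -> str:
    """Return which include actually processes a pafiledb.php?action=... request."""
    return ACTION_TO_INCLUDE.get(action, "pafiledb.php (default view)")

print(handled_by("rate"))    # includes/rate.php
print(handled_by("search"))  # includes/search.php -- yet only the other three actions
                             # had exploit strings for a signature to key on
```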

McCune’s observation, and his singling out of this vulnerability, is spot on. While he questioned the data in a different way, my method gives additional evidence that the ‘top 10’ is built on faulty data at best. I hope that this blog is both educational from the VDB side of things, and further encourages Verizon to be more forthcoming with their methodology for this data. As it stands, it simply isn’t trustworthy.

Reviewing the Secunia 2015 Vulnerability Review (A Redux)

It’s that time of year again! Vulnerability databases whip up reports touting statistics and observations based on their last year of collecting data. It’s understandable, especially for a commercial database, to show why your data source is the best. In the past, we haven’t had a strong desire to whip up a flashy PDF with our take on vulnerability disclosure statistics. Instead, we’ve focused more on educating others on why their statistics are often so horribly wrong or misleading. It should come as no surprise, then, that we find ourselves forced to point out the errors in Secunia’s ways. We did the same thing last year in a long blog, giving more perspective and our own numbers for balance.

After reading this year’s report from Secunia, we could basically use last year’s blog as a template and just plug in new numbers. Instead, it is important to point out that after last year’s review, Secunia opted not to revise their methodology. Worse, they did not take any pointers from a presentation on vulnerability statistics Steve Christey (CVE / MITRE) and I did at BlackHat before last year’s report, pointing out the most common flaws in statistics and where the bias comes from. It is easy to chalk up last year’s report as being naive and not fully appreciating or understanding such statistics. This year though, they have no excuse. The numbers they report seem purposefully misleading and are a disservice to their customers and the community.

The headline of their press release and the crux of their report is that they identified 15,435 vulnerabilities in 2014. This is entirely inaccurate and completely misleading, fundamentally due to the same very flawed methodology they used last year, in which a single vulnerability may be counted as many as 210 times (e.g. this year’s CVE-2014-0160, aka Heartbleed). Then factor in all of the other high-profile “logo” vulnerabilities (e.g. FREAK, ShellShock), especially in OpenSSL, and the same handful of vulnerabilities gets counted hundreds or even a thousand times. To be abundantly clear: a vulnerability in a third-party library such as OpenSSL is one vulnerability. It doesn’t matter how many other products use and integrate that code; the fundamental flaw is in the library. Counting each product that implements OpenSSL as a distinct vulnerability, rather than a distinct occurrence of a vulnerability, is wrong. Worse, even if you do accept their flawed methodology, it actually highlights just how poor their statistics are, as OpenSSL is heavily used among thousands of applications that Secunia doesn’t cover, even when a vendor like IBM issues numerous advisories that they miss. No matter how you cut it, their numbers are invalid.
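The inflation is mechanical, as a minimal sketch of the two counting approaches shows. The advisory IDs and product names below are placeholders; only the pattern (one library flaw, many advisories) reflects the Heartbleed example above.

```python
# Placeholder advisory data: one library flaw, many products shipping that library.
advisories = [
    ("SA-0001", "OpenSSL", "CVE-2014-0160"),
    ("SA-0002", "Vendor A VPN appliance", "CVE-2014-0160"),
    ("SA-0003", "Vendor B NAS firmware", "CVE-2014-0160"),
    ("SA-0004", "Vendor C admin console", "CVE-2014-0160"),
    # ...the post notes Heartbleed alone accounted for roughly 210 Secunia advisories.
]

per_advisory_count = len(advisories)                        # per-advisory counting: 4 "vulnerabilities"
unique_vuln_count = len({cve for _, _, cve in advisories})  # unique vulnerabilities: 1

print(per_advisory_count, unique_vuln_count)  # 4 1
```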

So, poking at our database, and understanding that we don’t quite have a 100% mapping to Secunia but feel it is pretty close, we see that around 5,280 of our entries have a Secunia reference and 4,530 of those also have a CVE reference (compared to more than 8,500 entries we have with a CVE reference). Even allowing for a good margin of error, it still appears that Secunia does not cover almost 3,000 vulnerabilities from 2014 that are covered by CVE entries. Those familiar with this blog know that we are more critical of CVE than of any other vulnerability database. If a commercial VDB with hundreds of high-dollar clients can’t even keep a 100% mapping with an over-funded, U.S. government-run VDB that is considered the “lowest common denominator” by many, that speaks volumes. I could end the blog on that note alone, because anything less than mediocrity is a disgrace. How do you trust, and pay for, a vulnerability intelligence service missing thousands of publicly disclosed vulnerabilities that are served up on a platter and available to anyone for free use and inclusion in their own database? You don’t. After all, remember this quote from the Secunia report last year?

CVE has become a de facto industry standard used to uniquely identify vulnerabilities, which have achieved wide acceptance in the security industry.

All of the above is not only important, it is absolutely critical to appropriately framing their statistics. Not only does Secunia avoid using the minimum industry standard for vulnerability aggregation, they opt to use their own methodology, which they now know beyond doubt seriously inflates their ‘vulnerability’ count. Last year, it was inflated by almost three times reality. This year, it was again about three times reality, as they only covered approximately 5,500 unique vulnerabilities. That is almost 3,000 fewer than CVE and 8,000 fewer than OSVDB. Any excuse for Secunia’s lacking coverage based on their claims of verifying vulnerabilities and “not covering vulnerabilities in beta software” is also invalid, as a majority of reported vulnerabilities are in stable software and regularly come from trusted researchers or the vendors themselves. Regardless, it should be clear to anyone passingly familiar with the aggregation of vulnerabilities that Secunia is playing well outside the bounds of reality. Of course, every provider of vulnerability intelligence has some motive, or at least desire, to see the numbers go up year after year. It further justifies their customers spending money on the solution.

“Every year, we see an increase in the number of vulnerabilities discovered, emphasizing the need for organizations to stay on top of their environment. IT teams need to have complete visibility of the applications that are in use, and they need firm policies and procedures in place, in order to deal with the vulnerabilities as they are disclosed.” — Kasper Lindgaard, Director of Research and Security at Secunia.

While Mr. Lindgaard’s quote is flashy and compelling, it is also completely wrong, even according to their own historical reports. If Mr. Lindgaard had read Secunia’s own reports from 2010 and 2011, he might have avoided this blunder.

This report presents global vulnerability data from the last five years and identifies trends found in 2010. The total number of vulnerabilities disclosed in 2010 shows a slight decrease of 3% compared to 2009. – Yearly Report 2010 (Secunia)

Analysing the long-term and short-term trends of all products from all vendors in the Secunia database over the last six years reveals that the total number of vulnerabilities decreased slightly in 2011 compared to 2010. – Vulnerabilities are Resilient (2011, Secunia Report)

In 2007 OSVDB cataloged 9,574 vulnerabilities as compared to 11,050 in 2006. In 2009 they cataloged 8,175 vulnerabilities as compared to 9,807 in 2008. In 2011 they cataloged 7,913 vulnerabilities compared to 9,161 in 2010. That is three distinct years that were not an increase at all. Taking Mr. Lindgaard’s quote into account, and comparing it to our own data that is based on a more standard method of abstraction, we also show that it has not been a steady rise every year:

[Chart: OSVDB vulnerability disclosures by year]

With highly questionable statistics based on a flawed methodology, Secunia also lucked out this year, as vendors are more prudent about reporting vulnerabilities in the third-party libraries they use (e.g. OpenSSL, PHP). This plays out nicely for a company that incorrectly counts each vendor’s use of vulnerable software as a distinct issue.

Getting down to the statistics they share, we’ll give a few additional perspectives. As with all vulnerability statistics, they should be properly explained and disclaimed, or they are essentially meaningless.

In 2014, a total of 15,435 vulnerabilities were discovered in 3,870 products from 500 vendors.

As stated previously, this is absolutely false. Rather than track unique vulnerabilities, Secunia’s model and methodology focus more on products and operating systems. For example, they will issue one advisory for a vulnerability in OpenSSL, then dozens of additional advisories for the exact same vulnerability; the only difference is that each subsequent advisory covers a product or operating system that uses it. Above I mentioned the redundant ‘Heartbleed’ entries, but to further illustrate the point and demonstrate how this practice can lead to wildly inaccurate statistics, let’s examine the OpenSSL ‘FREAK’ vulnerability as well (29 Secunia advisories). The flaw is in the OpenSSL code, not in each vendor’s implementation of it, so it is still that same one vulnerability. Depending on how good their coverage is, this could be even more problematic. Looking at IBM, which uses OpenSSL in a wide variety of its products, we know that at least 56 products are impacted by the vulnerability, which may mean that many more Secunia advisories cover them. If we had a perfect glimpse into what products use OpenSSL, it would likely be thousands. Trying to suggest that is ‘thousands of unique vulnerabilities’ is simply ignorant.

83% of vulnerabilities in all products had patches available on the day of disclosure in 2014.

We assume that by ‘patches’ they also mean any viable vendor-provided solution (e.g. an upgrade). Based on our aggregation, it is closer to 64% that have a solution available, and that is not limited to “the day of disclosure”; it includes vendors who have patched at any point since the issue was disclosed. This big gap comes from the number of vulnerabilities aggregated by each database since the start of 2014.

25 zero-day vulnerabilities were discovered in total in 2014, compared to 14 the year before.

Assuming we use the same definition of “zero-day” as Secunia, this is considerably under what we tracked. We flag such issues as “discovered in the wild”, meaning the vulnerability was being actively exploited by attackers when it was discovered and/or disclosed. In 2014, we show 43 entries with this designation, compared to 73 in 2013. That is a sharp decrease, not an increase. Part of that large swing for us comes from a rash of attacker-trojaned software distributions that were used to target mobile users in 2013. Since trojaned software from a legitimate vendor meets the criteria to be called a vulnerability (it can be abused by a bad actor to cross privilege boundaries), we include it where Secunia and other databases generally ignore it.

20 of the 25 zero-day vulnerabilities were discovered in the 25 most popular products – 7 of these in operating systems.

Since we got into the “popular products” bit in last year’s rebuttal to their statistics, we’ll skip that this year. However, in 2014 we show that 16 of those “zero-day” vulnerabilities were in Microsoft products (two in Windows, six in the bundled MSIE), and four were in Adobe products.

In 2014, 1,035 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari. That is a 42% increase from 2013.

Without searching our data, it is likely very safe to say these numbers are way off, and that Secunia’s coverage of browser security is woefully behind. In addition to not aggregating as much data, they likely don’t properly take into account vulnerabilities in WebKit, which potentially affect Chrome, Safari, and Opera. Are WebKit vulnerabilities attributed to one, the other, or all three? With that in mind, let’s compare numbers. We see ~200 for Mozilla, ~250 for Microsoft IE, and not even 10 between Safari and Opera. In reality, Opera is not forthcoming with information about the vulnerabilities they patch, so their actual count is likely much higher. That leaves us with Chrome, which had over 300 vulnerabilities in its own code. As mentioned, factor in another 100+ vulnerabilities in WebKit that may impact up to three browsers. That comes to a total of 889, far less than their claimed 1,035. That makes us curious whether they are using the same flawed methodology with browser vulnerabilities as they do with libraries (i.e. counting WebKit / Blink issues multiple times, once for each browser using them). Some vulnerabilities in these browsers are also due to flaws in other third-party libraries and not in the browsers’ own code (e.g. ICU4C and OpenJPEG). Finally, Google Chrome bundles Adobe Flash Player, so is Secunia adding the number of vulnerabilities addressed in Flash Player to their Google Chrome count as well, further bloating it? So many unknowns, and another reminder of why vulnerability statistics are often misleading or serve no value. Without the methodology explained, we’re left with guesses and assumptions that strongly suggest this statistic can’t be trusted.

Even more offensive is that the “vulnerability intelligence experts at Secunia Research” don’t demonstrate any expertise in informing readers why the Google Chrome numbers are so high. Instead, they publish the information, withhold details of the methodology, and let the media run wild parroting their pedestrian statistics and understanding. As a lesson to the media, and to Secunia, consider these additional pieces of information that help explain those numbers, and why you simply can’t compare disclosed vulnerability totals:

  1. Google releases details about the vulnerabilities fixed in Chrome, unlike some other browser vendors. For example, Opera releases almost no information. Microsoft typically releases information on the more serious vulnerabilities, and none from their internal auditing. Of the 310 Chrome vulnerabilities we tracked, 66 were CVSSv2 5.0 or less.
  2. Google has perhaps the most aggressive bug bounty program in the industry, which encourages more researchers to audit their code. How aggressive? They paid out more than US$550,000 in 2014, just for Chrome vulnerabilities.
  3. Google’s turnaround in fixing Chrome vulnerabilities is quite respectable. Since many of the vulnerabilities were reported to them and fixed promptly, it is easy to argue against any claim that it is the “least secure” browser.
  4. Google fields several teams of security experts that audit their code, including Chrome. They find and fix potential issues before disclosing them. By doing that, they are ensuring the security of the browser, yet effectively getting punished by Secunia for doing so.
  5. Of all the issues disclosed, only a tiny fraction were actually proven to be exploitable. Like many software vendors, Google finds it easier and more efficient to spend a small amount of time fixing a possible vulnerability rather than the larger amount of time required to prove it is exploitable.

There are many pitfalls of using simple disclosed vulnerability totals to compare products. This is explained over and over by those familiar with the vulnerability disclosure world, and should be common knowledge to ‘experts’ in vulnerability research. Instead, Secunia not only fuels media articles that aren’t critical of their report, they Tweet little snippets without any context. They also have their partners repeat their statistics, giving them more exposure, misleading the customers of other companies.

Vulnerability statistics can be useful. But they must be presented in a responsible and educated manner to serve their purpose. Those not familiar with vulnerability aggregation and generating the resulting statistics typically do a poor job, and add confusion to the matter. We like to call such people “vulnerability tourists”, and we can now add Secunia to the growing list of them.

[Update: 2015-04-01. One additional quote from the 2010 Secunia Report was added, and clarification of wording regarding the yearly vuln total chart was made.]

SQLi Disclosures and the Last Five Years (Transparent Statistics)

Nothing like waking up to a new article purporting to show vulnerability statistics and having someone ask us for comment. But hey, we love giving additional perspective on such statistics, since they are often presented without proper context and disclaimers. This morning, the new article comes from Help Net Security and is titled “SQL injection vulnerabilities surge to highest levels in three years“. It cites “DB Networks’ research”, which did the usual: parsed NVD data. As we all know, that data comes from CVE, which is a frequent topic of rants on Twitter and occasional blogs. Cliff notes: CVE does not promise comprehensive coverage of public disclosures. They openly admit this, and Steve Christey has repeatedly said that CVE may not be the best source for statistics, even as far back as 2006. Early in 2013, Christey again publicly stated that CVE “can no longer guarantee full coverage of all public vulnerabilities.” I bring this up to remind everyone, again, that it is important to add such disclaimers about your data set, and ultimately your published research.

Getting back to the statistics and DB Networks, the second thing I wondered about, after their data source, was who they were and why they were doing this analysis (SQLi specifically). I probably shouldn’t be surprised to learn this comes over a year after they announced they have an appliance that blocks SQL injection attacks. This too should be disclaimed in any article covering their analysis, as it adds perspective on why they are choosing a single vulnerability type, and may also explain why they chose NVD over other data sources (since it fits more neatly into their narrative). Following that, they paid Ponemon to help them conduct a study on the threat of SQL injection. Or wait, was it two of them, the second with a bent towards retail breaches? Here are the results of the first one, it seems, which again show that they have a pretty specific bias toward SQL injection.

Now that we know they have a vested interest in the results of their analysis coming out a specific way, let’s look at the results. First, the Help Net Security article does not describe their methodology at all. Reading their self-written, paid-for news announcement (PR Newswire) about the analysis, it becomes very clear this is a gimmick for advertising, not actual vulnerability statistics research. It’s 2015, years after Steve Christey and I have both ranted about such statistics, and they don’t explain their methodology. This makes it apparent that “CVE abstraction bias” is possibly the biggest factor here. I have blogged about using CVE/NVD as a dataset before, because it presents one of the biggest pitfalls in such statistic generation.

Rather than debunk these stats directly, since that has been done many times in the past, I can say that they are basically meaningless at this point. Even without their methodology, I am sure someone could trivially reproduce their results and figure out whether they abstracted per CVE entry, or per actual SQLi vulnerability mentioned. As a recent example, CVE-2014-7137 is a single entry that actually covers 54 distinct SQL injection vulnerabilities. If you count just the CVE entry versus the vulnerabilities that may be listed within it, your numbers will vary greatly. That said, I will assume that their results can be reproduced, since we know their data source and their bias in desired results.
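That abstraction difference is easy to demonstrate. A quick sketch of how the same disclosure data yields very different yearly totals depending on whether you count CVE entries or the distinct issues they describe; the 54 figure comes from the CVE-2014-7137 example above, while the other two rows are invented placeholders.

```python
# Each tuple: (CVE entry, number of distinct SQLi flaws it actually covers).
# CVE-2014-7137 is the real example cited above; the other entries are placeholders.
entries = [
    ("CVE-2014-7137", 54),
    ("CVE-2014-AAAA", 1),
    ("CVE-2014-BBBB", 3),
]

per_cve_total = len(entries)                 # counting NVD/CVE entries
per_vuln_total = sum(n for _, n in entries)  # counting distinct vulnerabilities

print(per_cve_total, per_vuln_total)  # 3 vs 58 -- whether a "surge" appears depends on the abstraction
```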

With that in mind, let’s first look at what the numbers look like when using a database that clearly abstracts those issues, and covers a couple thousand sources more than CVE officially does:

[Chart: SQL injection disclosures over the last five years, per our data]

“SQL Injection Vulnerabilities Surge to Highest Levels in Three Years” is the title of DB Networks’ press release, summarized by Help Net Security as “last year produced the most SQL vulnerabilities identified since 2011 and 104% more than were identified in 2013.” In reality (or at least, using a more comprehensive data set), we see that isn’t the case, since the available statistics a) don’t show 2014 being higher than 2011 and b) show 2011 not holding a candle to 2010. No big surprise really, given that we actually do vulnerability aggregation while NVD and DB Networks do not. But really, I digress. These ‘statistics’ are nothing more than a thinly veiled excuse for them to further advertise themselves. Notice that in the press release they go from the statistics that help justify their products right into a third paragraph quoting the CEO being “truly honored to be selected as a finalist for SC Magazine’s Best Database Security Solution.” He follows that up with another line calling out their specific product that helps “database security in some of the world’s largest mission critical datacenters.”

Yep, these statistics are very transparent. They are based on a convenient data source, maintained by an agency that doesn’t actually aggregate the information itself and doesn’t have the experience of its data benefactor, CVE. They are advertised with a single goal in mind: selling their product. The fact that they use the word “research” in the context of generating these statistics is a joke.

As always, I encourage companies and individuals to keep publishing vulnerability statistics. But I stress that it should be done responsibly. Disclaim your data source, explain your methodology, be clear if you hope the results come out a certain way (bias is fine, just disclaim it), and realize that different data sources will produce different results. Dare to use multiple sources and compare the results, even if they don’t fully back your desired opinion. Why? Because if you disclaim your data sources and results, you may still turn out to be right. We just don’t have the perfect vulnerability disclosure data source yet. Fortunately, some of us are working harder than others to find that unicorn.

Reviewing the Secunia 2013 Vulnerability Review

On February 26, Secunia released their annual vulnerability report (link to report PDF) summarizing the computer security vulnerabilities they had cataloged over the 2013 calendar year. For those not familiar with their vulnerability database (VDB), we consider them a ‘specialty’ VDB rather than a ‘comprehensive’ VDB (e.g. OSVDB, X-Force). They may disagree on that point, but it is a simple matter of numbers that leads us to designate them as such. That also tends to explain why some of our conclusions and numbers are considerably different from, and more complete than, theirs.

In past years this type of blog post would not need a disclaimer, but it does now. OSVDB, while the website is mostly open to the public, is also the foundation of the VulnDB offering from our commercial partner and sponsor Risk Based Security (RBS). As such, we are now a direct competitor to Secunia, so any criticism leveled at them or their report may be biased. On the other hand, many people know that I am consistently critical of just about any vulnerability statistics published. Poor vulnerability statistics have plagued our industry for a long time. So much so that Steven Christey from CVE and I gave a presentation last year at the BlackHat briefings in Las Vegas on the topic.

One of the most important messages and take-aways from that talk is that all vulnerability statistics should be disclaimed and explained in advance. That means that a vulnerability report should start out by explaining where the data came from, applicable definitions, and the methodology of generating the statistics. This puts the subsequent statistics in context to better explain and disclaim them, as a level of bias enters any set of vulnerability statistics. Rather than follow the Secunia report in the order they publish them, I feel it is important to skip to the very end first. For that is where they finally explain their methodology to some degree, which is absolutely critical in understanding how their statistics were derived.

On page 16 (out of 20) of the report, in the Appendix “Secunia Vulnerability Tracking Process”, Secunia qualifies their methodology for counting vulnerabilities.

A vulnerability count is added to each Secunia Advisory to indicate the number of vulnerabilities covered by the Secunia Advisory. Using this count for statistical purposes is more accurate than counting CVE identifiers. Using vulnerability counts is, however, also not ideal as this is assigned per advisory. This means that one advisory may cover multiple products, but multiple advisories may also cover the same vulnerabilities in the same code-base shared across different programs and even different vendors.

First, the ‘vulnerability count’ referenced is not part of a public Secunia advisory, so their results cannot realistically be duplicated. The next few lines are important, as they invalidate the Secunia data set for drawing any real conclusions about the state of vulnerabilities. Not only can one advisory cover multiple products, multiple advisories can cover the same single vulnerability, just across different major versions. This high rate of duplicates and lack of unique identifiers make the data set too convoluted for meaningful statistics.

CVE has become a de facto industry standard used to uniquely identify vulnerabilities which have achieved wide acceptance in the security industry.

This is interesting to us because Secunia is not fully mapped to CVE historically. Meaning, there are thousands of vulnerabilities that CVE has cataloged, that Secunia has not included. CVE is a de facto industry standard, but also a drastically incomplete one. At the bare minimum, Secunia should have a 100% mapping to them and they do not. This further calls into question any statistics generated off this data set, when they knowingly ignore such a large number of vulnerabilities.

From Remote
From remote describes other vulnerabilities where the attacker is not required to have access to the system or a local network in order to exploit the vulnerability. This category covers services that are acceptable to be exposed and reachable to the Internet (e.g. HTTP, HTTPS, SMTP). It also covers client applications used on the Internet and certain vulnerabilities where it is reasonable to assume that a security conscious user can be tricked into performing certain actions.

Classification of where a vulnerability can be exploited from is important, as it heavily factors into criticality, either via common usage or through scoring systems such as CVSS. In their methodology, we see that Secunia does not make a distinction between ‘remote’ and ‘context-dependent’ (or ‘user-assisted’ to some). This means that the need for user interaction is not factored into an issue; ultimately, scoring and statistics are based only on network, adjacent network, or local vectors.
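To show how little room CVSSv2 leaves for user interaction, here is a small sketch of the v2 base score formula using the constants from the CVSSv2 specification. The common way to reflect “the user has to open a file or click a link” is to bump Access Complexity from Low to Medium, which drops a full remote code execution issue from 10.0 to 9.3:

# CVSSv2 base score sketch, using the constants from the v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # Confidentiality / Integrity / Availability

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Full remote code execution, no user interaction (AV:N/AC:L/Au:N/C:C/I:C/A:C)
print(cvss2_base("N", "L", "N", "C", "C", "C"))   # 10.0
# Same impact, but "click this file" style user interaction is usually
# expressed only as Access Complexity Medium (AV:N/AC:M/Au:N/C:C/I:C/A:C)
print(cvss2_base("N", "M", "N", "C", "C", "C"))   # 9.3

That 0.7 difference is essentially the entire distinction between ‘remote’ and ‘context-dependent’ as far as the scoring is concerned, which is why a classification scheme that ignores user interaction loses real information.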

Secunia further breaks down their classification in the appendix under “Secunia Vulnerability Criticality Classification“. However, it is important to note that their breakdown does not really jibe with any other scoring system. Looking past the flaw of using the word ‘critical’ in all five classifications, the distinction between ‘Extremely Critical’ and ‘Highly Critical’ is minor; based on their descriptions, it appears to hinge solely on whether Secunia is aware of exploit code existing for the issue. This mindset is straight out of the mid 90s in regards to threat modeling. In today’s landscape, if details are available about a vulnerability, it is a given that a skilled attacker can write or purchase an exploit for the issue within a few days, for a majority of disclosed issues. In many cases, even when details aren’t public but a patch is, that is enough to reliably reverse the patch and produce working exploit code in a short amount of time. Finally, neither of these designations distinguishes whether user interaction is required; in each case, it may or may not be. In reality, I imagine the difference between ‘Extremely’ and ‘Highly’ is supposed to be based on whether exploitation is happening in the wild at the time of disclosure (i.e. zero day).

Now that we have determined their statistics cannot be reproduced, use a flawed methodology, and are based on drastically incomplete data, let’s examine their conclusions anyway!


The blog announcing the report is titled “1,208 vulnerabilities in the 50 most popular programs – 76% from third-party programs” and immediately calls into question their perspective. Reading down a bit, we find out what they mean by “third-party programs”:

“And the findings in the Secunia Vulnerability Review 2014 support that, once again, the biggest vulnerability threat to corporate and private security comes from third-party – i.e. non-Microsoft – programs.”

Unfortunately, this is not the definition of a third-party program used by most in our industry. On a more general level, a “third-party software component” is “a reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform” (Wikipedia). In the world of VDBs, we frequently refer to a third-party component as a ‘library‘ that is integrated into a bigger package. For example, Adobe Reader 10, found on many desktop computers, is built on Adobe’s own code but also on as many as 212 other pieces of software. The notion that “non-Microsoft” software is “third-party” is odd, for lack of a better word, and shows the mindset and perspective of Secunia. It completely discounts users of Apple, Linux, VMs (e.g. Oracle, VMware, Citrix), and mobile devices, among others. Such a Microsoft-centric report should clearly be labeled as such, not as a general vulnerability report.

In the Top 50 programs, a total of 1,208 vulnerabilities were discovered in 2013. Third-party programs were responsible for 76% of those vulnerabilities, although these programs only account for 34% of the 50 most popular programs on private PCs. The share of Microsoft programs (including the Windows 7 operating system) in the Top 50 is a prominent 33 products – 66%. Even so, Microsoft programs are only responsible for 24% of the vulnerabilities in the Top 50 programs in 2013.

This is apparently aiming for the most convoluted summary award. I really can’t begin to describe how poorly this comes across. If you want to know the ‘Top 50 programs’, you have to read down to page 18 of the PDF and then resolve a lot of questions, some of which will be touched on below. When you read the list and see that several ‘Microsoft’ programs actually had 0 vulnerabilities, it calls into question the “prominent 33 products” and shows how the 66% figure is incorrectly weighted.

“However, another very important security factor is how easy it is to update Microsoft programs compared to third-party programs.” — Secunia CTO, Morten R. Stengaard.

When debunking vulnerability statistics, I tend to focus on the actual numbers. This is a case where I simply have to branch out and question how a ‘CTO’ could make this absurd statement. In one sentence, he implies that updating Microsoft programs is easy, while third-party programs (i.e. non-Microsoft programs, per their definition) are not. Apparently Mr. Stengaard does not use Oracle Java, Adobe Flash Player, Adobe AIR, Adobe Reader, Mozilla Firefox, Mozilla Thunderbird, Google Chrome, Opera, or a wide range of other non-Microsoft desktop software, all of which have the same one-click patching/upgrade ability. Either Mr. Stengaard is not qualified to speak on this topic, or he is being extremely disingenuous in his characterization of non-Microsoft products to suit the needs of this report and their patch management business model. If he means that patching Windows is easier on an enterprise scale (e.g. via SCCM or WSUS), that is frequently true, but such qualifications should be made clear.

This is a case where using a valid and accepted definition of ‘third-party programs’ (e.g. a computing library) would make this quote more reasonable. Trying to upgrade ffmpeg, libav, or WebKit in the context of the programs that rely on them as libraries is not something that can be done by the average user. The problem is further compounded when portions of desktop software are used as a library in another program, such as AutoCad which appears in the Adobe Reader third-party license document linked above. However, these are the kinds of distinctions that any VDB should be fully aware of, and be able to disclaim and explain more readily.


Moving on to the actual ‘Secunia Vulnerability Review 2014‘ report, the very first line opens up a huge can of worms as the number is incorrect and entirely misleading. The flawed methodology used to generate the statistic cascades down into a wide variety of other incorrect conclusions.

The absolute number of vulnerabilities detected was 13,073, discovered in 2,289 products from 539 vendors.

It is clear that a significant number of vulnerabilities are being counted multiple times. While this number is generated from Secunia’s internal ‘vulnerability count’ associated with each advisory, they miss the most obvious flaw: many of their advisories cover the exact same vulnerability. Rather than abstracting so that one advisory is updated to reflect additional products impacted, Secunia releases additional advisories. This is immediately visible in cases where a protocol is found to have a vulnerability, such as the “TLS / DTLS Protocol CBC-mode Ciphersuite Timing Analysis Plaintext Recovery Cryptanalysis Attack” (OSVDB 89848). This one vulnerability impacts any product that implements the protocol, so it is expected to be widespread. As such, that one vulnerability tracks to 175 different Secunia advisories. This is not a case where 175 different vendors coded the same vulnerability, or where the issue is distinct in their products. This is a case of a handful of base products (e.g. OpenSSL, GnuTLS, PolarSSL) implementing the flawed protocol, and hundreds of vendors bundling that software as part of their own.

While that is an extreme example, the problem is front-and-center due to their frequent multi-advisory coverage of the same issue. Consider that one OpenSSL vulnerability may be covered in 11 Secunia advisories. Then look at other products that are frequently used as libraries or found on multiple Linux distributions, each of which gets its own advisory. Below is a quick chart showing examples of a single vulnerability in several products, along with the number of Secunia advisories that reference that one vulnerability (a short de-duplication sketch follows the chart):

Single vulnerability (CVE)   Product          # of Secunia advisories
CVE-2013-6367                Linux Kernel     15
CVE-2013-6644                Google Chrome    5
CVE-2013-5609                Mozilla          14
CVE-2013-4408                Samba            10
CVE-2013-6415                Ruby on Rails    10
CVE-2014-0368                Oracle Java      27
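As promised, here is a minimal de-duplication sketch. The advisory IDs and mappings below are made-up placeholders, not real Secunia data; the point is simply that summing per-advisory “vulnerability counts” inflates the total, while collapsing on a unique identifier does not:

# Sketch: per-advisory counting vs. unique-identifier counting.
# The advisory IDs and mappings below are made-up placeholders for illustration.
from collections import defaultdict

advisory_to_cves = {
    "SA00001": ["CVE-2013-6367"],                   # kernel advisory
    "SA00002": ["CVE-2013-6367"],                   # same flaw, different distro
    "SA00003": ["CVE-2013-6367"],                   # same flaw, appliance vendor
    "SA00004": ["CVE-2013-5609", "CVE-2013-6367"],  # bundle covering two flaws
}

# "Vulnerability count" the inflated way: sum the per-advisory counts.
inflated = sum(len(cves) for cves in advisory_to_cves.values())

# The way a statistic should be generated: count unique identifiers.
unique = len({cve for cves in advisory_to_cves.values() for cve in cves})

cve_to_advisories = defaultdict(set)
for adv, cves in advisory_to_cves.items():
    for cve in cves:
        cve_to_advisories[cve].add(adv)

print(inflated, unique)                        # 5 vs 2
print({c: len(a) for c, a in cve_to_advisories.items()})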

This problem is further compounded when you consider the number of vulnerabilities in those products in 2013, where each one received multiple Secunia advisories. This table shows the products from above, and the number of unique vulnerabilities as tracked by OSVDB for that product in 2013 that had at least one associated Secunia advisory:

Software         # of 2013 vulnerabilities w/ a Secunia reference
Linux Kernel     131
Google Chrome    267
Mozilla          148
Samba            10
Ruby on Rails    14
Oracle Java      194

It is easy to see how Secunia quickly jumped to 13,073 vulnerabilities while only issuing 3,327 advisories in 2013. If there is any doubt about vulnerability count inflation, consider these four Secunia advisories that cover the same set of vulnerabilities, each titled “WebSphere Application Server Multiple Java Vulnerabilities“. Secunia created four advisories for the same vulnerabilities simply to abstract based on the major versions affected, as seen in this table:

Secunia Advisory   Affected versions
56778              8.5.0.0 through 8.5.5.1
56852              8.0.0.0 through 8.0.0.8
56891              7.0.0.0 through 7.0.0.31
56897              6.1.0.0 through 6.1.0.47

The internal ‘vulnerability counts’ for these advisories are very likely 25, 25, 25, and 27, adding up to 102. Applied against IBM, 27 vulnerabilities are inflated to 102. Then consider that IBM has several hundred products that use Java, OpenSSL, and other common software. It is easy to see how Secunia could jump to erroneous conclusions:

The 32% year-on-year increase in the total number of vulnerabilities from 2012 to 2013 is mainly due to a vulnerability increase in IBM products of 442% (from 772 vulnerabilities in 2012 to 4,181 in 2013).

This unfair and potentially libelous statement was picked up by InformationWeek in an article titled “IBM Software Vulnerabilities Spiked In 2013“, which Secunia was quick to Tweet about.


The next set of statistics is convoluted on the surface, but even more confusing when you read the details and explanations for how they were derived:

Numbers – Top 50 portfolio
The number of vulnerabilities in the Top 50 portfolio was 1,208, discovered in 27 products from 7 vendors plus the most used operating system, Microsoft Windows 7.

To assess how exposed endpoints are, we analyze the types of products typically found on an endpoint. Throughout 2013, anonymous data has been gathered from scans of the millions of private computers which have the Secunia Personal Software Inspector (PSI) installed. Secunia data shows that the computer of a typical PSI user has an average of 75 programs installed on it. Naturally, there are country- and region-based variations regarding which programs are installed. Therefore, for the sake of clarity, we chose to focus on a representative portfolio of the 50 most common products found on a typical computer and the most used operating system, and analyze the state of this portfolio and operating system throughout the course of 2013. These 50 programs are comprised of 33 Microsoft programs and 17 non-Microsoft (third-party) programs.

Reading down to page 18 of the full report, you see the table listing the “Top 50” software installed, as determined by their PSI software. On the list is a wide variety of software that is either a component of Windows (meaning it comes installed by default but shows up in the “Programs” list, e.g. Microsoft Visual C++ Redistributable) or, in a few cases, third-party software (e.g. Google Toolbar), many of which have 0 associated vulnerabilities. In other cases they include product driver support tools (e.g. Realtek AC 97 Update and Remove Driver Tool) or ActiveX components that are generally not installed via traditional means (e.g. comdlg32 ActiveX Control). With only approximately half of the Top 50 software having vulnerabilities at all, and different types of software components mixed together, the summary put forth by Secunia is misleading. Since they include Google Chrome on the list, by their own logic they should also include WebKit, which is a third-party library wrapped into Chrome, just as they include ‘Microsoft Powerpoint Viewer’ (33), which is a component of ‘Microsoft Powerpoint’ (14) and does not install separately.

Perhaps the most disturbing thing about this Top 50 summary is that Secunia only counts 7 vendors in their list. Reading through the list carefully, you see that there are actually 10 vendors represented: Microsoft, Adobe, Oracle, Mozilla, Google, Realtek, Apple, Piriform (CCleaner), VideoLAN, and Flexera (InstallShield). This seriously calls into question any conclusions put forth by Secunia regarding their Top 50 list and challenges their convoluted and irreproducible methodology.
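Counting distinct vendors is a one-liner if the product-to-vendor mapping is kept alongside the list. A tiny sketch follows; the product names are partly guesses on my part, but the ten vendors are exactly the ones named above:

# Sketch: count distinct vendors in a "Top 50" style list.
# Product names are illustrative; the vendors are the ten identified in this post.
product_vendor = {
    "Windows 7": "Microsoft", "Adobe Reader": "Adobe", "Java": "Oracle",
    "Firefox": "Mozilla", "Chrome": "Google",
    "Realtek AC 97 Driver Tool": "Realtek", "QuickTime": "Apple",
    "CCleaner": "Piriform", "VLC": "VideoLAN", "InstallShield": "Flexera",
}
vendors = set(product_vendor.values())
print(len(vendors), sorted(vendors))   # 10 distinct vendors, not 7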


Rather than offer a rebuttal line by line for the rest of the report and blog, we’ll just look at some of the included statistics that are questionable, wrong, or just further highlight that Secunia has missed some vulnerabilities.

In 2013, 727 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari.

By our count, there were at least 756 vulnerabilities in these browsers: Google Chrome (295), Mozilla Firefox (155), Internet Explorer (138), Opera (9), Apple Safari (8 on desktop, 4 on mobile), and WebKit (a component of Chrome and Safari, 147). The Opera count is likely very low, though. In July 2013, Opera shipped its first browser based on Blink, so it has very likely been affected by the vast majority of Google’s Blink vulnerability fixes. However, Opera is not good at clearly reporting vulnerabilities, which likely accounts for the low count that both we and Secunia have; something they should have disclaimed.
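The WebKit problem is easy to express as data: shared components have to be folded back into each browser that embeds them before any per-browser comparison means much. A rough sketch using the counts above, and assuming, purely for illustration, that every WebKit issue affects both Chrome and Safari of that era:

# Sketch: fold shared-component (library) vulnerabilities back into the
# products that embed them. Counts are the ones cited in this post; the
# assumption that every WebKit issue affects both browsers is illustrative.
browser_counts = {
    "Google Chrome": 295,
    "Mozilla Firefox": 155,
    "Internet Explorer": 138,
    "Opera": 9,
    "Apple Safari": 12,   # 8 desktop + 4 mobile
}
component_counts = {"WebKit": 147}
embeds = {"WebKit": ["Google Chrome", "Apple Safari"]}

effective = dict(browser_counts)
for component, count in component_counts.items():
    for product in embeds[component]:
        effective[product] += count

print(effective)
# e.g. Chrome's effective exposure is 295 + 147 = 442 under this assumption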

In 2013, 70 vulnerabilities were discovered in the 5 most popular PDF readers: Adobe Reader, Foxit Reader, PDF-XChange Viewer, Sumatra PDF and Nitro PDF Reader.

By our count, there were at least 76 vulnerabilities in these PDF readers: Adobe Reader (69), Foxit (2), PDF-XChange (1), Sumatra (0), and Nitro (4).

The actual vulnerability count in Microsoft programs was 192 in 2013; 128.6% higher than in 2012.

Based on our data, there were 363 vulnerabilities in Microsoft software in 2013, not 192. This is up from 207 in 2012, an increase of roughly 75% (the 2013 total is about 175% of the 2012 figure).

As in 2012, not many zero-day vulnerabilities were identified in 2013: 10 in total in the Top 50 software portfolio, and 14 in All products.
A zero-day vulnerability is a vulnerability that is actively exploited by hackers before it is publicly known, and before the vendor has published a patch for it.

By that definition, which we share, we tracked 72 vulnerabilities that were “discovered in the wild” in 2013. To be fair, our number is considerably higher because we actually track mobile vulnerabilities, something Secunia typically ignores. More curious is that, based on a cursory search, we find 13 of their advisories covering 17 vulnerabilities that qualify as 0-day by their definition, suggesting they do not have a method for accurately counting them: SA51820 (1), SA52064 (1), SA52116 (2), SA52196 (2), SA52374 (2), SA52451 (1), SA53314 (1), SA54060 (1), SA54274 (1), SA54884 (2), SA55584 (1), SA55611 (1), and SA55809 (1).
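For anyone who wants to check the arithmetic, the per-advisory counts listed above sum as follows. It is a trivial script, but it is exactly the kind of reproducibility the report itself lacks:

# Sum the per-advisory 0-day counts listed above.
zero_day_advisories = {
    "SA51820": 1, "SA52064": 1, "SA52116": 2, "SA52196": 2, "SA52374": 2,
    "SA52451": 1, "SA53314": 1, "SA54060": 1, "SA54274": 1, "SA54884": 2,
    "SA55584": 1, "SA55611": 1, "SA55809": 1,
}
print(len(zero_day_advisories), sum(zero_day_advisories.values()))  # 13 advisories, 17 vulns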

Find out how quickly software vendors issue fixes – so-called patches – when vulnerabilities are discovered in All products.

This comes from their “Time to Patch for all products” summary page. The statement seems pretty clear: how fast do vendors issue fixes when vulnerabilities are discovered? However, Secunia does not track that specifically! The more appropriate question that can be answered by their data is “When are patches available at or after the time of public disclosure?” These are two very different metrics. The information on this page is generated using PSI/CSI statistics, so if a vulnerability is disclosed and a fix is already available at that time, it counts as patched within 24 hours. It doesn’t factor in that the vendor may have spent months fixing the issue before the disclosure and patch.
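The difference between the two metrics is easiest to see with a couple of records. The field names and dates below are invented purely for illustration:

# Sketch: "patch available at disclosure" vs. actual vendor time-to-fix.
# Records and field names are hypothetical, for illustration only.
from datetime import date

records = [
    # vendor notified, publicly disclosed, patch released
    {"notified": date(2013, 1, 10), "disclosed": date(2013, 6, 1), "patched": date(2013, 6, 1)},
    {"notified": date(2013, 3, 5),  "disclosed": date(2013, 3, 5), "patched": date(2013, 4, 20)},
]

for r in records:
    days_after_disclosure = (r["patched"] - r["disclosed"]).days
    vendor_time_to_fix = (r["patched"] - r["notified"]).days
    # The first record counts as "patched within 24 hours" by a PSI/CSI style
    # metric, even though the vendor actually took 142 days to fix it.
    print(days_after_disclosure, vendor_time_to_fix)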


In conclusion, while we appreciate companies sharing vulnerability intelligence, the Secunia 2013 vulnerability report is ultimately fluff that provides no benefit to organizations. The flawed methodology and their inability to parse their own data mean that the conclusions cannot be relied upon for making business decisions. When generating vulnerability statistics, a wide variety of bias will always be present. It is absolutely critical that your vulnerability aggregation methodology be clearly explained so that the results are qualified and have more meaning.

I could do this all day… (Poor vuln stats from @GFISoftware)

Despite the talk given at BlackHat 2013 by Steve Christey and me, companies continue to produce pedestrian and inaccurate statistics. This batch comes from Cristian Florian at GFI Software and offers little more than confusing and misleading statistics. Florian falls into many of the traps and pitfalls outlined previously.

These are compiled from data from the National Vulnerability Database (NVD).

There’s your first problem: using a drastically inferior data set compared to what is available. The next bit really invalidates the rest of the article:

On average, 13 new vulnerabilities per day were reported in 2013, for a total of 4,794 security vulnerabilities: the highest number in the last five years.

This is laughable. OSVDB cataloged 10,472 disclosed vulnerabilities for 2013 (an average of 28 a day), meaning these statistics were generated with less than half of the known vulnerabilities. 2013 was our third year breaking 10,000 vulnerabilities, while other databases have at most a single such year (2006), if any at all. Seriously, what is the point of generating statistics when you knowingly use a data set lacking so much? Given that 2012 was another ‘10k’ year, the statement about 2013 being the highest number in the last five years is also wrong.

Around one-third of these vulnerabilities were classified ‘high severity’, meaning that an exploit for these vulnerabilities would have a high impact on the attacked systems.

By whom? Who generated these CVSS scores exactly, and why isn’t that disclaimed in the article? Why no mention of the ‘CVSS 10’ scoring problem, where VDBs must default to that score for a completely unspecified issue? With a significant number of vulnerabilities either scored by vendors with a history of incorrect scoring, or defaulted to ‘10’ by VDBs for unspecified issues, these numbers are meaningless and skewed.
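A quick sketch of why that default matters: if even a modest fraction of entries are vague disclosures scored 10.0 by policy, the “high severity” share moves noticeably. The score distribution below is invented for illustration:

# Sketch: how defaulting unspecified issues to CVSS 10.0 skews severity stats.
# The score distribution is invented for illustration.
scored = [4.3] * 400 + [6.8] * 300 + [9.3] * 200   # issues with real scores
unspecified = [10.0] * 100                          # vague disclosures, defaulted to 10.0

def high_share(scores):
    return sum(s >= 7.0 for s in scores) / len(scores)

print(f"{high_share(scored):.0%}")                 # 22% high severity
print(f"{high_share(scored + unspecified):.0%}")   # 30% once the defaults are mixed in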

The vulnerabilities were discovered in software provided by 760 different vendors, but the top 10 vendors were found to have 50% of the vulnerabilities:

I would imagine Oracle is accurate on this table, as we cataloged 570 vulnerabilities from them in 2013. However, the rest of the table is inaccurate because #2 is wrong. You say Cisco with 373; I say ffmpeg with 490. You say #10 is HP with 112, and I counter that WebKit had 139 (which in turn adds to Apple and Google, among others). You do factor in that whole “software library” thing, right? For example, what products incorporate ffmpeg that have their own vulnerabilities? These are contenders for the #1 and #2 spots on the table.

Most Targeted Operating Systems in 2013

As we frequently see, there is no mention of severity here. With 363 Microsoft vulnerabilities in 2013 compared to 161 Linux Kernel issues, impact and severity are important to look at. Privilege escalation and code execution are typical for Microsoft, while authenticated local denial of service accounts for 22% of the Linux issues (and only 1% for Microsoft).

In 2013 web browsers continued to justle – as in previous years – for first place on the list of third-party applications with the most security vulnerabilities. If Mozilla Firefox had the most security vulnerabilities reported last year and in 2009, Google Chrome had the “honor” in 2010 and 2011, it is now the turn of Microsoft Internet Explorer to lead with 128 vulnerabilities, 117 of them ‘critical’.

We already know your numbers are horribly wrong, as you don’t factor in WebKit vulnerabilities that affect multiple browsers. Further, what is with the sorting of this table putting MSIE up top despite it not being reported with the most vulnerabilities?

Sticking to just the browsers, Google Chrome had 297 reported vulnerabilities in 2013 and that does not count additional WebKit issues that very likely affect it. Next is Mozilla and then Microsoft IE with Safari at the lowest (again, ignoring the WebKit issue).

An open letter to Ashley Carman, @SCMagazine, and @SkyboxSecurity

[Sent to Ashley directly via email. Posting for the rest of the world as yet another example of how vulnerability statistics are typically done poorly. In this case, a company that does not aggregate vulnerabilities themselves, and has no particular expertise in vulnerability metrics weighs in on 2013 “statistics”. They obviously did not attend Steve Christey and my talk at BlackHat last year titled “Buying Into the Bias: Why Vulnerability Statistics Suck“. If we do this talk again, we have a fresh example to use courtesy of Skybox.]

[Update: SkyboxSecurity has quickly written a second blog in response to this one, clarifying a lot of their methodology. No word from Carman or SC Magazine. Not surprised; they have a dismal history as far as printing corrections, retractions, or even addressing criticism.]


Ashley,

In your recent article “Microsoft leads vendors with most critical vulnerabilities“, you cite research that is factually incorrect, and I fully expect a retraction to be printed. In fact, the list of errata in this article is considerably longer than the article itself. Some of this may seem to be semantics to you, but I assure you that in our industry they are anything but. Read down, where I show you how their research is *entirely wrong* and Microsoft is not ‘number one’ here.

1. If Skybox is only comparing vendors based on their database, as maps to CVE identifiers, then their database for this purpose is nothing but a copy of CVE. It is important to note this because aggregating vulnerability information is considerably more demanding than aggregating a few databases that do that work for you.

2. You say “More than half of the company’s 414 vulnerabilities were critical.” First, you do not disclaim that this number is limited to 2013 until your last paragraph. Second, Microsoft had 490 disclosed vulnerabilities in 2013 according to OSVDB.org, apparently not one of the “20” sources Skybox checked. And we don’t claim to have all of the disclosed vulnerabilities.

3. You cite “critical vulnerability” and refer to Microsoft’s definition of that as “one that allows code execution without user interaction.” Yet Skybox did not define ‘critical’. This is amateur hour in the world of vulnerabilities. For example, if Microsoft’s definition were taken at face value, then code execution in a sandbox would still qualify, while being considerably less severe than code execution without one. If you go for what I believe is the ‘spirit’ of the research, vulnerabilities with a CVSS score of 10.0 (network, no user interaction, no authentication, full code execution completely impacting confidentiality, integrity, and availability), then Microsoft had 10 vulnerabilities. Yes, only 10. If you add the ‘user interaction’ component, giving a CVSS score of 9.3, they had 176. That is closer to the ‘216’ Skybox is claiming. So again, how can you cite their research when they don’t define exactly what ‘critical’ is? As we frequently see, companies like to throw around vulnerability statistics but give no way to reproduce their findings.

4. You say, “The lab’s findings weren’t particularly surprising, considering the vendors’ market shares. Microsoft, for instance, is the largest company and its products are the most widely used.” This is completely subjective and arbitrary. While Microsoft captures the desktop OS market share, they do not capture the browser share for example. Further, like all of the vendors in this study, they use third-party code from other people. I point this line out because when you consider that another vendor/software is really ‘number one’, it makes this line seem to be the basis of an anecdotal fallacy.

5. You finish by largely parroting Skybox, “Skybox analyzed more than 20 sources of data to determine the number of vulnerabilities that occurred in 2013. The lab found that about 700 critical vulnerabilities occurred in 2013, and more than 500 of them were from four vendors.” We’ve covered the ‘critical’ fallacy already, as they never define what that means. I mentioned the “CVE” angle above. Now, I question why you didn’t challenge them further on this. As a security writer, you should be suspicious of the notion that “20” sources has any meaning in that context. Did they simply look to 20 other vulnerability databases (that do all the initial data aggregation) and then aggregate them? Did they look at 20 unique sources of vulnerability information themselves (e.g. the MS / Adobe / Oracle advisory pages)? This matters greatly. Why? OSVDB.org monitors over 1,500 sources for vulnerability information. Monitoring CVE, BID, Secunia, and X-Force (other large vulnerability databases) is considered to be 4 of those sources. So what does 20 mean exactly? To me, it means they are amateurs at best.

6. Jumping to the Skybox blog, “Oracle had the highest total number of vulnerabilities at 568, but only 18 percent of their total vulnerabilities were deemed critical.” This is nothing short of a big red warning flag to anyone familiar with vulnerabilities. This line alone should have made you steer clear from their ‘research’ and demanded you challenge them. It is well known that Oracle does not follow the CVSS standards when scoring a majority of their vulnerabilities. It has been shown time and time again that what they scored is not grounded in reality, when compared to the researcher report that is eventually released. Every aspect of a CVSS score is frequently botched. Microsoft and Adobe do not have that reputation; they are known for generally providing accurate scoring. Since that scoring is the quickest way to determine criticality, it is important to note here.

7. Now for what you are likely waiting for. If not Microsoft, who? Before I answer that, let me qualify my statements since no one else at this table did. Based on vulnerabilities initially disclosed in 2013, that have a CVSS score of 10.0 (meaning full remote code execution without user interaction), we get this:

Oracle: 48
Adobe: 29
Microsoft: 10

Two vendors place higher than Microsoft based on this. Now, if we consider “context-dependent code execution”, meaning that user interaction is required but it leads to full code execution (e.g. clicking a malicious PDF/DOC/GIF, which we base on a 9.3 CVSS score: CVSS2#AV:N/AC:M/Au:N/C:C/I:C/A:C), along with full remote code execution (CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C), we get the following:

Microsoft: 176
Adobe: 132
Oracle: 122

I know, Microsoft is back on top. But wait…

ffmpeg: 326
libav: 286

Do you like apples?

Brian Martin
OSF / OSVDB.org

Buying Into the Bias: Why Vulnerability Statistics Suck

Last week, Steve Christey and I gave a presentation at Black Hat Briefings 2013 in Las Vegas about vulnerability statistics. We submitted a brief whitepaper on the topic, reproduced below, to accompany the slides that are now available.

[Attached whitepaper PDF: buying_into_bias]


Buying Into the Bias: Why Vulnerability Statistics Suck
By Steve Christey (MITRE) and Brian Martin (Open Security Foundation)
July 11, 2013

Academic researchers, journalists, security vendors, software vendors, and professional analysts often analyze vulnerability statistics using large repositories of vulnerability data, such as “Common Vulnerabilities and Exposures” (CVE), the Open Sourced Vulnerability Database (OSVDB), and other sources of aggregated vulnerability information. These statistics are claimed to demonstrate trends in vulnerability disclosure, such as the number or type of vulnerabilities, or their relative severity. Worse, they are typically misused to compare competing products to assess which one offers the best security.

Most of these statistical analyses demonstrate a serious fault in methodology, or are pure speculation in the long run. They use easily available but drastically misunderstood data to craft irrelevant questions based on wild assumptions, while never figuring out (or even asking the sources about) the limitations of the data. This leads to a wide variety of bias that typically goes unchallenged, and ultimately forms statistics that make headlines and, far worse, are used to justify security budgets and spending.

As maintainers of two well-known vulnerability information repositories, we’re sick of hearing about research that is quickly determined to be sloppy after it’s been released and gained public attention. In almost every case, the research casts aside any logical approach to generating the statistics. They frequently do not release their methodology, and they rarely disclaim the serious pitfalls in their conclusions. This stems from their serious lack of understanding about the data source they use, and how it operates. In short, vulnerability databases (VDBs) are very different and very fickle creatures. They are constantly evolving and see the world of vulnerabilities through very different glasses.

This paper and its associated presentation introduce a framework in which vulnerability statistics can be judged and improved. The better we get about talking about the issues, the better the chances of truly improving how vulnerability statistics are generated and interpreted.

Bias, We All Have It

Bias is inherent in everything humans do. Even the most rigorous and well-documented process can be affected by levels of bias that we simply do not understand are working against us. This is part of human nature. As with all things, bias is present in the creation of the VDBs, how the databases are populated with vulnerability data, and the subsequent analysis of that data. Not all bias is bad; for example, VDBs have a bias to avoid providing inaccurate information whenever possible, and each VDB effectively has a customer base whose needs directly drive what content is published.

Bias comes in many forms that we see as strongly influencing vulnerability statistics, via a number of actors involved in the process. It is important to remember that VDBs catalog the public disclosure of security vulnerabilities by a wide variety of people with vastly different skills and motivations. The disclosure process varies from person to person and introduces bias for sure, but even before the disclosure occurs, bias has already entered the picture.

Consider the general sequence of events that lead to a vulnerability being cataloged in a VDB.

  1. A researcher chooses a piece of software to examine.
  2. Each researcher operates with a different skill set and focus, using tools or techniques with varying strengths and weaknesses; these differences can impact which vulnerabilities are capable of being discovered.
  3. During the process, the researcher will find at least one vulnerability, often more.
  4. The researcher may or may not opt for vendor involvement in verifying or fixing the issue.
  5. At some point, the researcher may choose to disclose the vulnerability. That disclosure will not be in a common format, may suffer from language barriers, may not be technically accurate, may leave out critical details that impact the severity of the vulnerability (e.g. administrator authentication required), may be a duplicate of prior research, or introduce a number of other problems.
  6. Many VDBs attempt to catalog all public disclosures of information. This is a “best effort” activity, as there are simply too many sources for any one VDB to monitor, and accuracy problems can increase the expense of analyzing a single disclosure.
  7. If the VDB maintainers see the disclosure mentioned above, they will add it to the database if it meets their criteria, which is not always public. If the VDB does not see it, they will not add it. If the VDB disagrees with the disclosure (i.e. believes it to be inaccurate), they may not add it.

By this point, there are a number of criteria that may prevent the disclosure from ever making it into a VDB. Without using the word, the above steps have introduced several types of bias that impact the process. These biases carry forward into any subsequent examination of the database in any manner.

Types of Bias

Specific to the vulnerability disclosure aggregation process that VDBs go through every day, there are four primary types of bias that enter the picture. Note that while each of these can be seen in researchers, vendors, and VDBs, some are more common to one than the others. There are other types of bias that could also apply, but they are beyond the scope of this paper.

Selection bias covers what gets selected for study. In the case of disclosure, this refers to the researcher’s bias in selecting software and the methodology used to test the software for vulnerabilities; for example, a researcher might only investigate software written in a specific language and only look for a handful of the most common vulnerability types. In the case of VDBs, this involves how the VDB discovers and handles vulnerability disclosures from researchers and vendors. Perhaps the largest influence on selection bias is that many VDBs monitor a limited source of disclosures. It is not necessary to argue what “limited” means. Suffice it to say, no VDB is remotely complete on monitoring every source of vulnerability data that is public on the net. Lack of resources – primarily the time of those working on the database – causes a VDB to prioritize sources of information. With an increasing number of regional or country-based CERT groups disclosing vulnerabilities in their native tongue, VDBs have a harder time processing the information. Each vulnerability that is disclosed but does not end up in the VDB, ultimately factors into statistics such as “there were X vulnerabilities disclosed last year”.

Publication bias governs what portion of the research gets published. This ranges from “none”, to sparse information, to incredible technical detail about every finding. Somewhere between selection and publication bias, the researcher will determine how much time they are spending on this particular product, what vulnerabilities they are interested in, and more. All of this folds into what gets published. VDBs may discover a researcher’s disclosure, but then decide not to publish the vulnerability due to other criteria.

Abstraction bias is a term that we crafted to explain the process that VDBs use to assign identifiers to vulnerabilities. Depending on the purpose and stated goal of the VDB, the same 10 vulnerabilities may be given a single identifier by one database, and 10 identifiers by a different one. This level of abstraction is an absolutely critical factor when analyzing the data to generate vulnerability statistics. This is also the most prevalent source of problems for analysis, as researchers rarely understand the concept of abstraction, why it varies, and how to overcome it as an obstacle in generating meaningful statistics. Researchers will use whichever abstraction is most appropriate or convenient for them; after all, there are many different consumers for a researcher advisory, not just VDBs. Abstraction bias is also frequently seen in vendors, and occasionally researchers, in the way they disclose one vulnerability multiple times as it affects different software that bundles another vendor’s software.

Measurement bias refers to potential errors in how a vulnerability is analyzed, verified, and catalogued. For example, with researchers, this bias might be in the form of failing to verify that a potential issue is actually a vulnerability, or in over-estimating the severity of the issue compared to how consumers might prioritize the issue. With vendors, measurement bias may affect how the vendor prioritizes an issue to be fixed, or in under-estimating the severity of the issue. With VDBs, measurement bias may also occur if analysts do not appropriately reflect the severity of the issue, or if inaccuracies are introduced while studying incomplete vulnerability disclosures, such as missing a version of the product that is affected by the vulnerability. It could be argued that abstraction bias is a certain type of measurement bias (since it involves using inconsistent “units of measurement”), but for the purposes of understanding vulnerability statistics, abstraction bias deserves special attention.

Measurement bias, as it affects statistics, is arguably the domain of VDBs, since most statistics are calculated using an underlying VDB instead of the original disclosures. Since VDBs are the primary sources of vulnerability data aggregation, several factors come into play when they perform database updates.

Why Bias Matters, in Detail

These forms of bias can work together to create interesting spikes in vulnerability disclosure trends. To the VDB worker, they are typically apparent and sometimes amusing. To an outsider just using a data set to generate statistics, they can be a serious pitfall.

In August, 2008, a single researcher using rudimentary, yet effective methods for finding symlink vulnerabilities single-handedly caused a significant spike in symlink vulnerability disclosures over the past 10 years. Starting in 2012 and continuing up to the publication of this paper, a pair of researchers have significantly impacted the number of disclosures in a single product. Not only has this caused a huge spike for the vulnerability count related to the product, it has led to them being ranked as two of the top vulnerability disclosers since January, 2012. Later this year, we expect there to be articles written regarding the number of supervisory control and data acquisition (SCADA) vulnerabilities disclosed from 2012 to 2013. Those articles will be based purely on vulnerability counts as determined from VDBs, likely with no mention of why the numbers are skewed. One prominent researcher who published many SCADA flaws has changed his personal disclosure policy. Instead of publicly disclosing details, he now keeps them private as a competitive advantage for his new business.

Another popular place for vulnerability statistics to break down is vulnerability severity. Researchers and journalists like to mention the raw number of vulnerabilities in two products and try to compare their relative security. They frequently overlook the severity of the vulnerabilities and may not note that while one product had twice as many disclosures, a significant percentage of them were low severity. Further, they do not understand how the industry-standard CVSSv2 scoring system works, or the bias that can creep in when using it to score vulnerabilities. Considering that a vague disclosure with few actionable details will frequently be scored for the worst possible impact, that also drastically skews the severity ratings.

Conclusion

The forms of bias and how they may impact vulnerability statistics outlined in this paper are just the beginning. For each party involved, for each type of bias, there are many considerations that must be made. Accurate and meaningful vulnerability statistics are not impossible; they are just very difficult to accurately generate and disclaim.

Our 2013 BlackHat Briefings USA talk hopes to explore many of these points, outline the types of bias, and show concrete examples of misleading statistics. In addition, we will show how you can easily spot questionable statistics, and give some tips on generating and disclaiming good statistics.
