Tag Archives: DBIR

Response to Kenna Security’s Explanation of the DBIR Vulnerability Mess

Earlier this week, Michael Roytman of Kenna Security wrote a blog post with more details about the vulnerability section of the Verizon DBIR report, partially in response to my last blog here questioning how some of the data was generated and the conclusions put forth. The one real criticism I will note is that Roytman’s blog does not acknowledge or warn that the list of CVE IDs included in the DBIR had typos, causing the wrong IDs to be included. In the world of vulnerability databases, that unique ID is designed specifically to avoid such confusion. Carrying the wrong IDs undermines the integrity of the data being presented.

In addition to my comments, Roytman had a long call with Adrian Sanabria which led to generating a new set of data with a different scope. From the Kenna blog:

We had an excellent offline discussion in which he dove deeply into the assumptions of my work, asked thoughtful, deep questions in private, and together, we came up with a better metric for generating a top 10 vulnerabilities list. To address these issues, I scaled the total successful exploitation count for every vulnerability in 2015 by the number of observed occurrences of that vulnerability in Kenna’s aggregate dataset. Sifting through 265 million vulnerabilities gives us a top 10 list perhaps more in line with what was expected – but equally unexpected! The takeaway here is that datasets like the one explored in the DBIR might be noisy, might have false positives and the like, but carefully applied to your enterprise the additional context successful exploitation data lends to vulnerability management is priceless.
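
Before going further, it is worth pinning down what that re-scaling does. One plausible reading of “scaled… by the number of observed occurrences” is dividing each CVE’s successful exploitation count by how often that CVE is observed open in the aggregate dataset, so sheer prevalence doesn’t dominate the ranking. A minimal sketch under that assumption, with invented CVE choices and numbers:

```python
# One plausible reading of the normalization described above: divide raw
# successful-exploitation counts by observed prevalence. All numbers are
# invented for illustration; this is not Kenna's actual data.

exploitations = {            # successful exploitation events in 2015
    "CVE-2015-1637": 90_000_000,
    "CVE-2014-0160": 45_000_000,
    "CVE-2001-0540": 60_000_000,
}

occurrences = {              # times each CVE was observed open across assets
    "CVE-2015-1637": 1_200_000,
    "CVE-2014-0160": 300_000,
    "CVE-2001-0540": 2_500_000,
}

# Scale exploitation volume by prevalence so a CVE that is open on nearly
# every asset does not top the list on raw event count alone.
scaled = {
    cve: exploitations[cve] / occurrences[cve]
    for cve in exploitations
    if occurrences.get(cve)
}

for cve, score in sorted(scaled.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cve}: {score:,.1f} successful exploitations per open instance")
```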

I won’t go into much detail in this blog, but will say that I disagree with the statement that severely flawed data can produce takeaways that are “priceless”. Organizations acting on these top 10 lists may be spending time and resources chasing vulnerabilities that do not impact them, or that pose very little risk compared to other threats they face. That action most certainly has a price; one that can be enumerated to some degree due to the cost of the employee time required. With that in mind, let’s look at the methodology, which is spelled out in more detail than in the DBIR, before we consider the new top 10 list Roytman generated. First, his notes on the data examined:

The first is a convenience sample that includes 2,442,792 assets (defined as: workstations, servers, databases, ips, mobile devices, etc) and 264,912,235 vulnerabilities associated to those assets. The vulnerabilities are generated by 8 different scanners, they are: Beyond Security, Tripwire, McAfee VM, Qualys, Tenable, BeyondTrust, Rapid7, and OpenVAS . This dataset is used in determining remediation rates and the normalized open rate of vulnerabilities.

I am curious if it is normal practice to consider an IP address an asset, when the system behind the IP is generally considered the asset. Moving past that, one point that sticks in my mind is the set of tools that generate the data. From the list above, consider that Beyond Security claims to have “what is arguably the world’s most complete database of security vulnerabilities.” Click around their site and you see that “the AVDS database includes over 10,000 known vulnerabilities and the updates include discoveries by our own team and those discovered by corporate and private security teams around the world.” That is less than 25% of what CVE has, and less than 10% of what VulnDB has. They even show that they only cover 200 CVE IDs for 2016, as compared to the 1,474 open 2016 CVEs.

The list doesn’t specify which Tripwire product, includes McAfee Vulnerability Manager (declared End of Life in October of last year), and doesn’t specify which Qualys product. So it is a start as far as explaining what tools generate the data, but it still leaves a lot of guesswork.

Roytman describes the second data set used as:

The second is a convenience sample that includes 3,615,706,022 successful exploitation events which all take place in 2015 which come from Dell Secureworks’ Counter-Threat Unit and Alienvault’s Open Threat Exchange.

The third qualification, describing the methodology, is perhaps the most important, and was lacking in the DBIR:

Please note the methodology of data collection: Successful Exploitation is defined as one successful technical exploitation of a vulnerability on one machine at a particular timestamp. The event is defined as: 1. An asset has a known CVE open. 2. An attack come in that matches the signature for that CVE on that asset and 3. One or more IOCs are detected/correlated post attack. It is not necessarily a loss of a data, or even root on a machine. It is just the successful use of whatever condition is outlined in a CVE.
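
Restated, that definition chains three independent filters, and each one is a place where error can creep in. A rough reconstruction in code, using event structures I invented for illustration (nothing here reflects Kenna’s or Dell Secureworks’ actual schemas, and the quoted write-up specifies no correlation window):

```python
# Rough reconstruction of the quoted three-step definition of "successful
# exploitation". All record structures are invented for illustration, and
# the correlation window is an assumption; the write-up specifies none.

from datetime import timedelta

CORRELATION_WINDOW = timedelta(hours=24)  # assumed value

def successful_exploitations(assets, ids_alerts, iocs):
    """Return (asset_id, cve, timestamp) tuples matching all three conditions."""
    events = []
    for alert in ids_alerts:                          # condition 2: signature match
        asset = assets.get(alert["asset_id"])
        if asset is None or alert["cve"] not in asset["open_cves"]:
            continue                                  # condition 1: CVE open on asset
        post_attack_iocs = [
            ioc for ioc in iocs                       # condition 3: IOC post attack
            if ioc["asset_id"] == alert["asset_id"]
            and timedelta(0) <= ioc["time"] - alert["time"] <= CORRELATION_WINDOW
        ]
        if post_attack_iocs:
            events.append((alert["asset_id"], alert["cve"], alert["time"]))
    return events
```

Each of those filters (the CVE-to-signature mapping, the signature quality, and the unstated correlation window) quietly shapes the final counts, which is exactly what the rest of this section digs into.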

I have reached out to Michael and requested a sampling of data for two of the CVE IDs on his new list, and he is going to do so. In the meantime, I had a discussion with several people more familiar with IDS than myself and asked how they would detect attacks for CVE-2013-0229 and CVE-2001-0540, as examples. Detecting a specific type of packet meant to trigger these issues is one thing, but what is the threshold at which the IDS declares an attack, when exploitation requires “a large number of malformed Remote Desktop Protocol (RDP) requests“? Is there a specific number of packets required to flag an attack in progress? If too low, it may be prone to a high number of false positives. If too high, it may not detect a successful exploitation of the issue. That leads into the second part of the methodology: comparing the attack with “one or more IOCs [that] are detected/correlated post attack”. In this case, the IOC would presumably be the targeted service not responding, which could be detected a number of ways (e.g. probing the port, seeing specific errors in the logs, noticing a given process not running). Once the service is down, does the IDS, which isn’t aware of the host state, still catalog a single attack? Or does it generate alerts every X minutes that it detects the attack ongoing? Hopefully the data Michael sends will help me better understand how that correlation is being made, as it represents a source of incredible bias for the resulting data analysis.
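
To make the threshold problem concrete, here is a toy rate-based rule of the kind an IDS might use for “a large number of malformed RDP requests”. The threshold and window values are invented; the actual signatures in play are not public:

```python
# Toy illustration of the threshold problem described above: a rate-based
# rule for "a large number of malformed RDP requests". The threshold and
# window values are invented; the actual IDS signatures are not public.

THRESHOLD = 100   # malformed requests per window before flagging an attack
WINDOW = 60       # seconds

def flag_attack(malformed_timestamps, threshold=THRESHOLD, window=WINDOW):
    """Return True if any sliding window holds >= threshold malformed requests."""
    ts = sorted(malformed_timestamps)
    start = 0
    for end in range(len(ts)):
        while ts[end] - ts[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False

# Tune the threshold too low and routine scanner noise becomes an "attack"
# (false positives); too high and a slow but successful DoS never trips it.
```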

Moving on to Roytman’s new list using the above data and methodology, here is the top 10 list he sees in the data:

  1. 2015-03-05 – 2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – 2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2014-04-07 – 2014-0160 – OpenSSL TLS Heartbeat Extension Packets Handling Out-of-bounds Read Remote Memory Disclosure (Heartbleed)
  4. 2012-03-13 – 2012-0152 – Microsoft Windows Remote Desktop Protocol Terminal Server RDP Packet Parsing Remote DoS
  5. 2009-05-16 – 2013-0229 – MiniUPnPd SSDP Handler minissdp.c ProcessSSDPRequest Function Malformed Input Handling Remote DoS
  6. 2002-02-12 – 2002-0012 – Multiple Vendor Malformed SNMP Trap Handling DoS
  7. 2002-02-12 – 2002-0013 – Multiple Vendor Malformed SNMP Message-Handling Remote DoS
  8. 2001-12-20 – 2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-12-20 – 2001-0877 – Microsoft Windows Universal Plug and Play NOTIFY Request Remote DoS
  10. 2001-07-25 – 2001-0540 – Microsoft Windows Terminal Server RDP Request Handling Memory Exhaustion Remote DoS

Roytman prefaces that list with a comment that the “top 10 list perhaps more in line with what was expected – but equally unexpected!” Indeed, that is certainly true. Expected? Heartbleed (CVE-2014-0160): a high-profile vulnerability disclosed in 2014 that was widely exploited then and in subsequent years, and sorely missing from the DBIR list even after the corrected CVE IDs were factored in. This is the type of vulnerability almost everyone expected to top the list, given how easy it was to exploit and how heavily it was known to have been used. This speaks to the original point and reason for the first blog: all the data ‘science’ in the world that produces highly questionable results should not be taken as gospel. Even if the methodology was sound, it doesn’t mean the data being used was.

Unfortunately, Roytman’s revised list deviates quickly into the “equally unexpected“. The first DBIR report list had a single denial of service (DoS) attack on it, which stood out as odd. The revised list bumped that number to two, which was a bit odder. The most recent list expands that to six DoS attacks, which is highly questionable on two fronts. First, you can question the data set and methodology leading to this conclusion, but let’s say that passes muster solely for the sake of argument. Second, you can question why so many denial of service attacks are on a top 10 most exploited vulnerabilities list, framed in a context that uses the word ‘compromise’, riding on the back of a report centered around data breaches. These attacks are not causing attackers to gain privileges or steal data. They are a nuisance most of the time, or potentially used in serious DoS attacks other times. It makes one question why DoS attacks weren’t dropped from the results completely, and disclaimed as such! Or, generate two lists: one with results based on raw data, DoS and everything, and a second that focuses on vulnerabilities that may allow privileges to be gained and a real compromise to happen.
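
Generating that second list would be trivial for anyone holding the data; the hard part was already done. A sketch of the filtering step, with impact labels that are my own classification drawn from the vulnerability titles above, not Kenna’s:

```python
# Sketch of the two-list idea: publish the raw ranking, then a second
# ranking with pure denial-of-service issues removed. Impact labels here
# are mine, taken from the vulnerability titles quoted in this post.

ranked = [
    ("CVE-2015-1637", "mitm"),        # FREAK (Schannel)
    ("CVE-2015-0204", "mitm"),        # FREAK (OpenSSL)
    ("CVE-2014-0160", "info_leak"),   # Heartbleed
    ("CVE-2012-0152", "dos"),         # RDP remote DoS
    ("CVE-2013-0229", "dos"),         # MiniUPnPd remote DoS
    ("CVE-2002-0012", "dos"),         # SNMP trap handling DoS
    ("CVE-2002-0013", "dos"),         # SNMP message handling DoS
    ("CVE-2001-0876", "code_exec"),   # UPnP NOTIFY remote overflow
    ("CVE-2001-0877", "dos"),         # UPnP NOTIFY remote DoS
    ("CVE-2001-0540", "dos"),         # RDP memory exhaustion DoS
]

raw_top10 = [cve for cve, _ in ranked][:10]
no_dos_top10 = [cve for cve, impact in ranked if impact != "dos"][:10]

print("Raw:   ", raw_top10)
print("No DoS:", no_dos_top10)   # only four entries survive the filter
```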

I cannot stress this enough. Using a term like ‘Indicator of Compromise’ (IOC) in the context of DoS attacks is disingenuous and misleading. Going back to Roytman’s introduction to this section, where he makes a comment about seeing the trees (referring to the classic metaphor), I find it ironic, as that sums up the purpose of my blog. The DBIR was written as if they could only see the trees (data points), and not the forest (bigger picture), which is what many people took issue with.

One point that I overlooked on the original list, and still appears on the new list, is the presence of FREAK (twice even). Fortunately for me, Thomas Ptacek does a great job explaining why FREAK is likely on the list, but absolutely should not be. Using Roytman’s blog and data, he calculates that attackers would have spent $332,183,325 using Amazon EC2 to exploit FREAK. He continues by citing one of the researchers who discovered FREAK explaining one way that a high number of false positives are generated on that particular vulnerability. He goes on to drive the point home, quoting the researcher and commenting that it likely has not been exploited in the wild by an average attacker.

[image: tqbf-freak]

Roytman essentially dismisses all of this in his blog post while saying that I am correct, that “IDS alerts generate a ton of false positives, vulnerability scanners often don’t revisit signatures, CVE is not a complete list of vulnerability definitions. But those are just the trees, and we’ll get to them later.” Unfortunately, he doesn’t get to it later in a way that provides meaningful insight into the questions about the data and conclusions. Dan Guido wrote an excellent summary of why the DBIR vulnerability section has issues, factoring in Roytman’s latest blog and breaking it all down in a manner that highlights the flaws. Even with the revised list, it is still missing the US-CERT top 30 previously cited, the Microsoft data, and the recently disclosed ‘top PoC exploits distributed on social media‘. At some point, one would logically conclude these lists should have more overlap. One thing I would love to see from Verizon and Kenna is a detailed explanation of their methodology as it relates to detecting client-side exploits, which appear to be the de facto standard for infecting tens of thousands, maybe hundreds of thousands, of hosts every year to create botnets.

I want to look at this from one more perspective, because I think it beautifully highlights how vulnerability analysis is a moving target, but in this case for all the wrong reasons. While most vulnerability aggregators and analysts are constantly adapting to new variations of vulnerabilities, new sources of vulnerability information, and new players in the game with wildly different styles of disclosure, those who come along after the data is generated and perform analysis frequently seem to lose perspective, in my experience. I believe this is such a case, best illustrated by what the DBIR top 10 looks like over three revisions in less than two weeks. Yes, I know Roytman’s list isn’t officially the DBIR list, but he generated the initial data and then opted to perform a different form of analysis, putting it forth as more representative because it applies his analysis to a more limited dataset that he presumably trusts more (i.e. the Kenna aggregate dataset). One has to wonder if this was brought up to Verizon as a better way to approach the list, and if so, why it was rejected.

The following lists show the evolution of the CVEs that appear on the top 10, with strikeout denoting the typos between the original DBIR and Gabe’s clarification (which is reflected in current downloads), underlining to show denial of service attacks, and bold to show the new CVE IDs that appear with Roytman’s reworking.

DBIR      – DBIR Revised – Roytman Blog
2015-1637 – 2015-1637    – 2015-1637
2015-0204 – 2015-0204    – 2015-0204
2012-1054 – 2002-1054    – 2014-0160
2011-0877 – 2001-0877    – 2012-0152
2003-0818 – 2003-0818    – 2013-0229
2002-0126 – 2002-0126    – 2002-0012
2002-0953 – 2002-0953    – 2002-0013
2001-0876 – 2001-0876    – 2001-0876
2001-0680 – 2001-0680    – 2001-0877
1999-1058 – 1999-1058    – 2001-0540

In my mail to Roytman asking for a sample data set, I suggested that it would be interesting to see him generate a list using his methodology but removing any DoS attacks (six of his ten), so the top list only includes exploits that could achieve remote privileges of some sort. He replied to me with:

… and again, awesome idea. One of those all-too-simple in retrospect, damnit why didn’t I think of it earlier things.

I found this interesting thinking back to his use of the forest and the trees metaphor.

A Note on the Verizon DBIR 2016 Vulnerabilities Claims

[Updated 4/28/2016]

Verizon released their yearly Data Breach Investigations Report (DBIR) and it wasn’t too long before I started getting asked about their “Vulnerabilities” section (page 13). After I brought up some highly questionable points about the vulnerability claims in last year’s report, several people felt that the report did not stand up to scrutiny. With a few questions leveled at me, I was curious if Verizon and partners had learned from last year.

This year’s vulnerability data was provided by Kenna Security (formerly Risk I/O), and Verizon “also utilized vulnerability scan data provided by Beyond Trust, Qualys and Tripwire in support of this section.” So the data isn’t from a single vendor but from at least four, giving the impression that it should be well-rounded and raise fewer questions than last year’s.

From the report:

Secondly, attackers automate certain weaponized vulnerabilities and spray and pray them across the internet, sometimes yielding incredible success. The distribution is very similar to last year, with the top 10 vulnerabilities accounting for 85% of successful exploit traffic. While being aware of and fixing these mega-vulns is a solid first step, don’t forget that the other 15% consists of over 900 CVEs, which are also being actively exploited in the wild.

This is not encouraging: they have 10 vulnerabilities that account for an incredible amount of traffic, and the footnoted list of CVE IDs suggests the same problems as last year. Just like last year, the report does not explain the methodology for detecting the vulnerabilities, does not include details about the generation of the statistics, and provides only a loose definition of what “successfully exploited” means. Without more detail it is impossible for others to reproduce their results, and extremely difficult for a third party reading the report to explain or disclaim them. Going to the Kenna Security page about this report doesn’t really yield much clarity, but it does highlight another potential flaw in the methodology:

Kenna’s Chief Data Scientist Michael Roytman was the primary author of this year’s “Vulnerabilities” chapter, analyzing a correlated threat data set that spans 200M+ successful exploitations across 500+ common vulnerabilities and exposures from over 20,000 enterprises in more than 150 countries.

It’s subtle, but notice they went through a data set that spans exploitations across “500+ common vulnerabilities and exposures”, also known as CVE. If the data is only looking for CVEs, then there is an incredibly large bias at play from the start, since they are missing at least half of the disclosed vulnerabilities. More importantly, this becomes a game of fractions that the industry is keen to overlook at every opportunity (a rough calculation after this list shows how quickly the fractions compound):

  • CVE represents approximately half of the disclosed vulnerabilities.
  • Vulnerability scanners and IPS/IDS don’t have signatures for all CVE IDs, so they look for some fraction of CVE.
  • Detection signatures are often flawed, leading to false positives and false negatives, meaning they are actually detecting a fraction of the CVE IDs they intend to.
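
A rough back-of-the-envelope calculation shows how quickly these fractions compound. The first number is stated above; the other two are assumptions for illustration only:

```python
# Back-of-the-envelope compounding of the fractions above. The first value
# is stated in this post; the other two are assumed estimates, not
# measurements, chosen only to illustrate the multiplication.

coverage_of_disclosed = 0.50   # CVE covers roughly half of disclosed vulns (stated above)
signature_coverage    = 0.50   # assumed: scanners/IDS have signatures for ~half of CVE
signature_accuracy    = 0.80   # assumed: ~80% of those signatures detect reliably

effective = coverage_of_disclosed * signature_coverage * signature_accuracy
print(f"Effective visibility: {effective:.0%} of disclosed vulnerabilities")
# => roughly 20% under these assumptions; the exact figure doesn't matter,
#    the point is that each fraction multiplies the bias in the final data.
```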

Another crucial factor in how this data is generated is the detection of the exploits. Of the four companies contributing data, one was founded in 2009 (Risk I/O / Kenna) and another in 2006 (BeyondTrust, in the context of this discussion). That leaves Qualys (founded 1999) and Tripwire (founded 1997), who are likely the sources of the signatures that detected the vulnerabilities. For those around in the late 90s, the vulnerability landscape was very different from today’s, and the signature-based security products of that era are in many ways rudimentary compared to today’s. Over time, most security products do not revisit older signatures to improve them unless they have to, often due to customer demand, and newly formed companies basically never go back and write signatures for vulnerabilities from 1999. So it stands to reason that the detection of these issues is based on Qualys and/or Tripwire’s capabilities, and that the signatures detecting these vulnerabilities are likely outdated and not as well-constructed as their more recent signatures.

That leads us to ask: how many vulnerabilities are these companies really looking for? Where did the detection signatures originate, and how accurate are they? While the DBIR does disclaim that the data used is a sample, and admits “bias undoubtedly exists”, it does not warn the reader of these extremely limiting caveats, which put the entire data set into a perspective clearly showing strong bias. This, combined with the lack of detailed methodology for how these vulnerabilities are detected and correlated to measure ‘success’, ultimately means this data has little value other than for inclusion in pedestrian reports on vulnerabilities.

With that in mind, I can only go by what information is available. We’ll start with the concise list of the top 10 CVE IDs these four vulnerability intelligence providers say are being exploited the most, and Verizon labels as “successfully exploited”:

  1. 2015-03-05 – CVE-2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – CVE-2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2012-02-25 – CVE-2012-1054 – Puppet k5login File Symlink File Overwrite Local Privilege Escalation
  4. 2011-07-19 – CVE-2011-0877 – Oracle Enterprise Manager Grid Control Instance Management Unspecified Remote Issue (2011-0877)
  5. 2004-02-10 – CVE-2003-0818 – Microsoft Windows ASN.1 Library (MSASN1.DLL) BER Encoding Handling Remote Integer Overflows
  6. 2002-01-15 – CVE-2002-0126 – BlackMoon FTP Server Multiple Command Remote Overflow
  7. 2001-12-26 – CVE-2002-0953 – PHPAddress globals.php LangCookie Parameter Remote File Inclusion
  8. 2001-12-20 – CVE-2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-04-13 – CVE-2001-0680 – QVT/Net / Term FTP Server LIST Command Traversal Remote File Access
  10. 1999-11-22 – CVE-1999-1058 – Vermillion FTPD Long CWD Command Handling Remote Overflow DoS

This list should raise serious red flags for anyone even passingly familiar with vulnerabilities. Not only do we have very odd ‘top 10’ lists from last year and this year, there is little overlap between them. How does 2015 show a top 10 list in which eight vulnerabilities have CVE identifiers from between 1999 and 2002, meaning they were still being exploited heavily as many as thirteen years later, only for them all to drop off this year’s list, replaced by a different set of 15+ year old vulnerabilities? In addition to this oddity, there are more considerations, leading to my top 10 list of questions about their list:

  1. How does a local vulnerability based on a symlink overwrite flaw (CVE-2012-1054) make it into a top 10 list of “85% of successful exploit traffic“?
  2. How does a local vulnerability in Puppet rank #3 on this list, given the install base of Puppet as compared to Adobe or Java?
  3. If they are detecting exploits on the wire, shouldn’t we see Java, Adobe Reader, and Adobe Flash somewhere on the list? The “Slow and steady—but how slow?” section even talks about time-to-exploit for Adobe.
  4. Why doesn’t this list remotely match US-CERT’s “Top 30 Targeted High Risk Vulnerabilities” that includes vulnerabilities back to 2006, but not a single one listed above?
  5. How does a vulnerability that by all accounts is so vague, that it has to be distinguished by the vendor issued CVE ID (CVE-2011-0877), have a signature and get exploited so much?
  6. How does a vulnerability in Oracle Enterprise Manager Grid Control show up as #4, when no Oracle Database vulnerabilities appear?
  7. How do you distinguish an FTP LIST command exploit from one vendor to another? (e.g. CVE-2005-2726, CVE-2002-0558, CVE-2001-0933, CVE-2001-0680) According to the one-liner methodology, this is done via pairing SIEM data, suggesting that BlackMoon and Vermillion are that popular today.
  8. Yet, how does a remote DoS in a Windows-based FTP program that doesn’t appear to have been distributed for a decade make it onto this list? Are people really conducting targeted DoS attacks against this software?
  9. Is BlackMoon FTP Server really that prevalent to be exploited so often?
  10. Or is there a problem in generating this data, which would be more easily attributed to loose signatures detecting FTP attacks regardless of vendor?

Figure 12 in the report, described as “Count of CVEs exploited in 2015 by CVE publication date”, is a curious thing to include, as the CVE publication date is very distinct from the vulnerability disclosure date. While a large percentage of CVE publication dates fall within seven days of disclosure, many do not (e.g. CVE-2015-8852, disclosed 2015-03-23 with a CVE publication date of 2016-04-26), enough to make this chart questionable as far as the insight it provides. Taking the data as presented, are they really saying that only ~73 vulnerabilities with a 2015 ID were successfully exploited in 2015, across “millions of successful real-world exploitations“? Given that 40 vulnerabilities were discovered being exploited in the wild, 33 of which have 2015 CVE IDs, that would mean only ~40 other 2015 vulnerabilities were successfully exploited. If that is the takeaway, how is the security industry unable to stop the increasing wave of data breaches, the same kind that led to this report? Something doesn’t add up here.

While people are cheering the DBIR for disclaiming sample bias (without really enumerating it), they ignore the measurement bias, don’t speak to publication bias, don’t explain the attrition bias between 2015 and 2016, and overlook potential chaining bias. As usual, the media is happy to glom onto such reports without asking any of these questions or providing critical analysis. As an industry, we need to keep challenging metrics and statistics to ensure they are not only accurate, but provide meaning that benefits us.


Update 4/28/2016: According to Gabe (@gdbassett), the list of CVEs in the DBIR is incorrect. He posted a new list of CVEs (mostly the same) via Twitter, in a reply to Andreas Lindh, who was also surprised at the top ten list of vulnerabilities. Gabe then confirmed that they “compared the figure CVEs (listed above) against the raw data. After removing non-confirmed breaches, they match.” He went on to link to another source showing “data” about one of the CVEs, which really doesn’t mean anything without more context. Meanwhile, Michael Roytman, who did the vulnerability section of the report, confirmed that he/Kenna would be responding to this blog with one of their own.

I hate to harp on simple transposition-style mistakes in a report, but given the weight that numeric vulnerability identifiers carry, it seems like the list should have been double- and triple-checked. Even then, I don’t understand how someone familiar with vulnerabilities could see either list and not ask many of the same questions I did, or provide more information in the report to back the claims. That said, let’s look at Gabe’s new list of CVEs. Bold and links are used to highlight the new ones:

  1. 2015-03-05 – 2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – 2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2004-02-10 – 2003-0818 – Microsoft Windows ASN.1 Library (MSASN1.DLL) BER Encoding Handling Remote Integer Overflows
  4. 2002-07-22 – 2002-1054 – Pablo FTP Server LIST Command Arbitrary Directory Listing Remote Information Disclosure
  5. 2002-01-15 – 2002-0126 – BlackMoon FTP Server Multiple Command Remote Overflow
  6. 2001-12-26 – 2002-0953 – PHPAddress globals.php LangCookie Parameter Remote File Inclusion
  7. 2001-12-20 – 2001-0877 – Microsoft Windows Universal Plug and Play NOTIFY Request Remote DoS
  8. 2001-12-20 – 2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-04-13 – 2001-0680 – QVT/Net / Term FTP Server LIST Command Traversal Remote File Access
  10. 1999-11-22 – 1999-1058 – Vermillion FTPD Long CWD Command Handling Remote Overflow DoS

The list gained another FTP server issue that doesn’t necessarily lead to privileges, and another remote denial of service attack, while losing the Puppet symlink issue (CVE-2012-1054) and the vague Oracle Enterprise Manager issue (CVE-2011-0877). All said and done, the list is just as confusing as before, perhaps more so. That gives us four FTP vulnerabilities, only one of which leads to remote code execution, and two denial of service attacks that gain no real privileges for an attacker. As Andreas Lindh points out (something I failed to highlight), having a man-in-the-middle vulnerability occupy two spots on this list is also baffling given the volume of attacks stated. Also note that with the addition of CVE-2002-1054 (Pablo FTP), there are now two vulnerabilities that appear on both the DBIR 2015 and DBIR 2016 top ten CVE lists.

Hopefully the forthcoming blog from Michael Roytman will shed some light on these issues.

A Note on the Verizon DBIR 2015, “Incident Counting”, and VDBs

Recently, the Verizon 2015 Data Breach Investigations Report (DBIR) was released to much fanfare as usual, prompting a variety of media outlets to analyze the analysis. A few days after the release, I caught a Tweet linking to a blog from @raesene (Rory McCune) that challenged one aspect of the report. On page 16 of the report, Verizon lists the top 10 CVE IDs exploited.

[image: dbir-top10-cve]

The bottom of the table says “Top 10 CVEs Exploited”, which can be interpreted many ways. The paragraph above qualifies it: the “ten CVEs account for almost 97% of the exploits observed in 2014“. This is problematic because neither fully explains what that means. Is this the top 10 detected by sensors around the world, meaning the exploits were launched and detected, with no indication of whether they were successful? Or are these the top ten exploits that were launched and resulted in a successful compromise of some form?

This gets more confusing when you read McCune’s blog, which lists out what those ten CVEs correspond to. I’ll get to the one that drew his attention, but will start with a different one. CVE-1999-0517 is an identifier for “an SNMP community name is the default (e.g. public), null, or missing.” This is a very curious CVE identifier related to SNMP, as it is specific to a default string. A device that has a default community name but does not allow remote manipulation represents a rather trivial information disclosure vulnerability. This begins to speak to what the chart means, as “exploit” is being used in a broad fashion that does not track with the usage many in our industry associate with it. McCune goes on to point out another on their list, CVE-2001-0540, which is an identifier for a “memory leak in Terminal servers in Windows NT and Windows 2000 allows remote attackers to cause a denial of service (memory exhaustion) via a large number of malformed Remote Desktop Protocol (RDP) requests to port 3389“. This is considerably more specific, giving us affected operating systems and the ultimate impact: a denial of service. This brings us back to why this section is in the report, when part (or all) of it has nothing to do with actual breaches. After these two, McCune points out numerous other issues that directly challenge how this data was generated.
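
As an aside on the CVE-1999-0517 example: “exploiting” it amounts to nothing more than a read with a default community string, which is why calling it an exploit stretches the term. A minimal sketch of such a check using the pysnmp library (my choice of tool, purely illustrative; nothing suggests Verizon’s partners use it):

```python
# Minimal sketch of a check for CVE-1999-0517 using pysnmp: try to read
# sysDescr with a default community string. The library choice, timeout,
# and default community value are my assumptions for illustration.

from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def has_default_community(host, community="public"):
    """Return True if the host answers an SNMP GET with the given community."""
    error_indication, error_status, _, _ = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=0),               # SNMPv1
        UdpTransportTarget((host, 161), timeout=2, retries=0),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),   # sysDescr.0
    ))
    return error_indication is None and not error_status

# A read-only answer to "public" discloses information, but by itself it
# grants no privileges, which is why counting it as an "exploit" is a stretch.
```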

Since Verizon does not explain their methodology in generating these numbers, we’re left with our best guesses and a small attempt at an explanation via Twitter, which leads to as many questions as answers. You can read the full Twitter chat between @raesene, @vzdbir, and @mroytman (the Risk I/O data scientist who provided the data to generate this section of the report). The two pages of ‘methodology’ Verizon provides in the report (pages 59-60) are too high-level to be useful for the section mentioned above. Even after the conversation, the ‘top 10 exploited’ list is still highly suspicious and does not seem accurate. The final bit from Roytman may make sense in Risk I/O’s world, but doesn’t to others in the vulnerability world.

That brings me to the first vulnerability McCune pointed out, that started a four hour rabbit hole journey for me last night. From his blog:

The best example of this problem is CVE-2002-1931 which gets listed at number nine. This is a Cross-Site Scripting issue in version 2.1.1 and 1.1.3 of a product called PHP Arena and specifically the pafileDB area of that product. Now I struggled to find out too much about that product because the site that used to host it http://www.phparena.net is now a gambling site (I presume that the domain name lapsed and was picked up as it got decent traffic). Searching via google for information, most of the results seemed to be from vulnerability databases(!) and using a google dork of inurl:pafiledb shows a total of 156 results, which seems low for one of the most exploited issues on the Internet.

This prompted me to look at CVE-2002-1931 and the corresponding OSVDB entry. Then I looked at the ‘pafiledb.php’ cross-site scripting issues in general, and immediately noticed problems:

[image: pafiledb-before]

This is a good testament to how far vulnerability databases (VDBs) have come over the years, and how our earlier data is not so hot sometimes. Regardless of vulnerability age, I want database completeness and accuracy, so I set out to fix it. What I thought would be a simple 15-minute fix took much longer. After reading the original disclosures of a few early paFileDB XSS issues, it became clear that the one visible script being tested was actually more complex, calling additional PHP files. It also became quite clear that several databases, ours included, had mixed up references and done a poor job abstracting. Next step: take extensive notes on every disclosure, including dates, exploit strings, versions affected, solution if available, and more. The rough notes for some, but not all, of the issues give a feel for what this entails:

[image: pafiledb-notes1]

Going back to this script being ‘more complex’, and trying to answer some questions about the disclosure, the next trick was to find a copy of paFileDB 3.1 or earlier to see what it entailed. This is harder than it sounds, given the age of the software and the fact that the vendor site has been gone for seven years or more. With the archive finally in hand, it became clearer what the various ‘action’ parameters really meant.

[image: pafiledb31]

Going through each disclosure and following links to links, it also became obvious that every VDB missed the inclusion of some disclosures, and/or did not properly abstract them. To be fair, many VDBs have modified their abstraction rules over the years, including us. So using today’s standards along with yesterday’s data, we get a very different picture of those same XSS vulnerabilities:

[image: pafiledb-after1]

With this in mind, reconsider the Verizon report that says CVE-2002-1931 is a top 10 exploited vulnerability in 2014. That CVE is very specific, based on a 2002-10-20 disclosure that starts out referring to a vulnerability we don’t have details on. Either it was publicly disclosed and the VDBs missed it, it was mentioned on the vendor site somewhere (and doesn’t appear to be there now, even with the great help of archive.org), or it was mentioned in private or restricted channels but known to some. We’re left with what that October 2002 post said:

Some of you may be familiar with Pafiledb provided by PHP arena. Well they just released a new version that fixed a problem with their counting of files. Along with that they said they fixed a possible security bug involving using Javascript as a search string. I checked it on my old version and it is infact there, so I updated to the new version so the bugs can be fixed and I checked it and it no longer works.

We know it is “pafiledb.php” only because that is the base script that in turn calls others. In reality, based on the other digging, the vulnerability is very likely in includes/search.php, which pafiledb.php calls. Without performing a code-level analysis, we can only include a technical note with our submission and move on. Continuing the train of thought, what data collection methods are being performed that assign that CVE to an attack, when the issue does not appear to have been publicly disclosed in any actionable detail? The vulnerability scanners from around that time, many of which are still used today, do not specifically test for and exploit that issue. Rather, they look for the three additional XSS issues disclosed further down in the same post. All three are “pafiledb.php” exploits that call a specific action, corresponding to /includes/rate.php, /includes/download.php, and /includes/email.php. It is logical that intrusion detection systems and vulnerability scanners would be looking for those three issues (likely lumped into one ID), but not the vague “search string” issue.
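
To make the distinction concrete, the URL shapes below are my reconstruction of the difference between the three disclosed issues and the vague “search string” issue. The action names follow from the include files named above, and the payload is a generic placeholder; none of these are verified exploit strings from the disclosures:

```python
# Hypothetical reconstruction of the probe strings involved. The three
# disclosed pafiledb.php XSS issues each used an 'action' value handled by
# one of the include files named above; the action names and payload here
# are my assumptions, not verified exploit strings from the disclosures.

payload = "<script>alert(1)</script>"

disclosed_probes = [
    f"/pafiledb.php?action=rate&id={payload}",      # handled by includes/rate.php
    f"/pafiledb.php?action=download&id={payload}",  # handled by includes/download.php
    f"/pafiledb.php?action=email&id={payload}",     # handled by includes/email.php
]

# CVE-2002-1931, by contrast, covers the vague "search string" issue, for
# which no public exploit string exists to build a signature from:
vague_probe = f"/pafiledb.php?action=search&q={payload}"
```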

McCune’s observation, and his singling out of this vulnerability, is spot on. While he questioned the data in a different way, my method gives additional evidence that the ‘top 10’ is built on faulty data at best. I hope this blog is both educational from the VDB side of things, and further encourages Verizon to be more forthcoming with their methodology for this data. As it stands, it simply isn’t trustworthy.