A Note on the Verizon DBIR 2015, “Incident Counting”, and VDBs

Recently, the Verizon 2015 Data Breach Investigations Report (DBIR) was released to much fanfare as usual, prompting a variety of media outlets to analyze the analysis. A few days after the release, I caught a Tweet linking to a blog from @raesene (Rory McCune) that challenged one aspect of the report. On page 16 of the report, Verizon lists the top 10 CVE IDs exploited.

[Image: dbir-top10-cve — the DBIR table of the top 10 CVEs exploited]

The bottom of the table says “Top 10 CVEs Exploited”, which can be interpreted many ways. The paragraph above qualifies it as the “ten CVEs account for almost 97% of the exploits observed in 2014”. This is problematic because neither fully explains what that means. Is this the top 10 detected from sensors around the world, meaning the exploits were launched and detected, with no indication of whether they were successful? Or does this mean these are the top ten exploits that were launched and resulted in a successful compromise of some form?

This gets more confusing when you read McCune’s blog, which lists out what those ten CVEs correspond to. I’ll get to the one that drew his attention, but will start with a different one. CVE-1999-0517 is an identifier for “an SNMP community name is the default (e.g. public), null, or missing.” This is a very curious CVE identifier to see here, as it is specific to a default community string. Any device that has a default community name but does not allow remote manipulation presents a rather trivial information disclosure vulnerability. This begins to speak to what the chart means, as “exploit” is being used in a broad fashion that does not track with the usage many in our industry associate with it. McCune goes on to point out another on their list, CVE-2001-0540, which is an identifier for a “memory leak in Terminal servers in Windows NT and Windows 2000 allows remote attackers to cause a denial of service (memory exhaustion) via a large number of malformed Remote Desktop Protocol (RDP) requests to port 3389”. This is considerably more specific, giving us affected operating systems and the ultimate impact: a denial of service. This brings us back to why this section is in the report at all, when part (or all) of it has nothing to do with actual breaches. After these two, McCune points out numerous other issues that directly challenge how this data was generated.

Since Verizon does not explain their methodology in generating these numbers, we’re left with our best guesses and a small attempt at an explanation via Twitter, which leads to as many questions as answers. You can read the full Twitter chat between @raesene, @vzdbir, and @mroytman (the RiskIO data scientist who provided the data to generate this section of the report). The two pages of ‘methodology’ Verizon provides in the report (pages 59-60) are too high-level to be useful for the section mentioned above. Even after the conversation, the ‘top 10 exploited’ list is still highly suspicious and does not seem accurate. The final bit from Roytman may make sense in RiskIO’s world, but doesn’t to others in the vulnerability world.

That brings me to the first vulnerability McCune pointed out, the one that started a four-hour rabbit hole journey for me last night. From his blog:

The best example of this problem is CVE-2002-1931 which gets listed at number nine. This is a Cross-Site Scripting issue in version 2.1.1 and 1.1.3 of a product called PHP Arena and specifically the pafileDB area of that product. Now I struggled to find out too much about that product because the site that used to host it http://www.phparena.net is now a gambling site (I presume that the domain name lapsed and was picked up as it got decent traffic). Searching via google for information, most of the results seemed to be from vulnerability databases(!) and using a google dork of inurl:pafiledb shows a total of 156 results, which seems low for one of the most exploited issues on the Internet.

This prompted me to look at CVE-2002-1931 and the corresponding OSVDB entry. Then I looked at the ‘pafiledb.php’ cross-site scripting issues in general, and immediately noticed problems:

[Image: pafiledb-before — the paFileDB ‘pafiledb.php’ XSS entries in OSVDB before cleanup]

This is a good testament to how far vulnerability databases (VDBs) have come over the years, and how our earlier data is not so hot sometimes. Regardless of a vulnerability’s age, I want database completeness and accuracy, so I set out to fix it. What I thought would be a simple 15-minute fix took much longer. After reading the original disclosures of a few early paFileDB XSS issues, it became clear that the one visible script being tested was actually more complex, calling additional PHP files. It also became quite clear that several databases, ours included, had mixed up references and done a poor job abstracting. Next step: take extensive notes on every disclosure, including dates, exploit strings, versions affected, solution if available, and more. Here are the rough notes for some, but not all, of the issues, to give a feel for what this entails:

[Image: pafiledb-notes1 — rough notes on the paFileDB disclosures]

Going back to this script being ‘more complex’, and to try to answer some questions about the disclosure, the next trick was to find a copy of paFileDB 3.1 or earlier to see what it entailed. This is harder than it sounds, given the age of the software and the fact that the vendor site has been gone for seven years or more. With the archive finally in hand, it became clearer what the various ‘action’ parameters really meant.

[Image: pafiledb31 — the contents of the paFileDB 3.1 archive]

Going through each disclosure and following links to links, it also became obvious that every VDB missed the inclusion of some disclosures, and/or did not properly abstract them. To be fair, many VDBs have modified their abstraction rules over the years, including us. So using today’s standards along with yesterday’s data, we get a very different picture of those same XSS vulnerabilities:

[Image: pafiledb-after1 — the paFileDB XSS entries after re-abstraction]

With this in mind, reconsider the Verizon report that says CVE-2002-1931 is a top 10 exploited vulnerability in 2014. That CVE is very specific, based on a 2002-10-20 disclosure that starts out referring to a vulnerability we don’t have details on. Either it was publicly disclosed and the VDBs missed it, it was mentioned on the vendor site somewhere (and doesn’t appear to be there now, even with the great help of archive.org), or it was mentioned in private or restricted channels but known to some. We’re left with what that October 2002 post said:

Some of you may be familiar with Pafiledb provided by PHP arena. Well they just released a new version that fixed a problem with their counting of files. Along with that they said they fixed a possible security bug involving using Javascript as a search string. I checked it on my old version and it is infact there, so I updated to the new version so the bugs can be fixed and I checked it and it no longer works.

We know it is “pafiledb.php” only because that is the base script that in turn calls others. In reality, based on the other digging, it is very likely making a call to includes/search.php, which contains the vulnerability. Without performing a code-level analysis, we can only include a technical note with our submission and move on. Continuing the train of thought, what data collection methods are being performed that assign that CVE to an attack that does not appear to be publicly disclosed? The vulnerability scanners from around that time, many of which are still used today, do not specifically test for and exploit that issue. Rather, they look for the additional three XSS disclosed further down in the same post. All three “pafiledb.php” exploits call a specific action that corresponds to /includes/rate.php, /includes/download.php, or /includes/email.php. It is logical that intrusion detection systems and vulnerability scanners would be looking for those three issues (likely lumped into one ID), but not the vague “search string” issue.

McCune’s singling out of this vulnerability is spot on. While he questioned the data in a different way, my method gives additional evidence that the ‘top 10’ is built on faulty data at best. I hope that this blog is both educational from the VDB side of things, and further encourages Verizon to be more forthcoming with their methodology for this data. As it stands, it simply isn’t trustworthy.

Reviewing the Secunia 2015 Vulnerability Review (A Redux)

It’s that time of year again! Vulnerability databases whip up reports touting statistics and observations based on their last year of collecting data. It’s understandable, especially for a commercial database, to want to show why your data source is the best. In the past, we haven’t had a strong desire to whip up a flashy PDF with our take on vulnerability disclosure statistics. Instead, we’ve focused more on educating others on why their statistics are often so horribly wrong or misleading. It should come as no surprise, then, that we find ourselves forced to point out the errors in Secunia’s ways. We did the same thing last year in a long blog, giving more perspective and our own numbers for balance.

After reading this year’s report from Secunia, we could basically use last year’s blog as a template and just plug in new numbers. Instead, it is important to point out that after last year’s review, Secunia opted not to revise their methodology. Worse, they did not take any pointers from a presentation on vulnerability statistics that Steve Christey (CVE / MITRE) and I gave at BlackHat before last year’s report, pointing out the most common flaws in such statistics and where the bias comes from. It is easy to chalk up last year’s report as being naive, not fully appreciating or understanding such statistics. This year though, they have no excuse. The numbers they report seem purposefully misleading and are a disservice to their customers and the community.

The headline of their press release and crux of their report is that they identified 15,435 vulnerabilities in 2014. This is entirely inaccurate and completely misleading. It is fundamentally due to the same very flawed methodology they used last year, in which they count a single vulnerability as many as 210 times (e.g. this year CVE-2014-0160, aka Heartbleed). Then factor in all of the other high-profile “logo” vulnerabilities (e.g. FREAK, ShellShock), especially in OpenSSL, and the same handful of vulnerabilities get counted hundreds or even a thousand times. To be abundantly clear, a vulnerability in a third-party library such as OpenSSL is one vulnerability. It doesn’t matter how many other products use and integrate that code; the fundamental flaw is in the library. Counting each product that implements OpenSSL as a distinct vulnerability, rather than a distinct occurrence of a vulnerability, is wrong. Worse, even if you do accept their flawed methodology, it highlights just how poor their statistics are: OpenSSL is heavily used among thousands of applications that Secunia doesn’t cover, even when a vendor like IBM issues numerous advisories that they miss. No matter how you cut it, their numbers are invalid.

Poking at our database, and understanding that we don’t quite have a 100% mapping to Secunia but feel it is pretty close, we see that around 5,280 of our entries have a Secunia reference, and 4,530 of those also have a CVE reference (compared to more than 8,500 of our entries with a CVE reference). Even allowing for a good margin of error, it still appears that Secunia does not cover almost 3,000 vulnerabilities disclosed in 2014 that are covered by CVE entries. Those familiar with this blog know that we are more critical of CVE than of any other vulnerability database. If a commercial VDB with hundreds of high-dollar clients can’t even keep a 100% mapping with an over-funded, U.S. government-run VDB that is considered the “lowest common denominator” by many, that speaks volumes. I could end the blog on that note alone, because anything less than mediocrity is a disgrace. How do you trust, and pay for, a vulnerability intelligence service missing thousands of publicly disclosed vulnerabilities that are served up on a platter, available to anyone for free use and inclusion in their own database? You don’t. After all, remember this quote from the Secunia report last year?

CVE has become a de facto industry standard used to uniquely identify vulnerabilities, which have achieved wide acceptance in the security industry.
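(As an aside, for anyone curious how that coverage-gap estimate is derived, here is a minimal sketch in Python. The entries and reference types are hypothetical stand-ins, not our actual schema; the idea is simply to count entries carrying a CVE reference, entries carrying both a CVE and a Secunia reference, and take the difference.)

# Minimal sketch of the coverage-gap estimate, using hypothetical entries.
# Each entry carries the set of external reference types mapped to it.
entries = [
    {"id": 1, "refs": {"CVE", "SECUNIA"}},
    {"id": 2, "refs": {"CVE"}},             # covered by CVE, apparently missed by Secunia
    {"id": 3, "refs": {"SECUNIA"}},
    {"id": 4, "refs": {"CVE", "SECUNIA"}},
]

with_cve  = [e for e in entries if "CVE" in e["refs"]]
with_both = [e for e in entries if {"CVE", "SECUNIA"} <= e["refs"]]

# Entries that CVE covers but Secunia apparently does not:
gap = len(with_cve) - len(with_both)
print(len(with_cve), len(with_both), gap)

Plug in the 2014 numbers quoted above (more than 8,500 CVE-mapped entries, roughly 4,530 mapped to both) and the gap is nearly 4,000, which is why “almost 3,000” is already a conservative figure.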

All of the above is not only important, it is absolutely critical to appropriately framing their statistics. Not only does Secunia avoid using the minimum industry standard for vulnerability aggregation, they opt for their own methodology, which they now know beyond doubt seriously inflates their ‘vulnerability’ count. Last year, it was inflated to almost three times reality. This year, it was again about three times reality, as they only covered approximately 5,500 unique vulnerabilities. That is almost 3,000 fewer than CVE and 8,000 fewer than OSVDB. Any excuse for Secunia’s lacking coverage based on their claims of verifying vulnerabilities and “not covering vulnerabilities in beta software” is also invalid, as a majority of vulnerabilities reported are in stable software and regularly come from trusted researchers or the vendors themselves. Regardless, it should be clear to anyone passingly familiar with the aggregation of vulnerabilities that Secunia is playing well outside the bounds of reality. Of course, every provider of vulnerability intelligence has some motive, or at least desire, to see the numbers go up year over year. It further justifies their customers spending money on the solution.

“Every year, we see an increase in the number of vulnerabilities discovered, emphasizing the need for organizations to stay on top of their environment. IT teams need to have complete visibility of the applications that are in use, and they need firm policies and procedures in place, in order to deal with the vulnerabilities as they are disclosed.” — Kasper Lindgaard, Director of Research and Security at Secunia.

While Mr. Lindgaard’s quote is flashy and compelling, it is also completely wrong, even according to their own historical reports. If Mr. Lindgaard had read Secunia’s own reports from 2010 and 2011, he might have avoided this blunder.

This report presents global vulnerability data from the last five years and identifies trends found in 2010. The total number of vulnerabilities disclosed in 2010 shows a slight decrease of 3% compared to 2009. – Yearly Report 2010 (Secunia)

Analysing the long-term and short-term trends of all products from all vendors in the Secunia database over the last six years reveals that the total number of vulnerabilities decreased slightly in 2011 compared to 2010. – Vulnerabilities are Resilient (2011, Secunia Report)

In 2007, OSVDB cataloged 9,574 vulnerabilities, compared to 11,050 in 2006. In 2009, they cataloged 8,175 vulnerabilities, compared to 9,807 in 2008. In 2011, they cataloged 7,913 vulnerabilities, compared to 9,161 in 2010. That is three distinct years that did not show an increase at all. Taking Mr. Lindgaard’s quote into account, and comparing it to our own data that is based on a more standard method of abstraction, we also show that it has not been a steady rise every year:

[Image: osvdbbyyear — OSVDB vulnerability totals by year]
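As a quick numeric check, using only the OSVDB yearly totals quoted above (nothing beyond the numbers already in the text), the year-over-year change looks like this:

# Year-over-year change, using only the OSVDB totals quoted in the text above.
totals = {2006: 11050, 2007: 9574, 2008: 9807, 2009: 8175, 2010: 9161, 2011: 7913}

for year in sorted(totals)[1:]:
    prev, curr = totals[year - 1], totals[year]
    change = (curr - prev) / prev * 100
    print(f"{year}: {curr:6d} ({change:+.1f}% vs {year - 1})")

2007, 2009, and 2011 all come out negative, which is the point: the totals do not rise every year.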

With highly questionable statistics based on a flawed methodology, Secunia also lucked out this year, as vendors are more prudent about reporting vulnerabilities in the third-party libraries they use (e.g. OpenSSL, PHP). This plays out nicely for a company that incorrectly counts each vendor’s use of vulnerable software as a distinct issue.

Getting down to the statistics they share, we’ll give a few additional perspectives. As with all vulnerability statistics, they should be properly explained and disclaimed, or they are essentially meaningless.

In 2014, a total of 15,435 vulnerabilities were discovered in 3,870 products from 500 vendors.

As stated previously, this is absolutely false. Rather than track unique vulnerabilities, Secunia’s model and methodology focus more on products and operating systems. For example, they will issue one advisory for a vulnerability in OpenSSL, then dozens of additional advisories for the exact same vulnerability. The only difference is that each subsequent advisory covers a product or operating system that uses it. Above I mentioned the redundant ‘Heartbleed’ entries, but to further illustrate the point and demonstrate how this practice can lead to wildly inaccurate statistics, let’s examine the OpenSSL ‘FREAK’ vulnerability as well (29 Secunia advisories). The flaw is in the OpenSSL code, not the implementation from each vendor, so it is still that same one vulnerability. Depending on how good their coverage is, this could be even more problematic. Looking at IBM, who use OpenSSL in a wide variety of their products, we know that at least 56 products are impacted by the vulnerability, which may mean that many more Secunia advisories cover them. If we had a perfect glimpse into which products use OpenSSL, the count would likely be thousands. Trying to suggest that is ‘thousands of unique vulnerabilities’ is simply ignorant.
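To make the difference in counting methodology concrete, here is a minimal sketch with made-up advisory data (the product names are invented; Heartbleed’s CVE-2014-0160 is the only real identifier). Counting advisories versus counting the unique underlying flaws gives very different totals:

# Made-up advisories: one underlying OpenSSL flaw (Heartbleed, CVE-2014-0160)
# re-reported for every product bundling the library, plus one unrelated flaw.
advisories = [
    ("advisory-1", "OpenSSL",              "CVE-2014-0160"),
    ("advisory-2", "Vendor A appliance",   "CVE-2014-0160"),
    ("advisory-3", "Vendor B mail server", "CVE-2014-0160"),
    ("advisory-4", "Vendor C widget",      "hypothetical-other-flaw"),
]

per_advisory = len(advisories)                           # 4: the same flaw counted repeatedly
per_vuln     = len({flaw for _, _, flaw in advisories})  # 2: each underlying flaw counted once

print(per_advisory, per_vuln)

Scale that up across 210 Heartbleed advisories and every other shared library, and a headline figure of 15,435 collapses toward the roughly 5,500 unique issues mentioned earlier.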

83% of vulnerabilities in all products had patches available on the day of disclosure in 2014.

We assume that by ‘patches’, they also mean a viable vendor-provided solution (e.g. an upgrade). Based on our aggregation, it is closer to 64% that have a solution available, and that is not limited to “the day of disclosure”; it includes vendors who have patched at any point since the issue was disclosed. This big gap largely comes from the difference in the number of vulnerabilities aggregated by each database since the start of 2014.

25 zero-day vulnerabilities were discovered in total in 2014, compared to 14 the year before.

Assuming we use the same definition of “zero-day” as Secunia, this is considerably under what we tracked. We flag such issues “discovered in the wild”, meaning the vulnerability was being actively exploited by attackers when it was discovered and/or disclosed. In 2014, we show 43 entries with this designation, compared to 73 in 2013. That is a sharp decrease, not an increase. Part of that large swing for us comes from a rash of attacker-trojaned software distributions that were used to target mobile users in 2013. Since trojaned software from a legitimate vendor meets the criteria to be called a vulnerability (it can be abused by a bad actor to cross privilege boundaries), we include such issues where Secunia and other databases generally ignore them.

20 of the 25 zero-day vulnerabilities were discovered in the 25 most popular products – 7 of these in operating systems.

Since we got into the “popular products” bit in last year’s rebuttal to their statistics, we’ll skip that this year. However, in 2014 we show that 16 of those “zero days” were in Microsoft products (two in Windows, six in the bundled MSIE), and four were in Adobe products.

In 2014, 1,035 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari. That is a 42% increase from 2013.

Without even searching our data, it is likely very safe to say these numbers are way off, and that Secunia’s coverage of browser security is woefully behind. In addition to not aggregating as much data, they likely don’t properly take into account vulnerabilities in WebKit, which potentially affect Chrome, Safari, and Opera. Are WebKit vulnerabilities attributed to one, the other, or all three? With that in mind, let’s compare numbers. We see ~200 for Mozilla, ~250 for Microsoft IE, and not even 10 between Safari and Opera. In reality, Opera is not forthcoming with information about the vulnerabilities they patch, so their actual vulnerability count is likely much higher. That leaves us with Chrome, which had over 300 vulnerabilities in its own code. As mentioned, factor in another 100+ vulnerabilities in WebKit that may impact up to three browsers. That comes to a total of 889, far less than their claimed 1,035. That makes us curious whether they are using the same flawed methodology with browser vulnerabilities as they do with libraries (i.e. counting WebKit / Blink issues once for each browser using them). Some vulnerabilities in these browsers are also due to flaws in other 3rd-party libraries and not in the browsers’ own code (e.g. ICU4C and OpenJPEG). Finally, Google Chrome bundles Adobe Flash Player, so is Secunia adding the number of vulnerabilities addressed in Flash Player to their Google Chrome count as well, further bloating it? So many unknowns, and another reminder of why vulnerability statistics are often misleading or serve no value. Without an explanation of the methodology, we’re left with guesses and assumptions that strongly suggest this statistic can’t be trusted.

Even more offensive is that the “vulnerability intelligence experts at Secunia Research” don’t demonstrate any expertise in informing readers why Google Chrome’s numbers are so high. Instead, they publish the information, withhold details of the methodology, and let the media run wild parroting their pedestrian statistics and understanding. As a lesson to the media, and Secunia, consider these additional pieces of information that help explain those numbers, and why you simply can’t compare disclosed vulnerability totals:

  1. Google releases details about the vulnerabilities fixed in Chrome, unlike some other browser vendors. For example, Opera releases almost no information. Microsoft typically releases information on the more serious vulnerabilities, and none from their internal auditing. Of the 310 Chrome vulnerabilities we tracked, 66 were CVSSv2 5.0 or less.
  2. Google has perhaps the most aggressive bug bounty program in the industry, which encourages more researchers to audit their code. How aggressive? They paid out more than US$550,000 in 2014, just for Chrome vulnerabilities.
  3. Google’s turnaround in fixing Chrome vulnerabilities is quite respectable. Since many of the vulnerabilities were reported to them and fixed promptly, it is easy to argue against the claim that it is the “least secure” browser.
  4. Google fields several teams of security experts that audit their code, including Chrome. They find and fix potential issues before disclosing them. By doing that, they are ensuring the security of the browser, yet effectively getting punished by Secunia for doing so.
  5. Of all of the issues disclosed, only a tiny fraction were actually proven to be exploitable. Like many software vendors, they find it easier and more efficient to spend a small amount of time fixing a possible vulnerability than the larger amount of time required to prove it is exploitable.

There are many pitfalls in using simple disclosed vulnerability totals to compare products. This is explained over and over by those familiar with the vulnerability disclosure world, and should be common knowledge to ‘experts’ in vulnerability research. Instead, Secunia not only fuels media articles that aren’t critical of their report, they Tweet little snippets without any context. They also have their partners repeat their statistics, giving them more exposure and misleading the customers of other companies.

Vulnerability statistics can be useful. But they must be presented in a responsible and educated manner to serve their purpose. Those not familiar with vulnerability aggregation and generating the resulting statistics typically do a poor job, and add confusion to the matter. We like to call such people “vulnerability tourists”, and we can now add Secunia to the growing list of them.

[Update: 2015-04-01. One additional quote from the 2010 Secunia Report was added, and clarification of wording regarding the yearly vuln total chart was made.]

Vendors sure like to wave the “coordination” flag… (revisiting the ‘perfect storm’)

I’ve written about coordinated disclosure and the debate around it many times in the past. I like to think that I do so in a way that goes above and beyond the usual old debate. This is another blog dedicated to an aspect of “coordinated” disclosure that vendors fail to see. Even when a vendor is proudly waving their own coordination flag, decrying the actions of another vendor, they still miss the most obvious point.

In order to understand just how absurd these vendors can be, let’s remember what the purpose of “coordinated disclosure” is. At a high level, it is to protect their consumers. The idea is to provide a solution for a security issue at the same time a vulnerability becomes publicly known (or ideally, days before the disclosure). For example, in the link above we see Microsoft complaining that Google disclosed a vulnerability two days before a patch was available, putting their customers at risk. Fair enough, we’re not going to debate how long is enough for such patches. Skip past that.

There is another simple truth about the disclosure cycle that has been outlined plenty of times before. After a vendor patch becomes public, it takes less than 24 hours for a skilled researcher to reverse it and determine what the vulnerability is. From there, it could be a matter of hours or days before functional exploit code is created, depending on the complexity of the issue. Factor in all of the researchers and criminals capable of doing this, and the worst-case scenario is that a working exploit exists within a few hours. Best-case scenario, customers may have two or three days.

Years ago, Steve Christey pointed out that multiple vendors had released patches on the same day, leading me to write about how that was problematic. Jump to today, and that has become a reality organizations face at least once a year. But… it got even worse. On October 14, 2014, customers got to witness the dark side of “coordinated disclosure”, one that these vendors are quick to espouse but equally quick to disregard themselves. In one day, 25 Microsoft vulnerabilities, 117 Oracle vulnerabilities, 12 SAP vulnerabilities, 8 Mozilla advisories, 6 Adobe vulnerabilities, 1 Cisco vulnerability, 1 Chrome OS vulnerability, 1 Google V8 vulnerability, and 3 Linux Kernel vulnerabilities were disclosed. That covers almost every major IT asset in an organization, and forces administrators to triage in ways that were unheard of years prior.

Do any of these vendors feel that an IT organization is capable of patching all of that in a single day? Two days? Three days? Is it more likely that getting it all done within a full week would be an impressive feat, while organizations that must run patches through their own testing before deployment might take two to four weeks? Do they forget that bad guys can reverse these patches and have a working exploit in as little as a day, putting their customers at serious risk?

So basically, these vendors who consistently or frequently release on a Tuesday (e.g. Microsoft, Oracle, Adobe) have been coordinating the exact opposite of what they frequently preach. They are not necessarily helping customers by having scheduled patches. This year, we can look forward to Oracle quarterly patches on April 14 and July 14, both of which fall on the “second Tuesday” Microsoft / Adobe patch day. Throw in other vendors like IBM (which has been publishing as many as 150 advisories in 48 hours lately), SAP, Google, Apple, Mozilla, and countless others that release frequently, and administrators are in for a world of hurt.

Oh, did I forget to mention the kicker about all of this? October 14, 2014 saw 254 vulnerabilities disclosed, on the same day that the dreaded POODLE vulnerability was disclosed, impacting thousands of different vendors and products. That same day, OpenSSL, perhaps the most oft-used SSL library, released a patch for the vulnerability as well, perfectly “coordinated” with all of the other issues.

2013 Superdome Outage a Hack? The Value of Post-Incident Investigations.

As we approach the pinnacle of U.S. sportsball, I am reminded of the complete scandal from a past Superbowl. No, not the obviously-setup wardrobe malfunction scandal. No, not the one where we might have been subjected to a pre-recorded half-time show. The one in 2013 where hackers, or terrorism, or who-knows-what caused the stadium lights to go out for 34 minutes. That day, and the days after, everyone sure seemed to ‘know’ what happened. Since many were throwing around claims of ‘hacking’ or ‘cyber terrorism’ at the time, this incident caught my attention.

Here’s what we know, with selected highlights:

  • February 3, 2013: Superbowl happened.
  • February 3, 2013: Anonymous takes credit for the blackout.
  • February 3, 2013: Because theories of hacking or terrorism aren’t enough, Mashable comes up with 13 more things that may have caused it.
  • February 4, 2013: A day later, we’re once again reminded that “inside sources” are often full of it. The Baltimore Sun’s initial report claimed a “power-intensive” halftime show might have been a factor.
  • February 4, 2013: The FBI makes a statement saying that terrorism was not a factor.
  • February 4, 2013: We learn that such a failure may have been predicted in 2012.
  • February 4, 2013: Of course the outage doesn’t really matter. A little game delay, and it is a “boon for super bowl ratings“, the most critical thing to the corrupt NFL.
  • February 4, 2013: By this point, people are pretty sure hackers didn’t do it. They probably didn’t, but they could have!
  • February 4, 2013: Oh sorry, it could still be hackers. The Christian Science Monitor actually covers the likely reason, yet that isn’t sexy. A Chinese hacker ploy seems more reasonable to cover…
  • February 4, 2013: Not only Anonymous, but ‘Rustle League’ claimed to hack the super bowl. A day later we learn that the notorious Rustle League trolls were … wait for it … trolling.
  • February 5, 2013: Officials at Entergy, who provide power for that property clearly state “There was no Internet or remote computer access to the piece of equipment inside the stadium that sensed an abnormality in the electrical system and partially cut power to the Superdome…”
  • February 6, 2013: While the Superdome was not hacked on Sunday, the U.S. Federal Reserve was.
  • February 8, 2013: Multiple sources begin covering the real reason for the Superdome outage.
  • February 8, 2013: We now have a good idea what caused it, but let the blame game begin. Manufacturer error, or user error?
  • March 21, 2013: The official Entergy report is released (PDF), giving a very technical analysis and summary of what happened. Everyone but conspiracy theorists can sleep well.

The reason for this blog is that Chris Sistrunk, a noted SCADA security researcher, pinged me the other day about the report. We were curious if the failure described could be considered a vulnerability by OSVDB standards. After reading the report and asking him several questions, this seems like a simple case of device malfunction / failure. Quoting relevant bits from the report:

During the testing, behavior of the relay was not entirely consistent with the function described in the instruction manual. Under some circumstances, when the current exceeded the trip setting and then decreased below the trip setting even after the timer had expired, the relay did not operate.

This instability was observed on all of the relays tested (during testing by this engineer, ENOI, and others in coordination with S&C at Vault 24 on March 1, 2013), including the subject (Bay 8) relay and two identical (exemplar) relays. Behavior of the device in a manner contrary to the published functionality of the device constitutes a design defect.

An interesting read, and a glimpse into the world of SCADA / ICS. While the prevailing notion was that the outage was due to hackers, the reality is far more mundane. We could certainly learn from this case, along with thousands of others… but who am I kidding. News covering the mundane and real doesn’t sell.

We’re “critical”, not “immature”.

Recently, we got feedback via Twitter that we come across as “immature”. On the surface, perhaps. Not all of our Tweets are critical of CVE though. I replied pretty quickly that said criticism is also us “pushing for them to improve since so much of the industry relies on them.” When I Tweeted that, a post to the CVE Editorial Board wasn’t public on the web site, so I couldn’t quote it. But it was a great and timely example of one way our team is pushing CVE to improve.

That said, let me better explain our criticism. It isn’t the first time, it won’t be the last time, but I am not sure if some of these thoughts have been published via blog before. Put simply, we would not be so critical of CVE if it weren’t a crumbling cornerstone of the Information Security industry. Countless organizations and products use CVE as a bible of public vulnerabilities. A majority of security technology, including firewalls, vulnerability scanners, IDS, IPS, and everything else, is built on their database. Our industry assumes they are doing their job, doing their best, and cataloging public vulnerabilities. That simply is not the case, and it hasn’t been for more than a year. Everyone uses it as a benchmark, trusts it, and relies on it. Yet no one questions it except us. Last I checked, our profession was built on “trust but verify”. We verify; thus, we’re critical.

That said, we have a long history of sending corrections, feedback, and improvement ideas to all of the major VDBs, including CVE, BID, Secunia, ISS, SecTracker, EDB, PacketStorm, and more. We currently have a great relationship with ISS and EDB. We have had a great-to-courteous relationship with CVE. SecTracker has been receptive of our feedback in the past, but given their minimal output we don’t try to provide cross-references to them anymore. PacketStorm is bad about approving our comments on their published disclosures. BID is, and has been, a lost cause for almost a decade. Further, we have decent relationships with US-CERT, IBM PSIRT, CERT (CM), and other companies’ teams. We give continued feedback to some of these organizations that is designed to help their process, help their disclosures, and thus help the industry. I mention this because, for the most part, a majority of them are competitors to us. We have a commercial model; it is the only way to fund the database. Despite that, and a decade of disillusionment with the ‘open source’ model, we still give away a significant portion of our data. Until recently, our biggest competitor in licensing our data was ourselves, as potential customers would freely admit they could take our data from osvdb.org. Next to ourselves? CVE and NVD. It is astounding and scary how many companies will pass up superior vulnerability intelligence because what they are using is free. In many cases, they tell their customers they provide the best intelligence, provide the most security, and a wide variety of other platitudes. In reality, they don’t want to spend a sliver of a fraction of their profits to truly help their customers. It is a level of greed and unethical behavior that is absolutely disgusting. If I could expose them, I would, but they fall under dreaded NDAs.

Point is, we strive to push all of these organizations to be better. Jake and I gave a presentation back in 2005 at CanSecWest where we said that VDBs need to evolve. They still do. Ten years later, we’re still pushing them to do so while they resist with every ounce of their being. Fortunately, we have been pushing ourselves to be better during that time, and it shows. We’re no more critical of the other VDBs than we are of ourselves. It doesn’t help us one bit. In fact, it only serves to hurt us. Yet, it is the right thing to do for the industry, so we do it.

SQLi Disclosures and the Last Five Years (Transparent Statistics)

Nothing like waking up to a new article purporting to show vulnerability statistics and having someone ask us for comment. But hey, we love giving additional perspective on such statistics, since they are often presented without proper context and disclaimers. This morning, the new article comes from Help Net Security and is titled “SQL injection vulnerabilities surge to highest levels in three years“. It cites “DB Networks’ research”, which did the usual: parsed NVD data. As we all know, that data comes from CVE, which is a frequent topic of rants on Twitter and the occasional blog. Cliff notes: CVE does not promise comprehensive coverage of public disclosures. They openly admit this, and Steve Christey has repeatedly said that CVE may not be the best for statistics, even as far back as 2006. Early in 2013, Christey again publicly stated that CVE “can no longer guarantee full coverage of all public vulnerabilities.” I bring this up to remind everyone, again, that it is important to add such disclaimers about your data set, and ultimately your published research.

Getting back to the statistics and DB Networks, the second thing I wondered, after their data source, was who they were and why they were doing this analysis (SQLi specifically). I probably shouldn’t be surprised to learn this comes over a year after they announced an appliance that blocks SQL injection attacks. This too should be disclaimed in any article covering their analysis, as it adds perspective on why they are choosing a single vulnerability type, and may also explain why they chose NVD over other data sources (since it fits more neatly into their narrative). Following that, they paid Ponemon to help them conduct a study on the threat of SQL injection. Or wait, was it two of them, the second with a bent towards retail breaches? Here are the results of the first one it seems, which again shows that they have a pretty specific bias toward SQL injection.

Now that we know they have a vested interest in the results of their analysis coming out a specific way, let’s look at the results. First, the Help Net Security article does not describe their methodology, at all. Reading their self-written, paid-for news announcement (PR Newswire) about the analysis, it becomes very clear this is a gimmick for advertising, not actual vulnerability statistics research. It’s 2015, years after Steve Christey and I have both ranted about such statistics, and they don’t explain their methodology. This makes it apparent that “CVE abstraction bias” is possibly the biggest factor here. I have blogged about using CVE/NVD as a dataset before, because it contains one of the biggest pitfalls in such statistic generation.

Rather than debunk these stats directly, since it has been done many times in the past, I can say that they are basically meaningless at this point. Even without their methodology, I am sure someone can trivially reproduce their results and figure out whether they abstracted per CVE, or per actual SQLi mentioned. As a recent example, CVE-2014-7137 is a single entry that actually covers 54 distinct SQL injection vulnerabilities. If you count just the CVE candidate versus the vulnerabilities that may be listed within it, your numbers will vary greatly. That said, I will assume that their results can be reproduced, since we know their data source and their bias in desired results.
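As a rough sketch of how much the abstraction choice alone can swing a per-year SQLi count (the 54 distinct issues under CVE-2014-7137 are real, as noted above; the other records are placeholders):

# Each record: (identifier, number of distinct SQLi flaws it covers).
records_2014 = [
    ("CVE-2014-7137",      54),  # one CVE entry covering 54 distinct injection points
    ("hypothetical-cve-a",  1),  # placeholder single-issue entries
    ("hypothetical-cve-b",  1),
]

count_per_cve  = len(records_2014)                # 3: counting CVE entries (NVD-style)
count_per_flaw = sum(n for _, n in records_2014)  # 56: counting distinct vulnerabilities

print(count_per_cve, count_per_flaw)

Neither number is “wrong” on its own, but statistics generated one way cannot be compared to statistics generated the other way, which is exactly why an unexplained methodology matters.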

With that in mind, let’s first look at what the numbers look like when using a database that clearly abstracts those issues, and covers a couple thousand sources more than CVE officially does:

[Image: sqli-five-year-view — SQL injection disclosures over the last five years, per OSVDB]

“SQL Injection Vulnerabilities Surge to Highest Levels in Three Years” is the title of DB Networks’ press release, and is summarized by Help Net Security as “last year produced the most SQL vulnerabilities identified since 2011 and 104% more than were identified in 2013.” In reality (or at least, using a more comprehensive data set), we see that isn’t the case, since the available statistics a) don’t show 2014 being higher than 2011 and b) show that 2011 doesn’t hold a candle to 2010. No big surprise really, given that we actually do vulnerability aggregation while NVD and DB Networks do not. But really, I digress. These ‘statistics’ are nothing more than a thinly veiled excuse to further advertise themselves. Notice in the press release they go from the statistics that help justify their products right into the third paragraph, quoting the CEO as being “truly honored to be selected as a finalist for SC Magazine’s Best Database Security Solution.” He follows that up with another line calling out their specific product that helps “database security in some of the world’s largest mission critical datacenters.”

Yep, these statistics are very transparent. They are based on a convenient data source, maintained by an agency that doesn’t actually aggregate the information and doesn’t have the experience of their data benefactor (CVE). They are advertised with a single goal in mind: selling their product. The fact they use the word “research” in the context of generating these statistics is a joke.

As always, I encourage companies and individuals to keep publishing vulnerability statistics. But I stress that it should be done responsibly. Disclaim your data source, explain your methodology, be clear if you want the results to come out a certain way (bias is fine, just disclaim it), and realize that different data sources will produce different results. Dare to use multiple sources and compare the results, even if they don’t fully back your desired opinion. Why? Because if you disclaim your data sources and results, the logical and simple conclusion is that you may still be right. We just don’t have the perfect vulnerability disclosure data source yet. Fortunately, some of us are working harder than others to find that unicorn.

Microsoft’s latest plea for CVD is as much propaganda as sincere.

Earlier today, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for “better coordinated vulnerability disclosure“.

Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft’s transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which is a benefit to both Microsoft and their customers world-wide. This post is not meant to undermine their efforts at large, but rather to point out that propaganda has been a valuable tool for the company since day one. I will preface this with a reminder that this is not a new issue. I personally blogged about this as far back as 2001, after Scott Culp (Microsoft at the time) wrote a polarizing piece about “information anarchy” that centered around disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn’t endear them to the researchers providing free vulnerability information to them. Despite that, it took almost ten years for Microsoft to drop the term “responsible” disclosure (also biased against researchers) in favor of “coordinated” disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially in the last three to five years. But… it is on the back of a confrontational policy toward researchers.

Reading yesterday’s blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these if you aren’t a masochist and spend most of your waking time buried in vulnerability aggregation and related topics.

In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).

Not sure I have seen “CVD” as a formal initialism until now, which is interesting. After trying to brand “information anarchy” and pushing the “responsible disclosure” term, good to see you embrace a better term.

Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

And this line, early in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed their ‘ivory tower’, so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward a vast majority of vulnerabilities reported to you. Microsoft has done well in creating a culture of “report vulnerabilities to us for free, for the honor of being mentioned in one of our advisories”. And I get that! Being credited in a Microsoft advisory is itself advertising of a researcher’s talent. However… the researchers chasing that honor are a minority in the greater picture.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.

Oh sorry, let me qualify: your black-and-white tower. This absolutely does work for some vendors, especially those who have a poor history of dealing with vulnerability reports. You may not have been one of them for the last 10 years, but you once were. Back in the late ’90s, Microsoft had a reputation for being horrible when dealing with researchers. No vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.

It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.

This is a great point. But, let’s read on and offer some context using your own words…

Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.

Wait, if only a very small number of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix… why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why would they care, if you clearly state that coordination doesn’t matter and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.

CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.

And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ’rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures and should perhaps invest in a solution for such vulnerability intelligence? Yes, blatant sales opportunity, but they are desperately begging for it given this statement. =)

[Update: Apparently Microsoft is unhappy over Issue 123, which was auto-published on January 11, as opposed to Issue 118 linked above, auto-published on December 29. So they are correct on the two days, but it is curious they aren’t complaining over 118 at the same time, when both are local privilege escalation vulnerabilities.]

One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is), and that not every vulnerability is equal (also true), but he also glosses over the fact that Google is in the same boat. A little over four years ago, the Google security team posted a blog talking about “rebooting” responsible disclosure and said this:

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

To be fair, Google also did not publish a timeline of any sort with this disclosure. We don’t know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn’t mention January specifically and only asked for more time, maybe it was fair that you kept to your schedule. One of the two parties should publish all of the correspondence now. What’s the harm, the issue is public! Come on… someone show their cards and prove the other wrong. Back to Microsoft’s blog…

What’s right for Google is not always right for customers.

This is absolutely true. But you forgot the important qualifier: what is right for Microsoft is not always right for customers.

For example, look at CVE-2010-3889 (heavily referenced), aka “Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation”. This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer as to whether this is even patched, four years later. That CVE identifier doesn’t seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear power plant go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give them credit for, and this is a dupe of CVE-2010-2743? Why does it take a third party, writing a blog on a whim, to figure this out four years later?

It is a zero sum game where all parties end up injured.

What does this even mean, other than propaganda? It is rarely, if ever, a case where “all parties” are injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, usually that party is not ‘injured’ in doing so. That is simple fact.

Betz’ blog goes on to quote the Microsoft CVD policy which states:

Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.

Perhaps you should qualify that statement, as US-CERT has a 45-day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:

Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.

Note that it does not carve out an exception for “the vendor asks for more time”. That is the United States government saying a vendor gets 45 days to patch, with rare exception. Oh wait, Mr. Betz, before you go quoting “changes to core operating system components”, I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 MB binary (on Windows 7) has been the cause of a lot of grief for Windows users all by itself. Given that history, you cannot say that changes to that file meet the US-CERT criteria.

Finally, this isn’t the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card and Google certainly seems more aggressive with their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…

Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple would get blindsided in this disclosure, and it would not be ‘coordinated’ or ‘responsible’, and would qualify as ‘information anarchy’ as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.

Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While it may have been fixed in the base code, it can easily take somewhere between months and a decade for vendors to learn about and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.

Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure, going to a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal right?

Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would disagree strongly. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the ones you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.

Finally, do I even need to go into the absolute mess that you call the “Advanced Notification Service” (ANS)? In case readers aren’t aware, this is not a single program. This is several different programs with various names, like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program was changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of which was founded on Microsoft giving advanced, and sometimes detailed, vulnerability information to questionable companies that may not be friendly parties.

The entire notion of “coordinated” disclosure went out the window, as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and it speaks strongly against any intent outlined in yesterday’s blog post.

While Microsoft has made great efforts to improve their security process, it is disingenuous to call this post anything but propaganda.

CVE Is Baffling Some Nights

CVE, managed by MITRE, a ‘sole-source’ government contractor that gets as much as one million dollars a year from the government (or more) to run the project, is a confusing entity. Researchers who have reached out to CVE for an assignment, or for clarification on current assignments, have gone 10 days without an answer (as of late night, 2014-11-15). Yet look at their actual assignments over the past week:

X N 924 Nov 10 cve@mitre.org (22K) [CVENEW] New CVE CANs: 2014/11/10 06:00 ; count=17
X N 931 Nov 11 cve@mitre.org (25K) [CVENEW] New CVE CANs: 2014/11/11 17:00 ; count=32
X N 932 Nov 11 cve@mitre.org (18K) [CVENEW] New CVE CANs: 2014/11/11 18:00 ; count=18
X N 938 Nov 12 cve@mitre.org (7514) [CVENEW] New CVE CANs: 2014/11/12 11:00 ; count=5
X N 962 Nov 13 cve@mitre.org (12K) [CVENEW] New CVE CANs: 2014/11/13 10:00 ; count=9
X N 986 Nov 13 cve@mitre.org (7139) [CVENEW] New CVE CANs: 2014/11/13 19:00 ; count=4
X N 995 Nov 14 cve@mitre.org (7191) [CVENEW] New CVE CANs: 2014/11/14 10:00 ; count=3
X N 1015 Nov 14 cve@mitre.org (6076) [CVENEW] New CVE CANs: 2014/11/14 21:00 ; count=3
X N 1035 Nov 15 cve@mitre.org (5859) [CVENEW] New CVE CANs: 2014/11/15 15:00 ; count=2
X N 1037 Nov 15 cve@mitre.org (9615) [CVENEW] New CVE CANs: 2014/11/15 16:00 ; count=7
X N 1043 Nov 15 cve@mitre.org (9034) [CVENEW] New CVE CANs: 2014/11/15 19:00 ; count=4
X N 1045 Nov 15 cve@mitre.org (7539) [CVENEW] New CVE CANs: 2014/11/15 20:00 ; count=3
X N 1046 Nov 15 cve@mitre.org (5885) [CVENEW] New CVE CANs: 2014/11/15 21:00 ; count=2

Impressive! Until you read between the lines. Monday: 17 entries. Tuesday, which is Microsoft (32) / Adobe (18) release day, and those numbers are obvious, as they only cover those two vendors. Then 5 on Wednesday, 13 on Thursday, 6 on Friday… and then 18 on the weekend? Why is a government contractor, with a long history of not working or answering mail on the weekend, doing what appears to be overtime on a Saturday?

Meanwhile, on the 10th we published 32 entries, on the 11th 100, on the 12th 92, on the 13th 56, on the 14th 42, and on the 15th 11. That is 109 entries this week from CVE, 50 of them (almost half) Microsoft and Adobe, against 337 entries from us over those same days. That doesn’t count the backfill of historical entries, from those ‘old days’ back in earlier 2014 or 2013, that we are constantly doing.
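For anyone who wants to check that arithmetic, the per-day totals can be tallied straight from those subject lines. A minimal sketch in Python (the file name cvenew-subjects.txt is hypothetical; it simply holds the [CVENEW] subject lines, one per line):

import re
from collections import defaultdict

# Sum the per-day totals from subject lines like:
#   "[CVENEW] New CVE CANs: 2014/11/10 06:00 ; count=17"
per_day = defaultdict(int)

with open("cvenew-subjects.txt") as fh:  # hypothetical capture of the mail subjects
    for line in fh:
        m = re.search(r"New CVE CANs: (\d{4}/\d{2}/\d{2}) \d{2}:\d{2} ; count=(\d+)", line)
        if m:
            per_day[m.group(1)] += int(m.group(2))

for day in sorted(per_day):
    print(day, per_day[day])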

Tonight, when matching up the Nov 15 CVE entries, we already had 100% of the CVE assignments. Remind me, where exactly is the value of CVE? They are assigning these identifiers in advance, that is obvious. But most of these disclosures are simple and straightforward. The identifiers aren’t being used to coordinate among multiple vendors. The researchers or vendors are including the CVE identifier *before* CVE actually publishes it.

What led me to this post is that CVE is actually working on a weekend, which is very odd. Unless you mail Steve directly, you generally don’t hear back from CVE until later in the week. OSVDB / RBS has outstanding mail to both Steve and CVE regarding previous assignments and other things, unanswered for 10 or more days as of now. The entire purpose of CVE is to provide an ID for coordination and clarity. When they ignore such mail, especially from a ‘trusted’ source, it speaks poorly of them. Given the level of government funding they receive, how are they not keeping up with disclosures throughout the week, instead turning to a Saturday?

And please remember, the Saturday CVE assignments mentioned above won’t appear on CVE’s site for another day, and won’t be in NVD for at least 24 hours. Once NVD gets them, they won’t have a CVSS score or CPE data for a bit after that. By a ‘bit’ I mean anywhere from a few hours to a few weeks.

This is fail on top of fail. And your security solutions are built on top of this. Yeah, of course this is a losing battle.

The Five High-level Types of Vulnerability Reports

Based on a Twitter thread started by Aaron Portnoy, in which @4Dgifts asked why people would debunk vulnerability reports, I offer this quick high-level summary of what we see and how we handle it.

Note that OSVDB uses an extensive classification system (one that is very close to being overhauled for more clarity and granularity), in addition to CVSS scoring. Part of our classification system allows us to flag an entry as ‘not-a-vuln’ or ‘myth/fake’. I’d like to briefly explain the differences, and how they fit into the bigger picture. When we process vulnerability reports, we usually only have time to go through the information disclosed. In some cases we will spend extra time validating or debunking the issue, as well as digging up information the researcher left out, such as the vendor URL, affected version, script name, parameter name, etc. That leads to the high-level types of disclosures below (with a brief sketch of how such flags might look after the list):

  • Invalid / Not Enough – We are seeing cases where a disclosure doesn’t have enough actionable information. There is no vendor URL, the stated product name doesn’t come up on various Google searches, the proof-of-concept (PoC) provided is only for one live site, etc. If we can’t replicate it or dig up the vendor in five minutes, we have to move on.
  • Site-specific – Some of the disclosures from above end up being specific to one web site. In a few rare cases, they impact several web sites due to the companies all using the same web hosting / design shop that re-uses templates. Site-specific does not qualify for inclusion in any of the big vulnerability databases (e.g. CVE, BID, Secunia, X-Force, OSVDB). We aggregate vulnerabilities in software and hardware that is available to multiple consumers, on their premises. That means that big offerings like Dropbox or Amazon or Facebook don’t get included either. OSF maintains a separate project that documents site-specific issues.
  • Vulnerability – There is enough actionable information to consider it valid, and nothing that sets off warnings that it may be an issue. This is the run-of-the-mill event we deal with in large volumes.
  • Not a Vulnerability – While a valid report, the described issue is just considered a bug of some kind. The most common example is a context-dependent ‘DoS’ that simply crashes the software, such as a media player or browser. The issue was reported to crash the software, so that much is valid. But in ‘exploiting’ the issue, the attacker has gained nothing; they have not crossed any privilege boundary, and the issue can quickly be recovered from. Note that if the issue is a persistent DoS condition, it becomes a valid issue.
  • Myth/Fake – This was originally created to handle rumors of older vulnerabilities that simply were not true. “Do you remember that remote Solaris 2.5 bug in squirreld??” Since then, we have started using this classification more to denote when a described issue is simply invalid. For example, the researcher claims code execution and provides a PoC that only shows a DoS. Subsequent analysis shows that it is not exploitable.
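As mentioned above, here is a minimal sketch of how those dispositions might be carried as a flag on an entry. The names are hypothetical illustrations, not OSVDB’s actual schema:

from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    """High-level disposition of a vulnerability report (hypothetical labels)."""
    INVALID_NOT_ENOUGH = "invalid / not enough"
    SITE_SPECIFIC = "site-specific"
    VULNERABILITY = "vulnerability"
    NOT_A_VULN = "not-a-vuln"
    MYTH_FAKE = "myth/fake"

@dataclass
class Entry:
    title: str
    disposition: Disposition
    note: str = ""

# Example: a crash-only 'DoS' in a media player that crosses no privilege boundary
entry = Entry(
    title="ExamplePlayer malformed file crash",  # hypothetical product
    disposition=Disposition.NOT_A_VULN,
    note="Crash only; no privilege boundary crossed and quick recovery.",
)
print(entry.disposition.value)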

Before you start sending emails: as @4DGifts reminds us, you can rarely say with 100% assurance that something isn’t exploitable. We understand and agree with that completely. But it is also not our job to prove a negative. If a researcher is claiming code execution, then they must provide the evidence to back their claim: either an additional PoC that is more than a stability crash, or a full explanation of the conditions required to exploit it. Oftentimes, when a researcher does this, we see that while it is an issue of some sort, it may not cross privilege boundaries. “So you need admin privs to exploit this…” and “If you get a user to type that shell code into a prompt on local software, it executes code…” Sure, but that doesn’t cross privilege boundaries.

That is why we encourage people like Aaron to help debunk invalid vulnerability reports. We’re all about accuracy, and we simply don’t have time to test and figure out every vulnerability disclosed. If it is a valid issue but requires dancing with a chicken at midnight, we want that caveat in our entry. If it is a code execution issue, but only with the same privileges the attacker already has, we want to properly label that too. We do not use CVSS to score bogus reports as if they were valid. Instead, we reflect that they do not impact confidentiality, integrity, or availability, which gives them a 0.0 score.
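For reference, under the CVSSv2 base equation a vector with no confidentiality, integrity, or availability impact works out to 0.0 no matter how trivially ‘exploitable’ it is. A quick sketch of that calculation:

def cvss2_base(av, ac, au, c, i, a):
    """CVSSv2 base score equation (metric weights from the v2.0 specification)."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N/C:N/I:N/A:N -- remotely reachable and trivial to trigger,
# but with zero impact the impact sub-score is 0, so the whole score collapses.
print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.0, i=0.0, a=0.0))  # prints 0.0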

The Scraping Problem and Ethics

[2014-05-09 Update: We’d like to thank both McAfee and S21sec for promptly reaching out to work with us and to inform us that they are both investigating the incident, and taking steps to ensure that future access and data use complies with our license.]

Every day we get requests for an account on OSVDB, and every day we have to turn more and more people away. In many cases the intended use is clearly commercial, so we tell them they can license our data via our commercial partner Risk Based Security. While we were a fully open project for many years, the volunteer model we wanted didn’t work out. People wanted our data, but largely did not want to give their time or resources. A few years back we restricted exports and limited the API due to ongoing abuse from a variety of organizations. Our current model is designed to be free for individual, non-commercial use. Anything else requires a license and paying for the access and data usage. This is the only way we can keep the project going and continue to provide superior vulnerability intelligence.

As more and more organizations rely on automated scraping of our data in violation of our license, it has forced us to restrict some of the information we provide. As the systematic abuse rises, one of our only options is to further restrict the information while trying to strike a balance between helping the end user and crippling commercial (ab)use. We spend about half an hour a week looking at our logs to identify abusive behavior and block it, to help curb those using our data without a proper license. In most cases we simply identify and block them, and move on. In other cases, it is a stark reminder of just how far security companies will go to take our information. Today brought us two different cases which illustrate what we’re facing, and why their unethical actions ultimately hurt the community, as we further restrict access to our information.

This is not new in the VDB world. Secunia has recently restricted almost all unauthenticated free access to their database, while SecurityFocus’ free BID database continues to carry only a fraction of the information they make available to paying customers. Quite simply, the price of aggregating and normalizing this data is high.

In the first case, we received a routine request for an account from a commercial security company, S21sec, that wanted to use our data to augment their services:

From: Marcos xxxxxx (xxxxxxx@s21sec.com)
To: moderators osvdb.org
Date: Thu, 16 May 2013 11:26:28 +0200
Subject: [OSVDB Mods] Request for account on OSVDB.org

Hello,

I’m working on e-Crime and Malware Research for S21Sec (www.s21sec.com), a lead IT Security company from Spain. I would like to obtain an API key to use in research of phishing cases we need to investigate phishing and compromised sites. We want to use tools like “cms-explorer” and create our own internal tools.

Regards,

S21sec

*Marcos xxxxxx*
/e-Crime///

Tlf: +34 902 222 521
http://www.s21sec.com , blog.s21sec.com

As with most requests like this, they received a form letter reply indicating that our commercial partner would be in touch to figure out licensing:

From: Brian Martin (brian opensecurityfoundation.org)
To: Marcos xxxxxx (xxxxxxx@s21sec.com)
Cc: RBS Sales (sales riskbasedsecurity.com)
Date: Thu, 16 May 2013 15:26:04 -0500 (CDT)
Subject: Re: [OSVDB Mods] Request for account on OSVDB.org

Marcos,

The use you describe is considered commercial by the Open Security
Foundation (OSF).

We have partnered with Risk Based Security (in the CC) to handle
commercial licensing. In addition to this, RBS provides a separate portal
with more robust features, including an expansive watch list capability,
as well as a considerably more powerful API and database export options.
The OSVDB API is very limited in the number of calls due to a wide variety
of abuse over the years, and also why the free exports are no longer
available. RBS also offers additional analysis of vulnerabilities
including more detailed technical notes on conditions for exploitation and
more.

[..]

Thanks,

Brian Martin
OSF / OSVDB

He came back pretty quickly saying that he had no budget for this, and didn’t even wait to get a price quote or discuss options:

From: Marcos xxxxxx (xxxxxxx@s21sec.com)
Date: Mon, May 20, 2013 at 10:55 AM
Subject: Re: [OSVDB Mods] Request for account on OSVDB.org
To: Brian Martin (brian opensecurityfoundation.org)
Cc: RBS Sales (sales riskbasedsecurity.com)

Thanks for the answer, but I have no budget to get the license.

We figured that was the end of it, really. Instead, jump to today, when we noticed someone scraping our data and trying, to a limited degree, to hide their tracks: standard enumeration of our entries, but with forged, rotating user-agent strings:

88.84.65.5 - - [07/May/2014:09:37:06 -0500] "GET /show/osvdb/106231 HTTP/1.1" 200 20415 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:26.0) Gecko/20100101 Firefox/26.0"
88.84.65.5 - - [07/May/2014:09:37:06 -0500] "GET /show/osvdb/106232 HTTP/1.1" 200 20489 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"
88.84.65.5 - - [07/May/2014:09:37:07 -0500] "GET /show/osvdb/106233 HTTP/1.1" 200 20409 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
88.84.65.5 - - [07/May/2014:09:37:08 -0500] "GET /show/osvdb/106235 HTTP/1.1" 200 20463 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.76 Safari/537.36"

Visiting that IP told us who it was:

[Image: s21-warn]

So after requesting data, and hearing that it would require a commercial license, they figured they would just scrape the data and use it without paying: 3,600 accesses between 09:18:30 and 09:43:19.
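Spotting this kind of enumeration in the logs does not take much. A minimal sketch, assuming an Apache combined-format access log (the file name and threshold are arbitrary choices for illustration):

import re
from collections import Counter

# Count requests per client IP in an Apache combined-format access log and
# flag anything that looks like bulk enumeration rather than a person browsing.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[')
THRESHOLD = 500  # arbitrary: thousands of sequential requests in minutes is not a human

hits = Counter()
with open("access.log") as fh:  # hypothetical log file name
    for line in fh:
        m = LOG_LINE.match(line)
        if m:
            hits[m.group(1)] += 1

for ip, count in hits.most_common():
    if count >= THRESHOLD:
        print(f"{ip}: {count} requests -- candidate for a block and a license follow-up")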

The second case, and a substantially more offensive one, is that of security giant McAfee. They approached us last year about obtaining a commercial feed of our data, which culminated in a one-hour phone call with someone who ran an internal VDB there. On the call, we discussed our methodology and our data set. While we had superior numbers to any other solution, they were hung up on the fact that we weren’t fully automated. The fact that we did a lot of our processing manually struck them as odd. On top of that, we employed fewer people than they did to aggregate and maintain the data. McAfee couldn’t wrap their heads around this, saying there was “no way” we could maintain the data we do. We offered them a free 30-day trial of our entire data set, and asked them to come back to us if they still thought it was lacking.

They didn’t even give it a try. Instead they walked away thinking our solution must be inferior. Jump to today…

161.69.163.20 - - [04/May/2014:07:22:14 -0500] "GET /90703 HTTP/1.1" 200 6042 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
161.69.163.20 - - [04/May/2014:07:22:16 -0500] "GET /90704 HTTP/1.1" 200 6040 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
161.69.163.20 - - [04/May/2014:07:22:18 -0500] "GET /90705 HTTP/1.1" 200 6039 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
161.69.163.20 - - [04/May/2014:07:22:20 -0500] "GET /90706 HTTP/1.1" 200 6052 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"

They made 2,219 requests between 06:25:24 on May 4 and 21:18:26 on May 6. Excuse us, but you clearly didn’t want to try our service back then. If you would like to give it a shot now, we kindly ask that you contact RBS so you can do it using our API, customer portal, and/or exports as intended.

Overall, it is entirely frustrating and disappointing to see security companies that sell their services on reputation and integrity, and that claim to have ethics, completely disregard them in favor of saving a buck.

[Image: mcafee-ethics]
