A Note on the Verizon DBIR 2016 Vulnerabilities Claims

[Updated 4/28/2016]

Verizon released their yearly Data Breach Investigations Report (DBIR) and it wasn’t too long before I started getting asked about their “Vulnerabilities” section (page 13). After I raised some highly questionable points about the vulnerability claims in last year’s report, several people felt that section did not stand up to scrutiny. With a few questions leveled at me, I was curious if Verizon and partners had learned from last year.

This year’s vulnerability data was provided by Kenna Security (formerly Risk I/O), and Verizon “also utilized vulnerability scan data provided by Beyond Trust, Qualys and Tripwire in support of this section.” So the data isn’t from a single vendor, but from at least four vendors, giving the impression that it should be well-rounded and raise fewer questions than last year.

From the report:

Secondly, attackers automate certain weaponized vulnerabilities and spray and pray them across the internet, sometimes yielding incredible success. The distribution is very similar to last year, with the top 10 vulnerabilities accounting for 85% of successful exploit traffic. While being aware of and fixing these mega-vulns is a solid first step, don’t forget that the other 15% consists of over 900 CVEs, which are also being actively exploited in the wild.

This is not encouraging, as they have 10 vulnerabilities that account for an incredible amount of traffic, and the footnoted list of CVE IDs suggests the same problems as last year. And just like last year, the report does not explain the methodology for detecting the vulnerabilities, does not include details about the generation of the statistics, and provides a loose definition of what “successfully exploited” means. Without more detail it is impossible for others to reproduce their results, and extremely difficult to explain or disclaim them as a third party reading the report. Going to the Kenna Security page about this report doesn’t really yield much clarity, but does highlight another potential flaw in the methodology:

Kenna’s Chief Data Scientist Michael Roytman was the primary author of this year’s “Vulnerabilities” chapter, analyzing a correlated threat data set that spans 200M+ successful exploitations across 500+ common vulnerabilities and exposures from over 20,000 enterprises in more than 150 countries.

It’s subtle, but notice they went through a data set that spans exploitations across “500+ common vulnerabilities and exposures”, also known as CVE. If the data is only looking for CVEs, then there is an incredibly large bias at play from the start, since they are missing at least half of the disclosed vulnerabilities. More importantly, this becomes a game of fractions that the industry is keen to overlook at every opportunity:

  • CVE represents approximately half of the disclosed vulnerabilities.
  • Vulnerability scanners and IPS/IDS don’t have signatures for all CVE IDs, so they look for some fraction of CVE.
  • Detection signatures are often flawed, leading to false positives and false negatives, meaning they are actually detecting a fraction of the CVE IDs they intend to.
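
To make the compounding effect concrete, here is a rough back-of-the-envelope sketch. The signature coverage and accuracy figures below are illustrative assumptions, not measured values; the CVE share is the rough half mentioned above. The point is only how quickly the fractions multiply down.

```python
# Illustrative only: the coverage and accuracy fractions are assumptions, not measurements.
cve_share_of_disclosures = 0.50   # CVE covers roughly half of disclosed vulnerabilities
signature_coverage_of_cve = 0.60  # assume scanners/IDS have signatures for 60% of CVE IDs
signature_accuracy = 0.80         # assume 80% of those signatures detect reliably

effective_visibility = (cve_share_of_disclosures
                        * signature_coverage_of_cve
                        * signature_accuracy)

print(f"Effective visibility into all disclosed vulnerabilities: {effective_visibility:.0%}")
# => roughly 24% of the disclosed vulnerability population under these assumptions
```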

Another crucial factor in how this data is generated is the detection of the exploits. Of the four companies contributing data, one was founded in 2009 (Risk I/O / Kenna) and another in 2006 (Beyond Trust, in the context of this discussion). That leaves Qualys (founded 1999) and Tripwire (founded 1997), who are likely the sources of the signatures that detected the vulnerabilities. For those around in the late 90s, the vulnerability landscape was very different than today, and signature-based security products from that era were rudimentary compared to today’s. Over time, most security products do not revisit older signatures to improve them unless they have to, often due to customer demand. Newly formed companies basically never go back and write signatures for vulnerabilities from 1999. So it stands to reason that the detection of these issues is based on Qualys and/or Tripwire’s detection capabilities, and the signatures detecting these vulnerabilities are likely outdated and not as well-constructed as their more recent signatures.

That leads us to ask, how many vulnerabilities are these companies really looking for? Where did the detection signatures originate and how accurate are they? While the DBIR does disclaim that the data used is a sample, they also admit “bias undoubtedly exists”. However, they don’t warn the reader of these extremely limiting caveats that put the entire data set into a perspective clearly showing strong bias. This, combined with the lack of detailed methodology for how these vulnerabilities are detected and correlated to measure ‘success’, ultimately means this data has little value other than for inclusion in pedestrian reports on vulnerabilities.

With that in mind, I can only go by what information is available. We’ll start with the concise list of the top 10 CVE IDs these four vulnerability intelligence providers say are being exploited the most, and Verizon labels as “successfully exploited”:

  1. 2015-03-05 – CVE-2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – CVE-2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2012-02-25 – CVE-2012-1054 – Puppet k5login File Symlink File Overwrite Local Privilege Escalation
  4. 2011-07-19 – CVE-2011-0877 – Oracle Enterprise Manager Grid Control Instance Management Unspecified Remote Issue (2011-0877)
  5. 2004-02-10 – CVE-2003-0818 – Microsoft Windows ASN.1 Library (MSASN1.DLL) BER Encoding Handling Remote Integer Overflows
  6. 2002-01-15 – CVE-2002-0126 – BlackMoon FTP Server Multiple Command Remote Overflow
  7. 2001-12-26 – CVE-2002-0953 – PHPAddress globals.php LangCookie Parameter Remote File Inclusion
  8. 2001-12-20 – CVE-2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-04-13 – CVE-2001-0680 – QVT/Net / Term FTP Server LIST Command Traversal Remote File Access
  10. 1999-11-22 – CVE-1999-1058 – Vermillion FTPD Long CWD Command Handling Remote Overflow DoS

This list should raise serious red flags for anyone passingly familiar with vulnerabilities. Not only do we have very odd ‘top 10’ lists from last year and this year, but there is little overlap between them. How does 2015 show a top 10 list with eight vulnerabilities carrying CVE identifiers between 1999 and 2002, meaning they were still being heavily exploited as many as thirteen years later, only to see them all drop off the list this year, replaced by a new set of 15+ year old vulnerabilities? In addition to this oddity, there are more considerations, leading to my own top 10 list of questions about their list:

  1. How does a local vulnerability based on a symlink overwrite flaw (CVE-2012-1054) make it into a top 10 list of “85% of successful exploit traffic”?
  2. How does a local vulnerability in Puppet rank #3 on this list, given the install base of Puppet as compared to Adobe or Java?
  3. If they are detecting exploits on the wire, shouldn’t we see Java, Adobe Reader, and Adobe Flash somewhere on the list? The “Slow and steady—but how slow?” section even talks about time-to-exploit for Adobe.
  4. Why doesn’t this list remotely match US-CERT’s “Top 30 Targeted High Risk Vulnerabilities” that includes vulnerabilities back to 2006, but not a single one listed above?
  5. How does a vulnerability that by all accounts is so vague that it has to be distinguished by the vendor-issued CVE ID (CVE-2011-0877) have a signature and get exploited so much?
  6. How does a vulnerability in Oracle Enterprise Manager Grid Control show up as #4, when no Oracle Database vulnerabilities appear?
  7. How do you distinguish an FTP LIST command exploit from one vendor to another? (e.g. CVE-2005-2726, CVE-2002-0558, CVE-2001-0933, CVE-2001-0680) According to the one-liner methodology, this is done via pairing SIEM data, suggesting that BlackMoon and Vermillion are that popular today.
  9. Yet, how does a remote DoS in a Windows-based FTP program that doesn't appear to have been distributed for a decade make it onto this list? Are people really conducting targeted DoS attacks against this software?
  9. Is BlackMoon FTP Server really that prevalent to be exploited so often?
  10. Or is there a problem in generating this data, which would be more easily attributed to loose signatures detecting FTP attacks regardless of vendor? (A sketch of what such a loose signature might look like follows this list.)
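
To illustrate that last question, consider what a loose, vendor-agnostic FTP signature might look like. This is a hypothetical sketch, not a rule taken from any of the contributing vendors' products; it simply shows why a generic pattern for a long CWD or a LIST traversal would fire on traffic aimed at Vermillion, BlackMoon, QVT/Net, or any other FTP daemon alike, making per-vendor CVE attribution guesswork.

```python
import re

# Hypothetical, vendor-agnostic "FTP overflow/traversal" patterns. Real IDS rules
# from the contributing vendors may differ; the point is that nothing here
# identifies which FTP server the traffic was actually aimed at.
LONG_CWD = re.compile(rb"^CWD .{200,}", re.IGNORECASE)          # e.g. CVE-1999-1058 (Vermillion)
LIST_TRAVERSAL = re.compile(rb"^LIST .*\.\./", re.IGNORECASE)   # e.g. CVE-2001-0680 (QVT/Net)

def classify(ftp_command: bytes) -> str:
    if LONG_CWD.match(ftp_command):
        return "long CWD overflow attempt (target vendor unknown)"
    if LIST_TRAVERSAL.match(ftp_command):
        return "LIST traversal attempt (target vendor unknown)"
    return "no match"

# Both of these would be logged the same way, regardless of the target product:
print(classify(b"CWD " + b"A" * 300))
print(classify(b"LIST ../../../../etc"))
```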

Figure 12 in the report, which is described as “Count of CVEs exploited in 2015 by CVE publication date”, is a curious thing to include, as the CVE publication date is very distinct from the vulnerability disclosure date. While a large percentage of CVE publication dates are within seven days of disclosure, many are not (e.g. CVE-2015-8852, disclosed 2015-03-23 with a CVE publication date of 2016-04-26), enough to make this chart questionable as far as the insight it provides. Taking the data as presented, are they really saying that only ~73 vulnerabilities with a 2015 ID were successfully exploited in 2015 across “millions of successful real-world exploitations”? Given that 40 vulnerabilities were discovered in the wild, 33 of which have 2015 CVE IDs, that means only ~40 other 2015 vulnerabilities were successfully exploited? If that is the takeaway, how is the security industry unable to stop the increasing wave of data breaches, the same kind that led to this report? Something doesn’t add up here.
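
To put a number on the CVE-2015-8852 example, here is a quick check using the dates cited above; the gap is what makes "CVE publication date" a poor proxy for when a vulnerability actually became public.

```python
from datetime import date

disclosed = date(2015, 3, 23)      # disclosure date cited above for CVE-2015-8852
cve_published = date(2016, 4, 26)  # CVE publication date cited above

lag = (cve_published - disclosed).days
print(f"CVE-2015-8852: CVE published {lag} days after disclosure")  # ~400 days
```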

While people are cheering about the DBIR disclaiming there is sample bias (and not really enumerating it), they ignore the measurement bias, don’t speak to publication bias, don’t explain the attrition bias between 2015 and 2016, or potential chaining bias. As usual, the media is happy to glom onto such reports without asking any of these questions or providing critical analysis. As an industry, we need to keep challenging metrics and statistics to ensure they are not only accurate, but provide meaning that benefits us.


4/28/2016: According to Gabe (@gdbassett), the list of CVEs in the DBIR is incorrect. He posted a new list of CVEs (mostly the same) via Twitter in a reply to Andreas Lindh, who was surprised at the top ten list of vulnerabilities as well. Gabe also confirmed afterwards that they “compared the figure CVEs (listed above) against the raw data. After removing non-confirmed breaches, they match.” He went on to link to another source showing “data” about one of the CVEs, which really doesn’t mean anything without more context. Meanwhile, Michael Roytman, who did the vulnerability section of the report, confirmed that he/Kenna would be responding to this blog with one of their own.

I hate to harp on simple transposition-style mistakes in a report, but given how much weight these numeric vulnerability identifiers carry, it seems like the list should have been double- and triple-checked. Even then, I don’t understand how someone familiar with vulnerabilities could see either list and not ask many of the same questions I did, or provide more information in the report to back the claims. That said, let’s look at Gabe’s new list of CVEs; the new entries are #4 (Pablo FTP Server) and #7 (the UPnP NOTIFY remote DoS):

  1. 2015-03-05 – 2015-1637 – Microsoft Windows Secure Channel (Schannel) RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  2. 2015-01-06 – 2015-0204 – OpenSSL RSA Temporary Key Handling EXPORT_RSA Ciphers Downgrade MitM (FREAK)
  3. 2004-02-10 – 2003-0818 – Microsoft Windows ASN.1 Library (MSASN1.DLL) BER Encoding Handling Remote Integer Overflows
  4. 2002-07-22 – 2002-1054 – Pablo FTP Server LIST Command Arbitrary Directory Listing Remote Information Disclosure
  5. 2002-01-15 – 2002-0126 – BlackMoon FTP Server Multiple Command Remote Overflow
  6. 2001-12-26 – 2002-0953 – PHPAddress globals.php LangCookie Parameter Remote File Inclusion
  7. 2001-12-20 – 2001-0877 – Microsoft Windows Universal Plug and Play NOTIFY Request Remote DoS
  8. 2001-12-20 – 2001-0876 – Microsoft Windows Universal Plug and Play NOTIFY Directive URL Handling Remote Overflow
  9. 2001-04-13 – 2001-0680 – QVT/Net / Term FTP Server LIST Command Traversal Remote File Access
  10. 1999-11-22 – 1999-1058 – Vermillion FTPD Long CWD Command Handling Remote Overflow DoS

The list gained another FTP server issue that doesn’t necessarily lead to privileges, and another remote denial of service attack, while losing the Puppet symlink (CVE-2012-1054) and the vague Oracle Enterprise Manager issue (CVE-2011-0877). All said and done, the list is just as confusing as before, perhaps more so. That gives us four FTP vulnerabilities, only one of which leads to remote code execution, and two denial of service attacks that gain no real privileges for an attacker. As Andreas Lindh points out, and I failed to highlight, having a man-in-the-middle vulnerability occupy two spots on this list is also baffling in the context of the volume of attacks stated. Also note that with the addition of CVE-2002-1054 (Pablo FTP), there are now two vulnerabilities that appear on both the DBIR 2015 and DBIR 2016 top ten CVE lists.

Hopefully the forthcoming blog from Michael Roytman will shed some light on these issues.

OSVDB: FIN

As of today, a decision has been made to shut down the Open Sourced Vulnerability Database (OSVDB), and it will not return. We are not looking for anyone to offer assistance at this point, and it will not be resurrected in its previous form. This was not an easy decision, and several of us struggled for well over ten years trying to make it work at great personal expense. The industry simply did not want to contribute to and support such an effort. The OSVDB blog will continue to be a place for providing commentary on all things related to the vulnerability world.


MITRE’s Horrible New CVE ID Scheme and Spindoctoring

Today, The Register wrote an article on MITRE’s announcement of a new CVE ID scheme, and got many things wrong about the situation. As I began to write out the errata in an email, someone asked that I make it public so they could learn from the response as well.


From The Register:

The pilot platform will implement a new structure for issuance of Common Vulnerabilities and Exposures numbers. MITRE will directly issue the identifiers and bypass its editorial board.

The old system bypassed the Editorial Board too; the Board isn’t there to give input on assignments, and hasn’t been for over 10 years. The criticism right now is that the new system was rolled out without consulting the Board, which is supposedly what the Board is there for:

The MITRE Corporation created the CVE Editorial Board, moderates Board discussions, and provides guidance throughout the process to ensure that CVE serves the public interest.

The last ID scheme change came after more than a year of discussion, debate, and ultimately a vote by the board (started 11/2/2012, ended 6/7/2013). This ID scheme change was a knee-jerk response to recent criticism (only after it became more public than the Editorial Board list), without any board member input. From The Register article:

The CVE numbers are the numerical tags assigned to legitimate verified bugs that act as a single source of truth for security companies and engineers as they seek to describe and patch problems.

No… CVE IDs are assigned to security issues, period. Alleged, undetermined, possible, or validated. Many IDs are assigned to bogus / illegitimate issues, which are generally discovered after the assignment has been made. There is no rule or expectation that IDs are only assigned to legitimate verified bugs. To wit, search the CVE database for the word “REJECTED”.
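
For anyone who wants to verify that for themselves, one quick way is to pull MITRE's full CVE export and count the rejected entries. The file name and field layout here are assumptions based on the "allitems" CSV download MITRE offered at the time, where rejected entries carry a "** REJECT **" marker in the description; adjust it to whatever export you actually have.

```python
import csv

# Assumes a local copy of MITRE's "allitems" CSV export of the CVE list.
# Rejected entries carry a "** REJECT **" marker in their description field.
rejected = 0
total = 0
with open("allitems.csv", newline="", encoding="latin-1") as fh:
    for row in csv.reader(fh):
        if not row or not row[0].startswith("CVE-"):
            continue  # skip header/comment lines in the export
        total += 1
        if "** REJECT **" in " ".join(row):
            rejected += 1

print(f"{rejected} of {total} CVE entries are marked as rejected")
```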

More importantly, the notion it is a “single source of truth” is perhaps the worst characterization of CVE one could imagine. Many companies use alternate vulnerability databases that offer more features, better consumption methods, and/or much better coverage. With MITRE falling behind other VDBs by as many as 6,000 vulnerabilities in 2015 alone, it isn’t a single source for anything more than training wheels for your vulnerability program.

The platform will exist alongside the current slower but established CVE system.

At this point, with what has been published by MITRE, this is pure propaganda in line with the platitudes they have been giving the Editorial Board for the last year. The new format does not make things easier for them in any way. The ‘old ID scheme’ was never the slowdown or choke point in their process. In fact, read their press release: they say they will be the only ones issuing federated IDs, meaning the same problem will persist in the interim. If it doesn’t, and they start issuing faster, it wasn’t the new scheme that fixed it. More important to this part of the story is that it wasn’t just about assignment speed. The last six months have seen MITRE flat out refuse assignments to more and more researchers, citing that the vulnerable software isn’t on their list of monitored products. Worse, they actually blame the Editorial Board to a degree, which is a blatant and pathetic attempt at scapegoating. The list they refer to was voted on by the board, yes. But the list given to the board was tragically small, and voting on it amounted to rearranging the deck chairs on the Titanic, so to speak.

He says MITRE is aiming for automated vulnerability identification, description, and processing, and welcomed input from board members and the security community.

This is amusing, disgusting, and absurd. MITRE, including the new person in the mix (Sain), has NOT truly welcomed input from the board. The board has been giving input, non-stop, and MITRE has been ignoring it every single time. Only after repeated mails asking for an update do they give us the next brief platitude. Sain’s title, “MITRE CVE communications and adoption lead”, is also ironic, given that just last week Sain told the board he would contact them via telephone to discuss the issues facing CVE, and never did. Instead, there was no communication, and MITRE decided to roll out a horribly designed scheme without any input from the Board or the industry.

For those who want a better glimpse into just how bad MITRE is handling the CVE project, I encourage you to read the Editorial Board traffic (and hope it is updated, as MITRE still manually runs a script to update that archive).

2015, A Record Year For Vulnerabilities

Risk Based Security released the VulnDB QuickView report that shows 2015 broke the previous all-time record for the highest number of reported vulnerabilities. The 14,185 vulnerabilities cataloged during 2015 by Risk Based Security eclipsed the total covered by the National Vulnerability Database (NVD) and CVE by over 6,000.

Risk Based Security’s newly released 2015 Year End VulnDB QuickView report shows that 20.5% of reported vulnerabilities received CVSS scores between 9.0 and 10.0, and that both the number of vulnerabilities and the CVSS scores are trending higher over the last four years.

It comes as no real surprise that Web-related vulnerabilities account for nearly 60% of the total reported in 2015, with cross-site scripting (XSS) making up 39% of those.

The VulnDB QuickView report also revealed that vulnerabilities disclosed in a coordinated fashion with the vendor rose to 42% in 2015, compared to 28% in 2014, the previous record. Another interesting fact in the report is that third-party bug bounty programs outpaced vendor-managed bounty programs 4:1 in 2015, when details were made available to the public.

Ruminations on David Weinstein’s “Ruminations on App CVEs”

David Weinstein, a researcher at NowSecure, has posted a blog titled “Ruminations on App CVEs“. Thanks to Will Dormann’s Tweet it came to our attention, and he is correct! We have opinions on this. Quoted material below is from Weinstein’s blog unless otherwise attributed.

CVE is well-positioned to play a critical role in tracking the risk level of mobile computing.

This is trivial to debate and counter in 2015. Until last year, CVE maintained a public list of sources that was virtually unchanged since its inception in 1999. After pressure from the public and the CVE Editorial Board, composed of non-MITRE industry “advisors”, CVE revised their coverage policy and shifted to a new system of ‘full’ or ‘partial’ coverage based on the vendor, product, and/or vulnerability source. On the surface, this list looks promising, but under any significant scrutiny it is utterly lacking in adequate coverage. In order to head off complaints, their webpage even qualifies ‘full coverage’ sources by using the phrasing “nearly all issues disclosed” to “allow the flexibility to potentially postpone coverage of minor issues”.

In fact, Steve Christey may disagree with your assertion. In 2013, he posted to the VIM list saying:

Unfortunately, CVE can no longer guarantee full coverage of all public vulnerabilities… we are not well-prepared to handle the full volume of CVEs for all publicly-disclosed vulnerabilities… – Steve Christey, Editor of CVE

Given the 39,280 vulnerabilities we track that do not have a CVE assignment, assignment volume is clearly a problem for them. Further, the current state of CVE assignments is a disaster, and I have been in mails with CVE staff actively encouraging them to figure out the problem, as my now 27-day wait for three different CVE assignments is getting ridiculous.

However, CVEs have largely focused on tracking server-side and related flaws and yet the security community has evolved to track client-side vulnerabilities as a critical aspect of dealing with risk.

While I can’t offer you a set of handy statistics to back this claim, know that CVE, or their CNAs, assign identifiers to a considerable number of client applications. In fact, with the release of Apple iOS 9 yesterday, the public learned of 91 _new_ vulnerabilities affecting the platform, many of them client-side. I do not believe this statement to be a fair criticism of CVE’s assignment history and policy.

While CVE has worked well for tracking system flaws in mobile operating systems, there remain quirks with regard to mobile app flaws.

There are around 90 vulnerabilities in Google Android alone that do not have CVE assignments, and that represents a fairly significant percentage overall. However, in this case we can’t blame CVE at all for this shortcoming, as the fault lies entirely with Google instead.

It seems like using CVE is the “right thing” here but MITRE has been very cryptic (in my opinion) about what qualifies as something that gets a CVE and what doesn’t. These discussions should not happen in some closed door meeting in my opinion, or at least should not end there…

First, please factor in that CVE and every other vulnerability database has gone through a world of headache dealing with researchers who do not fully understand the concept of ‘crossing privilege boundaries’. The number of “not-a-vuln” vulnerability reports we see has skyrocketed this year. Further, even very technical individuals don’t always convey their information in terms that are easily understood. They clearly see a vulnerability, but the report they send doesn’t show it to our eyes. It requires back-and-forth to figure it out and better understand the exact vulnerability. Bottom line, some are clear-cut cases and typically get a quick assignment. Other cases are murky at best.

Second, I fully agree that such a discussion should not be behind closed doors. At the very least, it should include the CVE editorial board, and ideally should include anyone in the industry who has an interest in it. If you want to bring up the discussion on the VIM mail list, which has over one hundred members, including at least one person from each of the major public vulnerability databases, please do.

Is MITRE consistently responding to other researchers’ requests for CVEs?

No. The turnaround time for requesting a CVE has gone up considerably. As mentioned above, I have been waiting for three assignments for almost one month now, while CVE assigned two others filed on the same day within hours. They are not currently consistent with the same researcher, even one familiar with CVE and their abstraction and assignment policy.

Attaching a CVE to a vulnerable app, even if it’s an old version, is actually a big part of tracking the reputation of the developer as well!

Just quoting to signal-boost this, as it is a very important comment, and absolutely true.

Who acts as a certified numbering authority (CNA) on behalf of all the app developers on the market?

Based on the current list of CNAs, there is really no CNA that covers third-party apps, regardless of platform. Also note that many CNAs are not currently fulfilling their duties properly, and I have been trying to address this with MITRE for months. There is no current policy for filing a complaint against a CNA, there is no method for warning or revoking CNA status, and MITRE has dropped the ball replying to me on several different threads on this topic.

Should Google step in here as a formal body to assist with coordinating CVEs etc for these apps?

Absolutely not. While they are positioned to be the ideal CNA, their history in CVE assignments has been dismal. Further, they have so many diverse products, each with their own mechanism for bug reporting (if one exists), own disclosure policy (if one exists), and disclosure method (which doesn’t exist for many). We can’t rely on Google to disclose the vulnerabilities, let alone act as a CNA for standardized and formalized ID assignment. Could Google fix this? Absolutely. They are obviously overflowing with brilliant individuals. However, this would require a central security body in the company that crosses all departments, and fulfills such a duty. Due to the volume of software and vulnerabilities they deal with, it would require a small team. The same kind that IBM, Cisco, and other large companies already stand up and have used for years.

Is the process MITRE established designed and prepared to handle the mountain of bugs that will be thrown at it when the community really focuses on this problem as we have?

No, as covered above. CVE is already backpedaling on coverage, and has been for a couple years. That does not give any level of comfort that they would be willing to, or even could handle such a workload.

Can the community better crowd source this effort with confirmation of vulnerability reporting in a more scalable and distributed manner that doesn’t place a single entity in a potentially critical path?

Possibly. However, in the context of mobile, Google is a better choice than crowd-sourcing via the community. The community largely doesn’t understand the standards and abstraction policy that help define CVE and its value. Many CNAs do not either, unfortunately, despite them being in the best position to handle such assignments. Moving to a model that relies on more trusted CNAs has merit, but it would also require better documentation and training from MITRE to ensure the CNAs follow the standards.

Every vendor distributing an app on the Play Store should be required to provide a security related contact e.g., security@yourdomain.com

Except, statistically, I bet that not even half of them have a domain! The app stores are full of a wide variety of applications that are little more than a hobby of one person. They don’t get a web page or a formal anything. Given the raw number of apps, that is also a tall order. Maybe apps that enjoy a certain amount of success (via download count?) would then be subjected to such rules?

Google could be a little clearer about when security@android.com should be used for organized disclosure of bugs and consider taking a stronger position in the process.

“Little clearer” is being more than generous. Remember, until a month or two ago, they had no formal place or process to make announcements of security updates. Disclosures related to Android have been all over the board, and it is clear that Google has no process around it.

MITRE and/or whoever runs CVE these days should clarify what is appropriate for CVEs so that we know where we should be investing our efforts

Absolutely, and CVE’s documentation has not been properly updated for the last decade or more. Having such documentation would also potentially cut down on their headache, as requests are made for issues that are clearly not a vulnerability. But newcomers to security research don’t have a well-written guide that explains it.

Should we pursue a decentralized CVE request process based on crowdsourcing and reputation?

I cannot begin to imagine what this would entail or what you have in mind. On the surface, won’t happen, won’t work.

Hopefully others in the industry chime in on this discussion, and again, I encourage you to take it to the CVE Editorial Board and the VIM list, to solicit feedback from more. I appreciate you bringing this topic up and getting it attention.

A quick, factual reminder on the value and reality of a “EULA”… (aka MADness)

This post is in response to the drama of the last few days, where Mary Ann Davidson posted an inflammatory blog about security researchers who send Oracle vulnerabilities while violating their End-user License Agreement (EULA… that thing you click without reading for every piece of software you install). The post was promptly deleted by Oracle, then Oracle said it was not the corporate line, and of course journalists felt obligated to cover the whole affair. You can read up on the background elsewhere, because it has absolutely no bearing on reality, which this very brief blog covers.

This is such an absurdly simple concept to grasp, yet the CISO of Oracle (among others) is oblivious to it. Part of me wants to write a scathing 8-page “someone is wrong on the Internet” blog. The other part of me says sleep is more valuable than dealing with these mouth-breathing idiots, of which Davidson is one. Sleep will win, so here are the cold hard facts and the reality of the situation. Anything else should be debated at some obscure academic conference, but we know Oracle pays big money to debate it to politicians. Think about that.

Reality #1: Now, let’s start with an age-old saying… “when chinchillas are outlawed, only outlaws will have chinchillas.” Fundamentally, the simple fact that cannot be argued by any rational, logical human, is that laws apply to law-abiding citizens. Those who break the law (i.e. criminal, malefactor, evildoer, transgressor, culprit, felon, crook, hoodlum, gangster, whatever…) do not follow laws. Those who ignore criminal law, likely do not give two fucks about civil law, which a EULA violation would fall under.

Reality #2: Researchers get access to crappy Oracle software in the process of performing their job duties. A penetration test or audit may give them temporary access, and they may find a vulnerability. If the client doesn’t mandate they keep it in-house, the researcher may opt to share it with the vendor, doing the right thing. Where exactly does the EULA fit in here? It was agreed to by the customer, not the third-party researcher. Even if there is a provision in the EULA for such a case, if the company doesn’t warn the researcher of said provision, how can they be held liable?

Reality #3: Tying back into #1 here, what are the real consequences? This is civil law, not criminal. Even when it comes to criminal law, which is a lot more clear, the U.S. doesn’t have solid extradition case law backing it. We tend to think “run to Argentina!” when it comes to evading U.S. law, when in reality you can possibly just run to the U.K. instead. Ignore the consequences; they are not relevant when it comes to the law in this context. If you focus on “oh, but the death penalty was involved”, you are not understanding Law 101.

In the case of Soering v. United Kingdom, the European Court of Human Rights ruled that the United Kingdom was not permitted under its treaty obligations to extradite an individual to the United States, because the United States’ federal government was constitutionally unable to offer binding assurances that the death penalty would not be sought in Virginia courts.

Now, consider all of the countries that have no extradition treaty with the U.S. There are a lot. How many? Think less about the volume, and more about how well-known this is… a quick Google search shows that U.S. news outlets tell us where to run! CNBC says “10 hideout cities for fugitives” and DailyFinance says “Know Where to Run to: The 5 Best Countries With No Extradition“. Not enough? Let’s look at the absolute brilliance that local news can deliver, since my search was intended to find a short list of countries with no extradition, and Wikipedia failed me. Leave it to WSFA 12 in Alabama to give us a very concise list of countries with no extradition treaty with the US! Criminals, send a spoofed email of thanks to this station for cliff-noting this shit.

These countries currently have no extradition treaty with the United States:

Afghanistan, Algeria, Andorra, Angola, Armenia, Bahrain, Bangladesh, Belarus, Bosnia and Herzegovina, Brunei, Burkina Faso, Burma, Burundi, Cambodia, Cameroon, Cape Verde, the Central African Republic, Chad, Mainland China, Comoros, Congo (Kinshasa), Congo (Brazzaville), Djibouti, Equatorial Guinea, Eritrea, Ethiopia, Gabon, Guinea, Guinea-Bissau, Indonesia, Ivory Coast, Kazakhstan, Kosovo, Kuwait, Laos, Lebanon, Libya, Macedonia, Madagascar, Maldives, Mali, Marshall Islands, Mauritania, Micronesia, Moldova, Mongolia, Montenegro, Morocco, Mozambique, Namibia, Nepal, Niger, Oman, Qatar, Russia, Rwanda, Samoa, São Tomé & Príncipe, Saudi Arabia, Senegal, Serbia, Somalia, Sudan, Syria, Togo, Tunisia, Uganda, Ukraine, United Arab Emirates, Uzbekistan, Vanuatu, Vatican, Vietnam and Yemen.

Now, can anyone arguing in favor of Davidson’s “EULA speech”, that Oracle officially disagreed with, explain how a EULA protects a company in any way, in a real-world scenario?

Quite simply, there are two major issues at play. First, the absurd idea that a EULA will protect you from anything, other than chasing Intellectual Property (IP) lawsuits against other companies. That happens a lot, to be sure. But it has no bearing, in any way, on security research.

Second, I think back to something an old drunk friend told me a few times. “Never lick a gift-whore in the mouse.” I said he was a drunk friend. Security researchers who ply their trade, find vulnerabilities in your product, report them to you, and wait for you to release a patch? Embrace them. Hug them. Pay them if you can. They are your allies… and every vulnerability they help you squash, is one less vulnerability a bad guy can use to pop your customers. No one in their right mind would ever alienate such a process.

And up again… (was re: System back down… for now…) (was re: Unexpected Downtime for OSVDB)

Yesterday, we experienced a catastrophic failure on the primary system hosting OSVDB that initially appears to be related to a drive failure. We are working with our hosting provider but have yet to determine root cause and we currently do not have an ETA as to when the site will be restored.

6/12/2015 Update: After fighting with the disks, the site has been restored and functioning. Apologies for the downtime!
6/14/2015 Update: Days after getting it fixed, the box tanked again. We’re working on it, while cursing technology.
8/3/2015 Update: The system has been back up since ~ 6/16. We just spaced it and forgot to update the blog. Apologies!

A Note on the Verizon DBIR 2015, “Incident Counting”, and VDBs

Recently, the Verizon 2015 Data Breach Investigations Report (DBIR) was released to much fanfare as usual, prompting a variety of media outlets to analyze the analysis. A few days after the release, I caught a Tweet linking to a blog from @raesene (Rory McCune) that challenged one aspect of the report. On page 16 of the report, Verizon lists the top 10 CVE IDs exploited.

[Figure: DBIR table of the top 10 CVEs exploited]

The bottom of the table says “Top 10 CVEs Exploited”, which can be interpreted many ways. The paragraph above qualifies it: the “ten CVEs account for almost 97% of the exploits observed in 2014“. This is problematic because neither fully explains what that means. Is this the top 10 detected by sensors around the world, meaning the exploits were launched and detected, with no indication of whether they were successful? Or does this mean these are the top ten exploits that were launched and resulted in a successful compromise of some form?

This gets more confusing when you read McCune’s blog, which lists out what those ten CVEs correspond to. I’ll get to the one that drew his attention, but will start with a different one. CVE-1999-0517 is an identifier for when “an SNMP community name is the default (e.g. public), null, or missing.” This is a very curious CVE identifier related to SNMP, as it is specific to a default string. Any device that has a default community name but does not allow remote manipulation represents a rather trivial information disclosure vulnerability. This begins to speak to what the chart means, as “exploit” is being used in a broad fashion and does not track to the usage many in our industry associate with it. McCune goes on to point out another on their list, CVE-2001-0540, which is an identifier for a “memory leak in Terminal servers in Windows NT and Windows 2000 allows remote attackers to cause a denial of service (memory exhaustion) via a large number of malformed Remote Desktop Protocol (RDP) requests to port 3389“. This is considerably more specific, giving us affected operating systems and the ultimate impact: a denial of service. This brings us back to why this section is in the report when part (or all) of it has nothing to do with actual breaches. After these two, McCune points out numerous other issues that directly challenge how this data was generated.

Since Verizon does not explain their methodology in generating these numbers, we’re left with our best guesses and a small attempt at an explanation via Twitter, one that leads to as many questions as answers. You can read the full Twitter chat between @raesene, @vzdbir, and @mroytman (the Risk I/O data scientist who provided the data to generate this section of the report). The two pages of ‘methodology’ Verizon provides in the report (pages 59-60) are too high-level to be useful for the section mentioned above. Even after the conversation, the ‘top 10 exploited’ list is still highly suspicious and does not seem accurate. The final bit from Roytman may make sense in Risk I/O’s world, but doesn’t to others in the vulnerability world.

That brings me to the first vulnerability McCune pointed out, that started a four hour rabbit hole journey for me last night. From his blog:

The best example of this problem is CVE-2002-1931 which gets listed at number nine. This is a Cross-Site Scripting issue in version 2.1.1 and 1.1.3 of a product called PHP Arena and specifically the pafileDB area of that product. Now I struggled to find out too much about that product because the site that used to host it http://www.phparena.net is now a gambling site (I presume that the domain name lapsed and was picked up as it got decent traffic). Searching via google for information, most of the results seemed to be from vulnerability databases(!) and using a google dork of inurl:pafiledb shows a total of 156 results, which seems low for one of the most exploited issues on the Internet.

This prompted me to look at CVE-2002-1931 and the corresponding OSVDB entry. Then I looked at the ‘pafiledb.php’ cross-site scripting issues in general, and immediately noticed problems:

[Figure: OSVDB’s paFileDB XSS entries, before cleanup]

This is a good testament to how far vulnerability databases (VDBs) have come over the years, and how our earlier data is not so hot sometimes. Regardless of vulnerability age, I want database completeness and accuracy, so I set out to fix it. What I thought would be a simple 15-minute fix took much longer. After reading the original disclosures of a few early paFileDB XSS issues, it became clear that the one visible script being tested was actually more complex, calling additional PHP files. It also became quite clear that several databases, ours included, had mixed up references and done a poor job abstracting. Next step: take extensive notes on every disclosure, including dates, exploit strings, versions affected, solution if available, and more. Here are the rough notes for some, but not all, of the issues, to give a feel for what this entails:

[Figure: rough notes on the paFileDB XSS disclosures]

Going back to this script being ‘more complex’, and to try to answer some questions about the disclosure, the next trick was to find a copy of paFileDB 3.1 or earlier to see what it entailed. This is harder than it sounds, given the age of the software and the fact that the vendor site has been gone for seven years or more. With the archive finally in hand, it became clearer what the various ‘action’ parameters really meant.

[Figure: paFileDB 3.1 source]

Going through each disclosure and following links to links, it also became obvious that every VDB missed the inclusion of some disclosures, and/or did not properly abstract them. To be fair, many VDBs have modified their abstraction rules over the years, including us. So using today’s standards along with yesterday’s data, we get a very different picture of those same XSS vulnerabilities:

[Figure: OSVDB’s paFileDB XSS entries, after cleanup]

With this in mind, reconsider the Verizon report that says CVE-2002-1931 is a top 10 exploited vulnerability in 2014. That CVE is very specific, based on a 2002-10-20 disclosure that starts out referring to a vulnerability that we don’t have details on. Either it was publicly disclosed and the VDBs missed it, it was mentioned on the vendor site somewhere (and doesn’t appear to be there now, even with the great help of archive.org), or it was mentioned in private or restricted channels, but known to some. We’re left with what that Oct, 2002 post said:

Some of you may be familiar with Pafiledb provided by PHP arena. Well they just released a new version that fixed a problem with their counting of files. Along with that they said they fixed a possible security bug involving using Javascript as a search string. I checked it on my old version and it is infact there, so I updated to the new version so the bugs can be fixed and I checked it and it no longer works.

We know it is “pafiledb.php” only because that is the base script that in turn calls the others. In reality, based on the other digging, it is very likely making a call to includes/search.php, which contains the vulnerability. Without performing a code-level analysis, we can only include a technical note with our submission and move on. Continuing the train of thought, what data collection methods are being performed that assign that CVE to an attack that does not appear to be publicly disclosed? The vulnerability scanners from around that time, many of which are still used today, do not specifically test for and exploit that issue. Rather, they look for the additional three XSS disclosed further down in the same post. All three are “pafiledb.php” exploits that call a specific action corresponding to /includes/rate.php, /includes/download.php, and /includes/email.php. It is logical that intrusion detection systems and vulnerability scanners would be looking for those three issues (likely lumped into one ID), but not the vague “search string” issue.
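
As a rough illustration of how a scanner of that era would exercise those three issues, here is a minimal reflected-XSS probe. The 'action' values are inferred from the include files named above, and the parameter name and payload are assumptions; this is a sketch of the technique, not a reproduction of any product's actual check.

```python
import requests

# Hypothetical probe for the three action-based paFileDB XSS issues described above.
# The action names are inferred from the include files (rate.php, download.php,
# email.php); the parameter name "id" and the marker payload are assumptions.
MARKER = "<script>xss_probe_1337</script>"

def probe(base_url: str) -> None:
    for action in ("rate", "download", "email"):
        resp = requests.get(
            f"{base_url}/pafiledb.php",
            params={"action": action, "id": MARKER},
            timeout=10,
        )
        reflected = MARKER in resp.text
        print(f"action={action}: {'reflected (likely vulnerable)' if reflected else 'not reflected'}")

# probe("http://example.com/pafiledb")  # only against systems you are authorized to test
```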

McCune’s observation, and his singling out of this vulnerability, is spot on. While he questioned the data in a different way, my method gives additional evidence that the ‘top 10’ is built on faulty data at best. I hope that this blog is both educational from the VDB side of things and further encourages Verizon to be more forthcoming with their methodology for this data. As it stands, it simply isn’t trustworthy.

Reviewing the Secunia 2015 Vulnerability Review (A Redux)

It’s that time of year again! Vulnerability databases whip up reports touting statistics and observations based on their last year of collecting data. It’s understandable, especially for a commercial database, to want to show why your data source is the best. In the past, we haven’t had a strong desire to whip up a flashy PDF with our take on vulnerability disclosure statistics. Instead, we’ve focused more on educating others on why their statistics are often so horribly wrong or misleading. It should come as no surprise, then, that we find ourselves forced to point out the errors in Secunia’s ways. We did the same thing last year in a long blog, giving more perspective and our own numbers for balance.

After reading this year’s report from Secunia, we could basically use last year’s blog as a template and just plug in new numbers. Instead, it is important to point out that after last year’s review, Secunia opted not to revise their methodology. Worse, they did not take any pointers from a presentation on vulnerability statistics Steve Christey (CVE / MITRE) and I did at BlackHat before last year’s report, pointing out the most common flaws in statistics and where the bias comes from. It is easy to chalk up last year’s report as being naive and not fully appreciating or understanding such statistics. This year though, they have no excuse. The numbers they report seem purposefully misleading and are a disservice to their customers and the community.

The headline of their press release and the crux of their report is that they identified 15,435 vulnerabilities in 2014. This is entirely inaccurate and completely misleading. It is fundamentally due to the same very flawed methodology they used last year, where they count a single vulnerability as many as 210 times (e.g. this year, CVE-2014-0160, aka Heartbleed). Then factor in all of the other high-profile “logo” vulnerabilities (e.g. FREAK, ShellShock), especially in OpenSSL, and the same handful of vulnerabilities gets counted hundreds or even a thousand times. To be abundantly clear, a vulnerability in a third-party library such as OpenSSL is one vulnerability. It doesn’t matter how many other products use and integrate that code; the fundamental flaw is in the library. Counting each product that implements OpenSSL as a distinct vulnerability, rather than a distinct occurrence of a vulnerability, is wrong. Worse, even if you do accept their flawed methodology, it highlights just how poor their statistics are, as OpenSSL is used by thousands of applications that Secunia doesn’t cover, even when a vendor like IBM issues numerous advisories that they miss. No matter how you cut it, their numbers are invalid.

So, poking at our database, understanding we don’t quite have a 100% mapping to Secunia but feel it is pretty close, we see that around 5,280 of our entries have a Secunia reference and 4,530 of those also have a CVE reference (compared to more than 8,500 entries we have with a CVE reference). Even accounting for a good margin of error, it still appears that Secunia does not cover almost 3,000 vulnerabilities from 2014 that are covered by CVE entries. Those familiar with this blog know that we are more critical of CVE than any other vulnerability database. If a commercial VDB with hundreds of high-dollar clients can’t even keep a 100% mapping with an over-funded, U.S. government-run VDB that is considered the “lowest common denominator” by many, that speaks volumes. I could end the blog on that note alone, because anything less than mediocrity is a disgrace. How do you trust, and pay for, a vulnerability intelligence service missing thousands of publicly disclosed vulnerabilities that are served up on a platter and available to anyone for free use and inclusion in their own database? You don’t. After all, remember this quote from the Secunia report last year?

CVE has become a de facto industry standard used to uniquely identify vulnerabilities, which have achieved wide acceptance in the security industry.

All of the above is not only important, it is absolutely critical to appropriately frame their statistics. Not only does Secunia avoid using the minimum industry standard for vulnerability aggregation, they opt to use their own methodology, which they now know beyond doubt seriously inflates their ‘vulnerability’ count. Last year, it was almost three times higher than reality. This year, it was again about three times higher than reality, as they only covered approximately 5,500 unique vulnerabilities. That is almost 3,000 fewer than CVE and 8,000 fewer than OSVDB. Any excuses for Secunia’s lacking coverage that are based on them claiming to verify vulnerabilities and “not covering vulnerabilities in beta software” are also invalid, as a majority of reported vulnerabilities are in stable software, and regularly come from trusted researchers or the vendors themselves. Regardless, it should be clear to anyone passingly familiar with the aggregation of vulnerabilities that Secunia is playing well outside the bounds of reality. Of course, every provider of vulnerability intelligence has some motive, or at least desire, to see the numbers go up year to year. It further justifies their customers spending money on the solution.

“Every year, we see an increase in the number of vulnerabilities discovered, emphasizing the need for organizations to stay on top of their environment. IT teams need to have complete visibility of the applications that are in use, and they need firm policies and procedures in place, in order to deal with the vulnerabilities as they are disclosed.” — Kasper Lindgaard, Director of Research and Security at Secunia.

While Mr. Lindgaard’s quote is flashy and compelling, it is also completely wrong, even according to Secunia’s own historical reports. If Mr. Lindgaard had read Secunia’s own reports from 2010 and 2011, he might have avoided this blunder.

This report presents global vulnerability data from the last five years and identifies trends found in 2010. The total number of vulnerabilities disclosed in 2010 shows a slight decrease of 3% compared to 2009. – Yearly Report 2010 (Secunia)

Analysing the long-term and short-term trends of all products from all vendors in the Secunia database over the last six years reveals that the total number of vulnerabilities decreased slightly in 2011 compared to 2010. – Vulnerabilities are Resilient (2011, Secunia Report)

In 2007, OSVDB cataloged 9,574 vulnerabilities, compared to 11,050 in 2006. In 2009, it cataloged 8,175 vulnerabilities, compared to 9,807 in 2008. In 2011, it cataloged 7,913 vulnerabilities, compared to 9,161 in 2010. That is three distinct years that did not show an increase at all. Taking Mr. Lindgaard’s quote into account, and comparing it to our own data, which is based on a more standard method of abstraction, we also show that it has not been a steady rise every year:

[Figure: OSVDB vulnerabilities cataloged by year]

With highly questionable statistics based on a flawed methodology, Secunia also lucked out this year, as vendors are more prudent about reporting vulnerabilities in the third-party libraries they use (e.g. OpenSSL, PHP). This plays out nicely for a company that incorrectly counts each vendor’s use of vulnerable software as a distinct issue.

Getting down to the statistics they share, we’ll give a few additional perspectives. As with all vulnerability statistics, they should be properly explained and disclaimed, or they are essentially meaningless.

In 2014, a total of 15,435 vulnerabilities were discovered in 3,870 products from 500 vendors.

As stated previously, this is absolutely false. Rather than track unique vulnerabilities, Secunia’s model and methodology is to focus more on products and operating systems. For example, they will issue one advisory for a vulnerability in OpenSSL, then dozens of additional advisories for the exact same vulnerability. The only difference is that each subsequent advisory covers a product or operating system that uses it. Above I mentioned the redundant ‘HeartBleed’ entries, but to further illustrate the point and demonstrate how this practice can lead to wildly inaccurate statistics, let’s examine the OpenSSL ‘FREAK’ vulnerability as well (29 Secunia advisories). The flaw is in the OpenSSL code, not the implementation from each vendor, so it is still that same one vulnerability. Depending on how good their coverage is, this could be more problematic. Looking at IBM, who uses OpenSSL in a wide variety of their products, we know that at least 56 are impacted by the vulnerability, which may mean that many more Secunia advisories cover them. If we had a perfect glimpse into what products use OpenSSL, it would likely be thousands. Trying to suggest that is ‘thousands of unique vulnerabilities’ is simply ignorant.
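
The counting problem is easy to see in miniature. The advisory rows below are made up for illustration, mirroring the pattern described above where one OpenSSL flaw spawns a separate advisory for every product that bundles the library; only the two CVE IDs are real.

```python
from collections import defaultdict

# Illustrative advisory data only: (advisory_id, product, underlying CVE).
advisories = [
    ("SA-1", "OpenSSL",            "CVE-2015-0204"),  # FREAK, the actual flaw
    ("SA-2", "Vendor A appliance", "CVE-2015-0204"),  # same flaw, bundled library
    ("SA-3", "Vendor B server",    "CVE-2015-0204"),
    ("SA-4", "Vendor C firmware",  "CVE-2015-0204"),
    ("SA-5", "OpenSSL",            "CVE-2014-0160"),  # Heartbleed, a second real flaw
]

by_cve = defaultdict(list)
for advisory_id, product, cve in advisories:
    by_cve[cve].append(advisory_id)

print(f"Advisories issued:      {len(advisories)}")  # 5 -- the per-product count
print(f"Unique vulnerabilities: {len(by_cve)}")       # 2 -- the actual count
```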

83% of vulnerabilities in all products had patches available on the day of disclosure in 2014.

We assume that by ‘patches’ they also mean any viable vendor-provided solution (e.g. an upgrade). Based on our aggregation, it is closer to 64% that have a solution available, and that is not limited to “the day of disclosure”; it includes vendors who have patched at any point since the issue was disclosed. The big gap comes from the number of vulnerabilities aggregated by each database since the start of 2014.

25 zero-day vulnerabilities were discovered in total in 2014, compared to 14 the year before.

Assuming we use the same definition of “zero-day” as Secunia, this is considerably under what we tracked. We flag such issues “discovered in the wild”, meaning the vulnerability was being actively exploited by attackers when it was discovered and/or disclosed. In 2014, we show 43 entries with this designation, compared to 73 in 2013. That is a sharp decrease, not increase. Part of that large swing for us comes from a rash of attacker-trojaned software distributions that were used to target mobile users in 2013. Since trojaned software from a legitimate vendor meets the criteria to be called a vulnerability (it can be abused by a bad actor to cross privilege boundaries), we include it where Secunia and other databases generally ignore it.

20 of the 25 zero-day vulnerabilities were discovered in the 25 most popular products – 7 of these in operating systems.

Since we got into the “popular products” bit in last year’s rebuttal to their statistics, we’ll skip that this year. However, in 2014 we show that 16 of those “zero day” were in Microsoft products (two in Windows, six in the bundled MSIE), and four were in Adobe products.

In 2014, 1,035 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari. That is a 42% increase from 2013.

Without searching our data, it is likely very safe to say these numbers are way off, and that Secunia’s coverage of browser security is woefully behind. In addition to not aggregating as much data, they likely don’t properly account for vulnerabilities in WebKit, which potentially affects Chrome, Safari, and Opera. Are WebKit vulnerabilities attributed to one, the other, or all three? With that in mind, let’s compare numbers. We see ~200 for Mozilla, ~250 for Microsoft IE, and not even 10 between Safari and Opera. In reality, Opera is not forthcoming with information about the vulnerabilities they patch, so their actual vulnerability count is likely much higher. That leaves us with Chrome, which had over 300 vulnerabilities in its own code. As mentioned, factor in another 100+ vulnerabilities in WebKit that may impact up to three browsers. That comes to a total of 889, far less than their claimed 1,035. That makes us curious if they are using the same flawed methodology with browser vulnerabilities as they do with libraries (i.e. counting WebKit / Blink issues multiple times, once for each browser using them). Some vulnerabilities in these browsers are also due to flaws in other 3rd-party libraries and not in the browsers’ own code (e.g. ICU4C and OpenJPEG). Finally, Google Chrome bundles Adobe Flash Player, so is Secunia adding the number of vulnerabilities addressed in Flash Player to their Google Chrome count as well, further bloating it? So many unknowns, and another reminder of why vulnerability statistics are often misleading or serve no value. Without an explanation of the methodology, we’re left with guesses and assumptions that strongly suggest this statistic can’t be trusted.

Even more offensive is that the “vulnerability intelligence experts at Secunia Research” don’t demonstrate any expertise in informing readers why the Google Chrome numbers are so high. Instead, they publish the information, withhold details of the methodology, and let the media run wild parroting their pedestrian statistics and understanding. As a lesson to the media, and Secunia, consider these additional pieces of information that help explain those numbers, and why you simply can’t compare disclosed vulnerability totals:

  1. Google releases details about the vulnerabilities fixed in Chrome, unlike some other browser vendors. For example, Opera releases almost no information. Microsoft typically releases information on the more serious vulnerabilities, and none from their internal auditing. Of the 310 Chrome vulnerabilities we tracked, 66 were CVSSv2 5.0 or less.
  2. Google has perhaps the most aggressive bug bounty program in the industry, which encourages more researchers to audit their code. How aggressive? They paid out more than US$550,000 in 2014, just for Chrome vulnerabilities.
  3. Google’s turnaround in fixing Chrome vulnerabilities is quite respectable. Since many of the vulnerabilities were reported to them and fixed promptly, it is easy to argue against the claim that it is the “least secure” browser.
  4. Google fields several teams of security experts that audit their code, including Chrome. They find and fix potential issues before disclosing them. By doing that, they are ensuring the security of the browser, yet effectively getting punished by Secunia for doing so.
  5. Of all the issues disclosed, only a tiny fraction were actually proven to be exploitable. Like many software vendors, they find it easier and more efficient to spend a small amount of time fixing a possible vulnerability rather than the larger amount of time required to prove it is exploitable.

There are many pitfalls of using simple disclosed vulnerability totals to compare products. This is explained over and over by those familiar with the vulnerability disclosure world, and should be common knowledge to ‘experts’ in vulnerability research. Instead, Secunia not only fuels media articles that aren’t critical of their report, they Tweet little snippets without any context. They also have their partners repeat their statistics, giving them more exposure, misleading the customers of other companies.

Vulnerability statistics can be useful. But they must be presented in a responsible and educated manner to serve their purpose. Those not familiar with vulnerability aggregation and generating the resulting statistics typically do a poor job, and add confusion to the matter. We like to call such people “vulnerability tourists”, and we can now add Secunia to the growing list of them.

[Update: 2015-04-01. One additional quote from the 2010 Secunia Report was added, and clarification of wording regarding the yearly vuln total chart was made.]

Vendors sure like to wave the “coordination” flag… (revisiting the ‘perfect storm’)

I’ve written about coordinated disclosure and the debate around it many times in the past. I like to think that I do so in a way that is above and beyond the usual old debate. This is another blog dedicated to an aspect of “coordinated” disclosure that vendors fail to see. Even when a vendor is proudly waving their own coordination flag, decrying the actions of another vendor, they still miss out on the most obvious.

In order to understand just how absurd these vendors can be, let’s remember what the purpose of “coordinated disclosure” is. At a high level, it is to protect their consumers. The idea is to provide a solution for a security issue at the same time a vulnerability becomes publicly known (or ideally, days before the disclosure). For example, in the link above we see Microsoft complaining that Google disclosed a vulnerability two days before a patch was available, putting their customers at risk. Fair enough, we’re not going to debate how long is enough for such patches. Skip past that.

There is another simple truth about the disclosure cycle that has been outlined plenty of times before. After a vendor patch becomes public, it takes less than 24 hours for a skilled researcher to reverse it to determine what the vulnerability is. From here, it could be a matter of hours or days before functional exploit code is created based on the complexity of the issue. Factor in all of the researchers and criminals capable of doing this, and the worst case scenario is that within very few hours a working exploit is created. Best case scenario, customers may have two or three days.

Years ago, Steve Christey pointed out that multiple vendors had released patches on the same day, leading me to write about how that was problematic. Jump to today, and that has become a reality organizations face at least once a year. But… it got even worse. On October 14, 2014, customers got to witness the dark side of “coordinated disclosure”, one that these vendors are quick to espouse, but equally quick to disregard themselves. In one day we received 25 Microsoft vulnerabilities, 117 Oracle vulnerabilities, 12 SAP vulnerabilities, 8 Mozilla advisories, 6 Adobe vulnerabilities, 1 Cisco vulnerability, 1 Chrome OS vulnerability, 1 Google V8 vulnerability, and 3 Linux Kernel vulnerabilities disclosed. That covers almost every major IT asset in an organization, and forces administrators to triage in ways that were unheard of years prior.

Do any of these vendors feel that an IT organization is capable of patching all of that in a single day? Two days? Three days? Is it more likely that a full week would be an impressive feat, and that organizations which must run patches through their own testing before deployment might only get it done in two to four weeks? Do they forget that with these patches, bad guys can reverse them and have a working exploit in as little as a day, putting their customers at serious risk?

So basically, these vendors who consistently or frequently release on a Tuesday (e.g. Microsoft, Oracle, Adobe) have been coordinating the exact opposite of what they frequently preach. They are not necessarily helping customers by having scheduled patches. This year, we can look forward to Oracle quarterly patches on April 14 and July 14. Both of which are the “second Tuesday” Microsoft / Adobe patch times. Throw in other vendors like IBM (that has been publishing as many as 150 advisories in 48 hours lately), SAP, Google, Apple, Mozilla, and countless others that release frequently, and administrators are in for a world of hurt.

Oh, did I forget to mention the kicker about all of this? October 14, 2014 saw 254 vulnerabilities disclosed, the same day the dreaded POODLE vulnerability came out, impacting thousands of different vendors and products. That same day, OpenSSL, perhaps the most oft-used SSL library, released a patch for the vulnerability as well, perfectly “coordinated” with all of the other issues.
