Tag Archives: CVE

More tricks than treats with today’s Metasploit blog disclosures?

Today, Tod Beardsley posted part one and part two on the Metasploit blog, titled “Seven FOSS Tricks and Treats”. Unfortunately, the disclosures come with as many tricks as they do treats.

In part one, he gently berates the vendors for their poor handling of the issues. In many cases, the bugs are labeled “won’t fix” without an explanation of why. During his berating, he also says “I won’t mention which project … filed the issue on a public bug tracker which promptly e-mailed it back in cleartext”. In part two, the only disclosure timeline that includes a bug report is for Moodle, ticket MDL-41449. If this is the case he refers to, he should have noted that the tracker requires an account, and that a new account / regular user cannot access the report. Since his report was apparently mailed in the clear to begin with, the ticket system mailing it back is not the biggest concern. If this is not the ticket he refers to, then now that the issues are public, the ticket should be included in the disclosure for completeness.

Next, we have the issue of “won’t fix”. The Zabbix, NAS4Free, and arguably OpenMediaVault issues are all intended functionality by the vendor. In each case, they require administrative credentials to use the function being ‘exploited’ by the Metasploit modules. I won’t argue that additional circumstances can make exploitation easier, such as XSS or default credentials, but intended functionality is often a reason a vendor will not “fix” the bug. As Beardsley says in part one, a vendor should make the dangers of this type of functionality very clear. Further, they should strive to avoid making it easier to exploit. This means quickly fixing vulnerabilities that may disclose session information (e.g. XSS), and not shipping with default credentials. Only at the bottom of the first post does he concede that these are design decisions. Like him, we agree that being the admin of a web interface does not imply the person was intended to have root access on the underlying operating system. In those cases, we consider them a vulnerability, but flag them ‘concern’ and include a technical note explaining why.

One of the most discouraging things about these vulnerability reports is the lack of version numbers. It is clear that Beardsley downloaded the software to test it. Why not include the tested version so that administrators can more easily determine if they may be affected? For example, if we assume that the latest version of Moodle was 2.5.2 when he tested, it is likely vulnerable. This matters because version 2.3.9 does not appear to be vulnerable, as it uses an alternate spell check method. This kind of detail is extremely helpful to the people who have to mitigate the vulnerability, and to those who rely on vulnerability databases as much as penetration testers do.

Finally, the CVE assignments are questionable. Unfortunately, MITRE does not publish the “CVE ID Reservation Guidelines for Researchers” on their CVE Request Page, instead offering to mail it. Publishing the guidelines may cut down on improper assignments, and the current process may explain why these CVEs were assigned. When an application has intended functionality that can only be abused by an attacker with administrator credentials, that does not meet the criteria for a CVE assignment. Discussion with MITRE over each case would help ensure assignment is proper (see above re: implied permission / access).

As always, we love seeing new vulnerabilities disclosed and quickly fixed. However, we also prefer to have disclosures that fully explain the issue and give actionable information to all parties involved, not just one side (e.g. penetration testers). Keep up the good work and kindly consider our feedback on future disclosures!

An Open Letter to @InduSoft

InduSoft,

When referencing vulnerabilities in your products, you have a habit of only using an internal tracking number that is kept confidential between the reporter (e.g. ICS-CERT, ZDI) and you. For example, from your HotFix page (that requires registration):

WI2815: Directory Traversal Buffer overflow. Provided and/or discovered by: ICS-CERT ticket number ICS-VU-579709 created by Anthony …

The ICS-CERT ticket number is assigned as an internal tracking ID while the relevant parties figure out how to resolve the issue. Ultimately, that ticket number is not published by ICS-CERT. I have already sent a mail to them suggesting they include it in advisories moving forward, to help third parties match up vulnerabilities to fixes to initial reports. Instead of using that, you should use the public ICS-CERT advisory ID. The details you provide there are not specific enough to know which issue this corresponds to.

In another example:

WI2146: Improved the Remote Agent utility (CEServer.exe) to implement authentication between the development application and the target system, to ensure secure downloading, running, and stopping of projects. Also addressed problems with buffer overrun when downloading large files. Credits: ZDI reports ZDI-CAN-1181 and ZDI-CAN-1183 created by Luigi Auriemma

In this case, these likely correspond to OSVDB 77178 and 77179, but it would be nice to know that for sure. Further, we’d like to associate those internal tracking numbers to the entries but vendors do not reliably put them in order, so we don’t know if ZDI-CAN-1181 corresponds to the first or second.

In another:

WI1944: ISSymbol Virtual Machine buffer overflow Provided and/or discovered by: ZDI report ZDI-CAN-1341 and ZDI-CAN-1342

In this case, you give two ZDI tracking identifiers, but only mention a single vulnerability. ZDI has a history of abstracting issues very well. The presence of two identifiers, to us, means there are two distinct vulnerabilities.

This is one of the primary reasons CVE exists, and why ZDI, ICS-CERT, and most vendors now use it. In most cases, these larger reporting bodies will have a CVE number to share with you during the process, or if not, will have one at the time of disclosure.

Like your customers do, we appreciate clear information regarding vulnerabilities. Many large organizations will use a central clearing house like ours for vulnerability alerting, rather than trying to monitor hundreds of vendor pages. Helping us to understand your security patches in turn helps your customers.

Finally, adding a date the patch was made available will help to clarify these issues and give another piece of information that is helpful to organizations.

Thank you for your consideration in improving your process!

Mobile Devices and Exploit Vector Absurdity

The last few days have seen several vulnerabilities disclosed that include serious gaps in logic with regard to exploitation vectors. What is being called “remote” is not. What is being called “critical” is not. Here are a few examples to highlight the problem. We beg of you: please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the “need for a user to install a malicious app” before a vulnerability can be exploited. Think about it.

Number One

We start with an H-Online article titled “Critical vulnerability in Blackberry 10 OS“. First word, critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0 which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article’s summary of what is required to exploit this vulnerability:

As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.

The only thing missing from this exploit chain are the proverbial chicken sacrifices at midnight on a full blue moon. Want to get the same result much easier? Find your victim and say “Wow, that is a slick new phone, can I see it?” Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.

Number Two

There were a few disclosures out of Japan’s JVN system, run by JPCERT. Two examples, both the same fundamental vulnerability, are summarized below:

#1 – CVE-2013-3643 (NVD Entry) – JVN 99813183 / JVNDB-2013-000056
#2 – CVE-2013-3642 (NVD Entry) – JVN 79301570 / JVNDB-2013-000055

#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.

Despite all these references, users are left with either incorrect or very misleading information. First, CVE says “an attacker” instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a “context-dependent” attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality ‘partially’. JVN goes so far as to say it can be exploited “over the Internet using packets” with “anonymous or no authentication”.
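To make the scoring dispute concrete, here is a minimal sketch of the CVSSv2 base-score equations (weights and formula taken from the CVSSv2 specification), comparing NVD’s network-based vector with a local rescoring of the same flaw:

```python
# Minimal CVSSv2 base-score calculator (weights per the CVSSv2 specification).
W = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},
    "AC": {"H": 0.35, "M": 0.61, "L": 0.71},
    "Au": {"M": 0.45, "S": 0.56, "N": 0.704},
    "C":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "I":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "A":  {"N": 0.0, "P": 0.275, "C": 0.660},
}

def base_score(vector):
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - W["C"][m["C"]]) * (1 - W["I"][m["I"]]) * (1 - W["A"][m["A"]]))
    exploitability = 20 * W["AV"][m["AV"]] * W["AC"][m["AC"]] * W["Au"][m["Au"]]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# NVD's vector treats the issue as network-exploitable:
print(base_score("AV:N/AC:M/Au:N/C:P/I:N/A:N"))  # 4.3
# Rescored as local, the same flaw drops considerably:
print(base_score("AV:L/AC:M/Au:N/C:P/I:N/A:N"))  # 1.9
```

The access-vector metric alone moves the score from 4.3 down to 1.9, which is why calling a local issue “network” exploitable matters so much for downstream statistics.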

The Reality

The reality of these vulnerabilities is that they are not remote. Not in any form under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes ‘remote’. This is absurd. Just because that may be the most probable exploit vector and chaining, does not change the fact that getting a user to install a malicious application is a separate distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal ‘partial’ information from the one vulnerable application.

Let me put it to you in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent, if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote by saying “remote, authentication required” and assuming it can be done via RDP or SSH. To do so, though, devalues the entire purpose of vulnerability classification.

Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a “crafted application” is required (see IDs below). The only difference is if the local user writes the application (Windows), or gets the user to install the application (Mobile). Either way, that is a local issue.

CVE-2013-1334, CVE-2012-1848, CVE-2011-1282, CVE-2010-3942, CVE-2009-1123, CVE-2008-2252

CVE Vulnerabilities: How Your Dataset Influences Statistics

Readers may recall that I blogged about a similar topic just over a month ago, in an article titled Advisories != Vulnerabilities, and How It Affects Statistics. In this installment, instead of “advisories”, we have “CVEs” and the inherent problems when using CVE identifiers in the place of “vulnerabilities”. Doing so is technically inaccurate, and it negatively influences statistics, ultimately leading to bad conclusions.

NSS Labs just released an extensive report titled “Vulnerability Threat Trends; A Decade in Review, Transition on the Way”, by Stefan Frei. While the report is interesting, and the fundamental methodology is sound, Frei uses a dataset that is not designed for true vulnerability statistics. Additionally, I believe that some factors that Frei attributes to trends are incorrect. I offer this blog as open feedback to bring additional perspective to the realm of vulnerability stats, which is a long way from maturity.

Vulnerabilities versus CVE

In the NSS Labs paper, they define a vulnerability as “a weakness in software that enables an attacker to compromise the integrity, availability, or confidentiality of the software or the data that it processes.” This is as good a definition as any. The key point here is a weakness, singular. What Frei fails to point out is that the CVE dictionary is not a vulnerability database in the same sense as many others. It is a specialty database designed primarily to assign a unique identifier to a vulnerability, or a group of vulnerabilities, to coordinate tracking and discussion. While CVE says “CVE Identifiers are unique, common identifiers for publicly known information security vulnerabilities”, it is more important to note the way CVE abstracts, which is covered in great detail. From the CVE page on abstraction:

CVE Abstraction Content Decisions (CDs) provide guidelines about when to combine multiple reports, bugs, and/or attack vectors into a single CVE name (“MERGE”), and when to create separate CVE names (“SPLIT”).

This clearly denotes that a single CVE may represent multiple vulnerabilities. With that in mind, every statistic generated by NSS Labs for this report is not accurate, and their numbers are not reproducible using any other vulnerability dataset (unless it too is only based on CVE data and does not abstract differently, e.g. NVD). This distinction puts the report’s statements and conclusions in a different light:

As of January 2013 the NVD listed 53,489 vulnerabilities ..
In the last ten years on average 4,660 vulnerabilities were disclosed per year ..
.. with an all-time high of 6,462 vulnerabilities counted in 2006 ..

The abstraction distinction means that these numbers aren’t just technically inaccurate (i.e. terminology), they are factually inaccurate (i.e. actual stats when abstracting on a per-vulnerability basis). In each case where Frei uses the term “vulnerability”, he really means “CVE”. When you consider that a single CVE may cover as many as 66 or more distinct vulnerabilities, it really invalidates any statistic generated using this dataset as he did. For example:

However, in 2012 alone the number of vulnerabilities increased again to a considerable 5,225 (80% of the all-time high), which is 12% above the ten-year average. This is the largest increase observed in the past six years and ends the trend of moderate declines since 2006.

Based on my explanation, what does 5,225 really mean? If we agree for the sake of argument, that CVE averages two distinct vulnerabilities per CVE assignment, that is now over 10,000 vulnerabilities. How does that in turn change any observations on trending?
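To illustrate the abstraction problem, here is a small sketch (the CVE identifiers and flaw descriptions are made up for illustration) showing how the same data yields two different totals depending on whether you count CVEs or distinct vulnerabilities:

```python
# Hypothetical data: each CVE maps to the distinct flaws it covers.
# A "MERGE" decision groups several flaws under one identifier.
cve_entries = {
    "CVE-0000-0001": ["SQL injection in login script"],
    "CVE-0000-0002": ["XSS in search page", "XSS in index page"],          # one CVE, two flaws
    "CVE-0000-0003": ["overflow in parser", "overflow in decoder",
                      "overflow in renderer"],                              # one CVE, three flaws
}

cve_count = len(cve_entries)                             # what CVE-based stats count
vuln_count = sum(len(v) for v in cve_entries.values())   # per-vulnerability count

print(cve_count, vuln_count)  # 3 6
```

Any trend line built on the first number silently assumes the MERGE rate is constant across years and vendors, which is exactly the assumption the report never examines.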

The report’s key findings offer 7 high-level conclusions based on the CVE data. To put all of the above in more perspective, I will examine a few of them and use an alternate dataset, OSVDB, that abstracts entries on a per-vulnerability basis. With those numbers, we can see how the findings stand. NSS Labs report text is quoted below.

The five year long trend in decreasing vulnerability disclosures ended abruptly in 2012 with a +12% increase

Based on OSVDB data, this is incorrect. Both 2009 (7,879) -> 2010 (8,835) as well as 2011 (7,565) -> 2012 (8,919) showed an upward trend.

More than 90 percent of the vulnerabilities disclosed are moderately or highly critical – and therefore relevant

If we assume “moderately” means the “Medium” criticality later defined in the report as 4.0 – 6.9, then OSVDB shows 57,373 entries with CVSSv2 scores of 4.0 – 10.0, out of 82,123 total, roughly 70%. That means 90% is considerably higher than our data shows. Note: we do not have complete CVSSv2 data for 100% of our entries, but we do have it for all entries affiliated with the ones Frei examined, and more. If “moderately critical” and “highly critical” refer to different ranges, then they should be more clearly defined.

It is also important to note that this finding is a red herring, due to the way CVSS scoring works. A remote path disclosure in a web application scores a 5.0 base score (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N). This skews the scoring data considerably higher than many in the industry would agree with, as 5.0 is the same score you get for many XSS vulnerabilities that can have more serious impact.

9 percent of vulnerabilities disclosed in 2012 are extremely critical (with CVSS score>9.9) paired with low attack/exploitation complexity

This is another red herring, because any CVSS 10.0 score means that “low complexity” was factored in. The wording in the report implies that a > 9.9 score could be paired with higher complexity, which isn’t possible. Further, CVSS is scored for the worst case scenario when details are not available (e.g. CVE-2012-5895). Given the number of “unspecified” issues, this may seriously skew the number of CVSSv2 10.0 scores.

Finally, one other element of the report, appearing in the overview and again later in the document, is used to attribute a shift in disclosure trends. From the overview:

The parallel and massive drop of vulnerability disclosures by the two long established purchase programs iDefense VCP and TippingPoint ZDI indicate a transition in the way vulnerability and exploit information is handled in the industry.

I believe this is a case of “correlation does not mean causation”. While these are the two most recognized third-party bug bounty programs around, there are many variables at play here. In the bigger picture, shifts in these programs do not necessarily mean anything. Some of the factors that may have influenced disclosure numbers for those two programs include:

  • There are more bug bounty programs available. Some may offer better price or incentive for disclosing through them, stealing business from iDefense/ZDI.
  • Both companies have enjoyed their share of internal politics that affected at least one program. In 2012, several people involved in the ZDI program left the company to form their own startup. It has been theorized that since their departure, ZDI has not built the team back up, and that disclosures were affected as a result.
  • ZDI had a small bout of external politics, in which one of their most prolific bounty collectors (Luigi Auriemma) had a serious disagreement about ZDI’s handling of a vulnerability, as it relates to Portnoy and Exodus. Auriemma’s shift to disclosing via his own company would dramatically affect ZDI disclosure totals alone.
  • Both of these companies have a moving list of software that they offer a bounty on. As it changes, it may result in spikes of disclosures via their programs.

Regardless, iDefense and ZDI represent a small percentage of overall disclosures, so it is curious that Frei opted to focus on them so prominently as a reason for vulnerability trends changing, without considering other influencing factors. Even during a good year, 2011 for example, iDefense (42) and ZDI (297) together accounted for 339 of 7,565 vulnerabilities, only ~4.5% of the overall disclosures. There are many other trends that could just as easily explain relatively small shifts in disclosure totals. Making statements about trends in vulnerability disclosure, and how they affect statistics, isn’t something that should be done by casual observers. They simply miss a lot of the low-level details gleaned from day-to-day vulnerability handling and cataloging.

To be clear, I am not against using CVE/NVD data to generate statistics. However, when doing so, it is important that the dataset be explained and qualified before going into analysis. The perception and definition of what “a vulnerability” is changes based on the person or VDB. In vulnerability statistics, not all vulnerabilities are created equal.

Advisories != Vulnerabilities, and How It Affects Statistics

I’ve written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work, or discloses potential bias, caveats, or other issues that should be included as a responsible security professional. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid, if properly disclaimed!

I noticed a Tweet from @SCMagazine about an article titled “The ghosts of Microsoft: Patch, present and future”. The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft’s vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as incorrect.

Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.

It is difficult to tell if Horan means “vulnerabilities” or “patches”, as he appears to use the same word to mean both, when they are quite different. The use of ’83’ makes it very clear, Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction as a single advisory can contain multiple vulnerabilities.

A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. Doing a search including references for “MS12” (used in their advisory designations) yields 160 results. This made it easy to determine that either the number Horan used was inaccurate, or his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012 as it demonstrates the issues. Hopefully this will give him a few numbers though!

If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVE versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.

Base Source            Critical       Important     Moderate     Low
2012 Advisories (83)   35 (42.2%)     46 (55.4%)    2 (2.4%)     0 (0.0%)
2012 CVE (160)         100 (62.5%)    18 (11.3%)    39 (24.4%)   3 (1.8%)
2012 Total (176)       101 (57.4%)    19 (10.8%)    41 (23.3%)   15 (8.5%)
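For reference, the severity binning used for this breakdown can be sketched as follows (the scores below are hypothetical stand-ins, not the actual 2012 dataset):

```python
from collections import Counter

def severity(score):
    """Bin a CVSSv2 base score using the ranges defined in this post."""
    if score >= 9.0: return "critical"
    if score >= 7.0: return "important"
    if score >= 4.0: return "moderate"
    return "low"

# Hypothetical CVSSv2 scores, for illustration only.
scores = [10.0, 9.3, 7.2, 6.8, 4.3, 2.6]
dist = Counter(severity(s) for s in scores)

for sev in ("critical", "important", "moderate", "low"):
    print(f"{sev}: {dist[sev]} ({dist[sev] / len(scores):.1%})")
```

The same binning run against the three different datasets (advisories, CVEs, total vulnerabilities) is what produces the diverging percentages above.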

It isn’t easy to see the big shifts in totals in a chart, but it is important to establish the numbers involved when displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:

[Three pie charts: severity distribution for the 2012 advisory, CVE, and total vulnerability datasets]

The visual jump in critical vulnerabilities from the first to the second two charts is distinct. In addition, notice the jump from the first two charts to the third in regards to the low severity vulnerabilities and that they didn’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these vulnerabilities.

In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.

  • MS12-080 only contains 1 CVE if you look at the immediate identifiers, but also contains 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
  • MS12-058 has no immediate CVEs at all! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
  • Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
  • OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
  • Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores that have serious errors is low, but they can still introduce a level of error into statistics.
  • One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed, and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date will give two different results.

These are just a few ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SCMagazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.

We’re Still Here – Update on OSVDB Project: Data and Exports

At a glance, it may appear as if the OSVDB project has fallen by the wayside. Some of our public facing pages have not been updated in several years, the last string of blog posts was over a year ago, and a recent update caused a few functions to fail (e.g., data exports). On the other hand, anyone paying attention to the data has noticed we are certainly present and moving forward. We have had one person working full time on OSVDB for over a year now. He is responsible for the daily push of new vulnerabilities and is scouring additional sources for vulnerabilities that didn’t appear through the normal channels. Given the nature of the project, we place data completeness and integrity as the top priority.

The OSVDB project is coming up on its tenth anniversary. The last ten years have seen some big changes, as well as many things that have not changed one bit. The biggest thing that hasn’t changed is the lack of support we receive from the community. The top ten all-time contributors are the core members of OSF, the handful of longstanding dedicated volunteers we have had over the years, or some people we have been able to pay to help work on the project. Beyond those ten people, the volunteer support we lobbied for over the years never materialized. We still enjoy a couple dozen volunteers that primarily mangle their own disclosures, or add CVE references, which we appreciate greatly. Unfortunately, the rate of vulnerability disclosures demands a lot more time and attention. In addition to the lack of volunteers, community support in the form of sponsorship and donations has been minimal at best. Tenable Network Security and Layered Technologies have been with us for many years and have largely been responsible for our ability to keep up with the incoming data.

Other than those two generous companies, we have had a few other sponsors/donations over the years but nothing consistent. In the last year, we have spent most of our time trying to convince companies that are using our data in violation of our posted license to come clean and support our project. In a few cases, these companies have built full products and services that are entirely based on our data. In other cases, companies use our data for presentations, marketing, customer reports, and more while trying to sell their products and services. Regardless, the one thing they aren’t doing is supporting the project by helping to update data, properly licensing the data, or at least throwing us a few bucks as an apology. In short, several security companies, both new and well established, that sell integrity in one form or another, appear to have little integrity of their own. After a recent server upgrade broke our data export functionality, it was amazing to see the number of companies that came out of the woodwork complaining about the lack of exports. Some of them were presumptuous and demanding, as if unfettered access to our data were a Constitutional right. Because of these mails, and because none of these companies want to license our data, we are in no hurry to fix the data exports. In short, they don’t get to profit heavily off the work of our small group of volunteers, many of whom are no longer with us.

Even as an officer of OSF and data manager of OSVDB, I honestly couldn’t tell you how we have survived this long as a project. I can tell you that it involved a lot of personal time, limping along, and the hardcore dedication of fewer than a dozen individuals over ten years. With almost no income and no swarm of volunteers, the project simply isn’t sustainable moving forward, while still maintaining our high standards for data quality. We gave the community ten years to adopt us, and many did. Unfortunately, they largely did it in a completely self-serving manner that did not contribute back to the project. That will be ending shortly. In the coming months, there will be big changes to the project as we are forced to shift to a model that allows us to not only make the project sustainable, but push for the evolution we have been preaching about for years. This will involve making the project less open in some aspects, such as our data exports, and has required us to seek a partnership to financially support our efforts.

For ten years we have had a passion for making OSVDB work in an open and free manner. Unfortunately, the rest of the community did not have the same passion, and these changes have become a necessity. The upside to all of this is that our recent partnership has allowed us to develop, and soon offer, a subscription data feed with better vulnerability coverage than other solutions, at a considerably better price point. That said, the data will remain open via HTTP, and for 99% of our users this is all that is required. When exports are fixed, we will offer a free export to support the community, but approval will be required and it will contain a limited set of fields for each entry. We are still working out the details and considering a variety of ideas to better support a wide range of interest in the project, but doing so in a sustainable manner. In the end, our new model will help us greatly improve the data we make available, free or otherwise, and ensure OSVDB is around for the next 10 years.

Adobe, Qualys, CVE, and Math

Elinor Mills wrote an article titled “Firefox, Adobe top buggiest-software list”. In it, she quotes vulnerability statistics for Mozilla, Adobe and others, provided by Qualys. Qualys states:

The number of vulnerabilities in Adobe programs rose from 14 last year to 45 this year, while those in Microsoft software dropped from 44 to 41, according to Qualys. Internet Explorer, Windows Media Player and Microsoft Office together had 30 vulnerabilities.

This caught my attention immediately, as I know I have mangled more than 45 Adobe entries this year.

First, the “number of vulnerabilities” game will always have wiggle room, which has been discussed before. A big factor in statistical discrepancies when using public databases is the level of abstraction. CVE tends to bundle multiple vulnerabilities into a single CVE, where OSVDB tends to break them out. Over the past year, X-Force and BID have started abstracting more and more as well.

Either way, Qualys cited their source, NVD, which is entirely based on CVE. How they got 45 vulns in “Adobe programs” baffles me. My count says 97 Adobe vulns, 95 of which have CVEs assigned (covered by a total of 93 CVEs). OSVDB abstracted the entries like CVE did for the most part, but split out CVE-2009-1872 as distinct XSS vulnerabilities. OSVDB also has two entries that do not have CVE, 55820 and 56281.

Where did Qualys get 45 if they are using the same CVE data set OSVDB does? This discrepancy has nothing to do with abstraction, so something else appears to be going on. Doing a few more searches, I believe I figured it out. Searching OSVDB for “Adobe Reader” in 2009 yields 44 entries, one off from their cited 45. That could be easily explained, as OSVDB also has 9 “Adobe Multiple Products” entries that could cover Reader as well. This may come down to Qualys or Mills not distinguishing “Adobe software” (cumulative, all software they release) from “Adobe Reader” or some other specific product they release.

Qualys tallied 102 vulnerabilities that were found in Firefox this year, up from 90 last year.

In what is certainly a discrepancy due to abstraction, OSVDB has 74 vulnerabilities specific to Mozilla Firefox (two without CVE), 11 for “Mozilla Multiple Browsers” (Firefox, Seamonkey, etc.) and 81 for “Mozilla Multiple Products” (Firefox, Thunderbird, etc.). While my numbers are somewhat anecdotal, because I cannot remember every single entry, I can say that most of the ‘multiple’ vulnerabilities include Firefox. That means OSVDB tracked as many as, but possibly fewer than, 166 vulnerabilities in Firefox.
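The bounding logic above amounts to some quick arithmetic (the entry counts are from the searches described; the lower/upper-bound framing is just an illustration):

```python
# Entries specific to Firefox definitely count toward its total; the
# "multiple browsers/products" entries may or may not include Firefox,
# so they only widen the upper bound.
specific = 74        # Mozilla Firefox-specific entries
multi_browser = 11   # "Mozilla Multiple Browsers" (Firefox, Seamonkey, etc.)
multi_product = 81   # "Mozilla Multiple Products" (Firefox, Thunderbird, etc.)

lower_bound = specific
upper_bound = specific + multi_browser + multi_product

print(lower_bound, upper_bound)  # 74 166
```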

Microsoft software dropped from 44 to 41, according to Qualys. Internet Explorer, Windows Media Player and Microsoft Office together had 30 vulnerabilities.

According to my searches on OSVDB, we get the following numbers:

  • 234 vulnerabilities in Microsoft, only 4 without CVE
  • 50 vulnerabilities in MSIE, all with CVE
  • 4 vulnerabilities in Windows Media Player, 1 without CVE
  • 52 vulnerabilities in Office, all with CVE (based on “Office” being Excel, PowerPoint, Word and Outlook)
  • 92 vulnerabilities in Windows, only 2 without CVE

When dealing with vulnerability numbers and statistics, like anything else, it’s all about qualifying your numbers. Saying “Adobe Software” is different than “Adobe Acrobat” or “Adobe Reader” as the software installation base is drastically different. Given the different levels of abstraction in VDBs, it is also equally important to qualify what “a vulnerability” (singular) is. Where CVE/NVD will group several vulnerabilities in one identifier, other databases may abstract and assign unique identifiers to each distinct vulnerability.
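As a toy illustration of the abstraction problem (the identifiers and counts here are hypothetical, not real CVE data), the same set of underlying flaws yields different totals depending on whether you count identifiers or distinct flaws:

```python
# Each tuple: (identifier, number of distinct flaws bundled under it).
# A CVE/NVD-style database may group several flaws under one ID, while
# an OSVDB-style database assigns one ID per distinct flaw.
bundled = [
    ("ID-0001", 3),  # e.g. one identifier covering three distinct XSS flaws
    ("ID-0002", 1),
    ("ID-0003", 2),
]

by_identifier = len(bundled)                   # CVE-style count
by_distinct_flaw = sum(n for _, n in bundled)  # OSVDB-style count

print(by_identifier, by_distinct_flaw)  # 3 6
```

Both counts describe the same data; neither is wrong, which is why unqualified totals from different databases rarely match.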

Qualys, since you provided the stats to CNet, could you clarify?

What I learned from early CVE entries!

This post is the farthest thing from picking on or insulting CVE. They were running a VDB some four years before OSVDB entered the picture. More impressive, they operated with a level of transparency that no other VDB offered at the time. Early OSVDB entries suffered just as greatly as the early CVE entries, and we even had the benefit of four years to learn from their efforts. Reading the original CVE entries is a fun look at how it all began. This post is a brief light-hearted look at the past.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=1999-0345 – CVE contributors can be stumped

http://cve.mitre.org/cgi-bin/cvename.cgi?name=1999-0465 – Client side vulnerabilities aren’t an issue.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=1999-0285 – No reference, no problem!

http://cve.mitre.org/cgi-bin/cvename.cgi?name=1999-0549 – ISS tried desperately to help.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=1999-0684 – A CVE entry can be a duplicate of itself.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=2000-0151 – We miss colorful CVE commentary.

Reviewing(4) CVE

As I was working on OSVDB tonight, I spent some time on the CVE website. I decided to quickly review the current list of CVE-Compatible Products and Services (http://cve.mitre.org/compatible/compatible.html) and noticed that OSVDB was not on the list. I was pretty confused, as I thought we should have been, given that we submitted the paperwork many years ago. When we first submitted, the only requirement we were not able to meet was showing the difference between CAN and CVE based on the status of the entry. We thought this was quite silly, as the distinction didn’t appear to be used much, and historically entries that met the criteria would stay in CAN status for years. This led us to send mail to the CVE staff asking about the designations and whether they still thought them practical. The responses we received were informative and reasonable, but they expressed doubt about whether the distinction still had value and whether they would continue to make it. We indicated that was a good call, and that we would not change OSVDB to accommodate that distinction, as we saw no value in it under the current CVE. A few months later, CVE announced that they were no longer supporting CAN vs. CVE and that, moving forward, all new entries would be CVE. I figured OSVDB was good to go for compatibility at that point… but apparently not. After some more digging, I found that we were only listed on the Declarations to Be CVE-Compatible page (http://cve.mitre.org/compatible/declarations.html). Obviously we have missed something; we hope to get it corrected in short order, and certainly hope it won’t involve more paperwork.

Speaking of voting, if you look at a CVE entry there are a couple of things that stand out as a bit odd, considering CVE dropped support for the CAN status. There are several fields, such as Status, Phase and Voting, that no longer affect or improve the information made available by CVE. For example, if you look at most of the recent CVE entries, you see something such as the following:

Status
Candidate This CVE Identifier has "Candidate" status and must be reviewed and accepted by the CVE Editorial Board before it can be updated to official "Entry" status on the CVE List. It may be modified or even rejected in the future.
Phase
Assigned (20090602)
Votes
 
Comments

If you go back further in time and check out an early CVE entry, for example the CVE referenced in OSVDB 1 (ColdFusion Application Server exprcalc.cfm OpenFilePath Variable Arbitrary File Disclosure), you will see something a bit different. The same fields are present, but with additional information and comments populated. Take a look at CVE-2004-0230:

Status
Candidate This CVE Identifier has "Candidate" status and must be reviewed and accepted by the CVE Editorial Board before it can be updated to official "Entry" status on the CVE List. It may be modified or even rejected in the future.
Phase
Modified (19991210-01)
Votes
ACCEPT(3) Frech, Ozancin, Balinsky
MODIFY(1) Wall
NOOP(1) Baker
REVIEWING(1) Christey
Comments
Wall> The reference should be ASB99-01 (Expression Evaluator Security Issues)
make application plural since there are three sample applications
(openfile.cfm, displayopenedfile.cfm, and exprcalc.cfm).
Christey> The CD:SF-EXEC and CD:SF-LOC content decisions apply here.
Since there are 3 separate "executables" with the same
(or similar) problem, we need to make sure that CD:SF-EXEC
determines what to do here. There is evidence that some
of these .cfm scripts have an "include" file, and if so,
then CD:SF-LOC says that we shouldn’t make separate entries
for each of these scripts. On the other hand, the initial
L0pht discovery didn’t include all 3 of these scripts, and
as far as I can tell, Allaire had patched the first problem
before the others were discovered. So, CD:DISCOVERY-DATE
may argue that we should split these because the problems
were discovered and patched at different times.

In any case, this candidate can not be accepted until the
Editorial Board has accepted the CD:SF-EXEC, CD:SF-LOC,
and CD:DISCOVERY-DATE content decisions.

You can see that it is still a candidate, but it is in a modified phase, and there are some comments that provide additional insight into what the CVE guys are thinking on this vulnerability. Now to my favorite part: the voting. You can see that there are actually some votes on this one! There are 3 ACCEPT, 1 MODIFY, 1 NOOP and 1 REVIEWING. I love the fact that Christey is still reviewing this entry. You have to hand it to him; he is very thorough in his work. I knew he was passionate, but to invest close to 10 years reviewing this entry is some real dedication! Perhaps he wants to REJECT but is waiting until his vote can be the deciding factor? =)

Other “legacy” entries can be found in CVE that do not meet their current standards. For example, CVE-1999-0651 (and a couple dozen like it) covers nothing more than a particular service running. This is actually somewhat of a precursor to what became CWE:

CVE-ID

CVE-1999-0651

(under review)
Description
The rsh/rlogin service is running.
References
Note: References are provided for the convenience of the reader to help distinguish between vulnerabilities. The list is not intended to be complete.
 
Status
Candidate This CVE Identifier has "Candidate" status and must be reviewed and accepted by the CVE Editorial Board before it can be updated to official "Entry" status on the CVE List. It may be modified or even rejected in the future.
Phase
Proposed (19990804)
Votes
ACCEPT(2) Wall, Baker
MODIFY(1) Frech
NOOP(1) Christey
REJECT(1) Northcutt
Comments
Christey> aka "shell" on UNIX systems (at least Solaris) in the
/etc/inetd.conf file.
Frech> associated to:
XF:nt-rlogin(92)
XF:rsh-svc(114)
XF:rshd(2995)

Candidate assigned on 19990607 and proposed on 19990804

Even that far back, other databases (notably ISS X-Force) were doing the same. In the case of X-Force, those mappings had a more logical place given their database supported a vulnerability scanner. A year or two ago, out of curiosity, OSVDB formally requested a legacy entry like this one be retired as it no longer met standards for inclusion in CVE. As far as we know, the request is still being reviewed =)

In all seriousness, the guys at CVE are great folks and provide a much-needed service in the security industry. We have gotten to know many of them and work very closely with them regarding vulnerabilities, disclosure and related topics. We really have nothing but nice things to say about them, despite the occasional joke we throw out there from time to time! Further, I know when OSVDB started there were a lot of things that seemed like the “right” thing to do… or the “right” way to do it. But the reality was, and still is, that the sheer volume of vulnerabilities that must be processed is enormous. Just figuring out what is going on is hard enough to keep up with, whether you have a paid staff or are a volunteer organization like OSVDB. We have tried to stay true to our roots, but have had to make several changes to processes and standards over time in order to evolve. As we have been preaching for some time, VDBs need to keep evolving to better serve the industry. While it may be painful at first, it frequently leads to a more streamlined process that saves time and headaches for years to come. Perhaps it is time for even our friends over at CVE to take a look at their processes and figure out what makes sense to continue and what should be retired.

Any votes?

Votes
ACCEPT(2) Jkouns, Jericho
NOOP(2) Dave, Lyger

VDB Relationships (hugs and bugs!)

Like any circle in any industry, having good professional relationships can be valuable to involved parties. In the world of security, more specifically Vulnerability Databases (VDBs), the relationships we maintain benefit the community behind the scenes. Like ogres and onions, there are layers.

Someone from CVE and someone from OSVDB run an informal list called ‘Vulnerability Information Managers’ (VIM) for discussion of vulnerabilities as it relates to post-disclosure issues: new information, additional research, vendor confirmations, vendor disputes and more. It’s a great resource for us to discuss the details that help each VDB fine-tune its information. (No new vulnerabilities are posted there, so don’t bother.)

In addition, some of the VDBs have stronger relationships that allow for great dialogue and information sharing. A few examples of these, from OSVDB’s perspective:

- A couple of the CVE guys are great for very informal chat about vulnerabilities. Despite being the dreaded “government contractors”, they are respectable, very knowledgeable and have a great sense of humor. I just sent one a mail with the subject “PROVENANCE BITCHEZ?!” challenging him on the details of a given CVE. They are so nice, I broke my rule of not taking candy from strangers and happily accepted the bag of leftover candy from their BlackHat booth. Joking aside, the ability to coordinate and share information is incredible and a testament to their integrity and desire to help the industry.

- OSVDB uses Secunia for one of our feeds to gather information. The two guys we regularly have contact with (CE & TK) lead a bright team that does an incredible amount of work behind the scenes. In case it slipped your attention, Secunia actually validates vulnerabilities before posting them. That means they take the time to install, configure and test a wide range of software based on the word of 3l1t3hax0ry0 that slapped some script tag in software you never heard of, as well as testing enterprise-level software that costs more than OSVDB makes in five years. Behind the scenes, Secunia shares information as they can with others, and there is a good chance you will never see it. If you aren’t subscribed to their service as a business, you should be. For those who asked OSVDB for years to have a ‘vulnerability alerting’ service; you can blame Secunia for us not doing it. They do it a lot better than we could ever hope to.

- The head of R&D at Tenable contributes a lot of time and information to VIM based on his research of disclosed vulnerabilities: installing the software, configuring it, testing it, and sometimes noticing additional vulnerabilities along the way. He is a frequent contributor to VIM and has worked with OSVDB on sharing information to enhance both the Nessus plugins and the OSVDB database.

- str0ke, that mysterious guy that somehow manages to run milw0rm in his spare time. What may appear to some as a website with user-posted content is actually a horrible burden to maintain. Since the site’s inception, str0ke has not just posted the exploits sent in, but has taken the time to sanity-check every single one as best he can. What you don’t see on that site are dozens (hundreds?) of exploits a month that were sent in but ended up being incorrect (or as OSVDB would label, “myth/fake”). When str0ke was overwhelmed and decided to give up the project, user demand (read: whining & complaints) led him to change his mind and keep it going. Make sure you thank him every so often for his work, and know this: milw0rm cannot be replaced as easily as you think, not to the quality that we have seen from str0ke.

Since we have no corporate overlords, I’ll go ahead and talk about the flip side briefly:

- ISS (now IBM) runs a good database: very thorough, with keen attention to detail, including original source and vendor information. In 2004, the head of that group (AF) left, and until that time we had a great dialogue and open communication. Since then, even before the IBM frenzy, we’ve mostly gotten the cold shoulder when mailing them, even when pointing out problems or negative changes on their side. LJ, bring back the old days!

- NVD. Why do you waste taxpayer money with that ‘database’? We pay $22 for Booz Allen Hamilton to “analyze” each CVE entry (thanks, FOIA request!), yet they find a fraction of the typos and mistakes I do? By fraction, I mean exactly none, from what I hear through the grapevine (DHS cronies are cool). If you can’t notice and report simple typos in a CVE, and you botch CVSS2 scores left and right (yes, I’ve mailed in corrections that were acted on), what exactly are you doing with our money? Are you the virtual Blackwater of VDBs?

- SecurityFocus / BID. Sorry, not going to bother with verbal fluffing. My countless mails pointing out errors and issues with your database are seemingly dumped into a black hole. Your promises of certain mail archives ‘not changing’ were pure fantasy. To this day you make erroneous assumptions about affected products, and still don’t grasp “case sensitive”. I know some of your team; you have great people there. Just lift the corporate policy that turns them into virtual shut-ins, please?

Sorry to end it on a downer. I still dream of a niche of the security industry (VDBs) where we can all play well with each other.
