Category Archives: Vulnerability Statistics

Vulnerability Counts and OSVDB Advocacy

CVE just announced reaching 30,000 identifiers, which is a pretty scary thing. CVE staff have a good eye for catching vulnerabilities from sources away from the mainstream (e.g., Bugtraq), and they have the advantage of being a very widely accepted standard for tracking vulnerabilities. As companies and researchers request CVE numbers for disclosures, they get a lot of the information handed to them on a silver platter. Of course, sometimes that platter is full of mud and confusion, as vendors don’t always provide clear details to help CVE accurately track and distinguish between multiple vulnerabilities. I’ve also pointed out many times in the past that CVE is a unique VDB in that it provides identifiers for vulnerability tracking. They do not provide many of the fields associated with other VDBs (solution, creditee, etc.). As such, they may have a single entry that covers multiple distinct vulnerabilities if they are of the same class (XSS, SQLi, RFI), or if details are lacking but they know the issues affect the same product (Oracle). So when we see 30,000 identifiers, we have to realize that the real count of vulnerabilities is significantly higher.
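To make the abstraction issue concrete, here is a minimal, purely hypothetical sketch; the identifiers and flaw descriptions are invented and are not real CVE data:

    # Hypothetical mapping of CVE identifiers to the distinct flaws each one
    # abstracts over; invented data purely to illustrate the counting gap.
    cve_abstraction = {
        "CVE-XXXX-0001": ["XSS in search page", "XSS in login page", "XSS in profile page"],
        "CVE-XXXX-0002": ["SQL injection in one parameter"],
        "CVE-XXXX-0003": ["unspecified issue in component A", "unspecified issue in component B"],
    }

    identifier_count = len(cve_abstraction)
    distinct_flaws = sum(len(flaws) for flaws in cve_abstraction.values())

    print(identifier_count, "identifiers cover", distinct_flaws, "distinct vulnerabilities")
    # -> 3 identifiers cover 6 distinct vulnerabilities

Scale that ratio up in any direction you like; the identifier count alone only gives you a lower bound.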

CVE is run by The MITRE Corporation, sponsored and funded by the NCSD (US-CERT) of DHS under government contract. That means our tax dollars fund this database, so it should be of particular interest to U.S. taxpayers in the security industry. I know from past discussions with CVE staff and other industry veterans that on any given day, they are likely to have more work than available staff. That means vulnerabilities are being published faster than CVE has the resources to track them. In short, the 30,000 identifiers you see represent only a percentage of the vulnerabilities actually disclosed. We could probably debate what percentage that is all day long, and I don’t think that is really the point here, other than “we know it isn’t all of them”.

Every VDB suffers from the same thing. “Commercial” VDBs like X-Force, BID and Secunia have full-time staff that maintain their databases, like CVE does. Despite all of these teams (some of them consisting of 10 or more people) maintaining VDBs, we still see countless vulnerabilities that are ‘missed’ by all of them. This is not a slight against them in any way; it is a simple matter of the resources available and the amount of information out there. Even with a large team sorting disclosed vulnerabilities, some teams spend time validating the findings before adding them to the database (Secunia), which is an incredible benefit for their customers. There is also a long-standing parasitic nature to VDBs, with each of them watching the others as best they can, to help ensure they are tracking all the vulnerabilities they can. For example, OSVDB keeps a close eye on Secunia and CVE specifically, and as time permits we look to X-Force, BID, SecurityTracker and others. Each VDB tends to have some researchers that exclusively disclose vulnerabilities directly to the VDB of their choice. So each one I mention above will get word of vulnerabilities that the rest really have no way of knowing about, short of watching each other like this. This VDB inbreeding (I will explain the choice of word some other time) is an accepted practice, and I have touched on it in the past (CanSecWest 2005).

Because the inbreeding lets OSVDB watch those other resources, our moderators are occasionally freed up to go looking for vulnerability information that wasn’t published in the mainstream. This usually involves grueling crawls through vendor knowledge bases, mind-numbing changelogs, searches of CVS-type repositories and more. That leads to the point of this lengthy post. In doing this research, we begin to see how many more vulnerabilities are out there in the software we use that escape the VDBs most of the time. Only now, after four years and getting an incredible developer to make many aspects of the OSVDB wish-list a reality, do we finally begin to see all of this. As I have whined about for those four years, VDBs need to evolve and move beyond this purely “mainstream reactionary” model. Meaning, we have to stop watching the half dozen usual spots for new vulnerability information, creating our entries, rinsing and repeating. There is a lot more information out there just waiting to be read and added.

In the past few weeks, largely due to the time freed up by the VDB inbreeding mentioned above, we’ve been able to dig into a few products more thoroughly. These examples are not meant to pick on any product or VDB, or to imply anything other than what is said above. In fact, this type of research is only possible because the other VDBs are doing a good job tracking the mainstream sources, and because some vendors publish full changelogs and don’t try to hide security related fixes. Kudos to all of them.

Example: Search your favorite VDB for “inspircd”, a popular multi-platform IRC daemon. Compare the results from BID, Secunia, X-Force, SecurityTracker and CVE (http://osvdb.org/ref/blog/inspircd-cve.png) against OSVDB’s results after digging into the project’s changelogs. Do these same searches for “xfce” (10 OSVDB, 5 max elsewhere), “safesquid” (6 OSVDB, 1 max elsewhere), “beehive forum” (27 OSVDB, 8 max elsewhere) and “jetty” (25 OSVDB, 12 max elsewhere). Let me emphasize, I did not specifically hand pick these examples to put down any VDB; these are simply some of the products we’ve investigated in the last few weeks.

The real point here is that no matter what vulnerability disclosure statistic you read, regardless of which VDB it uses (including OSVDB), consider that the real number of vulnerabilities disclosed is likely much higher than any of us know or have documented. As always, if you see vulnerabilities in a vendor KB or changelog and can’t find them in your favorite VDB, let the VDB know. We all maintain e-mail addresses for submissions, and we all strive to be as complete as possible.


2007 Top Vulnerable Vendors?

http://www.eweek.com/article2/0,1895,2184206,00.asp
http://www.eweek.com/c/a/Security/Report-MS-Apple-Oracle-Are-Top-Vulnerable-Vendors/

New IBM research shows that five vendors are responsible for 12.6 percent of all disclosed vulnerabilities. Not surprising: In the first half of 2007, Microsoft was the top vendor when it came to publicly disclosed vulnerabilities. Likely surprising to some: Apple got second place. IBM Internet Security Systems’ X-Force R&D team released its 2007 report on cyber attacks on Sept. 17, revealing that the top five vulnerable vendors accounted for 12.6 percent of all disclosed vulnerabilities in the first half of the year, or 411 of 3,272 vulnerabilities disclosed. Here’s the order in which the top 10 vendors stacked up, by percentage of vulnerabilities publicly disclosed in the first half of the year:

Microsoft, 4.2 percent
Apple, 3 percent
Oracle, 2 percent
Cisco Systems, 1.9 percent
Sun Microsystems, 1.5 percent
IBM, 1.3 percent
Mozilla, 1.3 percent
XOOPS, 1.2 percent
BEA, 1.1 percent
Linux kernel, 0.9 percent

This article was posted to ISN the other day and struck a nerve. How many times are we going to see vulnerability statistics presented without qualification? Rather than really get into the details, I replied with a single, simple example of why such statistics are misleading at best and incorrect at worst. The bulk of my reply follows. My hopes of Lisa or IBM/ISS clarifying this are already dwindling.

One other factor, which Lisa Vaas apparently didn’t ask about, is how ISS X-Force catalogs vulnerabilities, and whether their method and standards could impact these numbers at all. Take, for example, two X-Force vulnerability database entries:

Oracle Critical Patch Update – July 2007
http://xforce.iss.net/xforce/xfdb/35490
18 CVE entries, 30+ Oracle vulnerabilities

Oracle Critical Patch Update – January 2007
http://xforce.iss.net/xforce/xfdb/31541
30 CVE entries, 50+ Oracle vulnerabilities

So when comparing numbers, you have 2 X-Force entries that equate to 48 CVE entries that equate to *more than 80* unique and distinct vulnerabilities according to Oracle. I’m not a math or stat guy, but I have a feeling that this could seriously skew the statistics above, especially when you consider that Microsoft and Apple both have a more distinct breakdown and separation in the X-Force database. Anyone from IBM/ISS care to clarify? Lisa, did you have more extensive notes on this aspect that didn’t make it into the article, perhaps?
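As a back-of-the-envelope sketch of how much this could matter, here is a hypothetical recalculation using the article’s 3,272 total and the counts from the two X-Force entries above; the simple substitution below is my own simplification, not IBM/ISS methodology:

    # Rough sketch of how counting granularity can skew vendor percentages.
    # The 3,272 total and the per-advisory counts come from the article and
    # the two X-Force entries cited above; the substitution is an assumption
    # made purely for illustration.
    total_disclosures = 3272          # X-Force total for 1H 2007, as reported

    # The two Oracle Critical Patch Update entries in X-Force:
    xforce_entries = 2                # how X-Force appears to count them
    cve_ids        = 18 + 30          # the CVE identifiers they map to (48)
    oracle_issues  = 30 + 50          # distinct issues per Oracle's own advisories (80+)

    for label, count in [("X-Force entries", xforce_entries),
                         ("CVE identifiers", cve_ids),
                         ("Oracle's own count", oracle_issues)]:
        # Swap just these two database entries for the alternate count and
        # see how their share of the total shifts.
        adjusted_total = total_disclosures - xforce_entries + count
        share = 100.0 * count / adjusted_total
        print(f"{label:18s}: {count:3d} -> {share:.2f}% of {adjusted_total}")

Just those two entries swing between roughly 0.06% and 2.4% of the total depending on which count you use, a difference larger than the gap between several of the vendors in the top-10 list.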

Scrubbing the Source Data

A few months ago, Jeff Jones at CSO Online blogged about “Scrubbing the Source Data”, discussing the challenges of using vulnerability data for analysis. Part 1 examined the National Vulnerability Database (NVD), showing that you can’t blindly rely on the data from VDBs. In his examples, he shows that using the data to examine Windows is probably fairly accurate, examining Apple is less so, and examining Ubuntu Linux is basically not possible. Unfortunately, there isn’t a part two to the series (yet), as implied by the title and introduction. Jones concludes the post:

Given these accuracy levels for vulnerabilities after the vendor has acknowledged it and provided a fix, it doesn’t seem like too much of a stretch to also conclude that using this data to analyze unpatched data would be equally challenging. Finally, I think this exercise helps demonstrate that anyone leveraging public data sources needs to have a good understanding of both the strengths and the weaknesses that any given data source may have, with respect to what one is trying to analyze or measure, and include steps in their methodology that accommodate accordingly.

McAfee: Microsoft patches 133 Critical/Important Vulns in 2006

http://www.avertlabs.com/research/blog/?p=153

McAfee is reporting that Microsoft patched 133 Critical / Important vulnerabilities in 2006. They also compare this number against previous years to presumably demonstrate that security isn’t getting better at Microsoft.

Oracle RDBMS vs Microsoft SQL Server

http://www.databasesecurity.com/dbsec/comparison.pdf

Introduction

This paper will examine the differences between the security posture of Microsoft’s SQL Server and Oracle’s RDBMS based upon flaws reported by external security researchers and since fixed by the vendor in question. Only flaws affecting the database server software itself have been considered in compiling this data so issues that affect, for example, Oracle Application Server have not been included. The sources of information used whilst compiling the data that forms the basis of this document include:

The Microsoft Security Bulletins web page
The Oracle Security Alerts web page
The CVE website at Mitre
The SecurityFocus.com website

A general comparison is made covering Oracle 8, 9 and 10 against SQL Server 7, 2000 and 2005. The vendors’ flagship database servers are then compared.

[..]

Vulnerability Type Distributions in CVE

http://cwe.mitre.org/documents/vuln-trends.html

Document version: 1.0
Date: October 4, 2006

For the past 5 years, CVE has been tracking the types of errors that lead to publicly reported vulnerabilities, and periodically reporting trends on a limited scale. In support of the Common Weakness Enumeration (CWE) project, and as a result of the interest in this work as mentioned during the “Year of the web application: Hack & Data from the Front lines” panel at the 5th Annual Cyber Security Executive Summit in New York City on September 13, 2006, we have published a more extensive analysis. An updated version will be released once 2006 is complete.

The primary goal of this study is to better understand research trends using publicly reported vulnerabilities. It should be noted that the data is obtained from an uncontrolled population, i.e., decentralized public reports from a research community with diverse goals and interests, with an equally diverse set of vendors and developers. More specialized, exhaustive, and repeatable methods could be devised to evaluate software security. But until such methods reach maturity and widespread acceptance, the overall state of software security can be viewed through the lens of public reports.

[..]

A Time to Patch III: Apple

http://blog.washingtonpost.com/securityfix/2006/05/a_time_to_patch_iii_apple_2.html

Over the past several months, Security Fix published data showing how long it took Microsoft and Mozilla to issue updates for security flaws. Today, I’d like to present some data I compiled that looks at Apple’s performance on this front.

Here’s what I found: Over the past two years, after being notified about serious security flaws in its products, it took Apple about 91 days on average to issue patches to correct those vulnerabilities. I also found that almost without exception, open-source Linux vendors were months ahead of Apple in fixing the same flaws.

You can download a copy of the charts I put together either in HTML format or as a Microsoft Excel file. I spent a long time on this research, but that doesn’t mean it is free of typos and so forth. If you spot one, or a discrepancy in the data, please drop me a line and I will update the data as necessary.

[..]

Depending on how you count flaws..

http://www.computerworld.com/[..]/0,10801,109278,00.html

After flap, Symantec adjusts browser bug count
Depending on how you count flaws, either IE or Firefox could be considered less secure
News Story by Robert McMillan

MARCH 07, 2006 (IDG NEWS SERVICE) – A report issued today by Symantec Corp. seeks to satisfy users of both Mozilla Corp.’s Firefox browser and Microsoft Corp.’s Internet Explorer.

In its latest Internet Security Threat Report, covering the last six months of 2005, the company now features two different ways of counting browser bugs: one that finds that Internet Explorer has the most vulnerabilities, and a second that reveals Firefox as the bug leader.

Thank you Symantec, for generating completely useless vulnerability statistics. When you can manipulate them to support either side of an argument (and do so intentionally), what’s the point? Just define your criteria for counting a vulnerability, define your time frame, and let the results speak for themselves.
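To illustrate, here is a contrived sketch with invented records; this is not Symantec’s data or methodology, and I am assuming the two counting rules are something like “vendor-confirmed only” versus “all reported”:

    # Invented advisory records showing how two counting rules can crown
    # different "bug leaders" from the same underlying data.
    advisories = [
        {"browser": "IE",      "vendor_confirmed": True},
        {"browser": "IE",      "vendor_confirmed": True},
        {"browser": "IE",      "vendor_confirmed": True},
        {"browser": "Firefox", "vendor_confirmed": True},
        {"browser": "Firefox", "vendor_confirmed": True},
        {"browser": "Firefox", "vendor_confirmed": False},  # reported, never acknowledged
        {"browser": "Firefox", "vendor_confirmed": False},
    ]

    def count(browser, confirmed_only):
        return sum(1 for a in advisories
                   if a["browser"] == browser
                   and (a["vendor_confirmed"] or not confirmed_only))

    for confirmed_only in (True, False):
        ie, ff = count("IE", confirmed_only), count("Firefox", confirmed_only)
        rule = "vendor-confirmed only" if confirmed_only else "all reported"
        print(f"{rule:21s}: IE={ie} Firefox={ff} -> leader: {'IE' if ie > ff else 'Firefox'}")

Same records, two rules, two different “leaders”, which is exactly why the criteria and time frame have to be stated up front.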

Mac vs Windows – More “Statistics”

Yet another article comparing Mac vs. Windows, and using statistics to back it up. Since this is getting to be a common occurrence, I won’t go into the usual lecture about statistics: how they can easily be manipulated to back any argument (including how VAX/VMS is the most in/secure OS in the world!), how you must fully qualify the data you used to generate your statistics, and all the other tricks that make statistics the best tool to create a convincing argument (lie?). I’m not saying this because I think Mac or Windows is more or less secure. I’m saying this because I don’t feel the following article is accurate or well written. Even the readers who commented bring up some very valid points and questions for the author. Add to that the fact that the author (George Ou) appears to be somewhat outspoken and a fan of Microsoft, and his credibility and bias toward rivals come into question. I’d love for Secunia to officially respond to this article, since he uses their database and rating system to generate his stats.

George Ou’s relevant conclusions: Between Feb 04 and Feb 06, Mac OS X had 5 “extremely critical” (1 unpatched) vulnerabilities and MS Windows had 2 “extremely critical” (0 unpatched) vulnerabilities. Mac OS X had 173 high and 59 moderate vulnerabilities, while MS Windows had 49 high and 41 moderate. Ou goes on to conclude: “The data is clear, and Apple has a lot more vulnerabilities of every kind ranging from moderately critical to extremely critical.”

Vulnerability statistics for Mac and Windows
http://blogs.zdnet.com/Ou/?p=165

One of many good comments challenging the piece:
http://www.zdnet.com/[..]messageID=356498&start=-1

Past criticism of Ou’s work, and signs he may be biased toward Microsoft:
http://www.google.com/search?hl=en&q=George+Ou

A Time to Patch

http://blogs.washingtonpost.com/securityfix/2006/01/a_timeline_of_m.html

Brian Krebs has a fantastic post on his blog covering the time it takes for Microsoft to release a patch, and whether they are getting any better at it. Here are a few relevant paragraphs from it, but I encourage you to read the entire article. It appears to be a well-developed piece that is heavily researched and quite balanced. Makes me wonder if his editors shot it down for some reason. If they did, shame on them.

A few months back while researching a Microsoft patch from way back in 2003, I began to wonder whether anyone had ever conducted a longitudinal study of Redmond’s patch process to see whether the company was indeed getting more nimble at fixing security problems.

Finding no such comprehensive research, Security Fix set about digging through the publicly available data for each patch that Microsoft issued over the past three years that earned a “critical” rating. Microsoft considers a patch “critical” if it fixes a security hole that attackers could use to break into and take control over vulnerable Windows computers.

Here’s what we found: Over the past three years, Microsoft has actually taken longer to issue critical fixes when researchers waited to disclose their research until after the company issued a patch. In 2003, Microsoft took an average of three months to issue patches for problems reported to them. In 2004, that time frame shot up to 134.5 days, a number that remained virtually unchanged in 2005.

First off, these are the kind of statistics and research that I mean when I talk about the lack of evolution of vulnerability databases. This type of information is interesting, useful, and needed in our industry. It begins to give customers a solid idea of just how responsive our vendors are, and just how long we stay at risk with unpatched vulnerabilities. This is also the type of data that any solid vulnerability database should be able to produce with a few clicks of the mouse.
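For example, with nothing more than a report date and a patch date per vulnerability, the kind of per-year averages Krebs compiled reduces to a simple grouping. A minimal sketch, using hypothetical records rather than a real VDB export:

    from collections import defaultdict
    from datetime import date

    # Hypothetical (reported, patched) date pairs for critical issues; a real
    # run would pull these from a vulnerability database, not a literal list.
    records = [
        (date(2003, 2, 1),  date(2003, 5, 5)),
        (date(2004, 3, 10), date(2004, 8, 1)),
        (date(2005, 1, 15), date(2005, 6, 2)),
    ]

    days_by_year = defaultdict(list)
    for reported, patched in records:
        days_by_year[reported.year].append((patched - reported).days)

    for year in sorted(days_by_year):
        lags = days_by_year[year]
        print(year, round(sum(lags) / len(lags), 1), "days on average")

The hard part is not the computation; it is having the dates recorded consistently in the first place, which is the subject of the next paragraph.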

This type of article can be written due to the right data being available. Specifically, a well documented and detailed time line of the life of a vulnerability. Discovery, disclosure to the vendor, vendor acknowledgement, public disclosure, and patch date are required to generate this type of information. People like Steven Christey (CVE) and Chris Wysopal (VulnWatch) have been pushing for this information to be made public, often behind the scenes in extensive mail to vendors. In the future if we finally get these types of statistics for all vendors over a longer period of time, you will need to thank them for seeing it early on and helping to make it happen.

This type of data is of particular interest to OSVDB and has been worked into our database (to a degree) from the beginning. We currently track the disclosure date, discovery date and exploit publish date for each vulnerability, as best we can. Sometimes this data is not available, but we include it when it is. One of our outstanding development/Bugzilla entries involves adding a couple more date fields, specifically vendor acknowledgement date and vendor solution date. With these five fields, we can begin to trend this type of vendor response time with accuracy, and with a better historical perspective.
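A minimal sketch of what those five fields would enable once populated; the field and method names here are illustrative, not OSVDB’s actual schema:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class VulnDates:
        # Illustrative names only; not OSVDB's actual schema.
        discovery: Optional[date] = None
        disclosure: Optional[date] = None
        exploit_published: Optional[date] = None
        vendor_acknowledged: Optional[date] = None   # proposed field
        vendor_solution: Optional[date] = None       # proposed field

        def _days(self, start, end):
            return (end - start).days if start and end else None

        def vendor_response_time(self):
            # Acknowledgement to fix: how long the vendor sat on it.
            return self._days(self.vendor_acknowledged, self.vendor_solution)

        def exposure_window(self):
            # Public disclosure to fix: how long users stayed at risk.
            return self._days(self.disclosure, self.vendor_solution)

    # A made-up entry:
    v = VulnDates(disclosure=date(2005, 3, 1),
                  vendor_acknowledged=date(2005, 3, 3),
                  vendor_solution=date(2005, 5, 12))
    print(v.vendor_response_time(), v.exposure_window())   # 70 72

Aggregate those two numbers per vendor, per year, or per severity and you have the trending described above.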

While Krebs used Microsoft as an example, are you aware that other vendors are worse than Microsoft? Some of the large Unix vendors have been slow to patch for the last twenty years! Take the recent disclosure of a bug in uustat on Sun Microsystems’ Solaris operating system. iDefense recently reported the problem and included a timeline of the disclosure process.

08/11/2004 Initial vendor contact
08/11/2004 Initial vendor response
01/10/2006 Coordinated public disclosure
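A quick check of that gap, using the dates from the timeline above (MM/DD/YYYY in the original):

    from datetime import date

    # Dates taken from the iDefense timeline above.
    contacted = date(2004, 8, 11)   # initial vendor contact
    disclosed = date(2006, 1, 10)   # coordinated public disclosure

    gap = disclosed - contacted
    print(gap.days, "days, roughly", gap.days // 30, "months")   # 517 days, roughly 17 months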

Yes, one year and five months for Sun Microsystems to fix a standard buffer overflow in a SUID binary, the same type of flaw that has plagued them as far back as January 1997 (maybe as far back as December 6, 1994, but details aren’t clear). It would be nice to see this type of data available for all vendors on demand, and it will be in due time. Move beyond the basic stats and consider what happens if we break this down by the severity of the vulnerability. Does severity change the vendor’s response time (consistently)? Compare the timelines along with who discovered the vulnerability and how it was disclosed (responsibly or not). Do those factors change the vendor’s response time?

The answers to those questions have been on our minds for a long time and are just a few of the many goals of OSVDB.