Category Archives: Vulnerability Statistics

Who discovered the most vulns?

This is a question OSVDB moderators, CVE staff, and countless other VDB maintainers have asked. Today, Gunter Ollmann of IBM X-Force released research attempting to answer it. Before you read on: I think this research is excellent. The relatively few criticisms I raise are not the fault of Ollmann’s research or methodology, but of his VDB of choice (and *every* other VDB) not having a complete data set.

Skimming his list, my first thought was that he was missing someone. A quick search of OSVDB shows that Lostmon Lords (aka ‘lostmon’) has close to 350 published vulnerabilities. How could the top ten miss someone like this when Ollmann’s #10 had only 147? Read down to Ollmann’s caveat and there is a valid point, but the wording is sketchy. The data he is using relies on this information being public. The caveat’s phrase “because they were disclosed on non-public lists” implies that the only sources he or X-Force use are mail lists such as Bugtraq and Full-Disclosure. Back in the day, those were a reliable source for a very high percentage of vulnerability information. In recent years, though, a VDB must look at other sources to get a better picture. Web sites such as milw0rm receive a steady stream of vulnerability information that is frequently not cross-posted to mail lists. In addition, many researchers (including lostmon) mail their discoveries directly to the VDBs and bypass the public lists entirely. If researchers mail a few VDBs and not the rest, the VDBs must start watching each other. This in turn leads to the “VDB inbreeding” that Jake and I mentioned at CanSecWest 2005, a necessary evil if you want more data on vulnerabilities.

In May of 2008, OSVDB did the same research Ollmann did and came up with different results, based on the data we had available, which is admittedly still very incomplete (we always need data manglers). So who is right? Perhaps he is, perhaps we are, but unfortunately we’re both working with incomplete databases. In my opinion, OSVDB has better coverage of vulnerabilities, while X-Force clearly has better consistency in its data and a fraction of the gaps we do.

Last, this data is interesting as is, but it would be really fascinating if it were combined with ‘researcher confidence’ (a big interest of Steve Christey/CVE and myself), in which we track a researcher’s record of accuracy in disclosure. Someone who disclosed 500 vulnerabilities last year with a 10% error rate should not rank above someone who found 475 with a 0% error rate. In addition, as Ollmann’s caveat says, these are pure numbers and do not distinguish hundreds of XSS findings from remote code execution in a default-install operating system service. A weight system applied to each vulnerability (e.g., XSS = 3, SQLi = 7, remote code execution = 9), factored into a researcher’s total, could move beyond “who discovered the most” and perhaps start to answer “who found the most respectable vulnerabilities”.
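To make the idea concrete, here is a minimal sketch of how such a score might be computed. The class weights are the hypothetical ones from the example above; discounting by error rate is just one possible way to combine the two factors, not an established metric:

    # Illustrative sketch of the weighted scoring idea above. The class
    # weights (XSS = 3, SQLi = 7, remote code execution = 9) come from the
    # hypothetical example in the post; the error-rate discount is an
    # assumed way of combining the two factors.
    VULN_WEIGHTS = {"xss": 3, "sqli": 7, "rce": 9}

    def researcher_score(disclosures, error_rate):
        """Weighted sum of disclosures, discounted by historical error
        rate (0.0 = always accurate, 1.0 = always wrong)."""
        raw = sum(VULN_WEIGHTS.get(vuln_class, 1) for vuln_class in disclosures)
        return raw * (1.0 - error_rate)

    # The post's example: 500 findings at a 10% error rate should not
    # rank above 475 findings at 0%.
    print(researcher_score(["xss"] * 500, 0.10))  # 1350.0
    print(researcher_score(["xss"] * 475, 0.00))  # 1425.0

Under this toy formula the accurate researcher does rank higher, matching the intuition above.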

Top vulnerability researcher?

Who is the top vulnerability researcher? Who has discovered the most computer security vulnerabilities? Which country has the most researchers and publishes the most vulnerabilities? Who has discovered the most critical vulnerabilities?

From looking at OSVDB, here are the top 12 researchers in terms of volume:

Rank / Creditee / # Vulns

  1. r0t / 770
  2. Lostmon Lords / 241
  3. rgod / 239
  4. Aliaksandr Hartsuyeu / 201
  5. Kacper / 199
  6. James Bercegay / 180
  7. luny / 142
  8. Diabolic Crab / 139
  9. Janek Vind “waraxe” / 136
  10. JeiAr / 117
  11. Dedi Dwianto / 86
  12. M.Hasran Addahroni / 79

Take a look at the other OSVDB browse categories, and note that you can click on a creditee’s name to see all of the vulnerabilities they have discovered: http://osvdb.org/browse

Of course, our statistics are based on the content in OSVDB, and we need your help to make them better. If you are a researcher, it would help if you could take the time to create an OSVDB account and update the vulnerabilities you have discovered!

You can sign up for an OSVDB account here: https://osvdb.org/account/signup

Here is a quick overview:

  • Search for your vulnerabilities at http://osvdb.org/search/advsearch
  • Click on your vuln, then click “Edit Vulnerability”.
  • Click the Credits menu item; if credit is missing, click “Toggle Add Author…”.
  • Your name may already be in the database; as you type, the form searches OSVDB to see if your information is there. If so, select it and click “Add Author”.
  • Once you add the creditee information, you can update it; if your name is not there, you can add it as a new creditee.

Rinse and repeat!

Vulnerability Counts and OSVDB Advocacy

CVE just announced reaching 30,000 identifiers, which is a pretty scary thing. CVE staff have a good eye for catching vulnerabilities from sources away from the mainstream (e.g., Bugtraq), and they have the advantage of being a widely accepted standard for tracking vulnerabilities. As companies and researchers request CVE numbers for disclosures, a lot of the information is handed to them on a silver platter. Of course, sometimes that platter is full of mud and confusion, as vendors don’t always provide clear details to help CVE accurately track and distinguish between multiple vulnerabilities. I’ve also pointed out many times in the past that CVE is a unique VDB in that it provides identifiers for vulnerability tracking; it does not provide many of the fields associated with other VDBs (solution, creditee, etc.). As such, CVE may have a single entry that covers multiple distinct vulnerabilities if they are of the same class (XSS, SQLi, RFI), or if details are lacking but the issues are known to affect the same product (Oracle). So when we see 30,000 identifiers, we have to realize that the real count of vulnerabilities is significantly higher.

CVE is run by The MITRE Corporation and is sponsored/funded by the NCSD (US-CERT) of DHS under government contract. That means our tax dollars fund this database, so it should be of particular interest to U.S. taxpayers in the security industry. I know from past discussions with CVE staff and other industry veterans that on any given day, they are likely to have more work than available staff. In other words, vulnerabilities are published faster than CVE’s resources allow them to be tracked. In short, the 30,000 identifiers you see represent only a percentage of the vulnerabilities actually disclosed. We could debate what that percentage is all day long, and I don’t think that is really the point here, other than “we know it isn’t all of them”.

Every VDB suffers from the same thing. “Commercial” VDBs like X-Force, BID, and Secunia have full-time staff that maintain their databases, as CVE does. Despite all of these teams (some consisting of 10 or more people), we still see countless vulnerabilities that are ‘missed’ by all of them. This is not a slight against them in any way; it is a simple matter of the resources available versus the amount of information out there. Even with a large team sorting disclosed vulnerabilities, some teams spend time validating findings before adding them to the database (Secunia), which is an incredible benefit for their customers. There is also a long-standing parasitic nature to VDBs, with each of them watching the others as best they can to help ensure they are tracking every vulnerability possible. For example, OSVDB keeps a close eye on Secunia and CVE specifically, and as time permits we look to X-Force, BID, SecurityTracker, and others. Each VDB tends to have some researchers who exclusively disclose vulnerabilities directly to the VDB of their choice, so each one mentioned above will get word of vulnerabilities that the rest have no way of knowing about, short of watching each other like this. This VDB inbreeding (I will explain the choice of word some other time) is an accepted practice, and I have touched on it in the past (CanSecWest 2005).

Because the inbreeding lets OSVDB watch other resources, our moderators occasionally have time to go looking for vulnerability information that wasn’t published in the mainstream. This usually involves grueling crawls through vendor knowledge bases, mind-numbing changelogs, searches of CVS-type repositories, and more. That leads to the point of this lengthy post: in doing this research, we begin to see how many more vulnerabilities are out there in the software we use that escape the VDBs most of the time. Only now, after four years and after getting an incredible developer to make many aspects of the OSVDB wish-list a reality, do we finally begin to see all of this. As I have whined about for those four years, VDBs need to evolve and move beyond this purely “mainstream reactionary” model. Meaning, we have to stop watching the half dozen usual spots for new vulnerability information, creating our entries, rinsing, and repeating. There is a lot more information out there just waiting to be read and added.
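For the curious, the changelog half of that grunt work is easy to approximate. Below is a minimal sketch, assuming a plain-text changelog file; the keyword list is an illustrative guess at what tends to signal a quietly fixed vulnerability, and a human still has to read every match:

    import re
    import sys

    # Rough triage of a plain-text changelog: print lines that look
    # security-relevant so a human can follow up. The keyword list is an
    # illustrative assumption, not an exhaustive taxonomy.
    SECURITY_HINTS = re.compile(
        r"security|vulnerab|overflow|cross.site|xss|sql.inject|"
        r"traversal|denial.of.service|format.string|csrf|exploit",
        re.IGNORECASE,
    )

    def flag_entries(path):
        """Yield (line_number, line) for entries worth a closer look."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for num, line in enumerate(fh, start=1):
                if SECURITY_HINTS.search(line):
                    yield num, line.rstrip()

    if __name__ == "__main__":
        for num, line in flag_entries(sys.argv[1]):
            print(f"{num}: {line}")

This only surfaces candidates; confirming that a flagged entry is a distinct, real vulnerability is the part that still takes moderator time.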

In the past few weeks, largely due to the ability to free up time due to the VDB inbreeding mentioned above, we’ve been able to dig into a few products more thoroughly. These examples are not meant to pick on any product / VDB or imply anything other than what is said above. In fact, this type of research is only possible because the other VDBs are doing a good job tracking the mainstream sources, and because some vendors publish full changelogs and don’t try to hide security related fixes. Kudos to all of them.

Example: search your favorite VDB for “inspircd”, a popular multi-platform IRC daemon. Compare the results from BID, Secunia, X-Force, SecurityTracker, and CVE (screenshot: http://osvdb.org/ref/blog/inspircd-cve.png) against OSVDB’s results after digging into the project’s changelogs. Do the same searches for “xfce” (10 in OSVDB, 5 max elsewhere), “safesquid” (6 in OSVDB, 1 max elsewhere), “beehive forum” (27 in OSVDB, 8 max elsewhere), and “jetty” (25 in OSVDB, 12 max elsewhere). Let me emphasize: I did not hand-pick these examples to put down any VDB; these are simply some of the products we’ve investigated in the last few weeks.

The real point here is that no matter what vulnerability disclosure statistic you read, regardless of which VDB it uses (including OSVDB), consider that the real number of vulnerabilities disclosed is likely much higher than any of us know or have documented. As always, if you see a vulnerability in a vendor KB or changelog and can’t find it in your favorite VDB, let them know. We all maintain e-mail addresses for submissions, and we all strive to be as complete as possible.

2007 Top Vulnerable Vendors?

http://www.eweek.com/article2/0,1895,2184206,00.asp
http://www.eweek.com/c/a/Security/Report-MS-Apple-Oracle-Are-Top-Vulnerable-Vendors/

New IBM research shows that five vendors are responsible for 12.6 percent of all disclosed vulnerabilities. Not surprising: in the first half of 2007, Microsoft was the top vendor when it came to publicly disclosed vulnerabilities. Likely surprising to some: Apple got second place. IBM Internet Security Systems’ X-Force R&D team released its 2007 report on cyber attacks on Sept. 17, revealing that the top five vulnerable vendors accounted for 12.6 percent of all disclosed vulnerabilities in the first half of the year, or 411 of 3,272 vulnerabilities disclosed. Here’s the order in which the top 10 vendors stacked up, by percentage of vulnerabilities publicly disclosed in the first half of the year:

  1. Microsoft, 4.2 percent
  2. Apple, 3 percent
  3. Oracle, 2 percent
  4. Cisco Systems, 1.9 percent
  5. Sun Microsystems, 1.5 percent
  6. IBM, 1.3 percent
  7. Mozilla, 1.3 percent
  8. XOOPS, 1.2 percent
  9. BEA, 1.1 percent
  10. Linux kernel, 0.9 percent

This article was posted to ISN the other day and struck a nerve. How many times are we going to see vulnerability statistics presented without qualification? Rather than really get into the details, I replied with a single, simple example of why such statistics are misleading at best and incorrect at worst. The bulk of my reply follows. My hopes of Lisa or IBM/ISS clarifying this are already dwindling.

One other factor, which Lisa Vaas apparently didn’t ask about, is how ISS X-Force catalogs vulnerabilities, and whether their method and standards could impact these numbers at all. Take, for example, two X-Force vulnerability database entries:

  • Oracle Critical Patch Update – July 2007 (http://xforce.iss.net/xforce/xfdb/35490): 18 CVEs, 30+ vulnerabilities per Oracle
  • Oracle Critical Patch Update – January 2007 (http://xforce.iss.net/xforce/xfdb/31541): 30 CVEs, 50+ vulnerabilities per Oracle

So when comparing numbers, you have two X-Force entries that equate to 48 CVE entries, which equate to *more than 80* unique and distinct vulnerabilities according to Oracle. I’m not a math or stats guy, but I have a feeling this could seriously skew the statistics above, especially when you consider that Microsoft and Apple both have a more distinct breakdown and separation in the X-Force database. Anyone from IBM/ISS care to clarify? Lisa, did you have more extensive notes on this aspect that didn’t make it into the article?
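To see how much counting granularity matters, here is a quick back-of-the-envelope calculation using the figures quoted above (3,272 total disclosures in 1H2007, Oracle at 2 percent). Holding every other vendor’s count fixed while only the two CPU entries expand is a simplifying assumption for illustration:

    # How Oracle's share shifts with counting granularity, using the
    # figures quoted above. Assumes all other counts stay fixed while
    # only the two CPU entries expand; purely illustrative.
    TOTAL = 3272                  # X-Force's 1H2007 disclosure count
    oracle = round(0.02 * TOTAL)  # ~65 entries, per the 2 percent figure

    def share(extra):
        """Oracle's share if the two CPU entries expand by `extra` items."""
        return 100.0 * (oracle + extra) / (TOTAL + extra)

    print(f"{share(0):.1f}%")       # 2 X-Force entries (as published): ~2.0%
    print(f"{share(48 - 2):.1f}%")  # counted as 48 CVE identifiers:    ~3.3%
    print(f"{share(80 - 2):.1f}%")  # counted as 80+ Oracle vulns:      ~4.3%

Under the vendor’s own counting, Oracle would jump past Apple’s 3 percent and close in on Microsoft’s 4.2 percent, which is exactly why the methodology deserves a footnote in these reports.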

Scrubbing the Source Data

A few months ago, Jeff Jones at CSO Online blogged about “Scrubbing the Source Data”, discussing the challenges of using vulnerability data for analysis. Part one examined the National Vulnerability Database (NVD), showing that you can’t blindly rely on data from VDBs. His examples show that using the data to examine Windows is probably fairly accurate, examining Apple is less so, and examining Ubuntu Linux is basically not possible. Unfortunately, there isn’t a part two to the series (yet), as implied by the title and introduction. Jones concludes the post:

Given these accuracy levels for vulnerabilities after the vendor has acknowledged it and provided a fix, it doesn’t seem like too much of a stretch to also conclude that using this data to analyze unpatched data would be equally challenging. Finally, I think this exercise helps demonstrate that anyone leveraging public data sources needs to have a good understanding of both the strengths and the weaknesses that any given data source may have, with respect to what one is trying to analyze or measure, and include steps in their methodology that accommodate accordingly.

McAfee: Microsoft patches 133 Critical/Important Vulns in 2006

http://www.avertlabs.com/research/blog/?p=153

McAfee is reporting that Microsoft patched 133 Critical/Important vulnerabilities in 2006. They also compare this number against previous years, presumably to demonstrate that security isn’t getting better at Microsoft.

Oracle RDBMS vs Microsoft SQL Server

http://www.databasesecurity.com/dbsec/comparison.pdf

Introduction

This paper will examine the differences between the security posture of Microsoft’s SQL Server and Oracle’s RDBMS based upon flaws reported by external security researchers and since fixed by the vendor in question. Only flaws affecting the database server software itself have been considered in compiling this data so issues that affect, for example, Oracle Application Server have not been included. The sources of information used whilst compiling the data that forms the basis of this document include:

The Microsoft Security Bulletins web page
The Oracle Security Alerts web page
The CVE website at MITRE
The SecurityFocus.com website

A general comparison is made covering Oracle 8, 9, and 10 against SQL Server 7, 2000, and 2005. The vendors’ flagship database servers are then compared.

[..]

Vulnerability Type Distributions in CVE

http://cwe.mitre.org/documents/vuln-trends.html

Document version: 1.0 Date: October 4, 2006

For the past 5 years, CVE has been tracking the types of errors that lead to publicly reported vulnerabilities, and periodically reporting trends on a limited scale. In support of the Common Weakness Enumeration (CWE) project, and as a result of the interest in this work as mentioned during the “Year of the web application: Hack & Data from the Front lines” panel at the 5th Annual Cyber Security Executive Summit in New York City on September 13, 2006, we have published a more extensive analysis. An updated version will be released once 2006 is complete.

The primary goal of this study is to better understand research trends using publicly reported vulnerabilities. It should be noted that the data is obtained from an uncontrolled population, i.e., decentralized public reports from a research community with diverse goals and interests, with an equally diverse set of vendors and developers. More specialized, exhaustive, and repeatable methods could be devised to evaluate software security. But until such methods reach maturity and widespread acceptance, the overall state of software security can be viewed through the lens of public reports.

[..]

A Time to Patch III: Apple

http://blog.washingtonpost.com/securityfix/2006/05/a_time_to_patch_iii_apple_2.html


Over the past several months, Security Fix published data showing how long it took Microsoft and Mozilla to issue updates for security flaws. Today, I’d like to present some data I compiled that looks at Apple’s performance on this front.

Here’s what I found: Over the past two years, after being notified about serious security flaws in its products, it took Apple about 91 days on average to issue patches to correct those vulnerabilities. I also found that almost without exception, open-source Linux vendors were months ahead of Apple in fixing the same flaws.

You can download a copy of the charts I put together either in HTML format or as a Microsoft Excel file. I spent a long time on this research, but that doesn’t mean it is free of typos and so forth. If you spot one, or a discrepancy in the data, please drop me a line and I will update the data as necessary.

[..]

Depending on how you count flaws...

http://www.computerworld.com/[..]/0,10801,109278,00.html

After flap, Symantec adjusts browser bug count
Depending on how you count flaws, either IE or Firefox could be considered less secure
News Story by Robert McMillan

MARCH 07, 2006 (IDG NEWS SERVICE) – A report issued today by Symantec Corp. seeks to satisfy users of both Mozilla Corp.’s Firefox browser and Microsoft Corp.’s Internet Explorer.

In its latest Internet Security Threat Report, covering the last six months of 2005, the company now features two different ways of counting browser bugs: one that finds that Internet Explorer has the most vulnerabilities, and a second that reveals Firefox as the bug leader.

Thank you, Symantec, for generating completely useless vulnerability statistics. When your numbers can be manipulated to support either side of an argument (and are, intentionally), what’s the point? Just define your criteria for counting a vulnerability, define your time frame, and let the results speak for themselves.
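A tiny worked example of why the criteria matter. All counts below are invented, and the two criteria (all reported versus vendor-confirmed) are hypothetical stand-ins that may or may not match Symantec’s actual methods; the point is only that the same data set can crown either browser:

    # Invented numbers showing how the counting criterion flips the
    # conclusion; these are NOT Symantec's actual figures.
    all_reported = {"IE": 25, "Firefox": 17}
    vendor_confirmed = {"IE": 12, "Firefox": 13}

    for label, counts in (("all reported", all_reported),
                          ("vendor-confirmed", vendor_confirmed)):
        leader = max(counts, key=counts.get)
        print(f"Counting {label}: {leader} leads with {counts[leader]} flaws")
    # Counting all reported: IE leads with 25 flaws
    # Counting vendor-confirmed: Firefox leads with 13 flaws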
