Vulnerability Counts and OSVDB Advocacy

CVE just announced reaching 30,000 identifiers, which is a pretty scary thing. CVE staff have a good eye for catching vulnerabilities from sources away from the mainstream (e.g., Bugtraq), and they have the advantage of being a very widely accepted standard for tracking vulnerabilities. As companies and researchers request CVE numbers for disclosures, they get a lot of the information handed to them on a silver platter. Of course, sometimes that platter is full of mud and confusion, as vendors don’t always provide clear details to help CVE accurately track and distinguish between multiple vulnerabilities. I’ve also pointed out many times in the past that CVE is a unique VDB in that it provides identifiers for vulnerability tracking. They do not provide many fields associated with other VDBs (solution, creditee, etc.). As such, they may have a single entry that covers multiple distinct vulnerabilities if they are the same class (XSS, SQLi, RFI), or if details are lacking but they know the issues affect the same product (Oracle). So when we see 30,000 identifiers, we have to realize that the real count of vulnerabilities is significantly higher.

CVE is run by The MITRE Corporation, sponsored and funded by the NCSD (US-CERT) of DHS under government contract. That means our tax dollars fund this database, so it should be of particular interest to U.S. taxpayers in the security industry. I know from past discussions with CVE staff and other industry veterans that on any given day, they are likely to have more work than available staff. That means the rate at which vulnerabilities get published is greater than the resources CVE can maintain to track them. In short, the 30,000 identifiers you see represent only a percentage of the vulnerabilities actually disclosed. We could probably debate what percentage that represents all day long, and I don’t think that is really the point here, other than “we know it isn’t all of them”.

Every VDB suffers from the same thing. “Commercial” VDBs like X-Force, BID and Secunia have full-time staff that maintain their databases, like CVE does. Despite having all of these teams (some of them consisting of 10 or more people) maintaining VDBs, we still see countless vulnerabilities that are ‘missed’ by all of them. This is not a slight against them in any way; it is a simple matter of available resources and the amount of information out there. Even with a large team sorting disclosed vulnerabilities, some teams spend time validating the findings before adding them to the database (Secunia), which is an incredible benefit for their customers. There is also a long-standing parasitic nature to VDBs, with each of them watching the others as best they can, to help ensure they are tracking all the vulnerabilities they can. For example, OSVDB keeps a close eye on Secunia and CVE specifically, and as time permits we look to X-Force, BID, SecurityTracker and others. Each VDB tends to have some researchers who exclusively disclose vulnerabilities directly to the VDB of their choice. So each one mentioned above will get word of vulnerabilities that the rest really have no way of knowing about, short of watching each other like this. This VDB inbreeding (I will explain the choice of word some other time) is an accepted practice, and I have touched on it in the past (CanSecWest 2005).

Because of this inbreeding and OSVDB’s ability to watch other resources, our moderators are occasionally freed up to go looking for vulnerability information that wasn’t published in the mainstream. This usually involves grueling crawls through vendor knowledge bases, mind-numbing changelogs, searches of CVS-type repositories and more. That leads to the point of this lengthy post. In doing this research, we begin to see how many more vulnerabilities are out there in the software we use that escape the VDBs most of the time. Only now, after four years and after getting an incredible developer to make many aspects of the OSVDB wish-list a reality, do we finally begin to see all of this. As I have whined about for those four years, VDBs need to evolve and move beyond this purely “mainstream reactionary” model. Meaning, we have to stop watching the half dozen usual spots for new vulnerability information, creating our entries, rinsing and repeating. There is a lot more information out there just waiting to be read and added.

In the past few weeks, largely because of the time freed up by the VDB inbreeding mentioned above, we’ve been able to dig into a few products more thoroughly. These examples are not meant to pick on any product or VDB, or to imply anything other than what is said above. In fact, this type of research is only possible because the other VDBs are doing a good job tracking the mainstream sources, and because some vendors publish full changelogs and don’t try to hide security-related fixes. Kudos to all of them.

Example: Search your favorite VDB for “inspircd”, a popular multi-platform IRC daemon. Compare the results from BID, Secunia, X-Force, SecurityTracker, and http://osvdb.org/ref/blog/inspircd-cve.png against OSVDB’s results after digging into the project’s changelogs. Do these same searches for “xfce” (10 OSVDB, 5 max elsewhere), “safesquid” (6 OSVDB, 1 max elsewhere), “beehive forum” (27 OSVDB, 8 max elsewhere) and “jetty” (25 OSVDB, 12 max elsewhere). Let me emphasize, I did not hand-pick these examples to put down any VDB; these are simply some of the products we’ve investigated in the last few weeks.

The real point here is that no matter what vulnerability disclosure statistic you read, regardless of which VDB it uses (including OSVDB), consider that the real number of vulnerabilities disclosed is likely much higher than any of us know or have documented. As always, if you see vulnerabilities in a vendor KB or changelog and can’t find them in your favorite VDB, let them know. We all maintain e-mail addresses for submissions, and we all strive to be as complete as possible.

US-CERT: A disgrace to vulnerability statistics

Several people have asked OSVDB for our thoughts on the recent US-CERT Cyber Security Bulletin 2005 Summary. Producing vulnerability statistics is trivial: all it takes is your favorite data set, a few queries, and off you go. Producing meaningful and useful vulnerability statistics is a real chore. I’ve long been interested in vulnerability statistics, especially in how they are used and the damage they cause. Creating and maintaining a useful statistics project has been on the OSVDB to-do list for some time, and I personally have not followed up with some folks who had the same interest (Ejovi et al.). Until I see such statistics “done right”, I will of course continue to voice my opinion about other efforts.

Some of the following comments are in the form of questions that have what I feel are fairly obvious answers. At some point I will no doubt flesh out some of these ideas a bit better. Until then…

Information in the US-CERT Cyber Security Bulletin is a compilation and includes information published by outside sources, so the information should not be considered the result of US-CERT analysis.

Read: Our disclaimer so you can’t blame us for shoddy stats!

Software vulnerabilities are categorized in the appropriate section reflecting the operating system on which the vulnerability was reported; however, this does not mean that the vulnerability only affects the operating system reported since this information is obtained from open-source information.

What’s the point, then? If you can’t categorize the software by specific operating system, or at least distinguish which issues affect multiple vendors (with better accuracy), is the list really that helpful or useful?

Vulnerabilities
* Windows Operating System
* Unix/ Linux Operating System
* Multiple Operating System

A decade later and we’re still doing the “Windows vs. Unix” thing? I guess after the last year of hype, Mac OS X still isn’t mature enough to get its own category, or is it now considered “Unix”? To answer that question: yes, this list lumps it in with “Unix”, and yes, it is curious they don’t break it out after SANS gave it its own entry on their list. Not even an “other” category to cover VMS, AS/400, routers or other devices?

Now, let’s look at the very first entry on the list out of curiosity: a Windows Operating System vulnerability titled “1Two Livre d’Or Input Validation Errors Permit Cross-Site Scripting”. This issue can be referenced by CVE 2005-1644, SecurityTracker 1013971, Bugtraq ID or OSVDB 16717. The CERT bulletin labels this issue “High Risk”, which is baffling to say the least. A cross-site scripting issue in the error output of a very low-distribution guestbook is high risk? What do they label a remote code execution vulnerability in Windows? Looking at the first MSIE vulnerability on the list, it too is rated “high risk”. Great, way to completely invalidate your entire risk rating.

OK, on to the fun part: the statistics! Unfortunately, the bulletin is sorely lacking in wording, explanation, details and additional disclaimers. We get two very brief paragraphs and a list of vulnerabilities that link to their summary entries. Very unfortunate. No, let me do one better. US-CERT, you are a disgrace to vulnerability databases. I can’t fathom why you even bothered to create this list, or why anyone in their right mind would actually use, reference or quote this trash. The only ‘statistics’ provided by this bulletin:

This bulletin provides a year-end summary of software vulnerabilities that were identified between January 2005 and December 2005. The information is presented only as a index with links to the US-CERT Cyber Security Bulletin the information was published in. There were 5198 reported vulnerabilities: 812 Windows operating system vulnerabilities; 2328 Unix/Linux operating vulnerabilities; and 2058 Multiple operating system vulnerabilities.

The simple truth is that 99.99% of the people reading this document will see the first two paragraphs and move on. They will not read every single entry on that page. That means they walk away with the idea that Unix/Linux is roughly three times more vulnerable than Windows, when that simply is not the case. While scrolling down, I ran across a section of the Unix/Linux vulnerability list that jumped out at me:

# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# Info-ZIP UnZip File Permission Modification
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)

So the list includes the same entries over and over because they are [Updated]. The quoted part above represents four vulnerabilities but appears to have been responsible for twenty entries. The PCRE overflow issue gets twelve entries on the CERT list. Why is the Unix/Linux section full of this type of screw-up, yet magically the Windows section contains very few? Could it be that the Unix/Linux vendors actually respond to every issue in a more timely fashion? Or is US-CERT intentionally trying to harm the reputation of Unix/Linux? What, don’t like those two conclusions? TOUGH, that is what happens when you release such shoddy ‘research’ or ‘statistics’ (terms used very lightly).
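The inflation in the excerpt above is trivial to reproduce. A minimal sketch (entry titles copied from the quoted list) that normalizes away the “(Updated)” marker and counts distinct vulnerabilities:

```python
# Bulletin entries from the excerpt above; re-published items carry an
# "(Updated)" suffix but describe the same underlying vulnerability.
entries = (
    ["ImageMagick Photoshop Document Buffer Overflow (Updated)"] * 5
    + ["ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)"] * 4
    + ["Info-ZIP UnZip File Permission Modification"]
    + ["Info-ZIP UnZip File Permission Modification (Updated)"] * 6
    + ["Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)"] * 4
)

# Strip the "(Updated)" marker and dedupe on the remaining title.
unique = {e.removesuffix(" (Updated)").strip() for e in entries}

print(len(entries))  # 20 raw list entries
print(len(unique))   # 4 actual vulnerabilities
```

Twenty lines in the bulletin, four vulnerabilities: a 5x inflation in this stretch of the list alone.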

Fortunately for me, someone at Slashdot noticed the same thing and did some calculations after removing the [Updated] entries: Windows drops from 813 to 671, Unix/Linux drops from 2328 to 891, and Multiple drops from 2057 to 1512. This gives us a total of 3074 vulnerabilities reported (by US-CERT standards), not 5198. With a margin of error that large, how can anyone take them seriously? More to the point, how can the mainstream media journalists over at the Washington Post blog about this without challenging the statistics?
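Those figures also show how badly the duplicates distort the headline comparison. A quick sketch, using the raw bulletin counts and the Slashdot-recalculated counts from above, of the Unix/Linux-to-Windows ratio before and after deduplication:

```python
# Raw counts from the US-CERT bulletin vs. counts with the duplicate
# "(Updated)" entries removed (per the Slashdot recalculation).
raw   = {"Windows": 813, "Unix/Linux": 2328, "Multiple": 2057}
dedup = {"Windows": 671, "Unix/Linux": 891,  "Multiple": 1512}

print(sum(raw.values()))    # 5198 vulnerabilities as reported
print(sum(dedup.values()))  # 3074 after removing duplicates

# The "Unix is roughly 3x more vulnerable" impression collapses:
print(round(raw["Unix/Linux"] / raw["Windows"], 2))      # 2.86
print(round(dedup["Unix/Linux"] / dedup["Windows"], 2))  # 1.33
```

The category most padded with [Updated] entries (Unix/Linux, cut by more than half) is exactly the one the casual reader walks away believing is three times worse.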

A decade later, and the security community still lacks any meaningful statistics for vulnerabilities. Why can’t these outfits with commercial or federal funding actually do a good job and produce solid data that helps, instead of confusing and misleading?!
