
US-CERT: A disgrace to vulnerability statistics

Several people have asked about OSVDB’s thoughts on the recent US-CERT Cyber Security Bulletin 2005 Summary. Producing vulnerability statistics is trivial: all it takes is your favorite data set, a few queries, and off you go. Producing meaningful and useful vulnerability statistics is a real chore. I’ve long been interested in vulnerability statistics, especially in how they are used and the damage they cause. Creating and maintaining a useful statistics project has been on the OSVDB to-do list for some time, and I personally have not followed up with some folks who had the same interest (Ejovi et al.). Until I see such statistics “done right”, I will of course continue to voice my opinion on other efforts.
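To show just how trivial that first kind is, a toy sketch follows. The data layout (a CSV dump of entries with an os_class column) is entirely hypothetical, made up for illustration rather than taken from any real VDB export:

import csv
from collections import Counter

def count_by_os(path):
    # Tally entries per OS category: the entire "methodology" behind
    # many headline vulnerability statistics.
    with open(path, newline="") as f:
        return Counter(row["os_class"] for row in csv.DictReader(f))

for os_class, n in count_by_os("vulns-2005.csv").most_common():
    print(os_class, n)

That is the whole effort. The hard part, deduplicating entries, classifying them sensibly, and sanity-checking severity, is exactly the part that takes real work.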

Some of the following comments are in the form of questions that have what I feel are fairly obvious answers. At some point I will no doubt flesh these ideas out a bit better. Until then…

Information in the US-CERT Cyber Security Bulletin is a compilation and includes information published by outside sources, so the information should not be considered the result of US-CERT analysis.

Read: Our disclaimer so you can’t blame us for shoddy stats!

Software vulnerabilities are categorized in the appropriate section reflecting the operating system on which the vulnerability was reported; however, this does not mean that the vulnerability only affects the operating system reported since this information is obtained from open-source information.

What’s the point then? If you can’t categorize the software by specific operating system, or at least distinguish with better accuracy which issues affect multiple vendors, is the list really that helpful or useful?

Vulnerabilities
* Windows Operating System
* Unix/Linux Operating System
* Multiple Operating System

A decade later and we’re still doing the “Windows vs. Unix” thing? I guess after the last year of hype, Mac OS X still isn’t mature enough to get its own category, or is it now considered “Unix”? To answer that question: yes, this list lumps it in with “Unix”, and yes, it is curious that they don’t break it out after SANS gave it its own entry on their list. Not even an “other” category to cover VMS, AS/400, routers, or other devices?

Now, let’s look at the very first entry on the list, out of curiosity: a Windows operating system vulnerability titled “1Two Livre d’Or Input Validation Errors Permit Cross-Site Scripting”. This issue can be referenced by CVE 2005-1644, SecurityTracker 1013971, Bugtraq ID, or OSVDB 16717. The CERT bulletin labels this issue “High Risk”, which is baffling to say the least. A cross-site scripting issue in the error output of a very low-distribution guestbook is high risk? What, then, do they label a remote code execution vulnerability in Windows? Looking at the first MSIE vulnerability on the list, it too is rated “high risk”. Great, way to completely invalidate your entire risk rating.

OK, on to the fun part: the statistics! Unfortunately, the bulletin is sorely lacking in wording, explanation, details, and additional disclaimers. We get two very brief paragraphs and the list of vulnerabilities linking to their summary entries. Very unfortunate. No, let me do one better: US-CERT, you are a disgrace to vulnerability databases. I can’t fathom why you even bothered to create this list, or why anyone in their right mind would actually use, reference, or quote this trash. The only ‘statistics’ provided by this bulletin:

This bulletin provides a year-end summary of software vulnerabilities that were identified between January 2005 and December 2005. The information is presented only as a index with links to the US-CERT Cyber Security Bulletin the information was published in. There were 5198 reported vulnerabilities: 812 Windows operating system vulnerabilities; 2328 Unix/Linux operating vulnerabilities; and 2058 Multiple operating system vulnerabilities.

The simple truth is that 99.99% of the people reading this document will see the first two paragraphs and move on. They will not read every single entry on that page. That means they walk away with the idea that Unix/Linux is roughly 3x more vulnerable than Windows, when it simply is not the case. While scrolling down, I ran across a section of the Unix/Linux vulnerability list that jumped out at me:

# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# Info-ZIP UnZip File Permission Modification
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)

So the list includes the same entries over and over because they were [updated]. The quoted section above represents four vulnerabilities, yet accounts for twenty entries. The PCRE overflow issue gets twelve entries on the CERT list. Why is the Unix/Linux section full of this type of screw-up, yet the Windows section magically contains very few? Could it be that the Unix/Linux vendors actually respond to every issue in a more timely fashion? Or is US-CERT intentionally trying to harm the reputation of Unix/Linux? What, don’t like either of those conclusions? TOUGH, that is what happens when you release such shoddy ‘research’ or ‘statistics’ (terms used very lightly).

Fortunately for me, someone at Slashdot noticed the same thing and redid the calculations with the [updated] entries removed: Windows drops from 812 to 671, Unix/Linux drops from 2328 to 891, and Multiple drops from 2058 to 1512. This gives us a total of 3074 reported vulnerabilities (by US-CERT’s standards), not 5198. With a margin of error that large, how can anyone take them seriously? More to the point, how can the mainstream media journalists over at the Washington Post blog about this without challenging the statistics?
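For anyone who wants to reproduce the recount, the fix is embarrassingly simple. Here is a minimal sketch; the excerpt list is just the twenty entries quoted above, not the full bulletin:

import re

def distinct(titles):
    # Collapse "(Updated)" re-issues down to distinct vulnerabilities.
    return {re.sub(r"\s*\(Updated\)$", "", t) for t in titles}

# The Unix/Linux excerpt quoted above, as it appears in the bulletin.
excerpt = (
    ["ImageMagick Photoshop Document Buffer Overflow (Updated)"] * 5
    + ["ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)"] * 4
    + ["Info-ZIP UnZip File Permission Modification"]
    + ["Info-ZIP UnZip File Permission Modification (Updated)"] * 6
    + ["Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)"] * 4
)
print(len(excerpt), "list entries ->", len(distinct(excerpt)), "vulnerabilities")
# prints: 20 list entries -> 4 vulnerabilities

Twenty entries collapse to four. Run the same pass over the full bulletin and you get the corrected totals above.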

A decade later, and the security community still lacks any meaningful statistics for vulnerabilities. Why can’t these outfits with commercial or federal funding actually do a good job and produce solid data that helps instead of confusing and misleading?

Site Specific Vulnerabilities

The argument that vulnerability databases (VDBs) are evil because they hand the bad guys all of the information they need to hack has gone on for quite some time. Of course, this debate crosses over into the full disclosure argument, which asks whether the information should be public at all. For VDBs, though, the real question is not whether to provide information on vulnerabilities, but how specific the cataloged information should be.

Some believe that providing exploits, or tools that make a vulnerability easier to exploit, is where a resource crosses the line into providing too much information. While there may be a few valid points on that topic, the real issue, in my opinion, is when resources provide vulnerability information in the context of a specific site or domain.

VDBs are not the only resources that may carry site specific vulnerability information. Search engines such as Google and Yahoo have come under fire for serving as collections of security information that assist attackers. In fact, it could be argued that search engines pose a greater disclosure problem than a VDB, since a search engine takes an attacker straight to a vulnerable website. There is even a whole site (and following) centered around using Google to quickly find specific vulnerable software on websites.

OSVDB has a policy of not cataloging site specific vulnerabilities, and we make every effort to keep such entries out of the database. Quite a few VDBs follow the same practice. However, some VDBs will include site specific vulnerabilities for high profile websites. For example:

XSS vulnerabilities in Google.com: http://www.watchfire.com/securityzone/advisories/12-21-05.aspx

Google GMail ‘CheckAvailability’ Script May Disclose User Information to Remote Users: http://www.securitytracker.com/alerts/2004/Jul/1010647.html

Yahoo! Mail ‘order’ and ‘sort’ Field Input Validation Flaw Permits Cross-Site Scripting Attacks: http://www.securitytracker.com/alerts/2004/Mar/1009352.html

It is important to note that certain advisories make it very hard to determine whether the issue is in a product or on a website. Recently, we have seen what appear to be more site specific issues being released by researchers. For example:

SpearTek Search Field Handling Cross Site Scripting Vulnerability: http://www.frsirt.com/english/advisories/2005/3052

Quite a few advisories released by r0t may fall into the site specific category. If you look at r0t’s site, anything claiming an XSS vulnerability in the “Search Module” may very well be a vulnerability in the company’s website, not in the product, even though he lists version numbers in the advisory.

Since VDBs such as OSVDB do not have the resources to verify every advisory, we must rely on the information provided, and in some cases site specific vulnerabilities make it into VDBs by accident. In fact, dozens of entries in OSVDB currently carry internal flags while we try to determine whether they are site specific.
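As a purely hypothetical illustration (this is not OSVDB’s actual flagging logic), the crude first pass of that triage might look something like this:

# Hypothetical heuristic only; real triage needs a human reading the advisory.
SERVICE_HINTS = (".com", ".net", ".org")  # assumption: a domain in a title is suspicious

def looks_site_specific(title):
    # Flag titles that name a domain rather than an installable product,
    # so the entry gets reviewed before it is cataloged.
    t = title.lower()
    return any(hint in t for hint in SERVICE_HINTS)

for advisory in ("XSS Vulnerabilities in Google.com",
                 "1Two Livre d'Or Input Validation Errors"):
    verdict = "flag for review" if looks_site_specific(advisory) else "catalog"
    print(advisory, "->", verdict)

A heuristic this crude only catches the obvious cases; the ambiguous advisories described above still come down to human judgment.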

Cataloging a security issue in a specific company’s website or online service is not something OSVDB believes to be appropriate, even though many contributors personally find such issues extremely interesting and believe they have value. Many organizations that rely on online services may find great value in knowing about an issue with their online email provider; the fact that many businesses make heavy use of services like Google, Gmail, or Yahoo suggests that vulnerabilities in those services are just as dangerous as vulnerabilities in the software deployed on their own networks.

This may make OSVDB look incomplete compared to VDBs that include site specific vulnerabilities, but we are willing to accept that risk. The OSVDB project aims to provide useful information that helps improve the community’s level of security, and site specific vulnerabilities do not currently fall into that category.
