From: Steven M. Christey (coley @ mitre.org)
To: bugtraq @ securityfocus.com
Date: Tue, 4 Oct 2005 17:11:51 -0400 (EDT)
Subject: A common researcher diagnosis error: misreading error messages
I am seeing increasing numbers of reports by researchers who make the same diagnostic error that you just highlighted. They throw some input for one vuln type at an application (say, XSS manipulations), get an error that shows “XSS,” and completely miss the fact that the error message shows a more serious problem at play, such as SQL injection or directory traversal. The XSS is “resultant” from these other “primary” errors.
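A minimal sketch of the kind of check a careful researcher (or scanner author) could apply: before claiming XSS, look at the response for signatures of a deeper primary flaw. All patterns and function names here are hypothetical, for illustration only:

```python
import re

# Hypothetical signatures of primary flaws that an XSS probe can surface.
SQL_ERROR_PATTERNS = [
    r"You have an error in your SQL syntax",   # MySQL
    r"unterminated quoted string",             # PostgreSQL
    r"ORA-\d{5}",                              # Oracle
]
TRAVERSAL_PATTERNS = [
    r"No such file or directory",
    r"open_basedir restriction",
]

def diagnose(response_body: str, probe: str) -> str:
    """Classify what an injected probe actually revealed."""
    for pat in SQL_ERROR_PATTERNS:
        if re.search(pat, response_body):
            return "sql-injection (primary); any XSS is resultant"
    for pat in TRAVERSAL_PATTERNS:
        if re.search(pat, response_body):
            return "directory-traversal (primary); any XSS is resultant"
    if probe in response_body:
        return "reflected input: possible XSS"
    return "inconclusive"
```

The point is only the ordering: test for the more serious primary causes first, and treat a reflected probe as "possible XSS" rather than a confirmed finding.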
Similarly, just because you throw a long input at a program and the program fails, it doesn’t necessarily mean that you found a buffer overflow. You could have triggered a memory allocation error; the program may not have recognized the argument as valid; it may have spotted the long input and returned a null pointer that the caller forgot to check, leading to a null dereference; or any of several other failure modes.
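A toy analog of that last case (hypothetical names, Python standing in for C): the crash below is triggered by a long input, but it is an unchecked null-style return, not an overflow:

```python
MAX_LEN = 256

def parse_argument(arg: str):
    """Return a parsed token, or None if the input is rejected as too long."""
    if len(arg) > MAX_LEN:
        return None  # input rejected -- the C analog would return NULL
    return arg.strip()

def handle(arg: str) -> str:
    token = parse_argument(arg)
    # Bug: the None/NULL return is never checked, so a long input
    # crashes right here -- a null dereference, not a buffer overflow.
    return token.upper()
```

Feeding `handle("A" * 1000)` dies with an AttributeError on `None`; a fuzzer that only observes “program fails on long input” could easily misreport this as an overflow.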
For those researchers who care about quality of information, make sure that you interpret error messages correctly, especially if you’re using some generic attack program that throws a lot of junk at an application. Error messages are important clues, but not the whole story.
Vulnerability information analysts – e.g. for vulnerability databases and scanning tools – should be vigilant for these common diagnostic errors.
In this particular instance, it doesn’t help that PHP’s error message generators don’t seem to quote the error messages that are generated, so a lot of “XSS-as-symptom-but-not-cause” problems are reported for PHP apps. Whether this is a problem with PHP itself or not is a separate question.
The Web Application Security Consortium (WASC) is announcing the availability of the Web Security Threat Classification in English, Japanese, Spanish, and Turkish. The material is open source and provided in TXT, PDF, and DOC formats.
A Day in the Life of a Security Bulletin
Hi all- Alexandra Huft here again! I thought you might find it interesting to see “behind the scenes” of how a security vulnerability eventually becomes a security bulletin.
So, I’ll start way back at the beginning. We receive reports from many different finders on issues that may or may not be a vulnerability. The first thing we do is work to determine whether we can duplicate what the finder has reported. Sometimes this is very simple; other times we need to go back to the finder for additional information, but whenever possible we try to recreate what they’ve discovered with our own research. We work with the affected product teams and our own experts on the Secure Windows Initiative (SWI) team to reproduce these reports. We also try to keep the finder updated with as much information as we can provide, so that they know where we are in the process.

We then work on determining the severity, which is not always the easiest thing. Like you, we all have our opinions, which lead to many a heated discussion in the MSRC Situation Room, where we meet several times a week. We all want the best decision for all of our customers.
I’d be interested in seeing the same topic covered by Sun Microsystems, HP, Oracle, and other vendors with large product bases.
Local or remote, seems so simple when classifying a vulnerability. The last few years have really thrown this simple distinction for a loop. Think of a vulnerability that occurs when processing a file, such as a browser rendering a JPG or GIF, or a program like Adobe Reader processing a PDF file. On one hand, you could argue that a browser has to remotely load an image or a user must e-mail a PDF to be opened. On the other hand, what happens when the malformed file is given to you on a floppy disk? What if you are using MSIE to locally browse files on the hard disk? It’s not that local or remote are *wrong*, just not descriptive enough.
This debate has popped up on mailing lists in the past year and, I guarantee you, has been discussed at every VDB. After a couple of years of discussing it internally at OSVDB, we haven’t been able to come up with a better classification scheme. Why? Everything we come up with is either just as nondescript or overly complex. We can’t seem to find a good middle ground that covers such distinctions.
Recently, Steven Christey of CVE has come up with a middle ground and begun using it in some entries. For attacks that require external help to somehow deliver hostile material to a victim, he has begun using “external user-complicit attackers” and it seems to be a good fit.
A couple years back, I ran across musicplasma. For those not familiar with the engine, it allows you to type in your favorite music artist/band, and see “related” artists. So I type in “portishead” (mmmm) and see related bands like Tricky, and Sneakerpimps. These are all considered “trip-hop” so the links are expected. Moving a bit farther out, I start to see new bands (at the time) like Zero7, Air, or Hooverphonic (many of which are now on my playlist). So using this graphical representation, it is easy to see related bands and this type of tool is incredible for finding new music.
Shortly after, I started wondering what it would be like to use such an engine on vulnerabilities. What would it do, would it be valuable, would it help anyone? Two years later I still have the same questions, but lean toward the idea that it would be invaluable for vulnerability research, statistical analysis, and trending. Projects like CVE or OSVDB would love such a tool, and we’ve discussed the idea in the past. This most recently came up when Steven Christey (CVE) mailed asking what rules we adhered to for related OSVDB links within our database. As I said to him in e-mail, the CliffsNotes answer to whether we have rules governing this is “no”. I know, bad VDB! Despite that, there is a definite intention and desire for such links, which would be used more strictly and consistently if we had developers to help us integrate our ideas into the actual database and front end. The gist of the related links is to eventually move toward an engine like MusicPlasma for vulnerabilities. Instead of rewriting portions of the mail I wrote, I’ll lazily quote some relevant parts:
Obviously a *great* tool for music, since it is hard to find bands similar to the ones you like: most music reviews won’t even disclose whether the lead singer is male or female, let alone the real style of the music beyond some pretty broad categories like “rock” or “rap”. Anyway, on an abstract level, using something like this to chart vulns and build an interface for users to browse vulnerabilities would be interesting. You visit osvdbplasma, click on PHP-Nuke, then graphically browse the issues, but instead of just ‘similar’, you do it by age and severity. Closest to the PHP-Nuke ring would be the remote code execution in the latest versions; then you could follow that out to older issues. You could choose a different path for XSS, path disclosure, and other classes.
Like I said, maybe not so useful but it would look really cool(tm), and would make it more understandable to end users without much security experience (a long term goal of OSVDB).
Yep, another idea I had a while back, tracking the history of vulns in a set of products. Pick a few that cover a wide range .. Windows, Oracle, PHP-Nuke, John’s Blog. Then look at the vulnerabilities discovered in them, focusing on the types (SQL, PD, XSS, Overflow, etc). See if there are trends in the types discovered, then cross match it with (very rough) dates of when the class of vulnerability was discovered/announced (a task unto itself). Do any of these products get better? Worse? Are there trends on folks discovering the same types as they become ‘popular’ to research? All kinds of neat research to do here.
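Under the hood, the engine described above boils down to a graph of related entries. A minimal sketch in Python (all IDs, products, and class names below are invented for illustration) of linking vulnerability entries that share a product or a vulnerability class:

```python
from collections import defaultdict

# Hypothetical entries: (osvdb_id, product, vuln_class, year)
ENTRIES = [
    (101, "PHP-Nuke", "sql-injection", 2003),
    (102, "PHP-Nuke", "xss", 2004),
    (103, "PHP-Nuke", "sql-injection", 2005),
    (104, "Windows", "overflow", 2004),
]

def build_edges(entries):
    """Link entries that share a product or a vulnerability class."""
    by_key = defaultdict(set)
    for vid, product, vclass, _year in entries:
        by_key[("product", product)].add(vid)
        by_key[("class", vclass)].add(vid)
    edges = set()
    for ids in by_key.values():
        for a in ids:
            for b in ids:
                if a < b:
                    edges.add((a, b))
    return edges
```

A MusicPlasma-style front end would then just be a renderer over this graph, with edge weight or distance driven by age, severity, or class, as described above.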
Not surprisingly, Christey replied quickly, saying that he too had thought of this type of model for viewing vulnerabilities, and added his own ideas for the reasons and features of such a project. I don’t think he took me seriously when I suggested mugging top DHS officials to fund it.
A couple of weeks ago, HexView Security Research brought this to life with the first generation of such an engine. Check out their vulnerability maps. Done in Java, they tie products and platforms to vulnerabilities, showing how they are related. Currently, mousing over a vulnerability offers only a title and no additional information, but this is the first step! It’s very cool to see other companies and researchers looking into modeling this type of information.
Anyway, all of this goes back to a long running gripe OSVDB has about the industry and VDBs specifically, and that is lack of evolution. These types of projects would be incredibly fun to work on, and potentially offer great insight into vulnerabilities, research and product history. On the off chance someone reading this knows about rendering such data or has time/expertise, contact us! We’d love to abuse your knowledge and get you involved in making this project happen.
On Security, Is Oracle the Next Microsoft?
September 16, 2005
By Paul F. Roberts
While [Oracle CSO Mary Ann Davidson] acknowledges that some of the criticism from Litchfield and others is valid, outsiders aren’t privy to the 75 percent of product holes that Oracle discovers and fixes internally.
OSVDB has listings for roughly 330 Oracle vulnerabilities. If we take Davidson’s comment at face value and believe the number isn’t inflated, that means those 330 represent 25% of the vulnerabilities in their products. So according to Oracle, they have over 1,300 vulnerabilities in their products that they know of.
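The arithmetic behind that estimate, as a quick sanity check (numbers taken from the paragraph above):

```python
public = 330          # Oracle vulnerabilities listed in OSVDB
public_share = 0.25   # Davidson: 75% are found and fixed internally

total = public / public_share   # total vulnerabilities Oracle knows of
internal = total - public       # the share never disclosed publicly
```

This puts the total at 1,320 known vulnerabilities, roughly 990 of which were never disclosed.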
Before 2005, it was fairly rare to see a news article specifically covering a vulnerability. They would usually pop up only if a vuln was used in a mass compromise, formed the basis of a propagating worm, or affected large vendors such as Microsoft and Oracle. This year, however, more and more news is being written about vulnerabilities. Some of this may be explained by certain vendors becoming more mainstream (Mozilla & Apple), while the rest may reflect attention being paid to underlying technology that drives more mainstream applications, or to high-profile lists. Two examples of this can be seen in Mailman [OSVDB 13671, Article] and CPAINT [OSVDB 18746, Article].
OSVDB 19255: Firefox flaw found: Remote exploit possible
OSVDB 19227: New Cisco flaw could pose threat to Net
OSVDB 19089: Microsoft Investigates New IE Hole
OSVDB 18956: Reports: Long Registry Names Could Hide Malware
Additionally, it is getting to be routine to see articles covering monthly patch cycles:
Microsoft patches IE, Word, Windows
Microsoft to release six patches, some ‘critical,’ next week
Major Oracle Patch Covers Enterprise Products, Database Server
Apple unloads dozens of fixes for OS X
To stay even more current, some articles now cover ‘0-day’ vulnerabilities still in various stages of the disclosure cycle.
If a researcher discloses a vulnerability only to VDBs, and some or all of them publish the information, was the vulnerability really disclosed? Yes, of course, but should it have been? Are VDBs responsible for the information? Does it fall on us to check everything we receive and verify that the vendor got it first? The snap answer is ‘yes’, but if so, is the answer the same for information published on a mailing list? The snap answer is ‘no’.
This creates a situation where VDBs are held to certain standards of responsible disclosure and are virtually forced to play middleman between the vendor and the researcher. A VDB must either take on a role it never intended, or take a hit to its reputation for handling responsibly information that may put others at risk.
Late night babbling, or is that a shitty deal for VDBs?
The Web Application Security Consortium is proud to present ‘DOM Based Cross Site Scripting or XSS of the Third Kind: A look at an overlooked flavor of XSS’, written by Amit Klein. In this article Amit focuses on a little-known variant of cross-site scripting that attacks a user’s client without sending malicious content to the web server.
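A rough illustration of why this flavor never reaches the server: a DOM-XSS payload can ride in the URL fragment, which the browser strips before issuing the request. The URL below is made up, and Python’s URL parsing stands in for what the browser does:

```python
from urllib.parse import urlsplit, urlunsplit

url = "http://example.com/page.html#name=<script>alert(1)</script>"
parts = urlsplit(url)

# What the browser actually requests from the server: no fragment.
request_target = urlunsplit((parts.scheme, parts.netloc, parts.path, parts.query, ""))

# The payload exists only client-side, where vulnerable DOM code
# (e.g. a script that writes location.hash into the page) executes it.
payload = parts.fragment
```

Here `request_target` is just `http://example.com/page.html`, so server-side logs and filters never see the script; only client-side code that trusts the fragment does.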
This has come up in the past, and again more recently. Is information found on a vendor website, such as a changelog or bugzilla entry, fair game for inclusion in a vulnerability database? Some vendors seem to think this material is off limits. If a person keeps a directory of material regarding vulnerabilities, and it is not password protected or restricted in any way, are we to assume it may be private in some fashion?
The recent complaint does bring up another issue, though: assigning vulnerable versions to a database entry. In this case, Secunia apparently listed all of 1.x when the issue affected a specific release. SecurityFocus’ BID database tends to do this on many entries, listing all prior releases of a product as vulnerable when they haven’t necessarily been tested. That may be a safe assumption for some software, but not always: as new features are added to a software package, so are new bugs and vulnerabilities.
VDBs using public information such as bug trackers and changelogs may have a long term negative impact though. The Caudium Group has closed its bug tracker to the public in response to Secunia’s vulnerability listing. If more vendors follow suit, this will make more detailed information unavailable to VDBs and impact the quality of the information we can provide.