Category Archives: General Vulnerability Info

WASC Threat Classification in 4 languages

The Web Application Security Consortium (WASC) is announcing the availability of the Web Security Threat Classification in English, Japanese, Spanish, and Turkish. The material is open source and provided in TXT, PDF, and DOC formats.

http://www.webappsec.org/projects/threat/

A Day in the Life of a Security Bulletin

A Day in the Life of a Security Bulletin
http://blogs.technet.com/msrc/archive/2005/09/28/411635.aspx

Hi all- Alexandra Huft here again! I thought you might find it interesting to see “behind the scenes” of how a security vulnerability eventually becomes a security bulletin.

So, I’ll start way back at the beginning. We receive reports from many different finders on issues that may or may not be a vulnerability. The first thing that we do is work to determine that we are able to duplicate what the finder has reported. Sometimes this is very simple, other times we need to go back to the finder for additional information, but whenever possible we try and recreate what they’ve discovered with our own research. We work with the affected product teams and our own experts on the Secure Windows Initiative team (SWI) to reproduce these reports. We also try to keep the finder updated with as much information as we can provide, so that they are aware of where we are in the process. We then work on determining the severity, which is not always the easiest thing. Like you, we all have our opinions, which lead to many a heated discussion in the MSRC Situation Room where we meet several times a week. We all want the best decision for all of our customers.

[..]

I’d be interested in seeing the same topic covered by Sun Microsystems, HP, Oracle, and other vendors with large product bases.

Vulnerability Classification Terminology

Local or remote: it seems so simple when classifying a vulnerability. The last few years have really thrown this simple distinction for a loop. Think of a vulnerability that occurs when processing a file, such as a browser rendering a JPG or GIF, or a program like Adobe Reader processing a PDF file. On one hand, you could argue that the browser has to remotely load the image, or that the PDF has to be e-mailed to a user before it is opened. On the other hand, what happens when the malformed file is handed to you on a floppy disk? What if you are using MSIE to locally browse files on the hard disk? It’s not that local or remote are *wrong*, they’re just not descriptive enough.

This debate has popped up on mail lists in the past year, and I guarantee it has been discussed at every VDB. After a couple of years of discussing it internally at OSVDB, we haven’t been able to come up with a better classification scheme. Why? Everything we come up with is either just as nondescript or overly complex. We can’t seem to find a good middle ground that covers such distinctions.

Recently, Steven Christey of CVE came up with a middle ground and has begun using it in some entries. For attacks that require external help to deliver hostile material to a victim, he uses the phrase “external user-complicit attackers”, and it seems to be a good fit.

Examples:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2005-2471
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2005-2501

MusicPlasma for Vulnerabilities

A couple of years back, I ran across musicplasma. For those not familiar with the engine, it allows you to type in your favorite music artist or band and see “related” artists. So I type in “portishead” (mmmm) and see related bands like Tricky and Sneakerpimps. These are all considered “trip-hop”, so the links are expected. Moving a bit farther out, I start to see new bands (at the time) like Zero7, Air, or Hooverphonic (many of which are now on my playlist). So using this graphical representation, it is easy to see related bands, and this type of tool is incredible for finding new music.

Shortly after, I started wondering what it would be like to use such an engine on vulnerabilities. What would it do, would it be valuable, would it help anyone? Two years later I still have the same questions, but lean toward the idea that it would be invaluable for vulnerability research, statistical analysis, and trending. Projects like CVE or OSVDB would love such a tool, and we’ve discussed the idea in the past. This most recently came up when Steven Christey (CVE) mailed asking what rules we adhered to for related OSVDB links within our database. As I said to him in e-mail, the CliffsNotes answer to whether we have rules governing this is “no”. I know, bad VDB! Despite that, there is a definite intention and desire for such links, and they would be used more strictly and consistently if we had developers to help us integrate our ideas into the actual database and front end. The gist of the related links is to eventually move toward an engine like MusicPlasma for vulnerabilities. Instead of rewriting portions of the mail I wrote, I’ll lazily quote some relevant parts:

Obviously it’s a *great* tool for music, since it is hard to find bands similar to the ones you like when most music reviews won’t even disclose whether the lead singer is male or female, let alone the real style of the music beyond some pretty broad categories like “rock” or “rap”. Anyway, on an abstract level, using something like this to chart vulns and building an interface for users to browse vulnerabilities would be interesting. You visit osvdbplasma, click on PHP-Nuke, then graphically browse the issues, but instead of just ‘similar’, you do it by age and severity. The closest to the PHP-Nuke ring would be the remote code execution issues in the latest versions, then you could follow that out to older issues. You could choose a different path for XSS, Path Disclosure, and other classes.

Like I said, maybe not so useful but it would look really cool(tm), and would make it more understandable to end users without much security experience (a long term goal of OSVDB).

[..]

Yep, another idea I had a while back: tracking the history of vulns in a set of products. Pick a few that cover a wide range .. Windows, Oracle, PHP-Nuke, John’s Blog. Then look at the vulnerabilities discovered in them, focusing on the types (SQL, PD, XSS, Overflow, etc). See if there are trends in the types discovered, then cross-match them with (very rough) dates of when each class of vulnerability was discovered/announced (a task unto itself). Do any of these products get better? Worse? Are there trends of folks discovering the same types as they become ‘popular’ to research? All kinds of neat research to do here.

Not surprisingly, Christey replied quickly, saying that he too had thought of this type of model for viewing vulnerabilities, and he added his own ideas for the reasons and features of such a project. I don’t think he took me seriously when I suggested mugging top DHS officials to fund such a project.

A couple of weeks ago, HexView Security Research brought this to life with the first generation of such an engine. Check out their vulnerability maps. Done in Java, they tie products and platforms to vulnerabilities, showing how they are related. Currently, mousing over a vulnerability only offers a title and no additional information, but this is the first step! It’s very cool to see other companies and researchers looking into modeling this type of information.
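To make the idea a bit more concrete, here is a minimal sketch of how VDB entries could be linked and walked outward from a seed, MusicPlasma-style. The entry format and the “relatedness” rule are invented for illustration; this is not the OSVDB schema or HexView’s implementation.

```typescript
// A minimal sketch of a "musicplasma for vulnerabilities" data model.
// Entry fields and the relatedness rule are invented for illustration.

interface Vuln {
  id: number;          // e.g. an OSVDB ID
  title: string;
  vulnClass: string;   // "XSS", "SQL Injection", "Overflow", ...
  disclosed: string;   // rough disclosure date
  products: string[];  // affected products
}

// Tiny in-memory data set standing in for a real VDB export.
const vulns: Vuln[] = [
  { id: 1, title: "PHP-Nuke admin.php SQL Injection", vulnClass: "SQL Injection", disclosed: "2005-01", products: ["PHP-Nuke"] },
  { id: 2, title: "PHP-Nuke modules.php XSS",         vulnClass: "XSS",           disclosed: "2005-03", products: ["PHP-Nuke"] },
  { id: 3, title: "Example CMS search XSS",           vulnClass: "XSS",           disclosed: "2005-04", products: ["Example CMS"] },
];

// Treat two entries as "related" if they share a product or a vulnerability class;
// a renderer would draw the seed in the center and these in the next ring out.
function related(seed: Vuln, all: Vuln[]): Vuln[] {
  return all.filter(v =>
    v.id !== seed.id &&
    (v.vulnClass === seed.vulnClass ||
     v.products.some(p => seed.products.indexOf(p) !== -1)));
}

// Walk outward from a seed entry the way musicplasma expands from a band.
console.log(related(vulns[0], vulns).map(v => `${v.id}: ${v.title}`));
```

A real version would obviously add rings by age and severity, as described above, but the data model is no more exotic than this.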

Anyway, all of this goes back to a long-running gripe OSVDB has with the industry, and with VDBs specifically: a lack of evolution. These types of projects would be incredibly fun to work on, and they could offer great insight into vulnerabilities, research, and product history. On the off chance someone reading this knows about rendering this kind of data, or has the time and expertise, contact us! We’d love to abuse your knowledge and get you involved in making this project happen.

Scary Oracle Numbers

http://www.eweek.com/print_article2/0,1217,a=160368,00.asp

On Security, Is Oracle the Next Microsoft?
September 16, 2005
By Paul F. Roberts

While [Oracle CSO Mary Ann Davidson] acknowledges that some of the criticism from Litchfield and others is valid, outsiders aren’t privy to the 75 percent of product holes that Oracle discovers and fixes internally.

OSVDB has listings for roughly 330 Oracle vulnerabilities. If we take Davidson’s comment at face value and believe the number isn’t inflated, those 330 represent 25 percent of the vulnerabilities in Oracle’s products. So according to Oracle, the company knows of more than 1,300 vulnerabilities in its products.
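Taking the quote at face value, the back-of-the-envelope math looks like this (a rough sanity check that assumes the ~330 public entries really are the remaining 25 percent):

```typescript
// Rough sanity check of the 75 percent claim; the 330 figure is OSVDB's count.
const publicVulns = 330;                        // Oracle entries in OSVDB
const publicShare = 0.25;                       // 1 - 0.75 (Davidson's figure)
const totalKnown = publicVulns / publicShare;   // 1320 vulnerabilities Oracle knows about
const internalOnly = totalKnown - publicVulns;  // ~990 found and fixed without disclosure
console.log(totalKnown, internalOnly);
```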

Vulnerabilities becoming more mainstream?

Before 2005, it was fairly rare to see a news article specifically covering a vulnerability. They would usually pop up only if a vuln was used in a mass compromise, served as the basis of a propagating worm, or affected a large vendor such as Microsoft or Oracle. This year, however, it seems more and more news is being written about vulnerabilities. Some of this may be explained by vendors being considered more mainstream (Mozilla and Apple), while some may be attention being paid to the underlying technology that drives mainstream applications or high-profile mailing lists. Two examples of this can be seen in Mailman [OSVDB 13671, Article] and CPAINT [OSVDB 18746, Article].

More recently:
OSVDB 19255: Firefox flaw found: Remote exploit possible
OSVDB 19227: New Cisco flaw could pose threat to Net
OSVDB 19089: Microsoft Investigates New IE Hole
OSVDB 18956: Reports: Long Registry Names Could Hide Malware

Additionally, it is getting to be routine to see articles covering monthly patch cycles:
Microsoft patches IE, Word, Windows
Microsoft to release six patches, some ‘critical,’ next week
Major Oracle Patch Covers Enterprise Products, Database Server
Apple unloads dozens of fixes for OS X

To stay even more current, there are now articles covering ‘0-day’ vulnerabilities that are still in various stages of the disclosure cycle.

If a tree falls in the woods…

If a researcher discloses a vulnerability only to VDBs, and some or all of them publish the information, was the vulnerability really disclosed? Yes, of course, but should it have been? Are VDBs responsible for the information? Does it fall on us to check everything we get and verify that the vendor received it first? The snap answer is ‘yes’, but if so, is the answer the same for information published on a mail list? The snap answer is ‘no’.

This creates a situation where VDBs are held to certain standards of responsible disclosure and are virtually forced to play middleman between the vendor and the researcher. VDBs either take on a role they may never have intended to, or take a hit to their reputation for responsibly handling information that may put others at risk.

Late night babbling, or is that a shitty deal for VDBs?

XSS of the Third Kind

http://www.webappsec.org/projects/articles/071105.shtml

The Web Application Security Consortium is proud to present ‘DOM Based Cross Site Scripting or XSS of the Third Kind: A look at an overlooked flavor of XSS’, written by Amit Klein. In this article Amit focuses on a little-known variant of Cross Site Scripting which attacks a user’s client without sending malicious content to the web server.
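As a rough illustration of the class (my own sketch, not an example taken from Klein’s article), the flaw lives entirely in client-side code that copies attacker-influenced DOM data into the page; a URL fragment never even reaches the web server:

```typescript
// Hypothetical vulnerable page script. The value is read from the URL fragment
// (everything after '#'), which browsers do not send to the web server, so
// server-side filtering and logging never see the payload.
const fragment = window.location.hash.slice(1);          // e.g. "name=<img src=x onerror=...>"
const name = new URLSearchParams(fragment).get("name") || "guest";

// DOM-based XSS sink: attacker-influenced markup is parsed as HTML in the victim's browser.
document.getElementById("greeting")!.innerHTML = "Hello, " + name;

// A safer variant writes text instead of markup:
// document.getElementById("greeting")!.textContent = "Hello, " + name;
```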

Vuln info from public sources and VDB ‘rules’?

This has come up in the past, and again more recently. Is information found on a vendor’s website, such as a changelog or Bugzilla entry, fair game for inclusion in a vulnerability database? Some vendors seem to think this material is off-limits. If a person keeps a directory of material regarding vulnerabilities, and it is not password protected or restricted in any way, are we to assume it may be private in some fashion?

The recent complaint does bring up another issue, though: assigning vulnerable versions to a database entry. In this case, Secunia apparently listed 1.x when the flaw affected a specific release. SecurityFocus’ BID database tends to do this on many entries, listing all prior releases of a product as vulnerable when they haven’t necessarily been tested. That may be a safe assumption with some software, but not always. As new features are added to a software package, so are new bugs and vulnerabilities.
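To show why the distinction matters, here is a tiny hypothetical sketch (an invented entry format, not Secunia’s or BID’s) of the difference between flagging a whole 1.x line and flagging only the release that was actually tested:

```typescript
// Hypothetical VDB entry format, invented for illustration.
interface VulnEntry {
  title: string;
  affected: string[];   // releases claimed vulnerable
  tested: string[];     // releases actually verified
}

// Flagging the entire 1.x line when only 1.2.26 was tested overstates what is known.
const broadEntry: VulnEntry = {
  title: "Example App information disclosure",
  affected: ["1.0", "1.1", "1.2.0", "1.2.26"],   // "1.x and prior" style claim
  tested: ["1.2.26"],
};

// Anything listed as affected but never tested is an assumption, not a finding.
const assumed = broadEntry.affected.filter(v => broadEntry.tested.indexOf(v) === -1);
console.log(assumed);   // ["1.0", "1.1", "1.2.0"]
```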

VDBs using public information such as bug trackers and changelogs may have a long-term negative impact, though. The Caudium Group has closed its bug tracker to the public in response to Secunia’s vulnerability listing. If more vendors follow suit, detailed information will become unavailable to VDBs and the quality of the information we can provide will suffer.

Classification Headache: Remote vs Local

http://archives.neohapsis.com/archives/bugtraq/2005-07/0238.html

From: Derek Martin (code[at]pizzashack.org)
Date: Thu Jul 14 2005 – 21:39:30 CDT

The issue has come up on bugtraq before, but I think it is worth raising it again. The question is how to classify attacks against users’ client programs which come from the Internet, e.g. an e-mail carrying a malicious trojan horse payload. The reason this is important is because we judge how serious a particular vulnerability is based on how it is classified.

[..]

http://archives.neohapsis.com/archives/bugtraq/2005-07/0239.html

From: Bryan McAninch (bryan[at]mcaninch.org)
Date: Fri Jul 15 2005 – 10:58:47 CDT

I merely skimmed your post, so I apologize if the link I’m providing is not what you’re looking for. From what I read, it sounds as if you might be looking for an attack taxonomy, or something of that nature. An entire chapter of the Computer Security Handbook is devoted to this topic, written by John D. Howard and Thomas A. Longstaff. This document can also be found at CERT’s website – http://www.cert.org/research/taxonomy_988667.pdf

Full thread:
http://archives.neohapsis.com/archives/bugtraq/2005-07/thread.html#238

While this debate is very important to VDBs, the person who started the thread chose an extremely bad example. The real debate comes in with vulnerabilities that don’t require user interaction (i.e., not having to click an attachment), such as image-processing overflows. It is easy to argue either way: the overflow exists in the local application, but the content comes from a network resource.

Either way, every existing classification system (including OSVDB’s) falls back on remote vs. local, when it is becoming painfully obvious that the scheme needs to evolve. Steven Christey (CVE) has made comments regarding this (before and during this thread), suggesting that we take note of attacks that are “user-complicit” vs. “automatic”. This is certainly a large step in the right direction, but is it enough? Will this classification scheme last us a few more years?
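For what it’s worth, here is one hedged sketch of what a slightly richer scheme could record per entry. The field names are invented for illustration and are not part of CVE, OSVDB, or Christey’s proposal beyond the user-complicit/automatic distinction:

```typescript
// Invented fields for illustration only: record where the hostile input comes
// from and whether a victim has to do anything, instead of one local/remote flag.
type AttackLocation = "remote" | "local" | "context-dependent";
type Interaction = "automatic" | "user-complicit";

interface Classification {
  location: AttackLocation;   // where the hostile data originates or is processed
  interaction: Interaction;   // does the victim have to open, view, or click something?
}

// An image-rendering overflow: the code runs locally, the file usually arrives
// over the network, and the victim has to view it.
const imageOverflow: Classification = {
  location: "context-dependent",
  interaction: "user-complicit",
};

// A listening-service overflow: purely remote and requires no victim action.
const serviceOverflow: Classification = {
  location: "remote",
  interaction: "automatic",
};

console.log(imageOverflow, serviceOverflow);
```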
