If You Can’t, How Can We?

Steve Christey w/ CVE recently posted that trying to keep up with Linux Kernel issues was becoming a burden. These are issues that may or may not be security related; even the Kernel devs don’t fully know. While this is a good example of the issues VDBs face, it’s really the tip of the iceberg. Until their recent adoption of CVE identifiers, trying to distinguish Oracle vulnerabilities from each other was what you did as a gentle relief from a few hours of being water-boarded. Lately, Mozilla advisories are getting worse as they clump a dozen issues showing “evidence of memory corruption” into a single advisory, which then gets lumped into a single CVE. It doesn’t matter that the issues can be exploited separately, or that some may not be exploitable at all. Reading the bugzilla entries that cover them is headache-inducing, as Mozilla’s own devs frequently don’t understand the extent of the issues. That is, if they make the bugzilla entry public at all. If the Linux Kernel devs and Mozilla browser wonks cannot figure out the extent of these issues, how are VDBs supposed to?

Being “open source” isn’t some get-out-of-VDB free card. You’re supposed to be better than your closed-source rivals. You’re supposed to care about your customers and be open about security issues. An advisory full of “may” and “evidence of” is nothing more than a FUD-filled excuse to blindly upgrade without understanding the real threat or exposure to the end-user.

Steve’s post is a good view of how some VDBs feel about the issue: http://marc.info/?l=oss-security&m=124061708428439&w=2

Tonight, I followed up on his thoughts and gave more of my own (original: http://marc.info/?l=oss-security&m=124065500729868&w=2):

A question, really?

I’d like to reiterate what Steve Christey said in the last 24 hours, about the Linux Kernel vulnerabilities becoming a serious drain on CVE. Historically, OSVDB has relied on Secunia and CVE to sort out the Linux Kernel vulnerability messes. Both VDBs have full time staff that can dedicate time to figuring out such nuances as those above.

Not to pick on Eugene specifically, but I feel he makes a great example of my point. Here is a “Senior Security Engineer at Red Hat” who specializes in “OS and Application Security, Project Management, Vulnerability Analysis, Code-level Auditing, Penetration Testing, Red Hat Products and Services, Financial Services Technical Account Management”, and even he cannot definitively distinguish between these Kernel vulnerabilities. If Eugene cannot say with certainty that they deserve two CVE numbers, how can Steve or his staff?

VDBs deal with thousands of vulnerabilities a year, ranging from PHP applications to Oracle to Windows services to SCADA software to cellular telephones. We’re expected to have a basic understanding of ‘vulnerabilities’, but this isn’t 1995. Software and vulnerabilities have evolved over the years. They have moved from straightforward overflows (before distinctions like buffer vs stack vs heap vs underflow) and a single type of XSS to a wide variety of issues that are far from trivial to exploit. For fifteen years, it has been a balancing act for VDBs when including Denial of Service (DoS) vulnerabilities, because the details are often sparse and it is not clear if an unprivileged user can reasonably affect availability. Jump to today, where the software developers cannot, or will not, tell the masses what the real issue is.

This isn’t just a Linux Kernel issue at all. The recent round of advisories from Mozilla contains obscure wording that alludes to “memory corruption”, implying arbitrary code execution. If you follow the links to the bugzilla reports, the wording becomes a quagmire of terms that not even the developers can keep up on [1] [2]. That’s if they even open the bugzilla entry referenced in the advisory [3]. Again, how are people not intimately familiar with the code base supposed to understand these reports and give a reasonable definition of the vulnerability? How do we translate that mess of coder jargon into a 1 – 10 severity score?

It is important that VDBs continue to track these issues, and it is great that we have more insight and contact with the development teams of various projects. However, this insight and contact has paved the way for a new set of problems that over-tax an already burdened effort. MITRE receives almost 5 million dollars a year from the U.S. government to fund the C*E effort, including CVE [Based on FOIA information]. If they cannot keep up with these vulnerabilities, how do their “competitors”, especially free / open source ones [5], have a chance?

Projects like the Linux Kernel are familiar with CVE entries. Many Linux distributions are CVE Numbering Authorities and can assign a CVE entry to a particular vulnerability. It’s time that you (collectively) properly document and explain vulnerabilities so that VDBs don’t have to do the source code analysis, patch reversals, or play 20 questions with the development team. Provide a clear understanding of what the vulnerability is so that we may properly document it, and customers can then judge the severity of the issue and act on it accordingly.

I believe this is a case where over-exposure to near-proprietary technical details of a product has become the antithesis of closed-source vague disclosures like those from Microsoft or Oracle [which are just as difficult to deal with, in a totally different way].
