Tag Archives: Steve Christey

I could do this all day… (Poor vuln stats from @GFISoftware)

Despite the talk given at BlackHat 2013 by Steve Christey and me, companies continue to produce pedestrian and inaccurate statistics. This batch comes from Cristian Florian at GFI Software and offers little more than confusing and misleading statistics. Florian falls into many of the traps and pitfalls outlined previously.

These are compiled from data from the National Vulnerability Database (NVD).

There’s your first problem: using a drastically inferior data set when better ones are available. The next bit really invalidates the rest of the article:

On average, 13 new vulnerabilities per day were reported in 2013, for a total of 4,794 security vulnerabilities: the highest number in the last five years.

This is laughable. OSVDB cataloged 10,472 disclosed vulnerabilities for 2013 (an average of 28 a day), meaning these statistics were generated with less than half of the known vulnerabilities. 2013 was our third year of breaking 10,000 vulnerabilities, whereas other VDBs have at most a single such year (2006), if any at all. Seriously, what is the point of generating statistics when you knowingly use a data set lacking so much? Given that 2012 was another ‘10k’ year, the statement about 2013 being the highest number in the last five years is also wrong.
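
For anyone who wants to check the arithmetic, here is a minimal sketch (Python) using only the figures already quoted in this post:

    nvd_2013 = 4794      # vulnerabilities GFI/NVD counted for 2013
    osvdb_2013 = 10472   # vulnerabilities OSVDB cataloged for 2013
    days = 365

    print(f"NVD per day:   {nvd_2013 / days:.1f}")    # ~13.1
    print(f"OSVDB per day: {osvdb_2013 / days:.1f}")  # ~28.7
    print(f"NVD share of known disclosures: {nvd_2013 / osvdb_2013:.0%}")  # ~46%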

Around one-third of these vulnerabilities were classified ‘high severity’, meaning that an exploit for these vulnerabilities would have a high impact on the attacked systems.

By whom? Who generated these CVSS scores exactly, and why isn’t that disclaimed in the article? Why no mention of the ‘CVSS 10’ scoring problem, where VDBs must default to that score for a completely unspecified issue? With a serious number of vulnerabilities either scored by vendors with a history of incorrect scoring, or scored ‘10’ by VDBs forced to assume the worst for unspecified issues, these numbers are completely meaningless and skewed.
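
To illustrate the ‘CVSS 10’ problem, here is a rough sketch of the public CVSSv2 base score equations, showing why an unspecified issue scored with worst-case assumptions (AV:N/AC:L/Au:N/C:C/I:C/A:C) comes out as a 10.0. This only illustrates the standard formula, not how any particular VDB implements its scoring:

    # CVSSv2 base score per the public v2 equations. A vague disclosure often
    # forces the analyst to assume the worst for every metric, which is how
    # "unspecified" issues end up scored 10.0.
    def cvss2_base(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
        exploitability = 20 * av * ac * au
        f_impact = 0 if impact == 0 else 1.176
        return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

    # Worst-case defaults: AV:N (1.0), AC:L (0.71), Au:N (0.704), C/I/A complete (0.660)
    print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.660, i=0.660, a=0.660))  # 10.0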

The vulnerabilities were discovered in software provided by 760 different vendors, but the top 10 vendors were found to have 50% of the vulnerabilities:

I would imagine Oracle is accurate on this table, as we have cataloged 570 vulnerabilities in 2013 from them. However, the rest of the table is inaccurate because #2 is wrong. You say Cisco with 373; I say ffmpeg with 490. You say #10 is HP with 112, and I counter that WebKit had 139 (which in turn adds to Apple and Google among others). You do factor in that whole “software library” thing, right? For example, which products incorporate ffmpeg and thus carry its vulnerabilities as their own? These are contenders for the #1 and #2 spots on the table.
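
To show why the “software library” question matters for a table like this, here is a small sketch of the recount; the vendor and library names and all of the numbers below are made up purely to illustrate the reattribution problem:

    # Hypothetical: naive per-vendor counts vs. counts after crediting a bundled
    # library's vulnerabilities to every product that ships it.
    direct_counts = {"VendorA": 300, "VendorB": 150, "libfoo": 200}
    bundles = {"VendorA": ["libfoo"], "VendorB": ["libfoo"]}  # both ship libfoo

    adjusted = {
        vendor: count + sum(direct_counts[lib] for lib in bundles.get(vendor, []))
        for vendor, count in direct_counts.items()
    }
    print(adjusted)  # {'VendorA': 500, 'VendorB': 350, 'libfoo': 200}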

Most Targeted Operating Systems in 2013

As we frequently see, no mention of severity here. Of the 363 Microsoft vulnerabilities in 2013, compared to the 161 Linux kernel issues, impact and severity are important to look at. Privilege escalation and code execution are typical in Microsoft products, while authenticated local denial of service accounts for 22% of the Linux issues (and only 1% of Microsoft’s).

In 2013 web browsers continued to justle – as in previous years – for first place on the list of third-party applications with the most security vulnerabilities. If Mozilla Firefox had the most security vulnerabilities reported last year and in 2009, Google Chrome had the “honor” in 2010 and 2011, it is now the turn of Microsoft Internet Explorer to lead with 128 vulnerabilities, 117 of them ‘critical’.

We already know your numbers are horribly wrong, as you don’t factor in WebKit vulnerabilities that affect multiple browsers. Further, why is this table sorted with MSIE at the top when it was not reported with the most vulnerabilities?

Sticking to just the browsers, Google Chrome had 297 reported vulnerabilities in 2013 and that does not count additional WebKit issues that very likely affect it. Next is Mozilla and then Microsoft IE with Safari at the lowest (again, ignoring the WebKit issue).

howdoireportavuln.com – Good intentions, needs fix-ups though.

Tonight, shortly before retiring from a long day of vulnerability import, I caught a tweet mentioning a web site about reporting vulnerabilities. Created on 15-aug-2013 per whois, the footer shows it was written by Fraser Scott, aka @zeroXten on Twitter.

http://howdoireportavuln.com/

I love focused web sites that are informative, and make a point in their simplicity. Of course, most of these sites are humorous or parody, or simply making fun of the common Internet user.

This time, the web site is directly related to what we do. I want to be very clear here; I like the goal of this site. I like the simplistic approach to helping the reader decide which path is best for them. I want to see this site become the top result when searching for “how do I disclose a vulnerability?” This commentary is only meant to help the author improve the site. Please, take this advice to heart, and don’t hesitate if you would like additional feedback. [Update: After starting this blog last night, before publishing this morning, he already reached out. Awesome.]


Under the ‘What’ category, there are three general disclosure options:

NON DISCLOSURE, RESPONSIBLE DISCLOSURE, and FULL DISCLOSURE

First, you are missing a fourth option of ‘limited disclosure’. Researchers can announce they have found a vulnerability in given software, state the implications, and be done with it. Public reports of code execution in some software will encourage the vendor to prioritize the fix, as customers may begin putting pressure on them. Adding a video showing the code execution reinforces the severity. It often doesn’t help a VDB like ours, because such a disclosure typically doesn’t have enough actionable information. However, it is one way a researcher can disclose, and still protect themselves.

Second, “responsible”? No. The term was possibly coined by Steve Christey and further used by Russ Cooper; it was polarized by Cooper, as well as Scott Culp at Microsoft (“Information Anarchy”, really?), in a (successful) effort to brand researchers as “irresponsible” if they don’t conform to vendor disclosure demands. The more widely recognized term, and one fair to both sides, is “coordinated” disclosure. Culp’s term forgets that vendors can be irresponsible if they don’t prioritize critical vulnerabilities while customers are known to be vulnerable and public exploit code is floating about. Since then, Microsoft and many other companies have adopted “coordinated” to refer to the disclosure process.

Under the ‘Who’ category, there are more things to consider:

SEND AN EMAIL

These days, it is rare to see domains honoring the RFC-recommended addresses (e.g. security@); that is a practice mostly lost to the old days. Telling readers to try the “Contact us” tab/link that invariably shows up on web pages is better. Oh wait, you do that. However, that advice comes well after the big header reading TECHNICAL SUPPORT, which may throw people off.

As a quick side note, “how to notifying them of security issues” is one of many spelling or grammar errors. Please run the text through a basic grammar checker.

Under the ‘How’ category:

STAY ANONYMOUS

This is excellent advice, except for the bit about using Tor, since there are serious questions about its security and anonymity. If researchers are worried, they should look at a variety of options, including using a coffee shop’s wireless, hotel wireless, etc.

BE YOURSELF

This is also a great point, but more to the point, make sure your mail is polite and NOT THREATENING. Don’t threaten to disclose on your own timeline. See how the vendor handles the vulnerability report without any indication that you plan to disclose it. Give them the benefit of the doubt. If you get hints they are stalling at some point, then gently suggest it may be in the best interest of their customers to disclose. Remind them that vulnerabilities are rarely discovered by a single person and that they can’t assume you are the only one who has found it; you are just the only one who apparently decided to help the vendor.

THE DISCLOSURE

Post to Full-Disclosure, sure, or to other venues that may be more beneficial to you. Bugtraq has a history of stronger moderation; they tend to weed out crap. Send it directly to vulnerability databases and let them publish it anonymously. VDBs like Secunia generally validate all vulnerabilities before posting them to their database, which may help you down the road if your intentions are called into question. Post to the OSS-security mail list if the vulnerability is in open-source software, so you get the community involved. On that list, getting a CVE identifier and having others verify or sanity check your findings draws more positive attention to the technical issues instead of the politics of disclosure.

FOR SALE

Using a bug bounty system is a great idea, as it generally keeps the new researcher from dealing with disclosure politics. Let people experienced with the process, who have an established relationship and history with the vendor, handle it. However, don’t steer newcomers to ZDI immediately. In fact, don’t name them specifically unless you have a vested interest in helping them, and if so, state it. Instead, break the options down into vendor bug bounty programs and third-party programs, and provide a link to Bugcrowd’s excellent list of current bounty programs.

FINALLY

The fine print, of course. Under CITATIONS, I love that you reference the Errata legal threats page, but it should go much higher on the page. Make sure new disclosers know the potential mess they are getting into; we know people don’t read the fine print. This could also be a good lead-in to using a third-party bounty or vulnerability handling service.

It’s great that you make this easy to share with everyone and their dog, but please consider getting a bit more feedback before publishing a site like this. It appears you did this in less than a day, when an extra 24 hours would have made for a stronger offering. You are clearly eager to make it better; you have already reached out to me, and likely Steve Christey if not others. As I said, with some edits and fix-ups, this will be a great resource.

Buying Into the Bias: Why Vulnerability Statistics Suck

Last week, Steve Christey and I gave a presentation at Black Hat Briefings 2013 in Las Vegas about vulnerability statistics. We submitted a brief whitepaper on the topic, reproduced below, to accompany the slides that are now available.


Buying Into the Bias: Why Vulnerability Statistics Suck
By Steve Christey (MITRE) and Brian Martin (Open Security Foundation)
July 11, 2013

Academic researchers, journalists, security vendors, software vendors, and professional analysts often analyze vulnerability statistics using large repositories of vulnerability data, such as “Common Vulnerabilities and Exposures” (CVE), the Open Sourced Vulnerability Database (OSVDB), and other sources of aggregated vulnerability information. These statistics are claimed to demonstrate trends in vulnerability disclosure, such as the number or type of vulnerabilities, or their relative severity. Worse, they are typically misused to compare competing products to assess which one offers the best security.

Most of these statistical analyses demonstrate a serious fault in methodology, or are pure speculation in the long run. They use easily available but drastically misunderstood data to craft irrelevant questions based on wild assumptions, while never figuring out (or even asking the sources about) the limitations of the data. This leads to a wide variety of bias that typically goes unchallenged, and that ultimately forms statistics which make headlines and, far worse, are used to justify security budgets and spending.

As maintainers of two well-known vulnerability information repositories, we’re sick of hearing about research that is quickly determined to be sloppy after it’s been released and gained public attention. In almost every case, the researchers cast aside any logical approach to generating the statistics. They frequently do not release their methodology, and they rarely disclaim the serious pitfalls in their conclusions. This stems from a serious lack of understanding about the data sources they use and how those sources operate. In short, vulnerability databases (VDBs) are very different and very fickle creatures. They are constantly evolving and see the world of vulnerabilities through very different glasses.

This paper and its associated presentation introduce a framework in which vulnerability statistics can be judged and improved. The better we get about talking about the issues, the better the chances of truly improving how vulnerability statistics are generated and interpreted.

Bias, We All Have It

Bias is inherent in everything humans do. Even the most rigorous and well-documented process can be affected by levels of bias that we simply do not understand are working against us. This is part of human nature. As with all things, bias is present in the creation of the VDBs, how the databases are populated with vulnerability data, and the subsequent analysis of that data. Not all bias is bad; for example, VDBs have a bias to avoid providing inaccurate information whenever possible, and each VDB effectively has a customer base whose needs directly drive what content is published.

Bias comes in many forms that we see as strongly influencing vulnerability statistics, via a number of actors involved in the process. It is important to remember that VDBs catalog the public disclosure of security vulnerabilities by a wide variety of people with vastly different skills and motivations. The disclosure process varies from person to person and introduces bias for sure, but even before the disclosure occurs, bias has already entered the picture.

Consider the general sequence of events that lead to a vulnerability being cataloged in a VDB.

  1. A researcher chooses a piece of software to examine.
  2. Each researcher operates with a different skill set and focus, using tools or techniques with varying strengths and weaknesses; these differences can impact which vulnerabilities are capable of being discovered.
  3. During the process, the researcher will find at least one vulnerability, often more.
  4. The researcher may or may not opt for vendor involvement in verifying or fixing the issue.
  5. At some point, the researcher may choose to disclose the vulnerability. That disclosure will not be in a common format, may suffer from language barriers, may not be technically accurate, may leave out critical details that impact the severity of the vulnerability (e.g. administrator authentication required), may be a duplicate of prior research, or introduce a number of other problems.
  6. Many VDBs attempt to catalog all public disclosures of information. This is a “best effort” activity, as there are simply too many sources for any one VDB to monitor, and accuracy problems can increase the expense of analyzing a single disclosure.
  7. If the VDB maintainers see the disclosure mentioned above, they will add it to the database if it meets their criteria, which is not always public. If the VDB does not see it, they will not add it. If the VDB disagrees with the disclosure (i.e. believes it to be inaccurate), they may not add it.

By this point, there are a number of criteria that may prevent the disclosure from ever making it into a VDB. Without using the word, the above steps have introduced several types of bias that impact the process. These biases carry forward into any subsequent examination of the database in any manner.

Types of Bias

Specific to the vulnerability disclosure aggregation process that VDBs go through every day, there are four primary types of bias that enter the picture. Note that while each of these can be seen in researchers, vendors, and VDBs, some are more common to one than the others. There are other types of bias that could also apply, but they are beyond the scope of this paper.

Selection bias covers what gets selected for study. In the case of disclosure, this refers to the researcher’s bias in selecting software and the methodology used to test the software for vulnerabilities; for example, a researcher might only investigate software written in a specific language and only look for a handful of the most common vulnerability types. In the case of VDBs, this involves how the VDB discovers and handles vulnerability disclosures from researchers and vendors. Perhaps the largest influence on selection bias is that many VDBs monitor a limited source of disclosures. It is not necessary to argue what “limited” means. Suffice it to say, no VDB is remotely complete on monitoring every source of vulnerability data that is public on the net. Lack of resources – primarily the time of those working on the database – causes a VDB to prioritize sources of information. With an increasing number of regional or country-based CERT groups disclosing vulnerabilities in their native tongue, VDBs have a harder time processing the information. Each vulnerability that is disclosed but does not end up in the VDB, ultimately factors into statistics such as “there were X vulnerabilities disclosed last year”.

Publication bias governs what portion of the research gets published. This ranges from “none”, to sparse information, to incredible technical detail about every finding. Somewhere between selection and publication bias, the researcher will determine how much time they are spending on this particular product, what vulnerabilities they are interested in, and more. All of this folds into what gets published. VDBs may discover a researcher’s disclosure, but then decide not to publish the vulnerability due to other criteria.

Abstraction bias is a term that we crafted to explain the process that VDBs use to assign identifiers to vulnerabilities. Depending on the purpose and stated goal of the VDB, the same 10 vulnerabilities may be given a single identifier by one database, and 10 identifiers by a different one. This level of abstraction is an absolutely critical factor when analyzing the data to generate vulnerability statistics. It is also the most prevalent source of problems for analysis, as researchers rarely understand the concept of abstraction, why it varies, and how to overcome it as an obstacle in generating meaningful statistics. Researchers will use whichever abstraction is most appropriate or convenient for them; after all, there are many different consumers for a researcher advisory, not just VDBs. Abstraction bias is also frequently seen in vendors, and occasionally researchers, in the way they disclose one vulnerability multiple times as it affects different software that bundles another vendor’s software.
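
A toy example of how abstraction alone changes the counts: suppose a single advisory covers ten distinct flaws. A database that abstracts per advisory records one entry, while one that abstracts per flaw records ten. The identifiers below are fictitious and only for illustration:

    # Ten distinct flaws disclosed in one vendor advisory (all IDs fictitious).
    flaws = [f"issue-{n}" for n in range(1, 11)]

    per_advisory_db = {"ADV-2013-001": flaws}                                # one ID covers all ten
    per_flaw_db = {f"VDB-{1000 + n}": flaw for n, flaw in enumerate(flaws)}  # one ID each

    print("per-advisory count:", len(per_advisory_db))  # 1
    print("per-flaw count:    ", len(per_flaw_db))      # 10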

Measurement bias refers to potential errors in how a vulnerability is analyzed, verified, and catalogued. For example, with researchers, this bias might be in the form of failing to verify that a potential issue is actually a vulnerability, or in over-estimating the severity of the issue compared to how consumers might prioritize the issue. With vendors, measurement bias may affect how the vendor prioritizes an issue to be fixed, or in under-estimating the severity of the issue. With VDBs, measurement bias may also occur if analysts do not appropriately reflect the severity of the issue, or if inaccuracies are introduced while studying incomplete vulnerability disclosures, such as missing a version of the product that is affected by the vulnerability. It could be argued that abstraction bias is a certain type of measurement bias (since it involves using inconsistent “units of measurement”), but for the purposes of understanding vulnerability statistics, abstraction bias deserves special attention.

Measurement bias, as it affects statistics, is arguably the domain of VDBs, since most statistics are calculated using an underlying VDB instead of the original disclosures. Since VDBs are the primary sources of vulnerability data aggregation, several factors come into play when performing database updates.

Why Bias Matters, in Detail

These forms of bias can work together to create interesting spikes in vulnerability disclosure trends. To the VDB worker, they are typically apparent and sometimes amusing. To an outsider just using a data set to generate statistics, they can be a serious pitfall.

In August 2008, a single researcher using rudimentary yet effective methods for finding symlink vulnerabilities single-handedly caused a significant spike in symlink vulnerability disclosures compared to the past 10 years. Starting in 2012 and continuing up to the publication of this paper, a pair of researchers have significantly impacted the number of disclosures in a single product. Not only has this caused a huge spike in the vulnerability count related to the product, it has led to them being ranked as two of the top vulnerability disclosers since January 2012. Later this year, we expect there to be articles written regarding the number of supervisory control and data acquisition (SCADA) vulnerabilities disclosed from 2012 to 2013. Those articles will be based purely on vulnerability counts as determined from VDBs, likely with no mention of why the numbers are skewed. One prominent researcher who published many SCADA flaws has changed his personal disclosure policy: instead of publicly disclosing details, he now keeps them private as part of the competitive advantage of his new business.

Another popular place for vulnerability statistics to break down is vulnerability severity. Researchers and journalists like to mention the raw number of vulnerabilities in two products and try to compare their relative security. They frequently overlook the severity of the vulnerabilities and may not note that while one product had twice as many disclosures, a significant percentage of them were low severity. Further, they do not understand how the industry-standard CVSSv2 scoring system works, or the bias that can creep in when using it to score vulnerabilities. Considering that a vague disclosure with few actionable details will frequently be scored for the worst possible impact, the severity ratings are drastically skewed as well.
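
As a hypothetical illustration of how severity changes a raw-count comparison (every number below is invented):

    # Invented data: Product A has twice the disclosures of Product B, but most
    # of A's issues are low severity, so raw counts tell a misleading story.
    product_a = [3.5] * 80 + [9.3] * 20  # 100 disclosures, mostly low severity
    product_b = [9.3] * 40 + [3.5] * 10  # 50 disclosures, mostly high severity

    for name, scores in (("A", product_a), ("B", product_b)):
        high = sum(1 for s in scores if s >= 7.0)
        print(f"Product {name}: {len(scores)} total, {high} high severity")
    # Product A: 100 total, 20 high severity
    # Product B: 50 total, 40 high severity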

Conclusion

The forms of bias and how they may impact vulnerability statistics outlined in this paper are just the beginning. For each party involved, for each type of bias, there are many considerations that must be made. Accurate and meaningful vulnerability statistics are not impossible; they are just very difficult to accurately generate and disclaim.

Our 2013 BlackHat Briefings USA talk hopes to explore many of these points, outline the types of bias, and show concrete examples of misleading statistics. In addition, we will show how you can easily spot questionable statistics, and give some tips on generating and disclaiming good statistics.

If You Can’t, How Can We?

Steve Christey with CVE recently posted that trying to keep up with Linux kernel issues was becoming a burden: issues that may or may not be security related, where even the kernel devs don’t fully know. While this is a good example of the issues VDBs face, it’s really the tip of the iceberg. Until their recent adoption of CVE identifiers, trying to distinguish Oracle vulnerabilities from each other was what you did as gentle relief from a few hours of being water-boarded. Lately, Mozilla advisories are getting worse, as they clump a dozen issues showing “evidence of memory corruption” into a single advisory that gets lumped into a single CVE. It doesn’t matter that they can be exploited separately or that some may not be exploitable at all. Reading the Bugzilla entries that cover the issues is headache-inducing, as their own devs frequently don’t understand the extent of the issues. That is, if they even make the Bugzilla entry public. If the Linux kernel devs and Mozilla browser wonks cannot figure out the extent of an issue, how are VDBs supposed to?

Being “open source” isn’t some get-out-of-VDB free card. You’re supposed to be better than your closed-source rivals. You’re supposed to care about your customers and be open about security issues. An advisory full of “may” and “evidence of” is nothing more than a FUD-filled excuse to blindly upgrade without understanding the real threat or exposure to the end-user.

Steve’s post is a good view of how some VDBs feel about the issue: http://marc.info/?l=oss-security&m=124061708428439&w=2

Tonight, I followed-up on his thoughts and gave more of my own (original: http://marc.info/?l=oss-security&m=124065500729868&w=2):

A question, really?

I’d like to reiterate what Steve Christey said in the last 24 hours, about the Linux Kernel vulnerabilities becoming a serious drain on CVE. Historically, OSVDB has relied on Secunia and CVE to sort out the Linux Kernel vulnerability messes. Both VDBs have full time staff that can dedicate time to figuring out such nuances as those above.

Not to pick on Eugene specifically, but I feel he makes a great example of my point. These are nuances in kernel vulnerabilities that a “Senior Security Engineer at Red Hat”, who specializes in “OS and Application Security, Project Management, Vulnerability Analysis, Code-level Auditing, Penetration Testing, Red Hat Products and Services, Financial Services Technical Account Management”, cannot definitively distinguish. If Eugene cannot say with certainty that these deserve two CVE numbers, how can Steve or his staff?

VDBs deal with thousands of vulnerabilities a year, ranging from PHP applications to Oracle to Windows services to SCADA software to cellular telephones. We’re expected to have a basic understanding of ‘vulnerabilities’, but this isn’t 1995. Software and vulnerabilities have evolved over the years. They have moved from straightforward overflows (before buffer vs stack vs heap vs underflow distinctions) and one type of XSS to a wide variety of issues that are far from trivial to exploit. For fifteen years, it has been a balancing act for VDBs when including denial of service (DoS) vulnerabilities, because the details are often sparse and it is not clear if an unprivileged user can reasonably affect availability. Jump to today, where the software developers cannot, or will not, tell the masses what the real issue is.

This isn’t just a Linux kernel issue at all. The recent round of advisories from Mozilla contain obscure wording that alludes to “memory corruption” implying arbitrary code execution. If you follow the links to the Bugzilla reports, the wording becomes a quagmire of terms that not even the developers can keep up on [1] [2]. That’s if they even open up the Bugzilla entry referenced in the advisory [3]. Again, how are people not intimately familiar with the code base supposed to understand these reports and give a reasonable definition of the vulnerability? How do we translate that mess of coder jargon into a 1 – 10 score for severity?

It is important that VDBs continue to track these issues, and it is great that we have more insight and contact with the development teams of various projects. However, this insight and contact has paved the way for a new set of problems that over-tax an already burdened effort. MITRE receives almost 5 million dollars a year from the U.S. government to fund the C*E effort, including CVE [Based on FOIA information]. If they cannot keep up with these vulnerabilities, how do their “competitors”, especially free / open source ones [5], have a chance?

Projects like the Linux kernel are familiar with CVE entries. Many Linux distributions are CVE Numbering Authorities and can assign a CVE entry to a particular vulnerability. It’s time that you (collectively) properly document and explain vulnerabilities so that VDBs don’t have to do the source code analysis, patch reversals, or play 20 questions with the development team. Provide a clear understanding of what the vulnerability is so that we may properly document it, and customers can then judge the severity of the issue and act on it accordingly.

I believe this is a case where over-exposure to near-proprietary technical details of a product have become the antithesis of closed-source vague disclosures like those from Microsoft or Oracle [Which are just as difficult to deal with in a totally different way.].

Who discovered the most vulns?

This is a question OSVDB moderators, CVE staff and countless other VDB maintainers have asked. Today, Gunter Ollmann with IBM X-Force released his research trying to answer this question. Before you read on, I think this research is excellent. The relatively few criticisms I bring up are not the fault of Ollmann’s research and methodology, but the fault of his VDB of choice (and *every* other VDB) not having a complete data set.

Skimming his list, my first thought was that he was missing someone. Doing a quick search of OSVDB, I see that Lostmon Lords (aka ‘lostmon’) has close to 350 vulnerabilities published. How could the top ten list miss someone like this when his #10 only had 147? Read down to Ollmann’s caveat and there is a valid point, but sketchy wording. The data he is using relies on this information being public. As the caveat says though, “because they were disclosed on non-public lists” implies that the only source he or X-Force are using are mail lists such as Bugtraq and Full-disclosure. Back in the day, that was a pretty reliable source for a very high percentage of vulnerability information. In recent years though, a VDB must look at other sources of information to get a better picture. Web sites such as milw0rm get a steady stream of vulnerability information that is frequently not cross-posted to mail lists. In addition, many researchers (including lostmon) mail their discoveries directly to the VDBs and bypass the public mail lists. If researchers mail a few VDBs and not the rest, it creates a situation where the VDBs must start watching each other. This in turn leads to “VDB inbreeding” that Jake and I mentioned at CanSecWest 2005, which is a necessary evil if you want more data on vulnerabilities.

In May of 2008, OSVDB did the same research Ollmann did, and we came up with different results. This was based on the data we had available, which is still admittedly very incomplete (we always need data manglers). So who is right? Neither of us. Well, perhaps he is, perhaps we are, but unfortunately we’re both working with incomplete databases. In my opinion, OSVDB has better coverage of vulnerabilities, while X-Force clearly has better consistency in their data and a fraction of the gaps we do.

Last, this data is interesting as is, but it would be really fascinating if it were mixed with ‘researcher confidence’ (a big thing for Steve Christey/CVE and myself), in which we track a researcher’s record for accuracy in disclosure. Someone who disclosed 500 vulnerabilities last year with a 10% error rate should not rank above someone who found 475 with a 0% error rate. In addition, as Ollmann’s caveat says, these are pure numbers and do not factor in hundreds of XSS versus remote code execution in default-install operating system services. Having a weight system that can be applied to each vulnerability (e.g., XSS = 3, SQLi = 7, remote code exec = 9), then factored into a researcher’s total, could move beyond “who discovered the most” and perhaps start to answer “who found the most respectable vulnerabilities”.
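
A rough sketch of the kind of weighting suggested above; the type weights, error rates, and counts are placeholders rather than a proposed standard:

    # Placeholder weights and data, only to show how "most respectable" can
    # differ from "most". The error rate discounts findings that were inaccurate.
    TYPE_WEIGHT = {"xss": 3, "sqli": 7, "rce": 9}

    researchers = {
        "researcher_1": {"counts": {"xss": 400, "sqli": 80, "rce": 20}, "error_rate": 0.10},
        "researcher_2": {"counts": {"xss": 100, "sqli": 200, "rce": 175}, "error_rate": 0.00},
    }

    for name, data in researchers.items():
        raw = sum(data["counts"].values())
        weighted = sum(TYPE_WEIGHT[t] * n for t, n in data["counts"].items())
        adjusted = weighted * (1 - data["error_rate"])
        print(f"{name}: raw={raw}, weighted={weighted}, adjusted={adjusted:.0f}")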

Coffee makers are SCADA, right?!

Steven Christey of CVE posted asking a question about VDBs and the inclusion of coffee makers. Yes, you read that correctly, vulnerabilities are being found in coffee makers that are network accessible. Don’t be surprised, we all knew the day was coming when every household appliance would become IP aware.

Before you laugh and spew your own coffee all over the keyboard, consider that the vulnerabilities are legitimate in the sense that a remote attacker can manipulate how the device performs and possibly do physical damage to the unit. This is really no different than SCADA devices such as air conditioners that are IP aware.

Some replies (like mine) were a bit more serious, suggesting this type of vulnerability is definitely worth inclusion in OSVDB. If we can’t draw the line between coffee makers, air conditioners, and other SCADA devices today, will we be able to in a year, or years, from now? At some point, the blur between computing device and household appliance will be too hard to distinguish. Rather than waste too much time arguing over that line, why not track these few vulnerabilities now; they might be a bit primitive, but they will surely show historical value if nothing else.

Other replies were a bit less serious but fun, suggesting that making weak (or no) coffee would lead to disgruntled code writers who produce poor code filled with more vulnerabilities. Either way, count on us to include vulnerabilities in your favorite IP-aware devices, kitchen, computing or otherwise, in this database.

Not local.. Not remote..

Several of us working on VDBs have debated over the years how best to handle vulnerabilities that aren’t necessarily remote or local. Consider image or archive handling vulnerabilities, where the program processing a malformed file is prone to an overflow, traversal, or denial of service. One may argue they are ‘remote’ in the sense that if I e-mail you the file, the attack vector is certainly remote, but if the malformed file is loaded via a floppy disk, the attack isn’t exactly ‘local’ or one that ‘requires physical access’ either. So we need something that covers the grey area between vectors. A while back, Steven Christey at CVE began using “context-dependent attacker” to describe such vulnerabilities. OSVDB tried to come up with another term, but after some time we couldn’t. So, from here on out, you will start noticing the use of “context-dependent attacker” in our vulnerability descriptions more frequently, and when the classification scheme is eventually overhauled, it will appear there too.
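
A sketch of what adding the new vector to a classification scheme might look like; the field and value names here are hypothetical, not OSVDB’s actual schema:

    # Hypothetical classification field; not the actual OSVDB schema.
    from enum import Enum

    class AttackerLocation(Enum):
        REMOTE = "remote"
        LOCAL = "local"
        PHYSICAL = "requires physical access"
        CONTEXT_DEPENDENT = "context-dependent attacker"

    # A malformed-file overflow: the vector depends on how the file arrives
    # (e-mail, web page, floppy disk), so neither "remote" nor "local" fits.
    entry = {"title": "Image library malformed file overflow",
             "location": AttackerLocation.CONTEXT_DEPENDENT}
    print(entry["location"].value)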

The Perfect Patch Storm

Steven Christey of CVE recently commented on the fact that Microsoft, Adobe, Cisco, Sun, and HP all released multi-issue advisories on the same day (Feb 13). My first reaction was to come up with an amusing graphic depicting this perfect storm. Lacking any graphic editing skills and having too much cynicism, I instead wonder: are these the same vendors that continually bitch about irresponsible disclosure “hurting their customers”?

Those same customers are now being subjected to patches from at least five major vendors on the same day. In some IT shops, this is devastating and difficult to manage and recover from. If a single patch has problems, it forces the entire upgrade schedule to a halt until the problem can be resolved. If these vendors cared for their customers like they pretend to when someone releases a critical issue without vendor coordination, they would consider staggering the patches to help alleviate the burden on their beloved customers.

The Upside to the Provenance Problem

As mentioned before, Christey of CVE notes that an ongoing problem in the vulnerability world is that of “provenance”, meaning “where the hell did that come from?!” Vulnerability databases (VDBs) like CVE and OSVDB are big on provenance. We want to know exactly where the information came from and include it in our entry. Other VDBs, like Secunia or SecurityFocus’ BID, are bad about it. That is to be expected though; we’re talking free/open vs commercial/closed. BID/Secunia gain some value by appearing to be magical when it comes to finding vulnerability information.

For anal-retentive freaks, the provenance problem does have one upside. For years now, OSVDB has been aggressive in digging up vulnerability information, regardless of age or severity. This typically means spending hours upon hours reading changelogs, which are usually not the most stimulating reading one can find and, more often than not, lack any form of clarity or simplicity. It also means relying on a bit of guesswork at times, as changelog entries tend to be short and sweet, leaving a lot to the imagination. One such example is the recently published Empire Server Unspecified Vulnerabilities entry from Secunia. As always, “unspecified” with no reference as to where the vulnerability information came from means looking at the vendor site, bug tracking systems, news archives, changelogs, and more. An hour of digging later, I still can’t find where exactly Secunia got “Some vulnerabilities with unknown impacts have been reported in Empire Server. The vulnerabilities are caused due to unspecified errors in the game server.” One of the OSVDB manglers (mdodge) was able to track down where this information came from while I was knee deep in changelogs.

Despite that, OSVDB will soon have as many as 12 more entries for the Empire server from other changelog entries, and I’m not even 10% done reading. I’m conflicted here: part of me is sad, because the only other previous entry OSVDB has for this is an old one dating back to 1981, and nothing since. That is depressing in the world of VDBs, and a discredit to what we do. Consider the entries in various VDBs: CVE 0, ISS 0, ST 0, BID 0, Secunia 1, OSVDB 2. Yet the changelog for the 4.x branch suggests over a dozen vulnerabilities?

The other part of me is happy as this is one more product that will be better represented in a VDB as far as security goes. People want better vulnerability trending, a more accurate vulnerability history and a better idea of a product’s security. That won’t happen without a more complete picture of the vulnerabilities in a given product.
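
For what it’s worth, here is a crude sketch of the first pass this kind of changelog digging involves; the keyword list and file name are arbitrary, and nothing here replaces actually reading the entries in context:

    # Flag changelog lines that hint at security fixes. This only narrows down
    # what to read; the guesswork described above still has to happen by hand.
    import re

    KEYWORDS = re.compile(
        r"overflow|traversal|denial of service|crash|exploit|security|symlink|format string",
        re.IGNORECASE,
    )

    def flag_changelog(path):
        with open(path, encoding="utf-8", errors="replace") as fh:
            for lineno, line in enumerate(fh, 1):
                if KEYWORDS.search(line):
                    yield lineno, line.rstrip()

    # Example usage (assumes a local file named "ChangeLog"):
    # for lineno, line in flag_changelog("ChangeLog"):
    #     print(f"{lineno}: {line}")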

Vulnerability History

Steven Christey (CVE) recently posted about vulnerability history and complexity. The recent sendmail vulnerability has brought up discussion of both topics and adds another interesting piece of history to the venerable sendmail package. One point to walk away with is that while sendmail has a long history of vulnerabilities, the last five years have shown the product to be considerably more secure. While overflows still haunt the roughly 25-year-old software package, they are growing fewer and require considerably more complex methods to exploit. The latest discovery is by no means a run-of-the-mill remote overflow; rather, it takes considerable skill to find and exploit the flaw.

Using vulnerability history to help evaluate the current security posture of software is a bit sketchy, but certainly helps. If a program starts out with standard overflows, race conditions, symlink issues, XSS or SQL injections, it’s basically expected. If years pass and new versions of the same package continue to exhibit the same coding practices that lead to these vulnerabilities, you begin to get an idea of the quality of code as it relates to security. On the other hand, if years pass and the vulnerabilities are published with more time between each, and the difficulty exploiting them increases, it shows the developers are security conscious and producing more secure code. As always, the lack of published vulnerabilities in a product doesn’t mean it is free from defect, just that they possibly have not been found or published.

Fun fact: The first documented Sendmail vulnerability was on Aug 23, 1981.
