I was recently chatting with a journalist about risks and ratings. I think the conversation started with a discussion of CVSS, but moved on to more general risk ratings. This led me to wonder about the usefulness of the Internet risk/threat ratings that some security companies maintain. Does anyone use them? Do they help anyone other than journalists, who tend to reference them as if they give us a meaningful measure of the current risk?
Such ratings suffer from the same problem any other vulnerability/risk scoring system does. They tend to be overly complex, too subjective, or too simple. When I took notes to remind myself to blog about this, I noted the ratings of a few security outfits:
Symantec Threatcon is Level 3: High (currently at 1, max 4)
ISS Alertcon is Level 2 and says we will be at this level until Jan 6 (currently at 1, anticipated through Feb 2, max 4)
SANS is at Yellow (currently at Green, 1 of 4)
Does any of that really help you?
The initial ratings were taken in the midst of the WMF massacre, in which thousands of sites hosted hostile code that could compromise a system just by browsing a web page. Attack vectors quickly spread to spam/email and IM networks. According to SANS, “It is extremely hard to protect against this vulnerability”. Secunia flagged the vulnerability “Extremely critical,” and Symantec rated it 9.4 on its 10-point scale. Within hours, computer virus and spyware authors were using the flaw to distribute malicious programs that could allow them to take over and remotely control afflicted computers. This vulnerability affects all flavors of Windows, which runs on an estimated 90% of personal computers. Some reports indicate 10% of computers had fallen victim to this exploit.
Now, we’re in the middle of the CME-24/Blackworm/Nyxem/Blackmal/Mywife outbreak, and all three are at level 1. Does this rating system help you?
The Open Source Vulnerability Database (OSVDB), a project to catalog and describe the world’s security vulnerabilities, has had a challenging yet successful year. The project is fortunate to have the continued support of some devoted volunteers, yet remains challenged to keep up with the increasing number of vulnerability reports, as well as work on the backlog of historical information. Volunteers are continually sought to help us achieve our short- and long-term goals.
Despite resource constraints, there have been many exciting successes in 2005:
- A major project goal of obtaining 501(c)(3) non-profit status from the U.S. IRS was achieved. Obtaining non-profit status was critical to the long-term viability of the project. This status allows OSVDB to take charitable donations to help cover operating expenses, while providing a tax benefit to donor companies and individuals.
- The vulnerability database has grown to over 22,000 entries thanks to the dedicated work of Brian Martin, OSVDB Content Manager. At the end of December, over 10,000 of those vulnerabilities were worked on by volunteers to provide more detailed and cross-referenced information. Our volunteer “Data Manglers” and Brian have helped ensure OSVDB is the most complete resource for vulnerability information on the Internet.
- OSVDB started a blog in April as a way for us to keep the public better informed on the project’s status. Very quickly we realized the blog was a perfect place to discuss and comment on various aspects of vulnerabilities, and it has become a successful mechanism for communicating with the security industry. If you have suggestions for topics, or would like to join the discussion, please visit the OSVDB blog.
We would like to also recognize our sponsors and thank them for their support. Digital Defense, Churchill & Harriman, Audit My PC, and Opengear have all provided important resources to OSVDB over the past year. We would also like to thank Renaud Deraison of the Nessus Project and HD Moore of the Metasploit Project for their support. Lastly, we of course want to thank our volunteers, and note that several of them have contributed to Nessus Network Auditing, available from Syngress Publishing.
We are very pleased with the progress and growth of OSVDB over the past year, but do not want to downplay the importance of recruiting new volunteers, as well as retaining our current ones, in order to get through the considerable backlog of vulnerabilities that need further work. The task is daunting, but completing it will not only help retain valuable historical vulnerability information, it will also allow OSVDB to generate meaningful statistics for past and current years.
We have had a great year, and are looking forward to another one! We are of course still seeking assistance to help keep OSVDB successful; the project has many ideas in need of financial and volunteer support to implement. For more information on supporting OSVDB through volunteering or sponsorship, please contact firstname.lastname@example.org.
From time to time, vendors will contact OSVDB to notify us of solutions to vulnerabilities included in the database. These are almost always very professional mails, usually polite, and sometimes include all the details we need/want. These mails may say something along the lines of “we have fixed this issue” which prompts us to ask if it is a patch, upgrade or workaround. Other times they are very descriptive and provide all the information we need to update our entry, add more detail and provide the best information to our users and their customers.
Every once in a while, we get a real winner. On Dec 29, 2005, Global I.S. S.A. contacted us regarding entry 21429, saying “This vulnerability has been addressed.” Within minutes I replied asking if this was in the form of an upgrade or patch, but did not hear back from them. On Jan 2, 2006, they contacted us again asking “This is our second request for a change. Is anybody home?” So I assumed they didn’t receive my initial reply (nor did they acknowledge my second), but that isn’t what grabbed me. The rest of their mail did:
The vulnerability you refer to has been resolved.
For security we do not release the nature of the solution/s.
It is criminally negligent to publish hacks on the web without first notifying the author.
Let us know if you have a question.
On top of the veiled legal threat (which I love!), their comment that they do not release the nature of the solution is baffling, more so that they do this “for security”. Vendors, take note: the one time you want to be completely open and honest with information is when it comes to solutions to vulnerabilities. Withholding information or making it unclear only contributes to insecurity, as customers don’t know the extent of the issue, nor how to easily mitigate the vulnerability.
2004-08-04: 34 flaws found in Oracle database software
2004-09-03: US gov and sec firms warn of critical Oracle flaws
2004-10-15: Oracle Warns of Critical Exploits
2005-01-20: Oracle Patch Fixes 23 ‘Critical’ Vulnerabilities
2005-10-19: Oracle fixes bugs with mega patch
2006-01-18: Oracle fixes pile of bugs
In the interest of helping journalists cover Oracle, perhaps they should just move to a templated form to save time?
[YOUR TITLE], [YOUR PUBLICATION]
Oracle released on [DAY_OF_WEEK] fixes for a [LONG/HUGE/MONSTROUS] list of security vulnerabilities in [ONE/MANY/ALL] of its products. The quarterly patch contained patches for [NUMBER] vulnerabilities.
Titled “Critical Patch Update”, the patch provides [FIXES/REMEDIES/MITIGATION] for [NUMBER] flaws in the Database products, [NUMBER] flaws in the Application Server, [NUMBER] flaws in the Collaboration Suite, [NUMBER] flaws in the E-Business Suite, [NUMBER] flaws in the PeopleSoft Enterprise Portal, and [NUMBER] flaws in the [NEW_TECHNOLOGY_OR_ACQUISITION].
Many of the flaws have been deemed critical by Oracle, meaning they are trivial to exploit, were likely discovered around 880 days ago, and are trivially abused by low to moderately skilled [HACKERS/ATTACKERS/CRACKERS]. Some of these flaws may be used in the next worm-of-the-week.
“[DULL_QUOTE_FROM_COMPANY_WHO_DISCOVERED_0_OF_THE_FLAWS]” security company [COMPANY] said yesterday as they upped their internet risk warning system number (IRWSN) to [ARBITRARY_NUMBER]. “This is another example of why our products will help protect customers who choose to deploy Oracle software,” [ARBITRARY_CSO_NAME] stated.
“[BULLSHIT_QUOTE_ABOUT_PROACTIVE_SECURITY_FROM_ORACLE]” countered Mary Ann Davidson, CSO at Oracle. “These hackers providing us with free security testing and showing their impatience after a mere 880 days are what causes problems. If these jackass criminals would stop being hackers, our products would not be broken into and our customers would stay safe!”
Oracle has been criticized for being slow to fix security flaws by everyone ranging from L0rD D1cKw4v3R to US-CERT to the Pope.
Brian Krebs has a fantastic post on his blog covering the time it takes for Microsoft to release a patch, and whether they are getting any better at it. Here are a few relevant paragraphs, but I encourage you to read the entire article. It appears to be a well-developed piece that is heavily researched and quite balanced. Makes me wonder if his editors shot it down for some reason. If they did, shame on them.
A few months back while researching a Microsoft patch from way back in 2003, I began to wonder whether anyone had ever conducted a longitudinal study of Redmond’s patch process to see whether the company was indeed getting more nimble at fixing security problems.
Finding no such comprehensive research, Security Fix set about digging through the publicly available data for each patch that Microsoft issued over the past three years that earned a “critical” rating. Microsoft considers a patch “critical” if it fixes a security hole that attackers could use to break into and take control over vulnerable Windows computers.
Here’s what we found: Over the past three years, Microsoft has actually taken longer to issue critical fixes when researchers waited to disclose their research until after the company issued a patch. In 2003, Microsoft took an average of three months to issue patches for problems reported to them. In 2004, that time frame shot up to 134.5 days, a number that remained virtually unchanged in 2005.
First off, this is the kind of statistics and research I mean when I talk about the lack of evolution of vulnerability databases. This type of information is interesting, useful, and needed in our industry. It begins to give customers a solid idea of just how responsive our vendors are, and just how long we stay at risk with unpatched vulnerabilities. This is also the type of data that any solid vulnerability database should be able to produce with a few clicks of the mouse.
This type of article can be written because the right data is available: specifically, a well-documented and detailed time line of the life of a vulnerability. Discovery, disclosure to the vendor, vendor acknowledgement, public disclosure, and patch date are all required to generate this type of information. People like Steven Christey (CVE) and Chris Wysopal (VulnWatch) have been pushing for this information to be made public, often behind the scenes in extensive mail to vendors. If, in the future, we finally get these types of statistics for all vendors over a longer period of time, you will need to thank them for seeing the value early on and helping to make it happen.
This type of data is of particular interest to OSVDB and has been worked into our database (to a degree) from the beginning. We currently track the disclosure date, discovery date, and exploit publish date for each vulnerability, as best we can. Sometimes this data is not available, but we include it when it is. One of our outstanding development/bugzilla entries involves adding a couple more date fields, specifically vendor acknowledgement date and vendor solution date. With these five fields, we can begin to trend this type of vendor response time with accuracy, and with a better historical perspective.
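As a rough sketch of the trending those fields would enable, consider computing vendor response time as the gap between disclosure and vendor solution. The field names and dates below are hypothetical illustrations, not OSVDB's actual schema:

```python
from datetime import date
from statistics import mean

# Hypothetical vulnerability records carrying two of the five date fields
# discussed above (names and dates are illustrative, not real entries).
vulns = [
    {"disclosure": date(2005, 1, 20), "vendor_solution": date(2005, 4, 12)},
    {"disclosure": date(2005, 10, 19), "vendor_solution": date(2006, 1, 18)},
    {"disclosure": date(2004, 8, 11), "vendor_solution": date(2006, 1, 10)},
]

# Days from public disclosure to vendor solution for each entry
response_days = [(v["vendor_solution"] - v["disclosure"]).days for v in vulns]

print(response_days)        # per-vulnerability response times, in days
print(mean(response_days))  # average vendor response time
```

With the remaining date fields (discovery, vendor acknowledgement, exploit publish) added per record, the same subtraction yields any of the intervals discussed here, such as acknowledgement-to-patch time.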
While Krebs used Microsoft as an example, are you aware that other vendors are worse? Some of the large Unix vendors have been slow to patch for the last twenty years! Take the recent disclosure of a bug in uustat on Sun Microsystems’ Solaris Operating System. iDefense reported the problem and included a time line of the disclosure process:
08/11/2004 Initial vendor contact
08/11/2004 Initial vendor response
01/10/2006 Coordinated public disclosure
Yes, one year and five months for Sun Microsystems to fix a standard buffer overflow in a SUID binary, the same class of flaw that has plagued them as far back as January 1997 (maybe as far back as December 6, 1994, but details aren’t clear). It would be nice to see this type of data available for all vendors on demand, and it will be in due time. Moving beyond the basic stats, consider what happens if we break the data down by the severity of the vulnerability. Does it change the vendor’s response time (consistently)? Compare the time lines along with who discovered the vulnerability, and how it was disclosed (responsibly or not). Do those factors change the vendor’s response time?
The answers to those questions have been on our minds for a long time and are just a few of the many goals of OSVDB.
Through its Science and Technology Directorate, the department has given $1.24 million in funding to Stanford University, Coverity and Symantec to hunt for security bugs in open-source software and to improve Coverity’s commercial tool for source code analysis, representatives for the three grant recipients told CNET News.com.
The Homeland Security Department grant will be paid over a three-year period, with $841,276 going to Stanford, $297,000 to Coverity and $100,000 to Symantec, according to San Francisco-based technology provider Coverity, which plans to announce the award publicly on Wednesday.
The project, while generally welcomed, has come in for some criticism from the open-source community. The bug database should help make open-source software more secure, but in a roundabout way, said Ben Laurie, a director of the Apache Foundation who is also involved with OpenSSL. A more direct way would be to provide the code analysis tools to the open-source developers themselves, he said.
So DHS uses $1.24 million to fund a university and two commercial companies. The money will be used to develop source code auditing tools that will remain private. Coverity and Symantec will use the software on open-source software (which is good), but this is arguably a huge PR move to help grease the wheels of the money flow. Coverity and Symantec will also be able to use these tools for their customers, who will pay them money for this service.
Why exactly do my tax dollars pay for the commercial development of tools that are not released to the public? As Ben Laurie states, why can’t he get a copy of these tax payer funded tools to run on the code his team develops? Why must they submit their code to a commercial third party for review to get any value from this software?
Given the date of this announcement, coupled with the announcement of Stanford’s PHP-CHECKER, I wonder when the funds started rolling. There are obviously questions to be answered regarding Stanford’s project (which I have already asked). This also makes me wonder what legal and ethical questions should be asked about tax dollars being spent by the DHS for a university to develop a security tool that could potentially do great good if released for all to use.
It’s too bad there is more than a year-long wait for FOIA requests made to the DHS.
Something led you to the product that ended up on your systems. Be it a feature, a look, ease of use, or price, it was a driving force in your decision. Changing to a different product isn’t easily done, especially if your current solution is heavily integrated or customers/users are familiar with it. Besides, what other product can fill your needs that doesn’t have vulnerabilities of its own? Look at the number of vulnerabilities released, along with the diversity of the products. Whether it is no-name freebies or million-dollar commercial installations, every package seems to have vulnerabilities that would drive you back to where you started.
Offering a “solution” of “Use another product” doesn’t seem very intuitive, logical, or helpful to customers.
In the context of advisories, the reason for assigning an ID is simple: to help track documents and avoid confusion. Much the same reason a vulnerability database assigns a unique number to an issue. If there is confusion when discussing a vulnerability, you reference the unique ID and, ideally, the confusion goes away. That said, why does Hewlett-Packard feel the need to assign multiple tracking IDs to a single document/advisory?
HP-UX running WBEM Services Denial of Service (DoS) http://archives.neohapsis.com/archives/bugtraq/2005-12/0231.html
So this is “SSRT051026 rev. 1”, “Document ID: c00582373”, and HPSBMA02088: three drastically different tracking numbers for the same document. Fortunately, all three were referenced in the same place this time, but still, why must vendors do this?
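One reason vulnerability databases keep cross-references is precisely so that any of these identifiers resolves to the same record. A toy alias table illustrates the idea (the canonical record ID 22345 is invented for illustration):

```python
# Toy cross-reference table: every vendor-assigned identifier for the HP
# advisory above maps to a single canonical database record. The record
# ID 22345 is made up; only the three HP identifiers are real.
aliases = {
    "SSRT051026": 22345,
    "c00582373": 22345,
    "HPSBMA02088": 22345,
}

def lookup(identifier):
    """Resolve any known alias to its canonical record ID, or None."""
    return aliases.get(identifier)

# All three vendor IDs resolve to the same entry, so the confusion goes away.
assert lookup("SSRT051026") == lookup("c00582373") == lookup("HPSBMA02088")
```

The point is that the mapping only works if every alias actually gets recorded, which is exactly the cross-referencing burden multiple vendor IDs impose on every database tracking the advisory.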
Steve Christey (CVE Editor) wrote an open letter to several mailing lists regarding the nature of vulnerability statistics. What he said is spot on, and most of what I would have pointed out had my previous rant been more broad, and not a direct attack on a specific group. I am posting his entire letter here, because it needs to be said, read, understood, and drilled into the heads of so many people. I am reformatting this for the blog, you can read an original copy via a mail list.
Open Letter on the Interpretation of “Vulnerability Statistics”
Author: Steve Christey, CVE Editor
Date: January 4, 2006
As the new year begins, there will be many temptations to generate, comment, or report on vulnerability statistics based on totals from 2005. The original reports will likely come from publicly available Refined Vulnerability Information (RVI) sources – that is, vulnerability databases (including CVE/NVD), notification services, and periodic summary producers.
RVI sources collect unstructured vulnerability information from Raw Sources. Then, they refine, correlate, and redistribute the information to others. Raw sources include mailing lists like Bugtraq, Vulnwatch, and Full-Disclosure, web sites like PacketStorm and Securiteam, blogs, conferences, newsgroups, direct emails, etc.
In my opinion, RVI sources are still a year or two away from being able to produce reliable, repeatable, and COMPARABLE statistics. In general, consumers should treat current statistics as suggestive, not conclusive.
Vulnerability statistics are difficult to interpret due to several factors:
- VARIATIONS IN EDITORIAL POLICY. An RVI source’s editorial policy dictates HOW MANY vulnerabilities are reported, and WHICH vulnerabilities are reported. RVIs have widely varying policies. You can’t even compare an RVI against itself, unless you can be sure that its editorial policy has not changed within the relevant data set. The editorial policies of RVIs seem to take a few years before they stabilize, and there is evidence that they can change periodically.
- FRACTURED VULNERABILITY INFORMATION. Each RVI source collects its information from its own list of raw sources – web sites, mailing lists, blogs, etc. RVIs can also use other RVIs as sources. Apparently for competitive reasons, some RVIs might not identify the raw source that was used for a vulnerability item, which is one aspect of what I refer to as the provenance problem. Long gone are the days when a couple mailing lists or newsgroups were the raw source for 90% of widely available vulnerability information. Based on what I have seen, the provenance problem is only going to get worse.
- LACK OF COMPLETE CROSS-REFERENCING BETWEEN RVI SOURCES. No RVI has an exhaustive set of cross-references, so no RVI can be sure that it is 100% comprehensive, even with respect to its own editorial policy. Some RVIs compete with each other directly, so they don’t cross-reference each other. Some sources could theoretically support all public cross-references – most notably OSVDB and CVE – but they do not, due to resource limitations or other priorities.
- UNMEASURABLE RESEARCH COMMUNITY BIAS. Vulnerability researchers vary widely in skill sets, thoroughness, preference for certain vulnerability types or product classes, and so on. This collectively produces a bias that is not currently measurable against the number of latent vulnerabilities that actually exist. Example: web browser vulnerabilities were once thought to belong to Internet Explorer only, until people actually started researching other browsers; many elite researchers concentrate on a small number of operating systems or product classes; basic SQL injection and XSS are very easy to find manually; etc.
- UNMEASURABLE DISCLOSURE BIAS. Vendors and researchers vary widely in their disclosure models, which creates an unmeasurable bias. For example, one vendor might hire an independent auditor and patch all reported vulnerabilities without publicly announcing any of them, or a different vendor might publish advisories even for very low-risk issues. One researcher might disclose without coordinating with the vendor at all, whereas another researcher might never disclose an issue until a patch is provided, even if the vendor takes an inordinate amount of time to respond. Note that many large-scale comparisons, such as “Linux vs. Windows,” can not be verified due to unmeasurable bias, and/or editorial policy of the core RVI that was used to conduct the comparison.
EDITORIAL POLICY VARIATIONS
This is just a sample of variations in editorial policy. There are legitimate reasons for each variation, usually due to audience needs or availability of analytical resources.
COMPLETENESS (what is included):
- SEVERITY. Some RVIs do not include very low-risk items, such as a bug that causes path disclosure in an error message in certain non-operational configurations. Secunia and SecurityFocus omit such items, although they might note them when other issues are identified. Others, such as CVE, ISS X-Force, US-CERT Security Bulletins, and OSVDB, do include low-risk issues.
- VERACITY. Some RVIs will only publish vulnerabilities when they are confident that the original, raw report is legitimate – or if they’ve verified it themselves. Others will publish reports when they are first detected from the raw sources. Still others will only publish reports when they are included in other RVIs, which makes them subject to the editorial policies of those RVIs unless care is taken. For example, US-CERT’s Vulnerability Notes have a high veracity requirement before they are published; OSVDB and CVE have a lower requirement for veracity, although they have correction mechanisms in place if veracity is questioned, and CVE has a two-stage approach (candidates and entries).
- PRODUCT SPACE. Some RVIs might omit certain products that have very limited distribution, are in the beta development stage, or are not applicable to the intended audience. For example, version 0.0.1 of a low-distribution package might be omitted, or if the RVI is intended for a business audience, video game vulnerabilities might be excluded. On the other hand, some “beta” products have extremely wide distribution.
- OTHER VARIATIONS. Other variations exist but have not been studied or categorized at this time. One example, though, is historical completeness. Most RVIs do not cover vulnerabilities before the RVI was first launched, whereas others – such as CVE and OSVDB – can include issues that are older than the RVI itself. As another example: a few years ago, Neohapsis made an editorial decision to omit most PHP application vulnerabilities from their summaries, if they were obscure products, or if the vulnerability was not exploitable in a typical operational configuration.
ABSTRACTION (how vulnerabilities are “counted”):
- VULNERABILITY TYPE. Some RVIs distinguish between types of vulnerabilities (e.g. buffer overflow, format string, symlink, XSS, SQL injection). CVE, OSVDB, ISS X-Force, and US-CERT Vulnerability Notes perform this distinction; Secunia, FrSIRT, and US-CERT Cyber Security Bulletins do not. Bugtraq IDs vary. As vulnerability classification becomes more detailed, there is more room for variation (e.g. integer overflows and off-by-ones might be separated from “classic” overflows).
- REPLICATION. Some RVIs will produce multiple records for the same core vulnerability, even based on the RVI’s own definition. Usually this is done when the same vulnerability affects multiple vendors, or if important information is released at a later date. Secunia and US-CERT Security Bulletins use replication; so might vendor advisories (for each supported distribution). OSVDB, Bugtraq ID, CVE, US-CERT Vulnerability Notes, and ISS X-Force do not – or, they use different replication than others. Replication’s impact on statistics is not well understood.
- OTHER VARIATIONS. Other abstraction variations exist but have not been studied or categorized at this time. As one example, if an SQL injection vulnerability affects multiple executables in the same product, OSVDB will create one record for each affected program, whereas CVE will combine them.
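Abstraction policy alone can move the totals significantly. The sketch below counts the same three hypothetical reports under the two policies just described; all names are made up, and the policies are simplified caricatures of each database's actual rules:

```python
# Toy example (product and file names are invented): three SQL injection
# reports against different executables in the same product.
reports = [
    ("ExampleApp", "login.php",  "SQL injection"),
    ("ExampleApp", "search.php", "SQL injection"),
    ("ExampleApp", "admin.php",  "SQL injection"),
]

# One policy: a record per affected executable (OSVDB-style abstraction)
per_executable = {(product, exe) for product, exe, vuln_type in reports}

# Another policy: a record per product/vulnerability-type pair (CVE-style)
per_product_type = {(product, vuln_type) for product, exe, vuln_type in reports}

print(len(per_executable))    # 3 records under the first policy
print(len(per_product_type))  # 1 record under the second
```

Identical underlying flaws, a threefold difference in the count: this is why raw record totals from two RVIs cannot be compared without normalizing for abstraction.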
- TIMELINESS. RVIs differ in how quickly they must release vulnerability information. While this used to vary significantly in the past, these days most public RVIs have very short timelines, from the hour of release to within a few days. Vulnerability information can be volatile in the early stages, so an RVI’s requirements for timeliness directly affect its veracity and completeness.
- RESOURCE LIMITATIONS. All RVIs deal with limited resources or time, which significantly affects completeness, especially with respect to veracity or timeliness (which is strongly associated with the ability to achieve completeness). Abstraction might also be affected, although usually to a lesser degree, except in the case of large-scale disclosures.
In my opinion:
You should not interpret any RVI’s statistics without considering its editorial policy. For example, the US-CERT Cyber Security Bulletin Summary for 2005 uses statistics that include replication. (As a side note, a casual glance at the bulletin’s contents makes it clear that it cannot be used to compare Windows to Linux as operating systems.)
In addition, you should not compare statistics from different RVIs until (a) the RVIs are clear about their editorial policy and (b) the differences in editorial policy can be normalized. Example: based on my PRELIMINARY investigations of a few hours’ work, OSVDB would have about 50% more records than CVE, even though it has the same underlying number of vulnerabilities and the same completeness policy for recent issues.
Third, for the sake of more knowledgeable analysis, RVIs should consider developing and publishing their own editorial policies.
(Note that based on CVE’s experience, this can be difficult to do.) Consumers should be aware that some RVIs might not be open about their raw sources, veracity analysis, and/or completeness.
Finally: while RVIs are not yet ready to provide usable, conclusive statistics, there is a solid chance that they will be able to do so in the near future. Then, the only problem will be whether the statistics are properly interpreted. But that is beyond the scope of this letter.
P.S. This post was written for the purpose of timely technical exchange. Members of the press are politely requested to consult me before directly attributing quotes from this article, especially with respect to stated opinion.