Thanks to Dave, we now have a completely rewritten creditee system. For years, we operated off a four-field system (name, email, company, url) for tracking vulnerability researchers. While we tracked that information, the scheme was not flexible and led to serious problems with data integrity. Even worse, it didn’t allow for long-term tracking of a researcher’s disclosure history. There were several cases where the system couldn’t handle proper data tracking, for example:
- If John Doe works for CompX and discloses a vulnerability, that becomes set in stone as associated with his name. This is problematic if John Doe goes to CompZ and discloses additional vulnerabilities.
- The above scenario is even more problematic if John Doe then releases a vulnerability through a program such as iDefense or ZDI.
- If two researchers shared the same name, there was no way to differentiate them.
While creating a creditee system to track this may seem straightforward, it is surprisingly difficult. After a lot of brainstorming and trying to determine where the system might fall short, we came up with a new design. What we are now referring to as “creditee v2” will be used with a clean set of data. All previous creditee data entered is labeled (internally) as “v1” and will only display if there is no v2 data.
The new creditee system is a bit more complex, but allows for one individual to be associated with multiple e-mail addresses, companies or organizations. We can also now track the country of the researcher and company separately to account for multi-national companies. With a better data set, we can now do a lot more analysis and generate interesting statistics for vulnerability researchers. As an example of the new system, you can now easily see all vulnerabilities associated with your name, e-mail addresses and affiliations. Clicking on the affiliation will show all researchers and the vulnerabilities disclosed by a given organization.
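The relationships described above could be sketched roughly as follows. This is an illustrative model only; the class and field names are assumed for the example and are not OSVDB’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Affiliation:
    company: str
    company_country: str  # tracked separately from the researcher's country

@dataclass
class Researcher:
    name: str
    country: str
    emails: list = field(default_factory=list)

@dataclass
class Credit:
    vuln_id: int
    researcher: Researcher
    email: str                # the address used for this disclosure
    affiliation: Affiliation  # employer at disclosure time, not necessarily now

# John Doe credited under CompX, then later under CompZ:
john = Researcher("John Doe", "US", ["jdoe@example.com"])
credits = [
    Credit(1001, john, "jdoe@example.com", Affiliation("CompX", "US")),
    Credit(2002, john, "jdoe@example.com", Affiliation("CompZ", "DE")),
]

# One researcher's full disclosure history, regardless of affiliation:
history = [c.vuln_id for c in credits if c.researcher is john]
```

The key point of the design is that the affiliation hangs off the individual disclosure rather than the researcher, so a change of employer never rewrites history.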
Even better, this system allows for one click access to your prior vulnerability disclosures. This could be useful for resumes, web page bios and more. We fully encourage you to “ego mangle” to help us fill in the data. Create an account, find your vulnerabilities in the database and fill in the details associated with that disclosure. Note: we are tracking the information associated with the disclosure, not necessarily your current e-mail or affiliation. If you can’t find your vulnerability in the database, mail moderators[at]osvdb.org with details. We’ll help you find it or add it in case it is missing. We’re still working out several bugs in the system, but this is a great overhaul and a foundation of another long term feature enhancement: “researcher confidence”.
Last week, OSVDB enhanced the search results page by adding a considerable set of filters, a simple “results by year” graph and export options. Rather than draft a huge walkthrough, open a search in a new tab and title search for “microsoft windows”.
As always, the results display the OSVDB ID, disclosure date and OSVDB title. On the left, however, are several new options. First, a summary graph shows the number of vulnerabilities by year, based on your search results. Next, you can toggle the displayed fields to add CVE, CVSSv2 score and/or the percent complete. The percent complete refers to the status of the OSVDB entry: how many of its fields have been completed. Below that are one-click filters that let you further refine your search results by the following criteria:
- Reference Type – only show results that contain a given type of reference
- Category – show results based on the vulnerability category
- Disclosure Year – refine results by limiting to a specific year
- CVSS Score – only show entries that are scored in a given range
- Percent Complete – filter results based on how complete the OSVDB entry is
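Conceptually, each one-click filter just narrows the current result set by one more criterion. A minimal sketch of that behavior (the entries and field names are made up for illustration and are not OSVDB’s schema):

```python
# Toy result set standing in for a search's output.
results = [
    {"id": 1, "year": 2008, "cvss": 10.0, "complete": 80, "refs": ["CVE"]},
    {"id": 2, "year": 2009, "cvss": 4.3, "complete": 40, "refs": ["Bugtraq"]},
    {"id": 3, "year": 2009, "cvss": 9.3, "complete": 95, "refs": ["CVE"]},
]

def apply_filters(entries, year=None, cvss_min=None, cvss_max=None,
                  ref_type=None, min_complete=None):
    """Apply each requested filter in turn, narrowing the result set."""
    out = entries
    if year is not None:
        out = [e for e in out if e["year"] == year]
    if cvss_min is not None:
        hi = cvss_max if cvss_max is not None else 10.0
        out = [e for e in out if cvss_min <= e["cvss"] <= hi]
    if ref_type is not None:
        out = [e for e in out if ref_type in e["refs"]]
    if min_complete is not None:
        out = [e for e in out if e["complete"] >= min_complete]
    return out

# Entries from 2009 with a CVE reference and CVSS of at least 7.0:
ids = [e["id"] for e in apply_filters(results, year=2009,
                                      ref_type="CVE", cvss_min=7.0)]
```

Because each filter composes with the others, clicking several in sequence is equivalent to a single conjunctive query.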
Once you have your ideal search results, you can then export them to XML, a custom RSS feed or CSV. The export will only work for the first 100 results. If you need a bigger data set to work with, we encourage you to download the database instead.
With the new search capability, you should be able to perform very detailed searches, easily manipulate the results and even import them into another application or presentation. If you have other ideas of how a VDB search can be refined to provide more flexibility and power, contact us!
Using the ‘Advanced Search’, you can now search the database by entering a CVSSv2 score range (e.g., 8 to 10) or by a specific CVSSv2 attribute (e.g., Confidentiality : Partial). To search for entries with a score of exactly 10, use the range 10 to 10.
Using this search mechanism, we can see there are 3,217 entries in the database with a score of 10 and 9,266 entries that involve a complete loss of availability.
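Conceptually, matching on a single CVSSv2 attribute amounts to inspecting one component of the entry’s CVSSv2 vector string. A rough illustration (the entries and vectors below are made up for the example):

```python
def parse_cvss2_vector(vector):
    # "AV:N/AC:L/Au:N/C:C/I:C/A:C" -> {"AV": "N", ..., "A": "C"}
    return dict(part.split(":") for part in vector.split("/"))

entries = [
    ("OSVDB-1", "AV:N/AC:L/Au:N/C:C/I:C/A:C"),  # a score-10 entry
    ("OSVDB-2", "AV:N/AC:L/Au:N/C:P/I:P/A:C"),  # complete loss of availability
    ("OSVDB-3", "AV:L/AC:H/Au:S/C:P/I:N/A:N"),
]

# All entries with a complete loss of availability (A:C):
hits = [eid for eid, vec in entries if parse_cvss2_vector(vec)["A"] == "C"]
```

The Availability metric is the “A” component of the standard CVSSv2 vector, so “complete loss of availability” is simply every entry whose vector carries A:C.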
We hope this flexibility allows for even more refined searches to better help your project or organization. Stay tuned, this is one of many new search features planned.
OSVDB’s classification system is designed to categorize certain attributes of a vulnerability. This facilitates custom searches by a specific attribute, helps researchers develop metrics and gives a better picture of the vulnerability landscape. Until now, we’ve tracked whether an exploit is ‘available’, ‘unavailable’, ‘rumored / private’ or ‘unknown’. While this was a good start for exploit status, it has quickly outgrown its usefulness. Today, OSVDB overhauled the exploit classification to use the following:
- exploit public – A working exploit is publicly available.
- exploit rumored – An exploit is rumored to exist, but cannot be confirmed.
- exploit private – An exploit exists, but is not available to the public or in a commercial framework (e.g., vulnerability pre-disclosure groups like iDefense or ZDI, researcher developed but unreleased).
- exploit commercial – An exploit has been created and is available to customers in a commercial framework such as Canvas or CORE Impact.
- exploit unknown – The status of a working exploit is unknown.
In addition, we are moving one existing classification to the ‘exploit’ column since it is relevant to this category:
- exploit wormified – An exploit has been crafted to spread via ‘worm’ or ‘virus’.
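For anyone consuming OSVDB data programmatically, the full set of classifications above can be represented as a simple enumeration. This is an illustrative sketch, not code from OSVDB itself:

```python
from enum import Enum

class ExploitStatus(Enum):
    PUBLIC = "exploit public"          # working exploit publicly available
    RUMORED = "exploit rumored"        # rumored to exist, unconfirmed
    PRIVATE = "exploit private"        # exists, but not public or commercial
    COMMERCIAL = "exploit commercial"  # sold via Canvas, CORE Impact, etc.
    UNKNOWN = "exploit unknown"        # status of a working exploit unknown
    WORMIFIED = "exploit wormified"    # crafted to spread via worm/virus

# Statuses that imply a working exploit actually exists:
CONFIRMED = {ExploitStatus.PUBLIC, ExploitStatus.PRIVATE,
             ExploitStatus.COMMERCIAL, ExploitStatus.WORMIFIED}
```

Mapping the label strings back to enum members makes it easy to, say, pull only the entries with a confirmed working exploit out of an export.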
As always, if you have suggestions or questions about the classification system, please mail moderators[at]osvdb.org!
In addition to overhauling the ‘exploit’ classification, additional touch-ups and reorganization have been done to the classification system. For volunteers who help mangle entries, watch out, as items have shifted in flight. For users of OSVDB, these are mostly cosmetic changes and should not impact searching.
- Disclosure column has been re-ordered
- Location column has been re-ordered
- Several locations have been touched up. Use of ‘required’ is now consistent.
- Context-dependent – Moved from OSVDB to Location
- Mobile Phone expanded to include ‘Hand-held’ devices that may not be a phone
- Patch now includes revision control systems (RCS), as some fixes are only available from CVS, SVN, etc.
- Removed ‘best practice’, as it is no longer useful. We no longer support SANS Top 20 cross-references, since they don’t support the “20” in the Top 20.
- Removed ‘no solution’. Until we have more volunteers and timely updates for all entries, ‘solution unknown’ is more accurate.
- Removed ‘hijacking’ attack type. Obsolete, not really an attack type of its own.
Along with the score, we display the date that NVD generated it and give users a method for recommending updates if they feel the score is inaccurate. While this is long overdue, it is one of many CVSS-related features we will be adding in the near future. For those wondering about the delay in adding CVSS support, the short answer is “we had reservations about the scoring system”. Back in 2005, Jake and I had a long chat with a couple of the creators of CVSS and brought up our concerns. Our goal was to create our own scoring system, but internal debate (and procrastination) led to neither being implemented. Rather than creating our own system, we finally opted to use what has become an industry standard. Some of the planned CVSS score enhancements on our to-do list, in no particular order:
- Method for adding our own CVSS score. There are thousands of entries in OSVDB that do not have a CVE assignment, and as a result, no NVD based CVSS score.
- A more robust moderation queue to handle proposed changes. This may optionally have a one-click method for us to notify NVD of our change so they may consider revising their score.
- Ensure the CVSSv2 score is part of the database dumps, available for download.
- Method for tracking CVSS score historically. As NVD revises their score, or we do, a user should be able to see the history of changes.
- Compare our/NVD scores with other public tools, display discrepancy if different. For example, if a Nessus plugin scores an issue differently than NVD, show both scores so users may consider which is more accurate.
- Track researcher generated CVSS score. While infrequent, some advisories provide scoring. If different than NVD, display the discrepancy.
As always, if you have ideas on how we could better handle CVSS scoring, or have additional ideas for features, please contact us!
Results of Mangle-A-Thon 2009
Mangle-A-Thon 2009 went very well. In addition to some 20 or so primary sources matched for DataLossDB, several volunteers managed to improve the “completeness” of OSVDB by over a tenth of a percent. That may not sound like much, but with over 58,000 vulnerabilities in the database, a tenth of a percent (0.1% for you math types) is a huge help.
We would like to send an enormous “Thank You!” to all those who came and helped out. You did a service to the entire industry by lending your time and efforts. Another enormous “Thank You!” to Midnight Research Labs Boston for hosting the event. The venue was perfect, and your efforts both in planning the event, and contributing at the event were invaluable. Thanks again to Voltage Security for sponsoring the event and providing the food and drink; it made the 12+ hours achievable.
We would like to extend one last “Thank You!” to all those who not only mangled at the event, but went home and have since mangled some more. That is exactly what we had hoped for: a community contribution by and for the security community, and we hope you enjoyed the experience and will continue to work with us.
The success of the event may drive us to do another one (probably not until next year), maybe in a different city, or it might just end up right where we did it this time. Maybe we’ll have it happen across a couple of cities next time! Let us know what you think… any suggestions are welcome!
I always mean to post these more often, but I find myself bogged down in adding entries and putting off blog updates. Quite a few little blurbs and thoughts related to OSVDB content.
- I love vendors who maintain good changelogs. A good changelog has many attributes: version release with date, links to bugs/forums when appropriate, clear but concise language, entries categorized as ‘security’ or ‘feature’, and so on. Further, the changelog should be easy to find and stay updated. Rhinosoft, which maintains many other products as well, serves as a great example of this.
- On the flip side, I despise vendors with bad changelogs. One example is IBM who keeps these ridiculously large changelogs, mostly in CAPS with overly vague wording for many issues. As an example, check out this 1.4 meg changelog and try to pick out all the security issues.
Searching OSVDB – Our search engine got an overhaul a while back. While better overall, there are still a few bugs in it. Our dev is going to be available part time come Oct 1, so hopefully they will be knocked out in short order. Until then:
- If search results seem wrong, try using all lower case or the exact case. It is a known bug that some searches work with one and not the other.
- We use keywords when appropriate. This can be useful for example, if you want to see all vulnerabilities in Zoller’s recent multi-browser disclosure. Search All Text for “one bug to rule them all”.
- Using references as a search field can be valuable. If you want to see all vulnerabilities in PHP (the core language), you can’t title search because of so many PHP applications littering the results. Instead, reference search “php.net” for a concise list.
- If you search for two terms, it will show results with both words. Searching with three terms will show results with any two words. Known bug! Until fixed, you can work around this by using “+one +two +three” search syntax, with a plus leading each keyword.
- OSVDB is also tracking vulnerabilities in electronic voting machines. While still in progress, we have scoured the excellent technical reports from the State of California on Premier Election Solutions (formerly Diebold) and have made good progress on Hart InterCivic. To see all of these vulnerabilities, search All Text for “Electronic Voting Machine”.
- I recently finished combing through the old Zardoz mail list archives. All of the vulnerabilities from that list, operated by Neil Gorsuch between 1989 and 1991, are now in the database. For those interested in historical vulnerabilities, reference search “securitydigest.org/zardoz” to see them. 63 vulnerabilities, only 7 of which have CVE references. Unfortunately, the mail list archive is not complete. If anyone has digests 126, 128, 206, 214, 305, 306, 308, 309, 310 or 314, please send them in!
- Similar to Zardoz, but already in OSVDB for over a year, you can reference search “securitydigest.org/unix” for the old Unix Security Mailing List disclosed vulnerabilities. There is some overlap with Zardoz here, but it should yield 57 results, 6 of which have a CVE reference.
- For crypto geeks, you can title search “algorithm” to get a good list of cryptographic algorithms, and when they were demonstrated to be sufficiently weak or completely broken. These go back to 1977 and the New Data Seal (NDS) Algorithm.
- I recently noticed another case of a vendor threatening mail list archives. Looking at the Neohapsis archive or the lists.grok.org.uk archive of a recent report on Inquira vulnerabilities, you can see each has redacted information. Mail list archives provide a valuable service and typically get little to no benefit for doing so. Despite that, it would be nice if they would post the actual legal threat letter when this occurs.
- The OSVDB vendor dictionary has been around for a while, but needs additional work. It is the first step in not only providing vendor security contact information, but building a framework for “vendor confidence”. This will eventually allow researchers to determine how cooperative a vendor is and whether it is worth their time to responsibly disclose a vulnerability. As it stands, the Vendor Dictionary is primitive and needs to evolve quickly. One example of a problem we ran into: a researcher submitted a case where they had a ‘bad dealing’ with a given vendor, and it was included in the notes. The vendor contacted us, quite surprised to see it, and asked if we agreed with it. I responded that no, that was far from our own dealings with the vendor; they had been great to work with in disclosing vulnerabilities, providing additional details or answering general questions. Reading our entry on the vendor doesn’t reflect that, and it should. Hopefully in the coming months, with a part-time developer, we can begin to address this.
- When sanitizing takes its toll. BID 28219 has a link to an exploit that appears to have aggressively sanitized characters. Or did the researcher actually send that in? VDBs need to be mindful of this and add a note if they are displaying the submission as is.
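As a small aside, the plus-prefix workaround mentioned in the search tips above is easy to script if you are generating queries programmatically. A trivial sketch:

```python
def require_all(terms):
    """Prefix each keyword with '+' so the search requires every term."""
    return " ".join("+" + t for t in terms)

query = require_all(["buffer", "overflow", "remote"])
# query == "+buffer +overflow +remote"
```

Until the three-term bug is fixed, building queries this way guarantees all terms must match.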
Join OSF in Somerville, MA on September 19th, 2009 from 8am to midnight for Mangle-A-Thon, and help us mangle vulnerabilities into the Open Source Vulnerability Database (OSVDB), and mangle data loss incidents and primary sources into the DataLossDB.
The event, hosted by Midnight Research Labs Boston, is free and sponsored by Voltage Security, which will assist us in providing food and drink for attendees. OSF moderators will walk participants through the projects and teach participants how volunteers maintain the entirety of both data sets. Our goal is to get as much new and accurate data into both databases as possible, possibly add a couple of new recruits into the fold, and have a good time doing it.
Have suggestions regarding the projects? The lead developer (Dave) will be there, as will lead content guys for both projects (Kelly and Craig). You can actually see your suggestions implemented right there at the event… but only if you attend. :)
Midnight Research Labs Boston
30 Dane Street
Saturday, September 19th, 2009
8am to midnight
(three time slots: 8am – 1pm, 1pm – 6pm, 6pm – midnight, register for all or some)
Register via the “Register” link at: http://mangleathon.opensecurityfoundation.org/