The Vulnerability Disclosure Game: Are We More Secure?
By Marcus J. Ranum
Do you remember the original premise of the disclosure game? By publicly announcing vulnerabilities in products, we will force the vendors to be more responsive in fixing them, and security will be better. Remember that one? Tell me, dear reader, after 10 years of flash-alerts, rushed patch cycles and zero-day attacks, do you think security has gotten better?
I know that Microsoft, Oracle and others have spent huge amounts of money improving the security of their software. Never mind the fact that 99.99 percent of the computer users in the world would rather they had spent that money making their software cheaper or faster; I suppose it’s a great thing to see that software security is being taken seriously. Security has gotten more expensive. But do you think security has gotten better?
It’s a tad ironic that the only way we could ever hope to answer this question is if the vendors practiced full disclosure! The only way this question could be answered is to see a list of all the vulnerabilities that vendors like Microsoft or Oracle have found and fixed through in-house auditing. If they have found and fixed 1,000 vulnerabilities compared to the 250 publicly disclosed (arbitrary numbers), then yes, security has gotten better. Right? If software is shipping with fewer vulnerabilities per line of code, then security has improved, and the “we’ll force your hand” crowd had something to do with it.
If twenty years of brutal full disclosure really did teach them the importance of security by forcing them to spend considerable money on said security, then didn’t those wily “we’ll force your hand” folks in the ’90s do what they claimed, although a little differently than planned?
Full Disclosure of Security Vulnerabilities a ‘Damned Good Idea’
By Bruce Schneier
Full disclosure – the practice of making the details of security vulnerabilities public – is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.
I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.
No reply needed.
Microsoft: Responsible Vulnerability Disclosure Protects Users
By Mark Miller, Director, Microsoft Security Response Center
Responsible disclosure, reporting a vulnerability directly to the vendor and allowing sufficient time to produce an update, benefits the users and everyone else in the security ecosystem by providing the most comprehensive and highest-quality security update possible.
Provided “sufficient time” doesn’t drag out too long, else the computer criminals (who are also part of the ‘security ecosystem’) benefit greatly from responsible disclosure too.
From my experience helping customers digest and respond to full disclosure reports, I can tell you that responsible disclosure, while not perfect, doesn’t increase risk as full disclosure can.
Except “your experience” wouldn’t take full disclosure cases into account appropriately. Look at some of the vulnerabilities reported in Windows, Real, Novell and other big vendors’ products. Notice that in more and more cases, we’re seeing the vendor acknowledge multiple researchers who found the issues independently. That is proof that multiple people know about vulnerabilities pre-disclosure, be it full or responsible. If a computer criminal has such vulnerability information that remains unpatched for a year while the vendor produces “the most comprehensive and highest-quality security update possible”, then the risk is far worse than the responsible disclosure your experience encompasses.
Vendors only take these shortcuts because we have to, knowing that once vulnerability details are published the time to exploit can be exceedingly short, many times in the range of days or hours.
See above, the “proof” I mention. If vendors are going to move along with their heads in the sand, pretending that only a single person has the vulnerability or exploit details, and pretending that they alone control the disclosure, the vendors are naive beyond imagination.
The security researcher community is an integral part of this change, with Microsoft products experiencing approximately 75 percent responsible disclosure.
I’d love to see the chart showing issues in Microsoft products (as listed in OSVDB), relevant dates (disclosed to vendor, patch date, public disclosure) and the resulting statistics. My gut says it would be less than 75%.
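That statistic could be checked mechanically if the dates were available. A minimal Python sketch, using entirely made-up dates and a simple working definition of “responsible” (the vendor heard about the issue before the public did); the records and the classification rule are my assumptions, not OSVDB’s or Microsoft’s:

```python
from datetime import date

# Hypothetical OSVDB-style records: (disclosed_to_vendor, patch_date, public_disclosure).
# A None vendor date means the issue went public with no prior vendor notification.
# These four entries are invented purely for illustration.
issues = [
    (date(2006, 1, 5), date(2006, 3, 14), date(2006, 3, 14)),  # coordinated
    (None,             date(2006, 6, 13), date(2006, 5, 1)),   # full disclosure first
    (date(2006, 2, 1), date(2006, 4, 11), date(2006, 4, 11)),  # coordinated
    (None,             date(2006, 8, 8),  date(2006, 7, 20)),  # full disclosure first
]

def responsibly_disclosed(vendor_notified, patched, public):
    # "Responsible" here: the vendor was notified before public disclosure.
    return vendor_notified is not None and vendor_notified < public

rate = sum(responsibly_disclosed(*i) for i in issues) / len(issues) * 100
print(f"responsible disclosure rate: {rate:.0f}%")  # 50% for this toy data
```

With real dates pulled from a vulnerability database, the same three-line calculation would settle whether the 75 percent figure holds.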
I know we’re all getting tired of the Remote File Inclusion (RFI) vulnerabilities being disclosed that end up being debunked, but this one takes the cake so far (yes I’m behind on e-mail).
Fri Jun 16 2006 – Disclosed: (1) path/action.php, and to files in path/nucleus including (2) media.php, (3) /xmlrpc/server.php, and (4) /xmlrpc/api_metaweblog.inc.php
Sat Jun 17 2006 – Demonstrated that the vulnerability is bogus.
Mon Oct 30 2006 – The same vulnerability disclosed again.
Mon Oct 30 2006 – Demonstrated (again) that the vulnerability is bogus.
So not only is it fake, it was also previously disclosed and debunked. I swear, Bugtraq moderators should seriously consider blocking any RFI disclosure from hotmail.com. Would save Vulnerability Databases a lot of time.
Somewhere out there is a point-and-click web application that allows neophyte “security researchers” (yes, that is a joke) to quickly whip up their very own Bugtraq or Full-Disclosure post. I’m sure others have noticed this as well. More and more of the disclosures have too much in common, and unfortunately for VDBs, more and more are completely bogus reports. I feel as bad for the vendors as I do for those of us trying to track vulnerabilities. Anyway, some of the many things these disclosures have in common:
– Title (example: EasyBannerFree (functions.php) Remote File Include Exploit)
– # Everything is commented as if this is supposed to be a script
– The remote file inclusion is http://shell.txt or SHELLURL.COM
– It has a single line of source code quoted to “validate” the finding (example: require($path_to_script.”globals.inc.php”);)
– May have 80 lines of Perl code to exploit a single http:// line, because it looks cool
– Contains more greets/thanks than vulnerability information
– If their disclosure is proven false, they never seem to reply to explain themselves
Odds are strong they won’t name the vendor or give enough information to find the software even via extensive searching. Odds are good the post will not contain the version supposedly affected and will contain typos in the script or variable names. And worst of all, it is a glorified “grep and gripe” disclosure. Meaning, they grep out the ‘require’ line, don’t bother to check any other portion of the code, and assume it is vulnerable. Some will go so far as to say things like “(tested on Version 1.13)” even though the finding is quickly proven false.
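The failure mode is easy to demonstrate. A minimal Python sketch (the PHP snippets, function names and regexes below are illustrative, not taken from any real disclosure) showing why a bare grep for require() flags code the attacker can’t touch:

```python
import re

# Two PHP snippets: the first hard-codes the include path (not attacker
# controlled), the second builds it from a request parameter. A bare grep
# flags both; only the second is even a candidate for remote file inclusion.
safe_php = '''
$path_to_script = "./lib/";
require($path_to_script . "globals.inc.php");
'''

risky_php = '''
$path_to_script = $_GET["path"];
require($path_to_script . "globals.inc.php");
'''

def grep_and_gripe(code):
    # The naive approach: any require() fed a variable is "vulnerable".
    return bool(re.search(r'require\s*\(\s*\$\w+', code))

def slightly_less_naive(code):
    # Still crude, but at least checks whether that variable is assigned
    # straight from user input somewhere in the file before crying RFI.
    m = re.search(r'require\s*\(\s*\$(\w+)', code)
    if not m:
        return False
    return bool(re.search(r'\$' + m.group(1) + r'\s*=\s*\$_(GET|POST|REQUEST)', code))

print(grep_and_gripe(safe_php), grep_and_gripe(risky_php))            # True True
print(slightly_less_naive(safe_php), slightly_less_naive(risky_php))  # False True
```

Even one extra check eliminates the false positive; actually fetching and testing the software would eliminate the rest.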
To the “security researchers” disclosing all these remote file inclusion bugs: test your findings before you publish. No more grep-and-gripe crap, please.
January Set As ‘Month Of Apple Bugs’
The “Month of Apple Bugs” project, which will be similar to November’s “Month of Kernel Bugs” campaign, will be hosted by the kernel bug poster who goes by the initials “LMH,” and his partner, Kevin Finisterre, a researcher who has posted numerous Mac vulnerabilities and analyses on his own site.
More interesting this time, Landon Fuller has begun using his own blog to release unofficial patches for the MOAB vulnerabilities as they are released.
No, not a typo. A couple weeks back, Argeniss “was proud to announce that we are starting on December the “Week of Oracle Database Bugs” (WoODB).” A couple days ago they abruptly called off the WoODB with the following message:
We are sad to announce that due to many problems the Week of Oracle Database Bugs gets suspended.
We would like to ask for apologizes to people who supported this and were really excited with the idea, also we would like to thank the people who contributed with Oracle vulnerabilities.
It’s hard to ignore the obvious possibility (especially with so many other people saying the same) that they solicited the community to support their effort by submitting unpublished Oracle vulnerabilities, then arbitrarily shut the effort down while keeping all the information and not sharing it as stated. Argeniss, why not give us the full story? Were you threatened by Oracle? Drastic change of ethical stance? Pure greed when you realized the value of a hundred contributions?
First it was the Month of Browser Bugs (MoBB), now it is the Month of Kernel Bugs (MoKB). When I first read about it, I immediately thought of thirty-odd entries about Linux kernel local DoS conditions. My pessimism is born out of the numerous local DoS attacks against the Linux kernel. Microsoft fans use this to say that Linux has so many more bugs than Microsoft, but I’m sure if we documented every way to make any version of Windows blue screen, the count would look just as bad.
Fortunately, the MoKB has started out very well by offering vulnerabilities in Mac OS X kernel wireless drivers, Linux, FreeBSD, Solaris, and Windows. Only 11 days in, and all of that! The folks putting this together are doing an outstanding job researching the vulnerabilities and presenting them.
In the months and years to come, what else will we see? What would you like to see the most? Month of ______ Bugs.
Fall behind and someone will always beat you to the punch! Gadi Evron posted an entry over at Securiteam on the topic of using Google’s Codesearch to find vulns. Since he and others are writing about this, I don’t have to! However, I’ll post a few more thoughts before anyone else does!
First, we have this great ability to (ab)use Google’s Codesearch to find vulnerabilities through fast code analysis. Is this a fun but very short-lived fad? Or will we see people use this to disclose vulnerabilities and give credit to the method? Will it lead to a lot of false positives, like we’re seeing with remote file inclusion? Several ‘researchers’ are grep’ing for a single string, finding it, and posting it as a remote file inclusion vulnerability without really analyzing the code or testing their own “proof of concept”. Hopefully, researchers will use this new tool to not only find vulnerabilities, but truly validate their findings before disclosing.
Second, who is going to be the first to create an interface that smoothly links the Google Codesearch with a robust static code analyzer? Imagine a web interface where you choose a few key things like what language, what types of vulnerabilities, and click click for all the results. The program would then use the Codesearch results to pipe into the code analyzer and spit out a list of high probability vulnerabilities.
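The shape of such a pipeline is straightforward. A hypothetical Python sketch, with a canned stand-in for the Codesearch backend and a toy analyzer; everything here (the corpus, the `code_search` interface, the single-file heuristic) is an assumption of mine, and a real analyzer would do proper data-flow tracking across files:

```python
import re

# Stand-in for a code search backend: in reality this would query Google's
# Codesearch with a language filter and a seed pattern; here it returns
# canned hits so the pipeline shape is clear.
def code_search(pattern, language="php"):
    corpus = {
        "app/a.php": '$f = $_GET["page"]; include($f . ".php");',
        "app/b.php": 'include("header.php");',
    }
    return {path: src for path, src in corpus.items() if re.search(pattern, src)}

def analyze(path, src):
    # Toy "static analyzer": flag include/require whose argument variable is
    # assigned directly from request input in the same file.
    m = re.search(r'(?:include|require)\s*\(\s*\$(\w+)', src)
    if m and re.search(r'\$' + m.group(1) + r'\s*=\s*\$_(GET|POST|REQUEST)', src):
        return f"{path}: probable file inclusion via ${m.group(1)}"
    return None

# Stage 1: broad search for candidate call sites; stage 2: filter with analysis.
hits = code_search(r'include\s*\(')
findings = [r for r in (analyze(p, s) for p, s in hits.items()) if r]
for f in findings:
    print(f)
```

The point of the two stages is exactly the one above: the search casts a wide net, and the analyzer, not the grep, decides what gets reported.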
Some of these ideas courtesy of email discussions with Chris Wysopal, Mudge and others.
Paul Clark, Systems Librarian at the Wilderness Coast Public Libraries, has created an excellent timeline of Full Disclosure related articles. Unfortunately, mail to him is bouncing and it hasn’t been updated since 2004. Would be great to see someone pick this up.