The WASC Statistics Project is the first attempt at an industry-wide collection of application vulnerability statistics, undertaken to identify the existence and proliferation of application security issues on enterprise websites. Anonymous data correlating vulnerability numbers and trends across organization size, industry vertical, and geographic area is being collected and analyzed to identify the prevalence of threats facing today’s online businesses. This empirical data aims to provide the first true statistics on application-layer vulnerabilities.
Using the Web Security Threat Classification (http://www.webappsec.org/projects/threat/) as a baseline, data is currently being collected and contributed by more than a half dozen major security vendors, with the list of contributors growing regularly.
We are actively seeking others to contribute data.
If you would like to be involved with the project, please contact Erik Caso (ecaso AT ntobjectives DOT com)
An interesting article for several reasons. Below are some of the quotes that stood out to me and may prove to be worthwhile discussion topics.
Hackers exploit Windows patches
By Mark Ward
Last Updated: Thursday, 26 February, 2004, 10:54 GMT
“We have never had vulnerabilities exploited before the patch was known,” [David Aucsmith, Microsoft Security Business and Technology Unit] said.
I don’t think Aucsmith, or any vendor, can say this with any certainty. If a vulnerability is found by a security company and disclosed to the vendor, it leads to a patch down the road. When the patch comes out, many people will reverse engineer it to figure out the vulnerability, as most of us know. By the same token, IDS signatures follow the exploits that follow the patches. So if an unpatched ‘0-day’ vulnerability is being exploited, how would we know? The chance of detecting such an attack is significantly lower, so there is no way to know this statement is true.
“It’s a myth that hackers find the holes,” said Nigel Beighton, who runs a research project for security firm Symantec that attempts to predict which vulnerabilities will be exploited next.
Very interesting! Symantec attempts to predict which vulnerabilities will be exploited next. I wonder how =) It would be easy to do a high-level analysis (expect to see this from mi2g or Gartner): “We predict that the X vulnerability, a remote system-level compromise that does not require authentication, will be widely exploited in short order.” We can all predict this and be right most of the time. I assume Symantec does something above and beyond that…
“Almost all attacks against our software are against the legacy systems,” [David Aucsmith] said. “If you want more secure software, upgrade.”
This makes you wonder if Microsoft has an incentive not to care about security, since these nasty vulnerabilities are the best argument for buying the latest version they offer. Beyond that, how many of the most recently reported vulnerabilities affect their latest products? This quote seems like pure marketing spin.
The last few months have seen a lot more talk about the “Days of Risk”. In short, vendors like Microsoft say the days of risk are the time between vulnerability information (or an exploit) being released and a system being patched. So if a new vulnerability is announced on Tuesday, and I patch on Friday, there were three days of risk. This makes sense, and it is also why many vendors advocate responsible disclosure and coordinated vulnerability announcements.
So what has been happening lately? I’ve noticed that my Windows XP systems’ “auto-update” feature is lagging heavily. Vulnerabilities are announced on a Tuesday, and it can be as many as six days before my machine alerts me, downloads, and installs the patches. The point of this post is to ask: is six days a lot of risk? To get an idea, let’s look at a few of the recent vulnerabilities announced by Microsoft.
MS05-016, Windows MSHTA Shell Application Association Arbitrary Remote Script Execution
Disclosure: 2005-04-12 // Exploit: 2005-04-13
MS05-021, Exchange Server SMTP Extended Verb Remote Overflow
Disclosure: 2005-04-12 // Exploit: 2005-04-19
MS05-020, IE DHTML Object Memory Corruption Code Execution
Disclosure: 2005-04-12 // Exploit: 2005-04-12
So we have 0 days, 1 day, and 7 days. Due to the lag in Microsoft making the patches available (I honestly don’t care what their excuse is), my computers are vulnerable and there is nothing I can do about it. I don’t think I need to address the fact that many of these vulnerabilities had fully working exploit code developed long before the Microsoft advisories either. Sure, the exploits were held by the researchers and not disclosed, but information is shared, information is leaked, and information is stolen. That’s a fact of life, and it only increases the days of risk.
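The arithmetic above is trivial, but worth making explicit: each gap is just the number of days between the public disclosure and the first public exploit. A minimal sketch, using the dates from the three advisories listed (the advisory names and dates come from the list above; the function name is my own):

```python
from datetime import date

# Disclosure and first-public-exploit dates for the three
# Microsoft advisories listed above.
advisories = {
    "MS05-016": (date(2005, 4, 12), date(2005, 4, 13)),
    "MS05-021": (date(2005, 4, 12), date(2005, 4, 19)),
    "MS05-020": (date(2005, 4, 12), date(2005, 4, 12)),
}

def days_of_risk(disclosed, exploited):
    """Days between public disclosure and a working public exploit."""
    return (exploited - disclosed).days

for name, (disclosed, exploited) in advisories.items():
    print(f"{name}: {days_of_risk(disclosed, exploited)} days")
```

Note this measures only the disclosure-to-exploit gap; by the vendors’ own definition, the real days of risk keep accruing until the patch is actually installed, so a six-day auto-update lag adds to every one of these numbers.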
The Open Source Vulnerability Database, a project to catalog and describe the world’s security vulnerabilities, has continued to focus on improving database content and increasing services offered to the security community.
Since the official launch of OSVDB in March 2004, the vulnerability database has grown from 1000 to over 6700 complete entries. This rapid growth has far surpassed initial estimates, and the project’s many successes show that the open source community can truly deliver world-class security information.
OSVDB’s rapid success is directly attributed to the dedicated volunteers who help populate, maintain and enhance the database. Their hard work has already allowed OSVDB to exceed the amount of vulnerability information available in some databases. At the current rate of growth, the project is poised to surpass the other vulnerability databases by the end of 2005. “It will soon become mandatory for security professionals to use OSVDB if they want the most thorough information available,” says Brian Martin, one of the project leaders.
The OSVDB leadership team has been aggressively working to ensure the long term viability of the project. After improving content to be recognized as an industry leader, the team determined that incorporating as a non-profit organization was imperative to OSVDB’s future success. Founded to formally run the OSVDB project, the Open Security Foundation has been approved as a 501(c)3 non-profit organization under United States law. Jake Kouns, OSVDB project lead, says, “Achieving our non-profit status will allow us to seek funding and ensure free vulnerability information will be available for years to come.”
Two of the OSVDB project leaders, Brian Martin and Jake Kouns, will be presenting a talk called “Vulnerability Databases: Everything is Vulnerable” at cansecwest/core05 in May 2005. The presentation aims to provide an unbiased review of vulnerability databases, and addresses the value they should provide to security practitioners.
Software Vendors Should Come Clean on Security Holes
By Jim Rapoza
March 28, 2005
Opinion: When it comes to fixing bugs and vulnerabilities, the Sgt. Schultz approach amounts to nothing.
Most people tend to agree with the old adage: Knowledge is power. But there are some groups that think knowledge is a bad thing—knowledge on the part of others, at least—and these groups work hard to keep people in a state of blissful ignorance.
Interesting article, but one portion stood out to me:
From the point a vulnerability is discovered and a remedy is made available, the clock starts ticking. The longer you wait to address the threat, the closer you encroach upon negligence. This is just one demonstration for providing due care.
Given the long history of debate over what constitutes responsible disclosure (3 days? 2 weeks? 3 months?), the definition of negligence in terms of “windows of risk” may be debated for years to come. Schoenberg poses his question to the corporate world and its deployment of technology. What happens when we turn this timetable toward the vendors and their patching? Suddenly, we have dozens of cases of vendors (Sun, HP) being guilty of “gross negligence”.
Jennifer Granick has published a new paper titled The Price of Restricting Vulnerability Publications.
There are calls from some quarters to restrict the publication of information about security vulnerabilities in an effort to limit the number of people with the knowledge and ability to attack computer systems. Scientists in other fields have considered similar proposals and rejected them, or adopted only narrow, voluntary restrictions. As in other fields of science, there is a real danger that publication restrictions will inhibit the advancement of the state of the art in computer security. Proponents of disclosure restrictions argue that computer security information is different from other scientific research because it is often expressed in the form of functioning software code. Code has a dual nature, as both speech and tool. While researchers readily understand the information expressed in code, code enables many more people to do harm more readily than with the non-functional information typical of most research publications. Yet, there are strong reasons to reject the argument that code is different, and that restrictions are therefore good policy. Code’s functionality may help security as much as it hurts it and the open distribution of functional code has valuable effects for consumers, including the ability to pressure vendors for more secure products and to counteract monopolistic practices.
By Jaikumar Vijayan
MARCH 25, 2005
A threat by Sybase Inc. to sue a U.K.-based security research firm if it publicly discloses the details of eight holes it found in Sybase’s database software last year is evoking sharp criticism from some IT managers but sympathetic comments from others.
Blocking the release of vulnerability information “would set a bad precedent” for the software industry, said Tim Powers, senior network administrator at Southwire Co., a Carrollton, Ga.-based maker of electrical wires and cables.
Responsible disclosure of software flaws by vulnerability researchers has “significantly improved” the security of products, Powers said. “Preventing disclosure through the threat of legal action can only hurt security,” he said.