Category Archives: Vulnerability Statistics

Mac vs Windows – More “Statistics”

Yet another article comparing Mac vs Windows, and using statistics to back it up. Since this is getting to be a common occurrence, I won’t go into the usual lecture about statistics: how they can easily be manipulated to back any argument (including how VAX/VMS is the most in/secure OS in the world!), how you must fully qualify the data used to generate your statistics, and all the other tricks that make statistics the best tool for creating a convincing argument (lie?). I’m not saying this because I think Mac or Windows is more or less secure. I’m saying this because I don’t feel the following article is accurate or well written. Even the readers who commented bring up some very valid points and questions for the author. Add to that, the author (George Ou) appears to be somewhat outspoken and a fan of Microsoft, so his credibility and bias toward rivals come into question. I’d love for Secunia to officially respond to this article, since he uses their database and rating system to generate his stats.

George Ou’s relevant conclusions: Between Feb 04 and Feb 06, Mac OS X had 5 “extremely critical” (1 unpatched) vulnerabilities and MS Windows had 2 “extremely critical” (0 unpatched) vulnerabilities. Mac OS X had 173 high and 59 moderate vulns, while MS Windows had 49 high and 41 moderate vulns. Ou goes on to conclude: “The data is clear, and Apple has a lot more vulnerabilities of every kind ranging from moderately critical to extremely critical.”

Vulnerability statistics for Mac and Windows
http://blogs.zdnet.com/Ou/?p=165

One of many good comments challenging the piece:
http://www.zdnet.com/[..]messageID=356498&start=-1

Past criticism of Ou’s work, and signs he may be biased toward Microsoft:
http://www.google.com/search?hl=en&q=George+Ou

A Time to Patch

http://blogs.washingtonpost.com/securityfix/2006/01/a_timeline_of_m.html

Brian Krebs has a fantastic post on his blog covering the time it takes for Microsoft to release a patch, and whether they are getting any better at it. Here are a few relevant paragraphs from it, but I encourage you to read the entire article. It appears to be a well-developed article that is heavily researched and quite balanced. Makes me wonder if his editors shot it down for some reason. If they did, shame on them.

A few months back while researching a Microsoft patch from way back in 2003, I began to wonder whether anyone had ever conducted a longitudinal study of Redmond’s patch process to see whether the company was indeed getting more nimble at fixing security problems.

Finding no such comprehensive research, Security Fix set about digging through the publicly available data for each patch that Microsoft issued over the past three years that earned a “critical” rating. Microsoft considers a patch “critical” if it fixes a security hole that attackers could use to break into and take control over vulnerable Windows computers.

Here’s what we found: Over the past three years, Microsoft has actually taken longer to issue critical fixes when researchers waited to disclose their research until after the company issued a patch. In 2003, Microsoft took an average of three months to issue patches for problems reported to them. In 2004, that time frame shot up to 134.5 days, a number that remained virtually unchanged in 2005.

First off, these are the kind of statistics and research that I mean when I talk about the lack of evolution of vulnerability databases. This type of information is interesting, useful, and needed in our industry. It begins to give customers a solid idea of just how responsive our vendors are, and just how long we stay at risk with unpatched vulnerabilities. This is also the type of data that any solid vulnerability database should be able to produce with a few clicks of the mouse.

This type of article can only be written when the right data is available: specifically, a well-documented and detailed time line of the life of a vulnerability. Discovery, disclosure to the vendor, vendor acknowledgement, public disclosure, and patch date are all required to generate this type of information. People like Steven Christey (CVE) and Chris Wysopal (VulnWatch) have been pushing for this information to be made public, often behind the scenes in extensive mail to vendors. If we eventually get these types of statistics for all vendors over a longer period of time, you will need to thank them for seeing the need early on and helping to make it happen.

This type of data is of particular interest to OSVDB and has been worked into our database (to a degree) from the beginning. We currently track the disclosure date, discovery date, and exploit publish date for each vulnerability, as best we can. Sometimes this data is not available, but we include it when it is. One of our outstanding development/bugzilla entries involves adding a couple more date fields, specifically vendor acknowledge date and vendor solution date. With these five fields, we can begin to trend this type of vendor response time with accuracy, and with a better historical perspective.
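As a rough illustration of the kind of trending those five fields make possible, here is a minimal sketch in Python. The record layout, field names, and sample dates are hypothetical and for illustration only; they are not OSVDB’s actual schema or data.

# Minimal sketch of trending vendor response time from per-vulnerability dates.
# Field names and sample records are hypothetical, not OSVDB's actual schema.
from datetime import date
from statistics import mean

vulns = [
    {"vendor": "ExampleSoft", "disclosed_to_vendor": date(2004, 8, 11),
     "vendor_ack": date(2004, 8, 11), "vendor_solution": date(2006, 1, 10)},
    {"vendor": "ExampleSoft", "disclosed_to_vendor": date(2005, 3, 1),
     "vendor_ack": date(2005, 3, 5), "vendor_solution": date(2005, 6, 1)},
]

def days_to_patch(v):
    """Days from disclosure to the vendor until a solution (patch) shipped."""
    return (v["vendor_solution"] - v["disclosed_to_vendor"]).days

by_vendor = {}
for v in vulns:
    if v["vendor_solution"]:  # skip issues that are still unpatched
        by_vendor.setdefault(v["vendor"], []).append(days_to_patch(v))

for vendor, times in sorted(by_vendor.items()):
    print(f"{vendor}: {mean(times):.1f} days to patch on average ({len(times)} issues)")

The same aggregation works with any grouping key, which is exactly what makes those extra date fields valuable.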

While Krebs used Microsoft as an example, are you aware that other vendors are worse? Some of the large Unix vendors have been slow to patch for the last twenty years! Take the recent disclosure of a bug in uustat on Sun Microsystems’ Solaris Operating System. iDefense recently reported the problem and included a time line of the disclosure process.

08/11/2004 Initial vendor contact
08/11/2004 Initial vendor response
01/10/2006 Coordinated public disclosure

Yes, one year and five months for Sun Microsystems to fix a standard buffer overflow in a SUID binary. The same class of bug has plagued them as far back as January 1997 (maybe as far back as December 6, 1994, but details aren’t clear). It would be nice to see this type of data available for all vendors on demand, and it will be in due time. Move beyond the basic stats and consider what happens if we break this down by the severity of the vulnerability: does the vendor’s response time change (consistently)? Compare the time lines along with who discovered the vulnerability and how it was disclosed (responsibly or not). Do those factors change the vendor’s response time?
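For what it’s worth, the “one year and five months” figure checks out against the quoted time line; a quick sketch:

# Quick check of the iDefense/Sun time line quoted above (dates are MM/DD/YYYY).
from datetime import date

initial_contact = date(2004, 8, 11)
public_disclosure = date(2006, 1, 10)

elapsed = public_disclosure - initial_contact
print(elapsed.days)  # 517 days, roughly 17 months: one year and five months

Breaking response times down by severity, discoverer, or disclosure method is the same subtraction, just aggregated over a different key.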

The answers to those questions have been on our minds for a long time and are just a few of the many goals of OSVDB.

Open Letter on the Interpretation of “Vulnerability Statistics”

Steve Christey (CVE Editor) wrote an open letter to several mailing lists regarding the nature of vulnerability statistics. What he said is spot on, and covers most of what I would have pointed out had my previous rant been more broad and not a direct attack on a specific group. I am posting his entire letter here because it needs to be said, read, understood, and drilled into the heads of so many people. I have reformatted it for the blog; you can read an original copy via a mailing list.


Open Letter on the Interpretation of “Vulnerability Statistics”

Author: Steve Christey, CVE Editor
Date: January 4, 2006

As the new year begins, there will be many temptations to generate, comment, or report on vulnerability statistics based on totals from 2005. The original reports will likely come from publicly available Refined Vulnerability Information (RVI) sources – that is, vulnerability databases (including CVE/NVD), notification services, and periodic summary producers.

RVI sources collect unstructured vulnerability information from Raw Sources. Then, they refine, correlate, and redistribute the information to others. Raw sources include mailing lists like Bugtraq, Vulnwatch, and Full-Disclosure, web sites like PacketStorm and Securiteam, blogs, conferences, newsgroups, direct emails, etc.

In my opinion, RVI sources are still a year or two away from being able to produce reliable, repeatable, and COMPARABLE statistics. In general, consumers should treat current statistics as suggestive, not conclusive.

Vulnerability statistics are difficult to interpret due to several factors:

  • – VARIATIONS IN EDITORIAL POLICY. An RVI source’s editorial policy dictates HOW MANY vulnerabilities are reported, and WHICH vulnerabilities are reported. RVIs have widely varying policies. You can’t even compare an RVI against itself, unless you can be sure that its editorial policy has not changed within the relevant data set. The editorial policies of RVIs seem to take a few years before they stabilize, and there is evidence that they can change periodically.
  • – FRACTURED VULNERABILITY INFORMATION. Each RVI source collects its information from its own list of raw sources – web sites, mailing lists, blogs, etc. RVIs can also use other RVIs as sources. Apparently for competitive reasons, some RVIs might not identify the raw source that was used for a vulnerability item, which is one aspect of what I refer to as the provenance problem. Long gone are the days when a couple mailing lists or newsgroups were the raw source for 90% of widely available vulnerability information. Based on what I have seen, the provenance problem is only going to get worse.
  • – LACK OF COMPLETE CROSS-REFERENCING BETWEEN RVI SOURCES. No RVI has an exhaustive set of cross-references, so no RVI can be sure that it is 100% comprehensive, even with respect to its own editorial policy. Some RVIs compete with each other directly, so they don’t cross-reference each other. Some sources could theoretically support all public cross-references – most notably OSVDB and CVE – but they do not, due to resource limitations or other priorities.
  • – UNMEASURABLE RESEARCH COMMUNITY BIAS. Vulnerability researchers vary widely in skill sets, thoroughness, preference for certain vulnerability types or product classes, and so on. This collectively produces a bias that is not currently measurable against the number of latent vulnerabilities that actually exist. Example: web browser vulnerabilities were once thought to belong to Internet Explorer only, until people actually started researching other browsers; many elite researchers concentrate on a small number of operating systems or product classes; basic SQL injection and XSS are very easy to find manually; etc.
  • – UNMEASURABLE DISCLOSURE BIAS. Vendors and researchers vary widely in their disclosure models, which creates an unmeasurable bias. For example, one vendor might hire an independent auditor and patch all reported vulnerabilities without publicly announcing any of them, or a different vendor might publish advisories even for very low-risk issues. One researcher might disclose without coordinating with the vendor at all, whereas another researcher might never disclose an issue until a patch is provided, even if the vendor takes an inordinate amount of time to respond. Note that many large-scale comparisons, such as “Linux vs. Windows,” can not be verified due to unmeasurable bias, and/or editorial policy of the core RVI that was used to conduct the comparison.

EDITORIAL POLICY VARIATIONS

This is just a sample of variations in editorial policy. There are legitimate reasons for each variation, usually due to audience needs or availability of analytical resources.

COMPLETENESS (what is included):

  1. SEVERITY. Some RVIs do not include very low-risk items such as a bug that causes path disclosure in an error message in certain non-operational configurations. Secunia and SecurityFocus do not do this, although they might note this when other issues are identified. Others include low-risk issues, such as CVE, ISS X-Force, US-CERT Security Bulletins, and OSVDB.
  2. VERACITY. Some RVIs will only publish vulnerabilities when they are confident that the original, raw report is legitimate – or if they’ve verified it themselves. Others will publish reports when they are first detected from the raw sources. Still others will only publish reports when they are included in other RVIs, which makes them subject to the editorial policies of those RVIs unless care is taken. For example, US-CERT’s Vulnerability Notes have a high veracity requirement before they are published; OSVDB and CVE have a lower requirement for veracity, although they have correction mechanisms in place if veracity is questioned, and CVE has a two-stage approach (candidates and entries).
  3. PRODUCT SPACE. Some RVIs might omit certain products that have very limited distribution, are in the beta development stage, or are not applicable to the intended audience. For example, version 0.0.1 of a low-distribution package might be omitted, or if the RVI is intended for a business audience, video game vulnerabilities might be excluded. On the other hand, some “beta” products have extremely wide distribution.
  4. OTHER VARIATIONS. Other variations exist but have not been studied or categorized at this time. One example, though, is historical completeness. Most RVIs do not cover vulnerabilities before the RVI was first launched, whereas others – such as CVE and OSVDB – can include issues that are older than the RVI itself. As another example: a few years ago, Neohapsis made an editorial decision to omit most PHP application vulnerabilities from their summaries, if they were obscure products, or if the vulnerability was not exploitable in a typical operational configuration.

ABSTRACTION (how vulnerabilities are “counted”):

  5. VULNERABILITY TYPE. Some RVIs distinguish between types of vulnerabilities (e.g. buffer overflow, format string, symlink, XSS, SQL injection). CVE, OSVDB, ISS X-Force, and US-CERT Vulnerability Notes perform this distinction; Secunia, FrSIRT, and US-CERT Cyber Security Bulletins do not. Bugtraq IDs vary. As vulnerability classification becomes more detailed, there is more room for variation (e.g. integer overflows and off-by-ones might be separated from “classic” overflows).
  6. REPLICATION. Some RVIs will produce multiple records for the same core vulnerability, even based on the RVI’s own definition. Usually this is done when the same vulnerability affects multiple vendors, or if important information is released at a later date. Secunia and US-CERT Security Bulletins use replication; so might vendor advisories (for each supported distribution). OSVDB, Bugtraq ID, CVE, US-CERT Vulnerability Notes, and ISS X-Force do not – or, they use different replication than others. Replication’s impact on statistics is not well understood.
  7. OTHER VARIATIONS. Other abstraction variations exist but have not been studied or categorized at this time. As one example, if an SQL injection vulnerability affects multiple executables in the same product, OSVDB will create one record for each affected program, whereas CVE will combine them.

TIMELINESS:

  8. RVIs differ in how quickly they must release vulnerability information. While this used to vary significantly in the past, these days most public RVIs have very short timelines, from the hour of release to within a few days. Vulnerability information can be volatile in the early stages, so an RVI’s requirements for timeliness directly affects its veracity and completeness.

REALITY:

  9. All RVIs deal with limited resources or time, which significantly affects completeness, especially with respect to veracity, or timeliness (which is strongly associated with the ability to achieve completeness). Abstraction might also be affected, although usually to a lesser degree, except in the case of large-scale disclosures.

Conclusion

In my opinion:

You should not interpret any RVI’s statistics without considering its editorial policy. For example, the US-CERT Cyber Security Bulletin Summary for 2005 uses statistics that include replication. (As a side note, a casual glance at the bulletin’s contents makes it clear that it cannot be used to compare Windows to Linux as operating systems.)

In addition, you should not compare statistics from different RVIs until (a) the RVIs are clear about their editorial policy and (b) the differences in editorial policy can be normalized. Example: based on my PRELIMINARY investigations of a few hours’ work, OSVDB would have about 50% more records than CVE, even though it has the same underlying number of vulnerabilities and the same completeness policy for recent issues.

Third, for the sake of more knowledgeable analysis, RVIs should consider developing and publishing their own editorial policies. (Note that based on CVE’s experience, this can be difficult to do.) Consumers should be aware that some RVIs might not be open about their raw sources, veracity analysis, and/or completeness.

Finally: while RVIs are not yet ready to provide usable, conclusive statistics, there is a solid chance that they will be able to do so in the near future. Then, the only problem will be whether the statistics are properly interpreted. But that is beyond the scope of this letter.

Steve Christey
CVE Editor

P.S. This post was written for the purpose of timely technical exchange. Members of the press are politely requested to consult me before directly attributing quotes from this article, especially with respect to stated opinion.

US-CERT: A disgrace to vulnerability statistics

Several people have asked OSVDB for our thoughts on the recent US-CERT Cyber Security Bulletin 2005 Summary. Producing vulnerability statistics is trivial to do: all it takes is your favorite data set, a few queries, and off you go. Producing meaningful and useful vulnerability statistics is a real chore. I’ve long been interested in vulnerability statistics, especially how they are used and the damage they cause. Creating and maintaining a useful statistics project has been on the OSVDB to-do list for some time, and I personally have not followed up with some folks who had the same interest (Ejovi et al). Until I see such statistics “done right”, I will of course continue to voice my opinion about other efforts.

Some of the following comments are in the form of questions that have what I feel to be fairly obvious answers. At some point I will no doubt flesh out some of these ideas a bit better. Until then..

Information in the US-CERT Cyber Security Bulletin is a compilation and includes information published by outside sources, so the information should not be considered the result of US-CERT analysis.

Read: Our disclaimer so you can’t blame us for shoddy stats!

Software vulnerabilities are categorized in the appropriate section reflecting the operating system on which the vulnerability was reported; however, this does not mean that the vulnerability only affects the operating system reported since this information is obtained from open-source information.

What’s the point then? If you can’t categorize the software by specific operating system, or at least distinguish which issues affect multiple vendors (with better accuracy), is it really that helpful or useful?

Vulnerabilities
* Windows Operating System
* Unix/ Linux Operating System
* Multiple Operating System

A decade later and we’re still doing the “Windows vs Unix” thing? I guess after the last year of hype, Mac OS X still isn’t mature enough to get its own category, or is it now considered “Unix”? To answer that question: yes, this list lumps it in with “Unix”, and yes, it is curious they don’t break it out after SANS gave it its own entry on their list. Not even an “other” category to cover VMS, AS/400, routers or other devices?

Now, let’s look at the very first entry on the list out of curiosity: a Windows Operating System vulnerability titled “1Two Livre d’Or Input Validation Errors Permit Cross-Site Scripting”. This issue can be referenced by CVE 2005-1644, SecurityTracker 1013971, Bugtraq ID or OSVDB 16717. The CERT bulletin labels this issue “High Risk”, which is baffling to say the least. A cross-site scripting issue in the error output of a very low-distribution guestbook is high risk? What do they label a remote code execution vulnerability in Windows? Looking at the first MSIE vuln on the list, it too is rated “high risk”. Great, way to completely invalidate your entire risk rating.

OK, on to the fun part: the statistics! Unfortunately, the bulletin is very lacking in wording, explanation, details or additional disclaimers. We get two very brief paragraphs and the list of vulnerabilities that link to their summary entries. Very unfortunate. No, let me do one better: US-CERT, you are a disgrace to vulnerability databases. I can’t fathom why you even bothered to create this list, or why anyone in their right mind would actually use, reference or quote this trash. The only ‘statistics’ provided by this bulletin:

This bulletin provides a year-end summary of software vulnerabilities that were identified between January 2005 and December 2005. The information is presented only as a index with links to the US-CERT Cyber Security Bulletin the information was published in. There were 5198 reported vulnerabilities: 812 Windows operating system vulnerabilities; 2328 Unix/Linux operating vulnerabilities; and 2058 Multiple operating system vulnerabilities.

The simple truth is that 99.99% of the people reading this document will see the first two paragraphs and move on. They will not read every single entry on that page. That means they walk away with the idea that Unix/Linux is roughly 3x more vulnerable than Windows, when it simply is not the case. While scrolling down, I ran across a section of the Unix/Linux vulnerability list that jumped out at me:

# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Photoshop Document Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# ImageMagick Remote EXIF Parsing Buffer Overflow (Updated)
# Info-ZIP UnZip File Permission Modification
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP UnZip File Permission Modification (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)
# Info-ZIP Zip Remote Recursive Directory Compression Buffer Overflow (Updated)

So the list includes the same entries over and over because they are [updated]. The quoted part above represents four vulnerabilities but appears to be responsible for twenty entries instead. The PCRE overflow issue gets twelve entries on the CERT list. Why is the Unix/Linux section full of this type of screw-up, yet magically the Windows section contains very few? Could it be that the Unix/Linux vendors actually respond to every issue in a more timely fashion? Or is US-CERT intentionally trying to harm the reputation of Unix/Linux? What, don’t like those two conclusions? TOUGH, that is what happens when you release such shoddy ‘research’ or ‘statistics’ (both terms used very lightly).

Fortunately for me, someone at Slashdot noticed the same thing and did some calculations after removing the [updated] entries: Windows drops from 813 to 671, Unix/Linux drops from 2328 to 891, and Multiple drops from 2057 to 1512. That gives a total of 3074 vulnerabilities reported (by US-CERT standards), not 5198. With a margin for error that large, how can anyone take them seriously? More to the point, how can the mainstream media journalists over at the Washington Post blog about this, but not challenge the statistics?
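The recount is easy to reproduce once the bulletin’s entry titles are extracted: collapse the “(Updated)” re-issues so each underlying advisory counts once per section. A rough sketch, using a few illustrative titles rather than the actual bulletin data:

# Rough sketch of the recount: collapse "(Updated)" re-issues so each
# underlying advisory title counts once. Titles here are illustrative only,
# not the full US-CERT bulletin data.
raw_unix_linux = [
    "ImageMagick Photoshop Document Buffer Overflow (Updated)",
    "ImageMagick Photoshop Document Buffer Overflow (Updated)",
    "Info-ZIP UnZip File Permission Modification",
    "Info-ZIP UnZip File Permission Modification (Updated)",
]

def unique_advisories(titles):
    """Strip the '(Updated)' suffix and de-duplicate the remaining titles."""
    return {t.replace(" (Updated)", "").strip() for t in titles}

print(len(raw_unix_linux))                     # 4 raw list entries
print(len(unique_advisories(raw_unix_linux)))  # 2 underlying advisories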

A decade later, and the security community still lacks any meaningful statistics for vulnerabilities. Why can’t these outfits with commercial or federal funding actually do a good job and produce solid data that helps instead of confusing and misleading?!

“OSS means slower patches” – huh?!

http://australianit.news.com.au/articles/0,7204[..].html
OSS means slower patches
Chris Jenkins
SEPTEMBER 19, 2005

This was posted to Full-Disclosure, where I first replied, and ISN picked it up. Articles like this do nothing positive for our industry. Jenkins should not waste his time writing fluff pieces like this; he should do some digging, or at least question other sources. Of course, this is not the first time Symantec’s vuln stats have been questioned either. Since that post, no one at Symantec has given any insight into how they derive their statistics or what led to their conclusions.

I haven’t had time to read the full report mirrored here, but I have a feeling it will raise more questions than answers, like the previous one did.

Full text of my reply:

The obvious criticism:

“The Mozilla family of browsers had the highest number of vulnerabilities during the first six months of 2005, with 25,” the Symantec report says. “Eighteen of these, or 72 per cent, were rated as high severity. Microsoft Internet Explorer had 13 vendor confirmed vulnerabilities, of which eight, or 62 per cent, were considered high severity.”

Microsoft IE had at least 19 vulnerabilities from 2005-01-01 to 2005-06-30. Why does Symantec make the distinction of “X vulnerabilities in Mozilla” vs “MSIE had X *vendor confirmed vulnerabilities*”? This all too conveniently allows the silently patched vulnerabilities to slip through the cracks of the statistics. Does Mozilla’s honesty in acknowledging vulnerabilities come back to bite them in the ass?

Mozilla browsers had more than 25, but are 72 per cent really “high severity”? Download information spoofing x2, File extension spoofing, URL restriction bypass, DoS x2, redirect spoofing, XSS, link status bar spoofing, Dialog overlapping, URL Wrap Obfuscation.. are all of these really “high severity”? Is that theoretical, practical, or hype?

Now, the media/Symantec-driven propaganda (for lack of a better word?):

THE growing popularity of open-source browsers and software may be responsible for the increasing gap between the exposure of a vulnerability and the provision of patch to fix it, security software vendor Symantec has said.

Mr Sykes said the increasing popularity of open source software, such as the Mozilla Foundation’s Firefox browser, could be part of the reason for the increase in the gap between vulnerability and patch, with the open source development model itself part of the problem. “It is relying on the goodwill and best efforts of many people, and that doesn’t have the same commercial imperative,” he said. “I’m sure that is part of what is causing the blow-out in the patch window.”

The growth in Firefox vulnerability reports coincides with its increasing popularity with users. “It is very clear that Firefox is gaining acceptance and I would therefore expect to see it targeted,” Mr Sykes said. “People don’t attack browsers and systems per se, they attack the people that use them,” he said. “As soon as large banks started using Linux, Linux vulnerabilities started to get exploited.”

The premise of this article is that open source software is to blame for longer vendor response times. In layman’s terms: blame vendors like Mozilla for patching vulnerabilities more slowly? Err, compared to what? This shallow article doesn’t even qualify that statement! Slower than previous vulnerabilities? Slower than non open source? Given the article directly compares Mozilla browsers to Microsoft IE, it is trivial to assume the claim is made in relation to closed source vendors such as Microsoft. So then what.. 30 days “blown out” to 54 days is some huge time gap compared to Microsoft IE patches? What clueless *moron* really believes this crap they are shoveling? Is it Symantec, Chris Jenkins, or Australian IT?

Symantec won’t even quote previous statistics: “Symantec had not published previously statistics on the average time required to produce patches, but Mr Sykes said data showed the lag had previously been about 30 days.” And given that Jenkins/AusIT/Symantec won’t give us any statistics (even questionable ones) regarding MSIE patches, we’re supposed to take this at face value? It is *well documented* that Microsoft takes well over 30 days to patch vulnerabilities. It is also becoming crystal clear that Microsoft is hiding behind their “30 day patch cycle” to imply that is the longest they go before patching a vulnerability, when it simply is not the case. Taking a look at a *single vendor* [1] and their experience with reporting vulnerabilities to Microsoft, we see that they give MS a 60 day window to patch vulnerabilities, and Microsoft is consistently overdue. As of this mail, the worst is *ONLY* 114 days past due (we’ve seen it closer to 250 days before). So again, where are these implications coming from? Where does this statement/conclusion/observation that “OSS causes slower patches” come from, exactly?

[1] http://www.eeye.com/html/research/upcoming/index.html

600 Security Vulnerabilities in Q1 2005

http://www.betanews.com/article/600_Security_Vulnerabilities_in_Q1_2005/1115067858

600 Security Vulnerabilities in Q1 2005
By Nate Mook, BetaNews
May 2, 2005, 5:04 PM

According to a study published Monday by the SANS Institute, more than 600 new security vulnerabilities cropped up in the first three months of 2005. Although Microsoft leads the top 20 most critical security issues, hackers are turning their attention to third party software such as media players and databases.

SANS says the new list represents only security vulnerabilities found or patched in Q1 2005. Although SANS usually issues a yearly Top20 list, the group has moved to quarterly updates to aid organizations in recognizing potential security issues that could affect them.

Vulnerabilities found or patched? That is an odd way to track vulnerabilities in a given time frame. Aside from that, 600 in Q1 extrapolates to roughly 2400 for 2005, significantly fewer than in previous years. The real question.. where did SANS get their statistics from?

“Web Application Security Statistics” Project

http://www.webappsec.org/projects/statistics/

The WASC Statistics Project is the first attempt at an industry wide collection of application vulnerability statistics in order to identify the existence and proliferation of application security issues on enterprise websites. Anonymous data correlating vulnerability numbers and trends across organization size, industry vertical and geographic area are being collected and analyzed to identify the prevalence of threats facing today’s online businesses. Such empirical data aims to provide the first true statistics on application layer vulnerabilities.

Using the Web Security Threat Classification (http://www.webappsec.org/projects/threat/) as a baseline, data is currently being collected and contributed by more than a half dozen major security vendors with the list of contributors growing regularly.

We are actively seeking others to contribute data.

If you would like to be involved with the project, please contact Erik Caso (ecaso AT ntobjectives DOT com)
