Category Archives: Vulnerability Statistics

Android versus iOS Security – Not Again…

About two weeks ago, another round of vulnerability stats got passed around. Like others before, it claims to use CVE to compare Apple iOS versus Android in an attempt to establish which is more secure based on “vulnerability counts”. The statistics put forth are basically meaningless, because like most people using a VDB to generate stats, they don’t fully understand their data source. This is one type of bias that enters the picture when generating statistics, and one of many points Steve Christey (MITRE/CVE) and I will be making next week at BlackHat (Wednesday afternoon).

As with other vulnerability statistics, I will debunk the latest by showing why the conclusions are not based on a solid understanding of vulnerabilities, or vulnerability data sources. The post is published on The Verge, written by ‘Mechanicix’. The results match last year’s Symantec Internet Security Threat Report (as mentioned in the comments), as well as the results published this year by Sourcefire in their paper titled “25 Years of Security Vulns”. In all three cases, they use the same data set (CVE), and do the same rudimentary counting to reach their results.

The gist of the finding is that Apple iOS is considerably less secure than Android, as iOS had 238 reported vulnerabilities versus the 27 reported in Android, based on CVE and illustrated through CVEdetails.com.

Total iOS Vulnerabilities 2007-2013: 238
Total Android Vulnerabilities 2009-2013: 27

Keeping in mind those numbers, if you look at the CVE entries that are included, a number of problems are obvious:

  1. We see that the comparison timeframes differ by two years. There are at least 3 vulnerabilities in Android SDK reported before 2009, two of which have CVEs (CVE-2008-0985 and CVE-2008-0986).
  2. These totals are based on CVE identifiers, which do not necessarily reflect a 1-to-1 vulnerability mapping, as MITRE documents. You absolutely cannot count CVEs as a substitute for vulnerabilities; they are not the same.
  3. The vulnerability totals are incorrect due to using CVE, a data source that has serious gaps in coverage. For example, OSVDB has 71 documented vulnerabilities for Android, and we do not make any claims that our coverage is complete.
  4. The iOS results include vulnerabilities in WebKit, the framework iOS Safari uses. This is problematic for several reasons.
    1. First, that means Mechanicix is now comparing the Android OS to the iOS operating system and applications.
    2. Second, WebKit vulnerabilities account for 109 of the CVE results, almost half of the total reported.
    3. Third, if they did count WebKit intentionally, then the numbers are way off, as there were around 700 WebKit vulnerabilities reported in that time frame.
    4. Fourth, the default browser in Android uses WebKit, yet they weren’t counted against that platform.
  5. The results include 16 vulnerabilities in Safari itself, the default browser (or in WebKit and simply not diagnosed as such).
  6. At least 4 of the 238 are vulnerabilities in Google Chrome (as opposed to WebKit) with no mention of iOS in the CVE.
  7. A wide variety of iOS applications and components are included in the list, including Office Viewer, iMessage, Mail, the Broadcom BCM4325 and BCM4329 Wi-Fi chips, Calendar, FreeType, libxslt, and more.

When you factor in all of the above, Android likely comes out on top for the number of vulnerabilities when comparing the two operating systems. Once again, vulnerability statistics seem simple on the surface. When you consider the points above, and that there are likely more factors influencing vulnerability counts, we see that it is anything but simple.
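To make the counting problem concrete, here is a minimal sketch, using made-up records rather than real CVE data, of how a headline platform count changes once application and component entries (WebKit, Safari, Mail, and so on) are separated from operating system entries:

    # A minimal sketch (hypothetical records, not real CVE data) showing how a
    # headline "iOS vulnerability count" shifts once WebKit, Safari, and other
    # application entries are separated from operating system entries.
    records = [
        {"cve": "CVE-XXXX-0001", "product": "iOS", "component": "kernel"},
        {"cve": "CVE-XXXX-0002", "product": "iOS", "component": "WebKit"},
        {"cve": "CVE-XXXX-0003", "product": "iOS", "component": "Safari"},
        {"cve": "CVE-XXXX-0004", "product": "iOS", "component": "Mail"},
        {"cve": "CVE-XXXX-0005", "product": "iOS", "component": "kernel"},
    ]

    application_components = {"WebKit", "Safari", "Mail", "Office Viewer", "iMessage", "Calendar"}

    total = len(records)
    os_only = sum(1 for r in records if r["component"] not in application_components)
    print(f"raw CVE count: {total}, OS-only count: {os_only}")
    # Any OS-versus-OS comparison built on the raw count silently mixes operating
    # system flaws with application and component flaws.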

CVE Vulnerabilities: How Your Dataset Influences Statistics

Readers may recall that I blogged about a similar topic just over a month ago, in an article titled Advisories != Vulnerabilities, and How It Affects Statistics. In this installment, instead of “advisories”, we have “CVEs” and the inherent problems when using CVE identifiers in the place of “vulnerabilities”. Doing so is technically inaccurate, and it negatively influences statistics, ultimately leading to bad conclusions.

NSS Labs just released an extensive report titled “Vulnerability Threat Trends; A Decade in Review, Transition on the Way“, by Stefan Frei. While the report is interesting, and the fundamental methodology is sound, Frei uses a dataset that is not designed for true vulnerability statistics. Additionally, I believe that some factors Frei attributes to trends are incorrect. I offer this blog as open feedback to bring additional perspective to the realm of vulnerability stats, which is still a long way from maturity.

Vulnerabilities versus CVE

In the NSS Labs paper, they define a vulnerability as “a weakness in software that enables an attacker to compromise the integrity, availability, or confidentiality of the software or the data that it processes.” This is as good a definition as any. The key point here is a weakness, singular. What Frei fails to point out is that the CVE dictionary is not a vulnerability database in the same sense as many others. It is a specialty database designed primarily to assign a unique identifier to a vulnerability, or a group of vulnerabilities, to coordinate tracking and discussion. While CVE says “CVE Identifiers are unique, common identifiers for publicly known information security vulnerabilities”, it is more important to note the way CVE abstracts, which their documentation covers in great detail. From the CVE page on abstraction:

CVE Abstraction Content Decisions (CDs) provide guidelines about when to combine multiple reports, bugs, and/or attack vectors into a single CVE name (“MERGE”), and when to create separate CVE names (“SPLIT”).

This clearly denotes that a single CVE may represent multiple vulnerabilities. With that in mind, the statistics NSS Labs generated for this report are not accurate, and the numbers are not reproducible using any other vulnerability dataset (unless it too is based solely on CVE data and does not abstract differently, e.g. NVD). This distinction puts the report’s statements and conclusions in a different light:

As of January 2013 the NVD listed 53,489 vulnerabilities ..
In the last ten years on average 4,660 vulnerabilities were disclosed per year ..
.. with an all-time high of 6,462 vulnerabilities counted in 2006 ..

The abstraction distinction means that these numbers aren’t just technically inaccurate (i.e. terminology); they are factually inaccurate (i.e. the actual stats change when abstracting on a per-vulnerability basis). In each case where Frei uses the term “vulnerability”, he really means “CVE”. When you consider that a single CVE may cover as many as 66 or more distinct vulnerabilities, it really invalidates any statistic generated from this dataset in this manner. For example:

However, in 2012 alone the number of vulnerabilities increased again to a considerable 5,225 (80% of the all-time high), which is 12% above the ten-year average. This is the largest increase observed in the past six years and ends the trend of moderate declines since 2006.

Based on my explanation, what does 5,225 really mean? If we agree, for the sake of argument, that CVE averages two distinct vulnerabilities per identifier, that is now over 10,000 vulnerabilities. How does that in turn change any observations on trending?
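As a rough sketch of that thought experiment (the per-CVE factor is purely an assumption for the sake of argument; the real factor is unknown and almost certainly not constant):

    # Figures cited by the report, adjusted by a hypothetical abstraction factor.
    cve_counts = {"2006 (all-time high)": 6462, "ten-year average": 4660, "2012": 5225}
    VULNS_PER_CVE = 2.0  # assumption only; nothing in the CVE data fixes this value

    for label, count in cve_counts.items():
        print(f"{label}: {count} CVEs -> ~{int(count * VULNS_PER_CVE)} vulnerabilities")
    # If the factor varies across years and products, even the shape of the
    # trend can change, not just its magnitude.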

The report’s key findings offer 7 high-level conclusions based on the CVE data. To put all of the above in more perspective, I will examine a few of them and use an alternate dataset, OSVDB, that abstracts entries on a per-vulnerability basis. With those numbers, we can see how the findings stand. NSS Labs report text is quoted below.

The five year long trend in decreasing vulnerability disclosures ended abruptly in 2012 with a +12% increase

Based on OSVDB data, this is incorrect. Both 2009 (7,879) -> 2010 (8,835) and 2011 (7,565) -> 2012 (8,919) showed an upward trend.

More than 90 percent of the vulnerabilities disclosed are moderately or highly critical – and therefore relevant

If we assume “moderately” maps to the “Medium” criticality defined later in the report as CVSSv2 4.0 – 6.9, then OSVDB shows 57,373 entries scored CVSSv2 4.0 – 10.0 out of 82,123 total, roughly 70%, considerably lower than the 90% claimed. Note: we do not have complete CVSSv2 data for 100% of our entries, but we do have scores for all entries affiliated with the ones Frei examined, and more. If “moderately critical” and “highly critical” refer to different ranges, then they should be more clearly defined.

It is also important to note that this finding is a red herring, due to the way CVSS scoring works. A remote path disclosure in a web application scores a 5.0 base score (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N). This skews the scoring data considerably higher than many in the industry would agree with, as 5.0 is the same score you get for many XSS vulnerabilities that can have more serious impact.
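For readers who want to check that arithmetic, here is a minimal sketch of the public CVSSv2 base score equations applied to that vector; it is illustrative only, not any VDB’s scoring tool:

    # CVSSv2 base score for AV:N/AC:L/Au:N/C:P/I:N/A:N, per the public v2 equations.
    ACCESS_VECTOR  = 1.0    # AV:N (network)
    ACCESS_COMPLEX = 0.71   # AC:L (low)
    AUTHENTICATION = 0.704  # Au:N (none)
    CONF, INTEG, AVAIL = 0.275, 0.0, 0.0  # C:P / I:N / A:N

    impact = 10.41 * (1 - (1 - CONF) * (1 - INTEG) * (1 - AVAIL))
    exploitability = 20 * ACCESS_VECTOR * ACCESS_COMPLEX * AUTHENTICATION
    f_impact = 0 if impact == 0 else 1.176
    base_score = round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    print(base_score)  # 5.0 -- the same base score many XSS issues receive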

9 percent of vulnerabilities disclosed in 2012 are extremely critical (with CVSS score>9.9) paired with low attack/exploitation complexity

This is another red herring, because any CVSS 10.0 score means that “low complexity” was factored in. The wording in the report implies that a > 9.9 score could be paired with higher complexity, which isn’t possible. Further, CVSS is scored for the worst case scenario when details are not available (e.g. CVE-2012-5895). Given the number of “unspecified” issues, this may seriously skew the number of CVSSv2 10.0 scores.

Finally, there is one other element of the report, used in the overview and again later in the document, that is offered to explain a shift in disclosure trends. From the overview:

The parallel and massive drop of vulnerability disclosures by the two long established purchase programs iDefense VCP and TippingPoint ZDI indicate a transition in the way vulnerability and exploit information is handled in the industry.

I believe this is a case of “correlation does not mean causation”. While these are the two most recognized third-party bug bounty programs around, there are many variables at play here. In the bigger picture, shifts in these programs do not necessarily mean anything. Some of the factors that may have influenced disclosure numbers for those two programs include:

  • There are more bug bounty programs available. Some may offer better prices or incentives for disclosing through them, taking business from iDefense/ZDI.
  • Both companies have enjoyed their share of internal politics that affected at least one program. In 2012, several people involved in the ZDI program left the company to form their own startup. It has been theorized that since their departure, ZDI has not built the team back up, and that disclosures were affected as a result.
  • ZDI had a small bout of external politics, in which one of their most prolific bounty collectors (Luigi Auriemma) had a serious disagreement about ZDI’s handling of a vulnerability, as it relates to Portnoy and Exodus. Auriemma’s shift to disclosing via his own company would dramatically affect ZDI disclosure totals alone.
  • Both of these companies have a moving list of software that they offer a bounty on. As it changes, it may result in spikes of disclosures via their programs.

Regardless, iDefense and ZDI represent a small percentage of overall disclosures, so it is curious that Frei opted to focus on them so prominently as a reason for changing vulnerability trends without considering other influencing factors. Even during a good year, 2011 for example, iDefense (42) and ZDI (297) together accounted for 339 of 7,565 vulnerabilities, only about 4.5% of overall disclosures. There are many other trends that could just as easily explain relatively small shifts in disclosure totals. Making statements about trends in vulnerability disclosure, and how they affect statistics, isn’t something that should be done by casual observers; they simply miss a lot of the low-level detail gleaned from day-to-day vulnerability handling and cataloging.

To be clear, I am not against using CVE/NVD data to generate statistics. However, when doing so, it is important that the dataset be explained and qualified before going into analysis. The perception and definition of what “a vulnerability” is changes based on the person or VDB. In vulnerability statistics, not all vulnerabilities are created equal.

Advisories != Vulnerabilities, and How It Affects Statistics

I’ve written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work or discloses the potential bias, caveats, and other issues that a responsible security professional should include. A recent article has highlighted this problem again. To better show why vulnerability stats are messy but important, I will show how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid if properly disclaimed!

I noticed a Tweet from @SCMagazine about an article titled “The ghosts of Microsoft: Patch, present and future”. The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft’s vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as incorrect.

Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.

It is difficult to tell if Horan means “vulnerabilities” or “patches”, as he appears to use the same word to mean both, when they are quite different. The use of ‘83’ makes it very clear: Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction, as a single advisory can contain multiple vulnerabilities.

A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. A search for entries with references to “MS12” (used in their advisory designation) yields 160 results. This made it easy to determine that either the number Horan used was inaccurate, or his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012, as it demonstrates the issues. Hopefully this will give him a few numbers though!

If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVE versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.

Base Source            Critical      Important     Moderate      Low
2012 Advisories (83)   35 (42.2%)    46 (55.4%)    2 (2.4%)      0 (0.0%)
2012 CVE (160)         100 (62.5%)   18 (11.3%)    39 (24.4%)    3 (1.8%)
2012 Total (176)       101 (57.4%)   19 (10.8%)    41 (23.3%)    15 (8.5%)
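For the curious, here is a minimal sketch of the bucketing used for the table above, with stand-in scores rather than the actual OSVDB data:

    # Severity bucketing: 9-10 critical, 7-8.9 important, 4-6.9 moderate, 0-3.9 low.
    # Percentages are relative to whichever base (advisories, CVEs, total) is counted.
    from collections import Counter

    def severity(score: float) -> str:
        if score >= 9.0:
            return "critical"
        if score >= 7.0:
            return "important"
        if score >= 4.0:
            return "moderate"
        return "low"

    scores = [10.0, 9.3, 7.5, 6.8, 5.0, 4.3, 2.6]  # stand-in CVSSv2 values
    counts = Counter(severity(s) for s in scores)
    total = len(scores)
    for bucket in ("critical", "important", "moderate", "low"):
        n = counts.get(bucket, 0)
        print(f"{bucket:<10} {n:>3} ({n / total:.1%})")
    # The same vulnerabilities measured against a different base produce very
    # different-looking distributions, as the table above shows.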

It isn’t easy to see the big shifts in totals from a table of numbers alone, but it is important to establish the underlying numbers before displaying any chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:

[Three pie charts: severity distribution for the 2012 advisories, 2012 CVEs, and 2012 total vulnerabilities]

The visual jump in critical vulnerabilities from the first chart to the second two is distinct. In addition, notice the jump in low severity vulnerabilities from the first two charts to the third; they don’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented differently, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these vulnerabilities.

In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.

  • MS12-080 only contains 1 CVE if you look at the immediate identifiers, but the fine print contains 2 more CVEs related to Oracle Outside In, which is used by the products listed in the advisory.
  • MS12-058 has no immediate CVEs at all! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
  • Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
  • OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
  • Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores with serious errors is low, but they can still introduce a level of error into statistics.
  • One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed, and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date will give two different results.

These are just a few of the ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SCMagazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.
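As a final illustration of the disclosure-versus-solution-date caveat above, here is a minimal sketch with made-up dates showing how the same entries land in different reporting years depending on which date field you group by:

    # Hypothetical entries: the same vulnerabilities counted by disclosure date
    # versus by vendor solution date fall into different reporting periods.
    from collections import Counter

    entries = [
        {"id": "A", "disclosed": "2012-12-29", "solution": "2013-01-08"},  # late-December 0-day
        {"id": "B", "disclosed": "2012-06-12", "solution": "2012-06-12"},
        {"id": "C", "disclosed": "2012-11-02", "solution": "2013-02-12"},
    ]

    by_disclosure = Counter(e["disclosed"][:4] for e in entries)
    by_solution = Counter(e["solution"][:4] for e in entries)
    print("by disclosure year:", dict(by_disclosure))  # {'2012': 3}
    print("by solution year:  ", dict(by_solution))    # {'2012': 1, '2013': 2}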

February Update: OSVDB Winter 2010 Fundraising Goal

Back in early January, I issued a challenge to donate to OSF’s Winter Fundraiser for every new vulnerability pushed into OSVDB. Two of the three months have come and gone, and even though January was a little more productive than February in terms of new vulnerabilities, the moderation team is still making good progress:

2010-02-01: 13 vulns pushed, 133 vulns updated
2010-02-02: 31 vulns pushed, 79 vulns updated
2010-02-03: 25 vulns pushed, 145 vulns updated
2010-02-04: 21 vulns pushed, 31 vulns updated
2010-02-05: 25 vulns pushed, 153 vulns updated
2010-02-06: 8 vulns pushed, 76 vulns updated
2010-02-07: 3 vulns pushed, 278 vulns updated
2010-02-08: 27 vulns pushed, 64 vulns updated
2010-02-09: 47 vulns pushed, 159 vulns updated
2010-02-10: 37 vulns pushed, 160 vulns updated
2010-02-11: 16 vulns pushed, 59 vulns updated
2010-02-12: 27 vulns pushed, 128 vulns updated
2010-02-13: 10 vulns pushed, 51 vulns updated
2010-02-14: 4 vulns pushed, 112 vulns updated
2010-02-15: 12 vulns pushed, 81 vulns updated
2010-02-16: 23 vulns pushed, 181 vulns updated
2010-02-17: 28 vulns pushed, 235 vulns updated
2010-02-18: 25 vulns pushed, 119 vulns updated
2010-02-19: 43 vulns pushed, 261 vulns updated
2010-02-20: 11 vulns pushed, 126 vulns updated
2010-02-21: 2 vulns pushed, 34 vulns updated
2010-02-22: 3 vulns pushed, 64 vulns updated
2010-02-23: 41 vulns pushed, 221 vulns updated
2010-02-24: 37 vulns pushed, 112 vulns updated
2010-02-25: 15 vulns pushed, 138 vulns updated
2010-02-26: 17 vulns pushed, 146 vulns updated
2010-02-27: 9 vulns pushed, 17 vulns updated
2010-02-28: 8 vulns pushed, 24 vulns updated

With 568 new vulnerabilities pushed in February, we’re now up to 1,223 new entries for 2010. Personally, I’d like to see that number hit at least 2,000 by the end of March (3,000 may be out of reach, but never say never), but that will depend on the time and efforts of our moderation team and the number of vulnerabilities uncovered by our multiple reference sources. Please remember that I will donate $0.50 to OSF for every new vulnerability pushed into the database through April 1 (and no, there will not be an April Fools announcement saying that the challenge has been called off), and we’re hoping to obtain some matching offers to help offset the costs of maintaining the database. A special “thank you” goes to all parties who have offered to match the challenge so far, and we hope others who find OSVDB to be a valuable resource can jump in and help us out as well.

31 more days for the challenge… and away… we… go.

– Lyger

Adobe, Qualys, CVE, and Math

Elinor Mills wrote an article titled Firefox, Adobe top buggiest-software list. In it, she quotes Qualys as providing vulnerability statistics for Mozilla, Adobe and others. Qualys states:

The number of vulnerabilities in Adobe programs rose from 14 last year to 45 this year, while those in Microsoft software dropped from 44 to 41, according to Qualys. Internet Explorer, Windows Media Player and Microsoft Office together had 30 vulnerabilities.

This caught my attention immediately, as I know I have mangled more than 45 Adobe entries this year.

First, the “number of vulnerabilities” game will always have wiggle room, as has been discussed before. A big factor in statistic discrepancies when using public databases is the level of abstraction. CVE tends to bunch up vulnerabilities in a single CVE, whereas OSVDB tends to break them out. Over the past year, X-Force and BID have started abstracting more and more as well.

Either way, Qualys cited their source, NVD, which is entirely based on CVE. How they got 45 vulns in “Adobe programs” baffles me. My count says 97 Adobe vulns; 95 of them have CVEs assigned (covered by a total of 93 CVEs). OSVDB abstracted the entries like CVE did for the most part, but split out CVE-2009-1872 into distinct XSS vulnerabilities. OSVDB also has two entries that do not have a CVE, 55820 and 56281.

Where did Qualys get 45 if they are using the same CVE data set OSVDB does? This discrepancy has nothing to do with abstraction, so something else appears to be going on. Doing a few more searches, I believe I figured it out. Searching OSVDB for “Adobe Reader” in 2009 yields 44 entries, one off from their cited 45. That could be easily explained, as OSVDB also has 9 “Adobe Multiple Products” entries that could cover Reader as well. This may in turn be a case where Qualys or Mills did not distinguish between “Adobe software” (cumulative, all software they release) and “Adobe Reader” or some other specific product.

Qualys tallied 102 vulnerabilities that were found in Firefox this year, up from 90 last year.

In what is certainly a discrepancy due to abstraction, OSVDB has 74 vulnerabilities specific to Mozilla Firefox (two without CVE), 11 for “Mozilla Multiple Browsers” (Firefox, SeaMonkey, etc.) and 81 for “Mozilla Multiple Products” (Firefox, Thunderbird, etc.). While my numbers are somewhat anecdotal, because I cannot remember every single entry, I can say that most of the ‘multiple’ vulnerabilities include Firefox. That means OSVDB tracked as many as, but possibly fewer than, 166 vulnerabilities in Firefox.
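As a rough sketch of the bound being described, using only the counts from the paragraph above:

    # Firefox-specific entries plus the "multiple" buckets give a range, not a
    # single number: the lower bound assumes no "multiple" entry affects Firefox,
    # the upper bound assumes they all do.
    firefox_only = 74
    multiple_browsers = 11
    multiple_products = 81

    lower_bound = firefox_only
    upper_bound = firefox_only + multiple_browsers + multiple_products
    print(f"Firefox vulnerabilities tracked: between {lower_bound} and {upper_bound}")  # 74 to 166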

Microsoft software dropped from 44 to 41, according to Qualys. Internet Explorer, Windows Media Player and Microsoft Office together had 30 vulnerabilities.

According to my searches on OSVDB, we get the following numbers:

  • 234 vulnerabilities in Microsoft, only 4 without CVE
  • 50 vulnerabilities in MSIE, all with CVE
  • 4 vulnerabilities in Windows Media Player, 1 without CVE
  • 52 vulnerabilities in Office, all with CVE (based on “Office” being Excel, PowerPoint, Word and Outlook)
  • 92 vulnerabilities in Windows, only 2 without CVE

When dealing with vulnerability numbers and statistics, like anything else, it’s all about qualifying your numbers. Saying “Adobe software” is different from saying “Adobe Acrobat” or “Adobe Reader”, as the installation bases are drastically different. Given the different levels of abstraction in VDBs, it is equally important to qualify what “a vulnerability” (singular) is. Where CVE/NVD will group several vulnerabilities under one identifier, other databases may abstract further and assign a unique identifier to each distinct vulnerability.

Qualys, since you provided the stats to CNet, could you clarify?

Search Filters & Custom Exports

Last week, OSVDB enhanced the search results capability by adding a considerable amount of filtering capability, a simple “results by year” graph and export capability. Rather than draft a huge walkthrough, open a search in a new tab and do a title search for “microsoft windows”.

As always, the results will display the OSVDB ID, disclosure date and OSVDB title. On the left, however, are several new options. First, a summary graph will be displayed showing the number of vulnerabilities by year, based on your search results. Next, you can toggle the displayed fields to add CVE, CVSSv2 score and/or the percent complete. The percent complete refers to the status of the OSVDB entry, and how many of its fields have been completed. Below that are one-click filters that let you further refine your search results by the following criteria:

  • Reference Type – only show results that contain a given type of reference
  • Category – show results based on the vulnerability category
  • Disclosure Year – refine results by limiting to a specific year
  • CVSS Score – only show entries that are scored in a given range
  • Percent Complete – filter results based on how complete the OSVDB entry is

Once you have your ideal search results, you can then export them to XML, custom RSS feed or CSV. The export will only work for the first 100 results. If you need a bigger data set to work with, we encourage you to download the database instead.
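As a rough idea of what you can do with such an export, here is a minimal sketch that reads a downloaded CSV and tallies results by disclosure year; the file name and column headers are assumptions for illustration, not the documented export format:

    # Summarize a hypothetical OSVDB CSV export by disclosure year. The column
    # header "Disclosure Date" is assumed; check the real export before relying on it.
    import csv
    from collections import Counter

    by_year = Counter()
    with open("osvdb_export.csv", newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            date = row.get("Disclosure Date", "")
            if len(date) >= 4:
                by_year[date[:4]] += 1

    for year in sorted(by_year):
        print(year, by_year[year])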

With the new search capability, you should be able to perform very detailed searches, easily manipulate the results and even import them into another application or presentation. If you have other ideas of how a VDB search can be refined to provide more flexibility and power, contact us!

Who discovered the most vulns?

This is a question OSVDB moderators, CVE staff and countless other VDB maintainers have asked. Today, Gunter Ollmann with IBM X-Force released his research trying to answer this question. Before you read on, I think this research is excellent. The relatively few criticisms I bring up are not the fault of Ollmann’s research and methodology, but the fault of his VDB of choice (and *every* other VDB) not having a complete data set.

Skimming his list, my first thought was that he was missing someone. Doing a quick search of OSVDB, I see that Lostmon Lords (aka ‘lostmon’) has close to 350 vulnerabilities published. How could the top ten list miss someone like this when his #10 only had 147? Read down to Ollmann’s caveat and there is a valid point, but the wording is sketchy. The data he is using relies on this information being public. As the caveat says though, “because they were disclosed on non-public lists” implies that the only sources he or X-Force are using are mail lists such as Bugtraq and Full-Disclosure. Back in the day, that was a pretty reliable source for a very high percentage of vulnerability information. In recent years though, a VDB must look at other sources of information to get a better picture. Web sites such as milw0rm get a steady stream of vulnerability information that is frequently not cross-posted to mail lists. In addition, many researchers (including lostmon) mail their discoveries directly to the VDBs and bypass the public mail lists. If researchers mail a few VDBs and not the rest, it creates a situation where the VDBs must start watching each other. This in turn leads to the “VDB inbreeding” that Jake and I mentioned at CanSecWest 2005, which is a necessary evil if you want more data on vulnerabilities.

In May of 2008, OSVDB did the same research Ollmann did, and we came up with different results. This was based on the data we had available, which is still admittedly very incomplete (we always need data manglers). So who is right? Neither of us. Well, perhaps he is, perhaps we are, but unfortunately we’re both working with incomplete databases. As a matter of opinion, I believe OSVDB has better coverage of vulnerabilities, while X-Force clearly has better consistency in their data and a fraction of the gaps we do.

Last, this data is interesting as is, but it would be really fascinating if it were mixed with ‘researcher confidence’ (a big thing for Steve Christey/CVE and myself), in which we track a researcher’s track record for accuracy in disclosures. Someone that disclosed 500 vulnerabilities last year with a 10% error rate should not rank above someone who found 475 with a 0% error rate. In addition, as Ollmann’s caveat says, these are pure numbers and do not distinguish hundreds of XSS findings from remote code execution in an operating system’s default-install services. Having a weight system that can be applied to a vulnerability (e.g., XSS = 3, SQLi = 7, remote code exec = 9), factored into each researcher’s score, could move beyond “who discovered the most” and perhaps start to answer “who found the most respectable vulnerabilities”.
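As a minimal sketch of what such a scheme might look like (the class weights and the error-rate discount are arbitrary choices for illustration, not an established formula):

    # Weight disclosures by vulnerability class and discount by the researcher's
    # historical error rate. All numbers here are illustrative only.
    CLASS_WEIGHTS = {"xss": 3, "sqli": 7, "rce": 9}

    def researcher_score(disclosures, error_rate):
        """Sum class weights over disclosures, discounted by the error rate."""
        raw = sum(CLASS_WEIGHTS.get(kind, 1) for kind in disclosures)
        return raw * (1.0 - error_rate)

    prolific = ["xss"] * 450 + ["sqli"] * 50   # 500 findings, 10% later disproven
    careful  = ["rce"] * 75 + ["sqli"] * 400   # 475 findings, 0% error rate

    print(researcher_score(prolific, 0.10))  # volume alone no longer wins automatically
    print(researcher_score(careful, 0.00))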

Top vulnerability researcher?

Who is the top vulnerability researcher? Who has discovered the most computer security vulnerabilities? Which country has the most researchers and publishes the most vulnerabilities? Who has discovered the most critical vulnerabilities?

From looking at OSVDB, here are the top 12 researchers in terms of volume:

Rank / Creditee / # Vulns

  1. r0t 770
  2. Lostmon Lords 241
  3. rgod 239
  4. Aliaksandr Hartsuyeu 201
  5. Kacper 199
  6. James Bercegay 180
  7. luny 142
  8. Diabolic Crab 139
  9. Janek Vind “waraxe” 136
  10. JeiAr 117
  11. Dedi Dwianto 86
  12. M.Hasran Addahroni 79

Take a look at the other OSVDB Browse categories and note that you can even click on a Creditee’s name to see all of the vulnerabilities they have discovered: http://osvdb.org/browse

Of course, our statistics are based on the content in OSVDB, and we need your help to provide better statistics. If you are a researcher, it would help if you could take the time to create an OSVDB account and update the vulnerabilities that you have discovered!

You can signup for an OSVDB account here: https://osvdb.org/account/signup

Here is a quick overview:

  • Search for your vulnerabilities at http://osvdb.org/search/advsearch
  • Click on your vuln, then click “Edit Vulnerability”
  • Click the Credits menu item; if credit is missing, click “Toggle Add Author…”
  • Your name may already be in the database; as you type, it will search OSVDB to see if your information is there. If so, select it and click “Add Author”.
  • Once you add the creditee information, you can update your details, or if your name is not there, you can add it as a new creditee.

Rinse and repeat!

Vulnerability Counts and OSVDB Advocacy

CVE just announced reaching 30,000 identifiers, which is a pretty scary thing. CVE staff have a good eye for catching vulnerabilities from sources away from the mainstream (e.g. Bugtraq), and they have the advantage of being a very widely accepted standard for tracking vulnerabilities. As companies and researchers request CVE numbers for disclosures, they get a lot of the information handed to them on a silver platter. Of course, sometimes that platter is full of mud and confusion, as vendors don’t always provide clear details to help CVE accurately track and distinguish between multiple vulnerabilities. I’ve also pointed out many times in the past that CVE is a very unique VDB that provides identifiers for vulnerability tracking. They do not provide many of the fields associated with other VDBs (solution, creditee, etc.). As such, they may have a single entry that covers multiple distinct vulnerabilities if they are the same class (XSS, SQLi, RFI), or if there is a lack of details but they know the issues affect the same product (Oracle). So when we see 30,000 identifiers, we have to realize that the real count of vulnerabilities is significantly higher.

CVE is run by The MITRE Corporation, sponsored / funded by the NCSD (US-CERT) of DHS under government contract. That means our tax dollars fund this database, so it should be of particular interest to U.S. taxpayers in the security industry. I know from past discussions with CVE staff and other industry veterans that on any given day, they are likely to have more work than available staff can handle. That means the rate at which vulnerabilities get published is greater than the resources CVE can commit to tracking them. In short, the 30,000 identifiers you see represent only a percentage of the vulnerabilities actually disclosed. We could probably debate what percentage that represents all day long, and I don’t think that is really the point here other than “we know it isn’t all of them”.

Every VDB suffers from the same thing. “Commercial” VDBs like X-Force, BID and Secunia have full time staff that maintain their databases, like CVE does. Despite having all of these teams (some of them consisting of 10 or more people) maintaining VDBs, we still see countless vulnerabilities that are ‘missed’ by all of them. This is not a slight against them in any way; it is a simple matter of resources available and the amount of information out there. Even with a large team sorting disclosed vulnerabilities, some teams spend time validating the findings before adding them to the database (Secunia), which is an incredible benefit for their customers. There is also a long-standing parasitic nature to VDBs, with each of them watching the others as best they can, to help ensure they are tracking all the vulnerabilities they can. For example, OSVDB keeps a close eye on Secunia and CVE specifically, and as time permits we look to X-Force, BID, SecurityTracker and others. Each VDB tends to have some researchers that exclusively disclose vulnerabilities directly to the VDB of their choice. So each one I mention above will get word of vulnerabilities that the rest really have no way of knowing about, short of watching each other like this. This VDB inbreeding (I will explain the choice of word some other time) is an accepted practice and I have touched on this in the past (CanSecWest 2005).

Due to the inbreeding and OSVDB’s ability to watch other resources, it occasionally frees up our moderators to go looking for more vulnerability information that wasn’t published in the mainstream. This usually involves grueling crawls through vendor knowledge-bases, mind-numbing changelogs, searching CVS type repositories and more. That leads to the point of this lengthy post. In doing this research, we begin to see how many more vulnerabilities are out there in the software we use, that escapes the VDBs most of the time. Only now, after four years and getting an incredible developer to make many aspects of the OSVDB wish-list a reality, do we finally begin to see all of this. As I have whined about for those four years, VDBs need to evolve and move beyond this purely “mainstream reactionary” model. Meaning, we have to stop watching the half dozen usual spots for new vulnerability information, creating our entries, rinsing and repeating. There is a lot more information out there just waiting to be read and added.
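As a minimal sketch of the changelog-mining idea (the file name and keyword list are illustrative; real triage still means a human reading every match in context):

    # Scan a vendor changelog for security-suggestive wording to find candidate
    # entries that never hit the mainstream disclosure channels.
    import re

    KEYWORDS = re.compile(
        r"security|overflow|crash|denial of service|injection|traversal|xss|csrf",
        re.IGNORECASE,
    )

    with open("ChangeLog", encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if KEYWORDS.search(line):
                print(f"{lineno}: {line.strip()}")  # candidates for manual review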

In the past few weeks, largely due to the ability to free up time due to the VDB inbreeding mentioned above, we’ve been able to dig into a few products more thoroughly. These examples are not meant to pick on any product / VDB or imply anything other than what is said above. In fact, this type of research is only possible because the other VDBs are doing a good job tracking the mainstream sources, and because some vendors publish full changelogs and don’t try to hide security related fixes. Kudos to all of them.

Example: Search your favorite VDB for “inspircd”, a popular multi-platform IRC daemon. Compare the results from BID, Secunia, X-Force, SecurityTracker, and CVE (see http://osvdb.org/ref/blog/inspircd-cve.png) to OSVDB’s results after digging into the project’s changelogs. Do these same searches for “xfce” (10 OSVDB, 5 max elsewhere), “safesquid” (6 OSVDB, 1 max elsewhere), “beehive forum” (27 OSVDB, 8 max elsewhere) and “jetty” (25 OSVDB, 12 max elsewhere). Let me emphasize, I did not hand pick these examples to put down any VDB; these are simply some of the products we have investigated in the last few weeks.

The real point here is that no matter what vulnerability disclosure statistic you read, regardless of which VDB it uses (including OSVDB), consider that the real number of vulnerabilities disclosed is likely much higher than any of us know or have documented. As always, if you see vulnerabilities in a vendor KB or changelog, and can’t find it in your favorite VDB, let them know. We all maintain e-mail addresses for submissions and we all strive to be as complete as possible.

2007 Top Vulnerable Vendors?

http://www.eweek.com/article2/0,1895,2184206,00.asp
http://www.eweek.com/c/a/Security/Report-MS-Apple-Oracle-Are-Top-Vulnerable-Vendors/

New IBM research shows that five vendors are responsible for 12.6 percent of all disclosed vulnerabilities. Not surprising: In the first half of 2007, Microsoft was the top vendor when it came to publicly disclosed vulnerabilities. Likely surprising to some: Apple got second place. IBM Internet Security Systems’ X-Force R&D team released its 2007 report on cyber attacks on Sept. 17, revealing that the top five vulnerable vendors accounted for 12.6 percent of all disclosed vulnerabilities in the first half of the year, or 411 of 3,272 vulnerabilities disclosed. Here’s the order in which the top 10 vendors stacked up, by percentage of vulnerabilities publicly disclosed in the first half of the year:

  • Microsoft, 4.2 percent
  • Apple, 3 percent
  • Oracle, 2 percent
  • Cisco Systems, 1.9 percent
  • Sun Microsystems, 1.5 percent
  • IBM, 1.3 percent
  • Mozilla, 1.3 percent
  • XOOPS, 1.2 percent
  • BEA, 1.1 percent
  • Linux kernel, 0.9 percent

This article was posted to ISN the other day and struck a nerve. How many times are we going to see vulnerability statistics presented without qualification? Rather than really get into the details, I replied with a single, simple example of why such statistics are misleading at best and incorrect at worst. The bulk of my reply follows. My hopes of Lisa or IBM/ISS clarifying this are already dwindling.

One other factor, that Lisa Vaas apparently didn’t ask about, is how ISS X-Force catalogs vulnerabilities, and if their method and standards could impact these numbers at all. Take for example, two X-Force vulnerability database entries:

  • Oracle Critical Patch Update – July 2007 (http://xforce.iss.net/xforce/xfdb/35490) – 18 CVE, 30+ vulnerabilities per Oracle
  • Oracle Critical Patch Update – January 2007 (http://xforce.iss.net/xforce/xfdb/31541) – 30 CVE, 50+ vulnerabilities per Oracle

So when comparing numbers, you have 2 X-Force entries that equate to 48 CVE entries that equate to *more than 80* unique and distinct vulnerabilities according to Oracle. I’m not a math or stat guy, but I have a feeling that this could seriously skew the statistics above, especially when you consider that Microsoft and Apple both have a more distinct breakdown and separation in the X-Force database. Anyone from IBM/ISS care to clarify? Lisa, did you have more extensive notes on this aspect that didn’t make it into the article perhaps?
