
Reviewing the Secunia 2015 Vulnerability Review (A Redux)

It’s that time of year again! Vulnerability databases whip up reports touting statistics and observations based on their last year of collecting data. It’s understandable, especially for a commercial database, to want to show why your data source is the best. In the past, we haven’t had a strong desire to produce a flashy PDF with our take on vulnerability disclosure statistics. Instead, we’ve focused more on educating others on why their statistics are often so horribly wrong or misleading. It should come as no surprise, then, that we find ourselves forced to point out the error of Secunia’s ways. We did the same thing last year in a long blog, giving more perspective and our own numbers for balance.

After reading this year’s report from Secunia, we could basically use last year’s blog as a template and just plug in new numbers. Instead, it is important to point out that after last year’s review, Secunia opted not to revise their methodology. Worse, they did not take any pointers from a presentation on vulnerability statistics that Steve Christey (CVE / MITRE) and I gave at BlackHat before last year’s report, pointing out the most common flaws in such statistics and where the bias comes from. It is easy to chalk up last year’s report to being naive and not fully appreciating or understanding such statistics. This year though, they have no excuse. The numbers they report seem purposefully misleading and are a disservice to their customers and the community.

The headline of their press release and crux of their report is that they identified 15,435 vulnerabilities in 2014. This is entirely inaccurate and completely misleading. It is fundamentally due to the same very flawed methodology they used last year, in which they count a single vulnerability as many as 210 times (e.g. this year CVE-2014-0160, aka Heartbleed). Then factor in all of the other high-profile “logo” vulnerabilities (e.g. FREAK, ShellShock), especially in OpenSSL, and the same handful of vulnerabilities gets counted hundreds or even a thousand times. To be abundantly clear: a vulnerability in a third-party library such as OpenSSL is one vulnerability. It doesn’t matter how many other products use and integrate that code; the fundamental flaw is in the library. Counting each product that implements OpenSSL as a distinct vulnerability, rather than as a distinct occurrence of a vulnerability, is wrong. Worse, even if you do accept their flawed methodology, it only highlights how poor their statistics are, since OpenSSL is heavily used among thousands of applications that Secunia doesn’t cover, and vendors like IBM issue numerous advisories that they miss. No matter how you cut it, their numbers are invalid.

Poking at our database, and understanding that we don’t quite have a 100% mapping to Secunia but feel it is pretty close, we see that around 5,280 of our entries have a Secunia reference and 4,530 of those also have a CVE reference (compared to more than 8,500 entries we have with a CVE reference). Even allowing for a good margin of error, it still appears that Secunia does not cover almost 3,000 vulnerabilities disclosed in 2014 that are covered by CVE entries. Those familiar with this blog know that we are more critical of CVE than of any other vulnerability database. If a commercial VDB with hundreds of high-dollar clients can’t even keep a 100% mapping to an over-funded, U.S. government run VDB that is considered the “lowest common denominator” by many, that speaks volumes. I could end the blog on that note alone, because anything less than mediocrity is a disgrace. How do you trust, and pay for, a vulnerability intelligence service missing thousands of publicly disclosed vulnerabilities that are served up on a platter, available to anyone for free use and inclusion in their own database? You don’t. After all, remember this quote from the Secunia report last year?

CVE has become a de facto industry standard used to uniquely identify vulnerabilities, which have achieved wide acceptance in the security industry.
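
Setting the quote aside, the coverage gap above is simple set arithmetic. Here is a minimal sketch in Python, using synthetic entry IDs chosen only to reproduce our approximate 2014 totals; the real figures come straight from the two databases:

    # Synthetic stand-ins for 2014 entries carrying each reference type.
    with_secunia = {f"OSVDB-{i}" for i in range(5280)}    # ~5,280 entries with a Secunia ref
    with_cve = {f"OSVDB-{i}" for i in range(750, 9250)}   # ~8,500 entries with a CVE ref

    both = with_secunia & with_cve        # entries mapped to both sources
    cve_only = with_cve - with_secunia    # CVE-covered vulnerabilities with no Secunia ref

    print(len(both), len(cve_only))       # 4530 3970: thousands missed, even with a margin of error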

All of the above is not only important, it is absolutely critical to appropriately frame their statistics. Not only does Secunia avoid using the minimum industry standard for vulnerability aggregation, they opt to use their own methodology, which they now know beyond doubt seriously inflates their ‘vulnerability’ count. Last year, it inflated the count to almost three times reality. This year, it was again about three times reality, as they only covered approximately 5,500 unique vulnerabilities. That is almost 3,000 fewer than CVE and 8,000 fewer than OSVDB. Excusing the gaps in Secunia’s coverage on the grounds that they verify vulnerabilities and “do not cover vulnerabilities in beta software” is also invalid, as a majority of reported vulnerabilities are in stable software, and regularly come from trusted researchers or the vendors themselves. Regardless, it should be clear to anyone passingly familiar with the aggregation of vulnerabilities that Secunia is playing well outside the bounds of reality. Of course, every provider of vulnerability intelligence has some motive, or at least desire, to see the numbers go up year to year. It further justifies their customers spending money on the solution.

“Every year, we see an increase in the number of vulnerabilities discovered, emphasizing the need for organizations to stay on top of their environment. IT teams need to have complete visibility of the applications that are in use, and they need firm policies and procedures in place, in order to deal with the vulnerabilities as they are disclosed.” — Kasper Lindgaard, Director of Research and Security at Secunia.

While Mr. Lindgaard’s quote is flashy and compelling, it is also completely wrong, even according to Secunia’s own historical reports. If Mr. Lindgaard had read their reports from 2010 and 2011, he might have avoided this blunder.

This report presents global vulnerability data from the last five years and identifies trends found in 2010. The total number of vulnerabilities disclosed in 2010 shows a slight decrease of 3% compared to 2009. – Yearly Report 2010 (Secunia)

Analysing the long-term and short-term trends of all products from all vendors in the Secunia database over the last six years reveals that the total number of vulnerabilities decreased slightly in 2011 compared to 2010. – Vulnerabilities are Resilient (2011, Secunia Report)

In 2007, OSVDB cataloged 9,574 vulnerabilities, compared to 11,050 in 2006. In 2009, they cataloged 8,175 vulnerabilities, compared to 9,807 in 2008. In 2011, they cataloged 7,913 vulnerabilities, compared to 9,161 in 2010. That is three distinct years in which the total did not increase at all. Taking Mr. Lindgaard’s quote into account, and comparing it to our own data that is based on a more standard method of abstraction, we also show that it has not been a steady rise every year:

[Chart: yearly vulnerability disclosure totals]

With highly questionable statistics based on a flawed methodology, Secunia also lucked out this year, as vendors are more prudent about reporting vulnerabilities in the third-party libraries they use (e.g. OpenSSL, PHP). This plays out nicely for a company that incorrectly counts each vendor’s use of vulnerable software as a distinct issue.

Getting down to the statistics they share, we’ll give a few additional perspectives. As with all vulnerability statistics, they should be properly explained and disclaimed, or they are essentially meaningless.

In 2014, a total of 15,435 vulnerabilities were discovered in 3,870 products from 500 vendors.

As stated previously, this is absolutely false. Rather than track unique vulnerabilities, Secunia’s model and methodology is to focus more on products and operating systems. For example, they will issue one advisory for a vulnerability in OpenSSL, then dozens of additional advisories for the exact same vulnerability. The only difference is that each subsequent advisory covers a product or operating system that uses it. Above I mentioned the redundant ‘Heartbleed’ entries, but to further illustrate the point and demonstrate how this practice can lead to wildly inaccurate statistics, let’s examine the OpenSSL ‘FREAK’ vulnerability as well (29 Secunia advisories). The flaw is in the OpenSSL code, not in the implementation from each vendor, so it is still that same one vulnerability. The better their coverage gets, the worse this inflation becomes. Looking at IBM, who uses OpenSSL in a wide variety of their products, we know that at least 56 are impacted by the vulnerability, which may mean that many more Secunia advisories cover them. If we had a perfect glimpse into which products use OpenSSL, the count would likely be thousands. Trying to suggest that is ‘thousands of unique vulnerabilities’ is simply ignorant.
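
To see how the two counting models diverge, here is a minimal sketch with hypothetical advisory records standing in for Secunia’s data:

    # Hypothetical advisories: one for the OpenSSL flaw itself, the rest for
    # products that merely bundle the vulnerable library.
    advisories = [
        {"advisory": "SA-openssl",  "cves": {"CVE-2014-0160"}},
        {"advisory": "SA-vendor-a", "cves": {"CVE-2014-0160"}},  # product shipping OpenSSL
        {"advisory": "SA-vendor-b", "cves": {"CVE-2014-0160"}},  # another product shipping it
    ]

    # Per-advisory counting (Secunia-style): the same flaw is tallied again
    # for every product that bundles the library.
    per_advisory = sum(len(a["cves"]) for a in advisories)       # 3

    # Unique counting: one flaw in the library, however many products ship it.
    unique = len(set().union(*(a["cves"] for a in advisories)))  # 1

    print(per_advisory, unique)

Scale the same effect up to 210 advisories for Heartbleed alone, and it is easy to see how the 15,435 headline number emerges.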

83% of vulnerabilities in all products had patches available on the day of disclosure in 2014.

We assume that by ‘patches’, they also mean any viable vendor-provided solution (e.g. an upgrade). Based on our aggregation, it is closer to 64% that have a solution available, and that is not limited to “the day of disclosure”; it includes vendors who have patched at any point since the issue was disclosed. This big gap comes from the difference in the number of vulnerabilities aggregated by each database since the start of 2014.

25 zero-day vulnerabilities were discovered in total in 2014, compared to 14 the year before.

Assuming we use the same definition of “zero-day” as Secunia, this is considerably under what we tracked. We flag such issues “discovered in the wild”, meaning the vulnerability was being actively exploited by attackers when it was discovered and/or disclosed. In 2014, we show 43 entries with this designation, compared to 73 in 2013. That is a sharp decrease, not an increase. Part of that large swing for us comes from a rash of attacker-trojaned software distributions that were used to target mobile users in 2013. Since trojaned software from a legitimate vendor meets the criteria to be called a vulnerability (it can be abused by a bad actor to cross privilege boundaries), we include such issues, where Secunia and other databases generally ignore them.

20 of the 25 zero-day vulnerabilities were discovered in the 25 most popular products – 7 of these in operating systems.

Since we got into the “popular products” bit in last year’s rebuttal to their statistics, we’ll skip that this year. However, in 2014 we show that 16 of those “zero days” were in Microsoft products (two in Windows, six in the bundled MSIE), and four were in Adobe products.

In 2014, 1,035 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari. That is a 42% increase from 2013.

Without searching our data, it is likely very safe to say these numbers are way off, and that Secunia’s coverage of browser security is woefully behind. In addition to not aggregating as much data, they likely don’t properly take into account vulnerabilities in WebKit, which potentially affects Chrome, Safari, and Opera. Are WebKit vulnerabilities attributed to one, the other, or all three? With that in mind, let’s compare numbers. We see ~200 for Mozilla, ~250 for Microsoft IE, and not even 10 between Safari and Opera. In reality, Opera is not forthcoming with information about the vulnerabilities they patch, so their actual vulnerability count is likely much higher. That leaves us with Chrome, which had over 300 vulnerabilities in its own code. As mentioned, factor in another 100+ vulnerabilities in WebKit that may impact up to three browsers. That comes to a total of 889, far less than their claimed 1,035. It makes us curious whether they are using the same flawed methodology with browser vulnerabilities as they do with libraries (i.e. counting WebKit / Blink issues once for each browser using them). Some vulnerabilities in these browsers are also due to flaws in other third-party libraries and not in the browsers’ own code (e.g. ICU4C and OpenJPEG). Finally, Google Chrome bundles Adobe Flash Player, so is Secunia adding the number of vulnerabilities addressed in Flash Player to their Google Chrome count as well, further bloating it? So many unknowns, and another reminder of why vulnerability statistics are often misleading or serve no value. Without an explanation of the methodology, we’re left with guesses and assumptions that strongly suggest this statistic can’t be trusted.

Even more offensive is that the “vulnerability intelligence experts at Secunia Research” don’t demonstrate any expertise in informing readers why Google Chrome numbers are so high. Instead, they publish the information, withhold details of the methodology, and let the media run wild parroting their pedestrian statistics and understanding. As a lesson to the media, and to Secunia, consider these additional pieces of information that help explain those numbers, and why you simply can’t compare disclosed vulnerability totals:

  1. Google releases details about the vulnerabilities fixed in Chrome, unlike some other browser vendors. For example, Opera releases almost no information. Microsoft typically releases information on the more serious vulnerabilities, and none from their internal auditing. Of the 310 Chrome vulnerabilities we tracked, 66 were CVSSv2 5.0 or less.
  2. Google has perhaps the most aggressive bug bounty program in the industry, which encourages more researchers to audit their code. How aggressive? They paid out more than US$550,000 in 2014, just for Chrome vulnerabilities.
  3. Google’s turnaround in fixing Chrome vulnerabilities is quite respectable. Since many of the vulnerabilities were reported to them and fixed promptly, it is easy to argue against the claim that it is the “least secure” browser.
  4. Google fields several teams of security experts that audit their code, including Chrome. They find and fix potential issues before disclosing them. By doing that, they are ensuring the security of the browser, yet are effectively punished by Secunia’s counting for doing so.
  5. Of all the issues disclosed, only a tiny fraction were actually proven to be exploitable. Like many software vendors, Google finds it easier and more efficient to spend a small amount of time fixing a possible vulnerability than the much larger amount of time required to prove it is exploitable.

There are many pitfalls in using simple disclosed vulnerability totals to compare products. This is explained over and over by those familiar with the vulnerability disclosure world, and should be common knowledge to ‘experts’ in vulnerability research. Instead, Secunia not only fuels media articles that aren’t critical of their report, they tweet little snippets without any context. They also have their partners repeat their statistics, giving them more exposure and misleading the customers of other companies.

Vulnerability statistics can be useful. But they must be presented in a responsible and educated manner to serve their purpose. Those not familiar with vulnerability aggregation and generating the resulting statistics typically do a poor job, and add confusion to the matter. We like to call such people “vulnerability tourists”, and we can now add Secunia to the growing list of them.

[Update: 2015-04-01. One additional quote from the 2010 Secunia Report was added, and clarification of wording regarding the yearly vuln total chart was made.]

Reviewing the Secunia 2013 Vulnerability Review

On February 26, Secunia released their annual vulnerability report (link to report PDF) summarizing the computer security vulnerabilities they had cataloged over the 2013 calendar year. For those not familiar with their vulnerability database (VDB), we consider them a ‘specialty’ VDB rather than a ‘comprehensive’ VDB (e.g. OSVDB, X-Force). They may disagree on that point, but it is a simple matter of numbers that leads us to designate them as such. That also tends to explain why some of our conclusions and numbers are considerably different from, and more complete than, theirs.

In past years this type of blog post would not need a disclaimer, but it does now. OSVDB, while the website is mostly open to the public, is also the foundation of the VulnDB offering from our commercial partner and sponsor Risk Based Security (RBS). As such, we are now a direct competitor to Secunia, so any criticism leveled at them or their report may be biased. On the other hand, many people know that I am consistently critical of just about any vulnerability statistics published. Poor vulnerability statistics have plagued our industry for a long time. So much so that Steven Christey from CVE and I gave a presentation last year at the BlackHat briefings in Las Vegas on the topic.

One of the most important messages and take-aways from that talk is that all vulnerability statistics should be disclaimed and explained in advance. That means that a vulnerability report should start out by explaining where the data came from, applicable definitions, and the methodology of generating the statistics. This puts the subsequent statistics in context to better explain and disclaim them, as a level of bias enters any set of vulnerability statistics. Rather than follow the Secunia report in the order they publish them, I feel it is important to skip to the very end first. For that is where they finally explain their methodology to some degree, which is absolutely critical in understanding how their statistics were derived.

On page 16 (out of 20) of the report, in the Appendix “Secunia Vulnerability Tracking Process”, Secunia qualifies their methodology for counting vulnerabilities.

A vulnerability count is added to each Secunia Advisory to indicate the number of vulnerabilities covered by the Secunia Advisory. Using this count for statistical purposes is more accurate than counting CVE identifiers. Using vulnerability counts is, however, also not ideal as this is assigned per advisory. This means that one advisory may cover multiple products, but multiple advisories may also cover the same vulnerabilities in the same code-base shared across different programs and even different vendors.

First, the ‘vulnerability count’ referenced is not part of a public Secunia advisory, so their results cannot be realistically duplicated. The next few lines are important, as they invalidate the Secunia data set for drawing any type of real conclusion on the state of vulnerabilities. Not only can one advisory cover multiple products, multiple advisories can cover the same single vulnerability, just across different major versions. This high rate of duplicates and lack of unique identifiers make the data set too convoluted for meaningful statistics.

CVE has become a de facto industry standard used to uniquely identify vulnerabilities which have achieved wide acceptance in the security industry.

This is interesting to us because Secunia is not fully mapped to CVE historically. Meaning, there are thousands of vulnerabilities that CVE has cataloged, that Secunia has not included. CVE is a de facto industry standard, but also a drastically incomplete one. At the bare minimum, Secunia should have a 100% mapping to them and they do not. This further calls into question any statistics generated off this data set, when they knowingly ignore such a large number of vulnerabilities.

From Remote
From remote describes other vulnerabilities where the attacker is not required to have access to the system or a local network in order to exploit the vulnerability. This category covers services that are acceptable to be exposed and reachable to the Internet (e.g. HTTP, HTTPS, SMTP). It also covers client applications used on the Internet and certain vulnerabilities where it is reasonable to assume that a security conscious user can be tricked into performing certain actions.

Classification of the location of vulnerability exploitation is important, as this heavily factors into criticality, either via common usage or through scoring systems such as CVSS. In their methodology, we see that Secunia does not make a distinction between ‘remote’ and ‘context-dependent’ (or ‘user-assisted’ to some). This means that the need for user interaction is not factored into an issue, and ultimately scoring and statistics become based only on network, local (adjacent) network, or local vectors.

Secunia further breaks down their classification in the appendix under “Secunia Vulnerability Criticality Classification“. However, it is important to note that their breakdown does not really jibe with any other scoring system. Looking past the flaw of using the word ‘critical’ in all five classifications, the distinction between ‘Extremely Critical’ and ‘Highly Critical’ is minor; based on their descriptions, it appears to rest solely on whether Secunia is aware of exploit code existing for the issue. This mindset is straight out of the mid 90s with regard to threat modeling. In today’s landscape, if details are available about a vulnerability, it is a given that a skilled attacker can write or purchase an exploit for a majority of disclosed issues within a few days. In many cases, even when details aren’t public but a patch is, the patch alone is enough to reliably reverse-engineer the issue and produce working exploit code in a short amount of time. Finally, both of these designations still do not abstract on whether user interaction is required; in each case, it may or may not be. In reality, I imagine that the difference between ‘Extremely’ and ‘Highly’ is supposed to be based on whether exploits are happening in the wild at the time of disclosure (i.e. zero-day).

Now that we have determined their statistics cannot be reproduced, use a flawed methodology, and are based on drastically incomplete data, let’s examine their conclusions anyway!

The blog announcing the report is titled “1,208 vulnerabilities in the 50 most popular programs – 76% from third-party programs” and immediately calls into question their perspective. Reading down a bit, we find out what they mean by “third-party programs”:

“And the findings in the Secunia Vulnerability Review 2014 support that, once again, the biggest vulnerability threat to corporate and private security comes from third-party – i.e. non-Microsoft – programs.”

Unfortunately, this is not the definition of a third-party program used by most in our industry. On a higher, more general level, a “third-party software component” is “a reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform” (Wikipedia). In the world of VDBs, we frequently refer to a third-party component as a ‘library‘ that is integrated into a bigger package. For example, Adobe Reader 10, which is found on many desktop computers, is actually built on Adobe’s own code as well as on as many as 212 other pieces of software. The notion that “non-Microsoft” software is “third-party” is very odd, for lack of a better word, and shows the mindset and perspective of Secunia. This completely discounts users of Apple, Linux, VMs (e.g. Oracle, VMware, Citrix), and mobile devices among others. Such a Microsoft-centric report should clearly be labeled as such, not as a general vulnerability report.

In the Top 50 programs, a total of 1,208 vulnerabilities were discovered in 2013. Third-party programs were responsible for 76% of those vulnerabilities, although these programs only account for 34% of the 50 most popular programs on private PCs. The share of Microsoft programs (including the Windows 7 operating system) in the Top 50 is a prominent 33 products – 66%. Even so, Microsoft programs are only responsible for 24% of the vulnerabilities in the Top 50 programs in 2013.

This is aiming for the most convoluted summary award apparently. I really can’t begin to describe how poorly this comes across. If you want to know the ‘Top 50 programs’, you have to read down to page 18 of the PDF and then resolve a lot of questions, some of which will be touched on below. When you read the list, and see that several ‘Microsoft’ programs actually had 0 vulnerabilities, it will call into question the “prominent 33 products” and show how the 66% is incorrectly weighted.

“However, another very important security factor is how easy it is to update Microsoft programs compared to third-party programs.” — Secunia CTO, Morten R. Stengaard.

When debunking vulnerability statistics, I tend to focus on the actual numbers. This is a case where I simply have to branch out and question how a ‘CTO’ could make this absurd statement. In one sentence, he implies that updating Microsoft programs is easy, while third-party programs (i.e. non-Microsoft programs, per their definition) are not. Apparently Mr. Stengaard does not use Oracle Java, Adobe Flash Player, Adobe AIR, Adobe Reader, Mozilla Firefox, Mozilla Thunderbird, Google Chrome, Opera, or a wide range of other non-Microsoft desktop software, all of which have the same one-click patching/upgrade ability. Either Mr. Stengaard is not qualified to speak on this topic, or he is being extremely disingenuous in his characterization of non-Microsoft products to suit the needs of this report and Secunia’s patch management business model. If he means that patching Windows is easier on an enterprise scale (e.g. via SCCM or WSUS), that is frequently true, but such qualifications should be made clear.

This is a case where using a valid and accepted definition of ‘third-party programs’ (e.g. a computing library) would make the quote more reasonable. Trying to upgrade ffmpeg, libav, or WebKit in the context of the programs that rely on them as libraries is not something that can be done by the average user. The problem is further compounded when portions of desktop software are used as a library in another program, such as AutoCAD, which appears in the Adobe Reader third-party license document linked above. However, these are the kinds of distinctions that any VDB should be fully aware of, and be able to disclaim and explain more readily.

Moving on to the actual ‘Secunia Vulnerability Review 2014‘ report, the very first line opens up a huge can of worms as the number is incorrect and entirely misleading. The flawed methodology used to generate the statistic cascades down into a wide variety of other incorrect conclusions.

The absolute number of vulnerabilities detected was 13,073, discovered in 2,289 products from 539 vendors.

It is clear that a significant number of vulnerabilities are being counted multiple times. While this number is generated from Secunia’s internal ‘vulnerability count’ associated with each advisory, they miss the most obvious flaw: many of their advisories cover the exact same vulnerability. Rather than abstract so that one advisory is updated to reflect additional products impacted, Secunia will release additional advisories. This is immediately visible in cases where a protocol is found to have a vulnerability, such as the “TLS / DTLS Protocol CBC-mode Ciphersuite Timing Analysis Plaintext Recovery Cryptanalysis Attack” (OSVDB 89848). This one vulnerability impacts any product that implements the protocol, so it is expected to be widespread. As such, that one vulnerability tracks to 175 different Secunia advisories. This is not a case where 175 different vendors coded the same vulnerability, or where the issue is distinct in their products. This is a case of a handful of base products (e.g. OpenSSL, GnuTLS, PolarSSL) implementing the flawed protocol, and hundreds of vendors using that software bundled as part of their own.

While that is an extreme example, the problem is certainly front-and-center due to their frequent multi-advisory coverage of the same issue. Consider that one OpenSSL vulnerability may be covered in 11 Secunia advisories. Then look at other products that are frequently used as libraries or found on multiple Linux distributions, each of which get their own advisory. Below is a quick chart showing examples of a single vulnerability in one of several products, along with the number of Secunia advisories that references that one vulnerability:

Example (1 vulnerability)        # of Secunia advisories
CVE-2013-6367 (Linux Kernel)     15
CVE-2013-6644 (Google Chrome)     5
CVE-2013-5609 (Mozilla)          14
CVE-2013-4408 (Samba)            10
CVE-2013-6415 (Ruby on Rails)    10
CVE-2014-0368 (Oracle Java)      27

This problem is further compounded when you consider the number of vulnerabilities in those products in 2013, where each one received multiple Secunia advisories. This table shows the products from above, and the number of unique vulnerabilities as tracked by OSVDB for that product in 2013 that had at least one associated Secunia advisory:

Software         # of vulns in product in 2013 w/ Secunia ref
Linux Kernel     131
Google Chrome    267
Mozilla          148
Samba             10
Ruby on Rails     14
Oracle Java      194

It is easy to see how Secunia quickly jumped to 13,073 vulnerabilities while only issuing 3,327 advisories in 2013. If there is any doubt about vulnerability count inflation, consider these four Secunia advisories that cover the same set of vulnerabilities, each titled “WebSphere Application Server Multiple Java Vulnerabilities“. Secunia created four advisories for the same vulnerabilities simply to abstract based on the major versions affected, as seen in this table:

Secunia Advisory    Affected versions
56778               reported in versions … through …
56852               reported in versions … through …
56891               reported in versions … through …
56897               reported in versions … through …

The internal ‘vulnerability counts’ for these advisories are very likely 25, 25, 25, and 27, adding up to 102. Applied against IBM, you have 27 distinct vulnerabilities greatly inflated to count as 102 instead. Then consider that IBM has several hundred products that use Java, OpenSSL, and other common software. It is easy to see how Secunia could jump to erroneous conclusions:

The 32% year-on-year increase in the total number of vulnerabilities from 2012 to 2013 is mainly due to a vulnerability increase in IBM products of 442% (from 772 vulnerabilities in 2012 to 4,181 in 2013).

This unfair and potentially libelous statement was picked up by InformationWeek in an article titled “IBM Software Vulnerabilities Spiked In 2013“, which Secunia was quick to Tweet about.
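
A quick sketch of that inflation arithmetic, using the estimated per-advisory counts from the WebSphere example above:

    # Estimated internal 'vulnerability counts' for the four WebSphere advisories.
    counts = {"56778": 25, "56852": 25, "56891": 25, "56897": 27}

    inflated = sum(counts.values())   # 102 'vulnerabilities' tallied
    unique = 27                       # distinct underlying Java flaws
    print(f"{inflated} counted vs {unique} unique ({inflated / unique:.1f}x)")  # ~3.8x

Multiply that effect across the hundreds of IBM products bundling Java and OpenSSL, and a 442% ‘increase’ appears out of thin air.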

The next set of statistics is convoluted on the surface, but even more confusing when you read the details and explanations for how they were derived:

Numbers – Top 50 portfolio
The number of vulnerabilities in the Top 50 portfolio was 1,208, discovered in 27 products from 7 vendors plus the most used operating system, Microsoft Windows 7.

To assess how exposed endpoints are, we analyze the types of products typically found on an endpoint. Throughout 2013, anonymous data has been gathered from scans of the millions of private computers which have the Secunia Personal Software Inspector (PSI) installed. Secunia data shows that the computer of a typical PSI user has an average of 75 programs installed on it. Naturally, there are country- and region-based variations regarding which programs are installed. Therefore, for the sake of clarity, we chose to focus on a representative portfolio of the 50 most common products found on a typical computer and the most used operating system, and analyze the state of this portfolio and operating system throughout the course of 2013. These 50 programs are comprised of 33 Microsoft programs and 17 non-Microsoft (third-party) programs.

Reading down to page 18 of the full report, you see the table listing the “Top 50” software installed, as determined by their PSI software. On the list are a wide variety of software packages that are either components of Windows (meaning they come installed by default but show up in the “Programs” list, e.g. Microsoft Visual C++ Redistributable) or, in a few cases, third-party software (e.g. Google Toolbar), many of which have 0 associated vulnerabilities. In other cases they include product driver support tools (e.g. Realtek AC 97 Update and Remove Driver Tool) or ActiveX components that are generally not installed via traditional means (e.g. comdlg32 ActiveX Control). With approximately half of the Top 50 software having vulnerabilities, and with different types of software components mixed together, the summary put forth by Secunia becomes misleading. Since they include Google Chrome on the list, by their own logic they should also include WebKit, which is a third-party library wrapped into Chrome, just as they include ‘Microsoft Powerpoint Viewer’ (33), which is a component of ‘Microsoft Powerpoint’ (14) and does not install separately.

Perhaps the most disturbing thing about this Top 50 summary is that Secunia only counts 7 vendors in their list. Reading through the list carefully, you see that there are actually 10 vendors represented: Microsoft, Adobe, Oracle, Mozilla, Google, Realtek, Apple, Piriform (CCleaner), VideoLAN, and Flexera (InstallShield). This seriously calls into question any conclusions put forth by Secunia regarding their Top 50 list and challenges their convoluted and irreproducible methodology.

Rather than offer a rebuttal line by line for the rest of the report and blog, we’ll just look at some of the included statistics that are questionable, wrong, or just further highlight that Secunia has missed some vulnerabilities.

In 2013, 727 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari.

By our count, there were at least 756 vulnerabilities in these browsers: Google Chrome (295), Mozilla Firefox (155), Internet Explorer (138), Opera (9), Apple Safari (8 on desktop, 4 on mobile), and WebKit (a component of Chrome and Safari, 147). The count for Opera is likely very low though. In July 2013, Opera issued its first browser version based on Blink, so it is very likely that it has been affected by the vast majority of the Blink vulnerability fixes made by Google. However, Opera is not very good at clearly reporting vulnerabilities, so this very likely accounts for the very low count that both we and Secunia have; something they should clearly have disclaimed.

In 2013, 70 vulnerabilities were discovered in the 5 most popular PDF readers: Adobe Reader, Foxit Reader, PDF-XChange Viewer, Sumatra PDF and Nitro PDF Reader.

By our count, there were at least 76 vulnerabilities in these PDF readers: Adobe Reader (69), Foxit (2), PDF-XChange (1), Sumatra (0), and Nitro (4).

The actual vulnerability count in Microsoft programs was 192 in 2013; 128.6% higher than in 2012.

Based on our data, there were 363 vulnerabilities in Microsoft software in 2013, not 192. This is up from 207 in 2012, a 75.4% increase (the 2013 total is 175.3% of the 2012 total).

As in 2012, not many zero-day vulnerabilities were identified in 2013: 10 in total in the Top 50 software portfolio, and 14 in All products.
A zero-day vulnerability is a vulnerability that is actively exploited by hackers before it is publicly known, and before the vendor has published a patch for it.

By that definition, which we share, we tracked 72 vulnerabilities that were “discovered in the wild” in 2013. To be fair, our number is considerably higher because we actually track mobile vulnerabilities, something Secunia typically ignores. More curious is that, based on a cursory search, we find 17 vulnerabilities across 13 of their advisories that qualify as zero-days by their definition, suggesting they do not have a method for accurately counting them: SA51820 (1), SA52064 (1), SA52116 (2), SA52196 (2), SA52374 (2), SA52451 (1), SA53314 (1), SA54060 (1), SA54274 (1), SA54884 (2), SA55584 (1), SA55611 (1), and SA55809 (1).
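
Summing the parenthesized per-advisory counts above confirms the figure:

    # Per-advisory zero-day vulnerability counts from the cursory search above.
    zero_days = {"SA51820": 1, "SA52064": 1, "SA52116": 2, "SA52196": 2,
                 "SA52374": 2, "SA52451": 1, "SA53314": 1, "SA54060": 1,
                 "SA54274": 1, "SA54884": 2, "SA55584": 1, "SA55611": 1,
                 "SA55809": 1}

    print(sum(zero_days.values()), len(zero_days))  # 17 vulnerabilities, 13 advisories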

Find out how quickly software vendors issue fixes – so-called patches – when vulnerabilities are discovered in All products.

This comes from their “Time to Patch for all products” summary page. The statement seems pretty clear: how fast do vendors issue fixes when vulnerabilities are discovered? However, Secunia does not track that specifically! The more appropriate question that can be answered by their data is: “When are patches available at, or after, the time of public disclosure?” These are two very different metrics. The information on this page is generated using PSI/CSI statistics. So if a vulnerability is disclosed and a fix is already available at that time, it counts as patched within 24 hours. It doesn’t factor in that the vendor may have spent months fixing the issue before the disclosure and patch.
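
A small sketch of why the two metrics differ, with purely illustrative dates:

    from datetime import date

    # Illustrative timeline for a single vulnerability.
    vendor_notified = date(2013, 3, 1)
    patch_released  = date(2013, 6, 28)
    disclosed       = date(2013, 7, 1)   # made public after the fix shipped

    # What the Secunia page effectively measures: was a patch available
    # at (or within a day of) public disclosure?
    patched_at_disclosure = patch_released <= disclosed       # True: 'within 24 hours'

    # What "how quickly do vendors issue fixes" actually asks: elapsed time
    # from the vendor learning of the issue to shipping the fix.
    days_to_fix = (patch_released - vendor_notified).days     # 119 days

    print(patched_at_disclosure, days_to_fix)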

In conclusion, while we appreciate companies sharing vulnerability intelligence, the Secunia 2013 vulnerability report is ultimately fluff that provides no benefit to organizations. The flawed methodology and their inability to parse their own data mean that the conclusions cannot be relied upon for making business decisions. When generating vulnerability statistics, a wide variety of bias will always be present. It is absolutely critical that your vulnerability aggregation methodology be clearly explained, so that the results are qualified and have more meaning.

VDB Relationships (hugs and bugs!)

Like any circle in any industry, having good professional relationships can be valuable to involved parties. In the world of security, more specifically Vulnerability Databases (VDBs), the relationships we maintain benefit the community behind the scenes. Like ogres and onions, there are layers.

Someone from CVE and someone from OSVDB run an informal list called ‘Vulnerability Information Managers’ (VIM) for discussion of vulnerabilities as they relate to post-disclosure issues: new information that comes up, additional research, vendor confirmations, vendor disputes and more. It’s a great resource for us to discuss the details that help each VDB fine-tune their information. (No new vulnerabilities are posted there, so don’t bother.)

In addition, some of the VDBs have stronger relationships that allow for great dialogue and information sharing. A few examples of these, from OSVDB’s perspective:

– A couple of the CVE guys are great for very informal chat about vulnerabilities. Despite being the dreaded “government contractors”, they are respectable, very knowledgeable and have a great sense of humor. I just sent one a mail with the subject “PROVENANCE BITCHEZ?!” challenging him on the details of a given CVE. They are so nice, I broke my rule of not taking candy from strangers and happily accepted the bag of leftover candy from their BlackHat booth. Joking aside, the ability to coordinate and share information is incredible and a testament to their integrity and desire to help the industry.

– OSVDB uses Secunia for one of our feeds to gather information. The two guys we regularly have contact with (CE & TK) lead a bright team that does an incredible amount of work behind the scenes. In case it slipped your attention, Secunia actually validates vulnerabilities before posting them. That means they take the time to install, configure and test a wide range of software based on the word of 3l1t3hax0ry0 that slapped some script tag in software you never heard of, as well as testing enterprise-level software that costs more than OSVDB makes in five years. Behind the scenes, Secunia shares information as they can with others, and there is a good chance you will never see it. If you aren’t subscribed to their service as a business, you should be. For those who asked OSVDB for years to have a ‘vulnerability alerting’ service; you can blame Secunia for us not doing it. They do it a lot better than we could ever hope to.

– The head of R&D at Tenable contributes a lot of time and information to VIM based on his research of disclosed vulnerabilities: installing the software, configuring it, testing it, and sometimes noticing additional vulnerabilities. He is a frequent contributor to VIM and has worked with OSVDB on sharing information to enhance the Nessus plugins as well as the OSVDB database.

– str0ke, that mysterious guy who somehow manages to run milw0rm in his spare time. What may appear to some as a website with user-posted content is actually a horrible burden to maintain. Since the site’s inception, str0ke has not just posted the exploits sent in; he has taken time to sanity-check every single one as best he can. What you don’t see on that site are the dozens (hundreds?) of exploits a month that were sent in but ended up being incorrect (or, as OSVDB would label them, “myth/fake”). When str0ke was overwhelmed and decided to give up the project, user demand (read: whining & complaints) led him to change his mind and keep it going. Make sure you thank him every so often for his work, and know this: milw0rm cannot be replaced as easily as you think. Not to the quality that we have seen from str0ke.

Since we have no corporate overlords, I’ll go ahead and talk about the flip side briefly:

– ISS (now IBM) runs a good database: very thorough, with keen attention to detail, including original source and vendor information. In 2004, the head of that group (AF) left, and until that time, we had a great dialogue and open communication. Since then, even before the IBM frenzy, we’ve mostly gotten the cold shoulder when mailing, even when pointing out problems or negative changes on their side. LJ, bring back the old days!

– NVD. Why do you waste taxpayer money with that ‘database’? We pay $22 for Booz Allen Hamilton to “analyze” each CVE entry (thanks, FOIA request!), yet they find a fraction of the typos and mistakes I do? By fraction, I mean exactly none, from what I hear through the grapevine (DHS cronies are cool). If you can’t notice and report simple typos in a CVE, and you botch CVSS2 scores left and right (yes, I’ve mailed in corrections that were acted on), what exactly are you doing with our money? Are you the virtual Blackwater of VDBs?

– SecurityFocus / BID. Sorry, not going to bother with verbal fluffing. My countless mails pointing out errors and issues with your database are seemingly dumped into a black hole. Your promises of certain mail archives ‘not changing’ were pure fantasy. To this day you make erroneous assumptions about affected products, and still don’t grasp “case sensitive”. I know some of your team; you have great people there. Just lift the corporate policy that turns them into virtual shut-ins, please?

Sorry to end it on a downer. I still dream of a niche of the security industry (VDBs) where we can all play well with each other.

Vulnerability Counts and OSVDB Advocacy

CVE just announced reaching 30,000 identifiers, which is a pretty scary thing. CVE staff have a good eye for catching vulnerabilities from sources away from the mainstream (e.g. Bugtraq), and they have the advantage of being a very widely accepted standard for tracking vulnerabilities. As companies and researchers request CVE numbers for disclosures, they get a lot of the information handed to them on a silver platter. Of course, sometimes that platter is full of mud and confusion, as vendors don’t always provide clear details to help CVE accurately track and distinguish between multiple vulnerabilities. I’ve also pointed out many times in the past that CVE is a very unique VDB that provides identifiers for vulnerability tracking. They do not provide many of the fields associated with other VDBs (solution, creditee, etc.). As such, they may have a single entry that covers multiple distinct vulnerabilities if they are the same class (XSS, SQLi, RFI), or if there is a lack of details but they know the issues affect the same product (Oracle). So when we see 30,000 identifiers, we have to realize that the real count of vulnerabilities is significantly higher.

CVE is run by The MITRE Corporation, sponsored / funded by the NCSD (US-CERT) of DHS under government contract. That means our tax dollars fund this database, so it should be of particular interest to U.S. taxpayers in the security industry. I know from past discussions with CVE staff and other industry veterans that on any given day, they are likely to have more work than available staff can handle. That means the rate at which vulnerabilities get published is greater than the resources CVE can maintain to track them. In short, the 30,000 identifiers you see represent only a percentage of the vulnerabilities actually disclosed. We could probably debate what percentage that is all day long, and I don’t think that is really the point here, other than “we know it isn’t all of them”.

Every VDB suffers from the same thing. “Commercial” VDBs like X-Force, BID and Secunia have full-time staff that maintain their databases, like CVE does. Despite having all of these teams (some of them consisting of 10 or more people) maintain VDBs, we still see countless vulnerabilities that are ‘missed’ by all of them. This is not a slight against them in any way; it is a simple matter of the resources available and the amount of information out there. Even with a large team sorting disclosed vulnerabilities, some teams spend time validating the findings before adding them to the database (Secunia), which is an incredible benefit for their customers. There is also a long-standing parasitic nature to VDBs, with each of them watching the others as best they can to help ensure they are tracking all the vulnerabilities they can. For example, OSVDB keeps a close eye on Secunia and CVE specifically, and as time permits we look to X-Force, BID, SecurityTracker and others. Each VDB tends to have some researchers who exclusively disclose vulnerabilities directly to the VDB of their choice, so each one I mention above will get word of vulnerabilities that the rest really have no way of knowing about, short of watching each other like this. This VDB inbreeding (I will explain the choice of word some other time) is an accepted practice, and I have touched on this in the past (CanSecWest 2005).

The inbreeding, and OSVDB’s ability to watch other resources, occasionally frees up our moderators to go looking for vulnerability information that wasn’t published in the mainstream. This usually involves grueling crawls through vendor knowledge bases, mind-numbing changelogs, searches of CVS-type repositories and more. That leads to the point of this lengthy post. In doing this research, we begin to see how many more vulnerabilities are out there in the software we use that escape the VDBs most of the time. Only now, after four years and getting an incredible developer to make many aspects of the OSVDB wish-list a reality, do we finally begin to see all of this. As I have whined about for those four years, VDBs need to evolve and move beyond this purely “mainstream reactionary” model. Meaning, we have to stop watching the half dozen usual spots for new vulnerability information, creating our entries, rinsing and repeating. There is a lot more information out there just waiting to be read and added.

In the past few weeks, largely due to the ability to free up time due to the VDB inbreeding mentioned above, we’ve been able to dig into a few products more thoroughly. These examples are not meant to pick on any product / VDB or imply anything other than what is said above. In fact, this type of research is only possible because the other VDBs are doing a good job tracking the mainstream sources, and because some vendors publish full changelogs and don’t try to hide security related fixes. Kudos to all of them.

Example: search your favorite VDB for “inspircd”, a popular multi-platform IRC daemon. Compare the results of BID, Secunia, X-Force, SecurityTracker, and http://osvdb.org/ref/blog/inspircd-cve.png. Compare these results to OSVDB after digging into their changelogs. Do these same searches for “xfce” (10 OSVDB, 5 max elsewhere), “safesquid” (6 OSVDB, 1 max elsewhere), “beehive forum” (27 OSVDB, 8 max elsewhere) and “jetty” (25 OSVDB, 12 max elsewhere). Let me emphasize: I did not hand-pick these examples to put down any VDB; these are simply some of the products we’ve investigated in the last few weeks.

The real point here is that no matter what vulnerability disclosure statistic you read, regardless of which VDB it uses (including OSVDB), consider that the real number of vulnerabilities disclosed is likely much higher than any of us know or have documented. As always, if you see vulnerabilities in a vendor KB or changelog, and can’t find it in your favorite VDB, let them know. We all maintain e-mail addresses for submissions and we all strive to be as complete as possible.

Why I’m So Behind

Another night of working on OSVDB, mainly focusing on vulnerability import and creating our entries to cover issues. Most nights end with between 25 and 50 new entries and a feeling of accomplishment. Well, other manglers can see the accomplishment if they check the back end, and that gives a little positive reinforcement. On really big days I just spam the status line to Jake and Sullo and demand instant gratification and the promise of booze to dull the pain.

Anyway, tonight was productive, but no one except me and Speedbump will realize it. I can thank IBM and a set of ridiculously large changelogs full of mind-numbing, poorly written bug reports and excessive (apparent) duplication of entries. It started out with a simple Bugtraq post about some vulnerabilities in IBM WebSphere Application Server. First off, I find it quite amusing that people are now taking credit for merely posting vulnerability information culled from another source.

Provided and/or discovered by:
Reported by the vendor

Reported by SnoB

If this type of activity deserved merit, VDBs like Secunia, CVE and OSVDB would be virtual gods of vulnerability disclosure. Second, he lists seven issues from a changelog that contains hundreds. If you dig through changelogs like the Fix List for WebSphere Application Server Version 5.1.1, you may find more of interest. While browsing them, I noticed a fairly insignificant but ironic characteristic of the way IBM handles these disclosures. If you want to read the list of over five hundred entries and pick out only the security-related ones, you can! Skim the list for any P##### number that doesn’t hyperlink to another document; 95% of the time, these are security-related. So while IBM is not providing additional details about these issues (security through obscurity), they are making it easier to pick out which entries are of interest.
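
As a rough illustration of that skimming trick, here is a sketch that flags plain-text P-numbers in a saved copy of a fix list page. It assumes the markup loosely matches the description above; it is untested against IBM’s actual pages:

    import re

    # Saved copy of an IBM fix list page (hypothetical filename).
    html = open("websphere-fixlist.html").read()

    # P-numbers that appear as the text of a hyperlink somewhere on the page.
    linked = set(re.findall(r'<a\b[^>]*>(P\d{5})</a>', html))

    # Every P-number on the page, linked or not.
    all_ids = set(re.findall(r'\bP\d{5}\b', html))

    # Plain-text entries: per the heuristic above, ~95% are security-related.
    likely_security = sorted(all_ids - linked)
    print(likely_security)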

Oh yes, back to the exciting night life. After checking the latest list of changes, as well as digging into some past fix lists, I ended up with around 75 more vulnerabilities, most of which are not in our database (or others). The list I extracted has some dupes in it, meaning the same issue affected multiple products or version lines. However, it is quite curious to see the same vulnerability patched half a dozen times over two years across many versions. Is IBM reintroducing the same vulnerability into the code over and over? Or are they following the Oracle method of mitigation, fixing the reported flaw without looking at the bigger picture and fixing similar vulnerabilities in the same code? Anyway, since I know I won’t get an answer to that, consider that it would take you twenty or more hours to read and digest a handful of these fix lists, and in doing so, you would likely find fifty or more vulnerabilities above and beyond what I found. The amount of information is overwhelming, to say the least.

A Word on Solutions (use another product)

Something led you to the product that ended up on your systems. Be it a feature, a look, ease of use, or price, it was a driving force in your decision. Changing to a different product isn’t easily done, especially if your current solution is heavily integrated or your customers/users are familiar with it. Besides, what other product can fill your needs that doesn’t have vulnerabilities of its own? Look at the number of vulnerabilities disclosed, along with the diversity of the products affected. Whether it is no-name freebies or million-dollar commercial installations, every package seems to have vulnerabilities that would drive you back to where you started.

Offering a “solution” of “Use another product” doesn’t seem very intuitive, logical, or helpful to customers.

