Sometime in the past day or so, CVE-2016-10001 was publicly disclosed, and it is possibly a duplicate. Regardless, CVE-2016-10002 is also now public and legitimate. Tonight, I Tweeted that the presence of those IDs doesn’t mean what many will think it means. I say that based on past experience, both historical and more recent. Even 17 years later, many people believe that CVE assignments are sequential, and that a given ID represents the number of vulnerabilities aggregated by MITRE that year. That isn’t how it works, and it never has.
As of the December 18 dump available from MITRE, there are 10,137 identifiers. However, 44 of them are REJECTED and 4,760 are in RESERVED status. That means there are 5,333 live CVE identifiers at this time that correspond to vulnerabilities. Since a single CVE ID can cover multiple similar vulnerabilities, even that number is misleading. If you take their data and abstract it on a per-vulnerability basis, they cover 8,058 issues as aggregated by Risk Based Security’s VulnDB. So, to be very clear:
CVE has not cataloged 10,000 vulnerabilities in 2016 based on CVE IDs.
Additionally, to be very clear again:
CVE has not cataloged 10,000 vulnerabilities in 2016 based on their actual aggregated vulnerability data.
Meanwhile, VulnDB has currently cataloged 14,485 vulnerabilities, compared to the CVE 8,058 actual number. Hopefully your organization uses more than just CVE data. That means within your security products that scan for vulnerabilities, your tools that collect the data, and ultimately the reporting that guides your security team in making decisions.
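For those who want to check the arithmetic themselves, here is a minimal sketch of the counting described above. It assumes MITRE’s downloadable allitems.csv layout, where the first column is the ID and rejected/reserved entries are flagged at the start of the description column; verify the parsing against the dump you actually download.

```python
# Minimal sketch: count "live" 2016 CVE IDs in MITRE's dump by excluding
# REJECTED and RESERVED entries. Assumes the allitems.csv layout where
# column 0 is the ID and column 2 is the description.
import csv

def count_live_cves(path="allitems.csv", year="2016"):
    total = rejected = reserved = 0
    with open(path, newline="", encoding="latin-1") as fh:
        for row in csv.reader(fh):
            if not row or not row[0].startswith("CVE-%s-" % year):
                continue  # skip headers, comments, and other years
            total += 1
            desc = row[2] if len(row) > 2 else ""
            if desc.startswith("** REJECT **"):
                rejected += 1
            elif desc.startswith("** RESERVED **"):
                reserved += 1
    print("total=%d rejected=%d reserved=%d live=%d"
          % (total, rejected, reserved, total - rejected - reserved))

count_live_cves()
```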
All of that said… I’m taking bets on whether we see Tweets, blogs, or news articles pushing the “10,000 vulns in 2016” notion.
Today, The Register wrote an article on MITRE’s announcement of a new CVE ID scheme, and got many things wrong about the situation. As I began to write out the errata in an email, someone asked that I make it public so they could learn from the response as well.
From The Register:
The pilot platform will implement a new structure for issuance of Common Vulnerabilities and Exposures numbers. MITRE will directly issue the identifiers and bypass its editorial board.
The old system already bypassed the Editorial Board; the Board hasn’t been there to give input on assignments for over 10 years. The criticism right now is that the new system was rolled out without consulting the Board, which is supposedly what the Board is there for:
The MITRE Corporation created the CVE Editorial Board, moderates Board discussions, and provides guidance throughout the process to ensure that CVE serves the public interest.
The last ID scheme change was after more than a year of discussion, debate, and ultimately a vote by the board (11/2/2012 start → 6/7/2013 end). This ID scheme change was a knee-jerk response to recent criticism (only after it became more public than the Editorial Board list), without any board member input. From The Register article:
The CVE numbers are the numerical tags assigned to legitimate verified bugs that act as a single source of truth for security companies and engineers as they seek to describe and patch problems.
No… CVE IDs are assigned to security issues, period. Alleged, undetermined, possible, or validated. Many IDs are assigned to bogus / illegitimate issues, which are generally discovered after the assignment has been made. There is no rule or expectation that IDs are only assigned to legitimate verified bugs. To wit, search the CVE database for the word “REJECTED”.
More importantly, the notion it is a “single source of truth” is perhaps the worst characterization of CVE one could imagine. Many companies use alternate vulnerability databases that offer more features, better consumption methods, and/or much better coverage. With MITRE falling behind other VDBs by as many as 6,000 vulnerabilities in 2015 alone, it isn’t a single source for anything more than training wheels for your vulnerability program.
The platform will exist alongside the current slower but established CVE system.
At this point, with what has been published by MITRE, this is pure propaganda, in line with the platitudes they have been giving the Editorial Board for the last year. The new format does not make things easier for them in any way. The ‘old ID scheme’ was never the slowdown or choke point in their process. In fact, read their press release: they say they will be the only ones issuing federated IDs, meaning the same problem will persist in the interim. If it doesn’t, and they start issuing faster, it wasn’t the new scheme that fixed it. More important to this part of the story is that it wasn’t just about assignment speed. The last six months have seen MITRE flat out refuse assignments to more and more researchers, citing that the vulnerable software isn’t on their list of monitored products. Worse, they actually blame the Editorial Board to a degree, which is a blatant and pathetic attempt at scapegoating. The list they refer to was voted on by the board, yes. But the list given to the board was tragically small; approving it was rearranging deck chairs on the Titanic, so to speak.
He says MITRE is aiming for automated vulnerability identification, description, and processing, and welcomed input from board members and the security community.
This is amusing, disgusting, and absurd. MITRE, including the new person in the mix (Sain), has NOT truly welcomed input from the board. The board has been giving input, non-stop, and MITRE has been ignoring it every single time. Only after repeated mails asking for an update do they give us the next brief platitude. Sain’s title, “MITRE CVE communications and adoption lead”, is also ironic, given that just last week Sain told the board he would contact them via telephone to discuss the issues facing CVE, and never did. Instead, there was no communication, and MITRE decided to roll out a horribly designed scheme without any input from the Board or the industry.
For those who want a better glimpse into just how bad MITRE is handling the CVE project, I encourage you to read the Editorial Board traffic (and hope it is updated, as MITRE still manually runs a script to update that archive).
David Weinstein, a researcher at NowSecure, has posted a blog titled “Ruminations on App CVEs”. Thanks to Will Dormann’s Tweet it came to our attention, and he is correct! We have opinions on this. Quoted material below is from Weinstein’s blog unless otherwise attributed.
CVE is well-positioned to play a critical role in tracking the risk level of mobile computing.
This is trivially easy to debate and counter in 2015. Until last year, CVE maintained a public list of sources that was virtually unchanged since its inception in 1999. After pressure from the public and the CVE Editorial Board, composed of non-MITRE industry “advisors”, CVE revised their coverage policy and shifted to a new system of ‘full’ or ‘partial’ coverage based on the vendor, product, and/or vulnerability source. On the surface, this list looks promising, but under any significant scrutiny it is utterly lacking in adequate coverage. In order to head off complaints, their webpage even qualifies ‘full coverage’ sources by using the phrasing “nearly all issues disclosed” to “allow the flexibility to potentially postpone coverage of minor issues”.
In fact, Steve Christey may disagree with your assertion. In 2013, he posted to the VIM list saying:
Unfortunately, CVE can no longer guarantee full coverage of all public vulnerabilities… we are not well-prepared to handle the full volume of CVEs for all publicly-disclosed vulnerabilities… – Steve Christey, Editor of CVE
Given the 39,280 vulnerabilities we track that do not have a CVE assignment, assignment volume is specifically a problem for them. Further, the current state of CVE assignments is a disaster; I have been in mails with CVE staff actively encouraging them to figure out the problem, as my now 27-day wait for three different CVE assignments is getting ridiculous.
However, CVEs have largely focused on tracking server-side and related flaws and yet the security community has evolved to track client-side vulnerabilities as a critical aspect of dealing with risk.
While I can’t offer you a set of handy statistics to back this claim, know that CVE, or their CNAs, assign identifiers to a considerable number of client applications. In fact, with the release of Apple iOS 9 yesterday, the public learned of 91 _new_ vulnerabilities affecting the platform, many of them client-side. I do not believe this statement to be a fair criticism of CVE’s assignment history and policy.
While CVE has worked well for tracking system flaws in mobile operating systems, there remain quirks with regard to mobile app flaws.
There are around 90 vulnerabilities in Google Android alone that do not have CVE assignments, and that represents a fairly significant percentage overall. However, in this case we can’t blame CVE at all for this shortcoming, as the fault lies entirely with Google instead.
It seems like using CVE is the “right thing” here but MITRE has been very cryptic (in my opinion) about what qualifies as something that gets a CVE and what doesn’t. These discussions should not happen in some closed door meeting in my opinion, or at least should not end there…
First, please factor in that CVE and every other vulnerability database has gone through a world of headache dealing with researchers who do not fully understand the concept of ‘crossing privilege boundaries’. The number of “not-a-vuln” vulnerability reports we see has skyrocketed this year. Further, even very technical individuals don’t always convey their information in terms that are easily understood; they clearly see a vulnerability, but the report they send doesn’t show it to our eyes. It requires back-and-forth to figure it out and better understand the exact vulnerability. Bottom line: some are clear-cut cases and typically get a quick assignment. Other cases are murky at best.
Second, I fully agree that such a discussion should not be behind closed doors. At the very least, it should include the CVE editorial board, and ideally should include anyone in the industry who has an interest in it. If you want to bring up the discussion on the VIM mail list, which has over one hundred members, including at least one person from each of the major public vulnerability databases, please do.
Is MITRE consistently responding to other researchers’ requests for CVEs?
No. The turnaround time for requesting a CVE has gone up considerably. As mentioned above, I have been waiting for three assignments for almost one month now, while CVE assigned two others filed on the same day within hours. They are not currently consistent with the same researcher, even one familiar with CVE and their abstraction and assignment policy.
Attaching a CVE to a vulnerable app, even if it’s an old version, is actually a big part of tracking the reputation of the developer as well!
Just quoting to signal-boost this, as it is a very important comment, and absolutely true.
Who acts as a certified numbering authority (CNA) on behalf of all the app developers on the market?
Based on the current list of CNAs, there is really no CNA that covers third-party apps, regardless of platform. Also note that many CNAs are not currently fulfilling their duties properly, and I have been trying to address this with MITRE for months. There is no current policy for filing a complaint against a CNA, there is no method for warning or revoking CNA status, and MITRE has dropped the ball replying to me on several different threads on this topic.
Should Google step in here as a formal body to assist with coordinating CVEs etc for these apps?
Absolutely not. While they are positioned to be the ideal CNA, their history with CVE assignments has been dismal. Further, they have so many diverse products, each with its own mechanism for bug reporting (if one exists), its own disclosure policy (if one exists), and its own disclosure method (which doesn’t exist for many). We can’t rely on Google to disclose the vulnerabilities, let alone act as a CNA for standardized and formalized ID assignment. Could Google fix this? Absolutely. They are obviously overflowing with brilliant individuals. However, this would require a central security body in the company that crosses all departments and fulfills such a duty. Due to the volume of software and vulnerabilities they deal with, it would require a small team. The same kind that IBM, Cisco, and other large companies have already stood up and used for years.
Is the process MITRE established designed and prepared to handle the mountain of bugs that will be thrown at it when the community really focuses on this problem as we have?
No, as covered above. CVE is already backpedaling on coverage, and has been for a couple years. That does not give any level of comfort that they would be willing to, or even could handle such a workload.
Can the community better crowd source this effort with confirmation of vulnerability reporting in a more scalable and distributed manner that doesn’t place a single entity in a potentially critical path?
Possibly. However, in the context of mobile, Google is a better choice than crowd-sourcing via the community. The community largely doesn’t understand the standards and abstraction policy that helps define CVE and the value. Many CNAs do not either unfortunately, despite them being in the best position to handle such assignments. Moving to a model that relies on more trusted CNAs has merit, but it would also require better documentation and training from MITRE to ensure the CNA follows the standards.
Every vendor distributing an app on the Play Store should be required to provide a security related contact e.g., email@example.com
Except, statistically, I bet that not even half of them have a domain! The app stores are full of a wide variety of applications that are little more than a hobby of one person. They don’t get a web page or a formal anything. Given the raw number of apps, that is also a tall order. Maybe apps that enjoy a certain amount of success (via download count?) would then be subjected to such rules?
Google could be a little clearer about when firstname.lastname@example.org should be used for organized disclosure of bugs and consider taking a stronger position in the process.
“Little clearer” is being more than generous. Remember, until a month or two ago, they had no formal place or process to make announcements of security updates. Disclosures related to Android have been all over the board and it is clear that Google has no process around it.
MITRE and/or whoever runs CVE these days should clarify what is appropriate for CVEs so that we know where we should be investing our efforts
Absolutely, and CVE’s documentation has not been properly updated in the last decade or more. Having such documentation would also potentially cut down on their headache, as requests are made for issues that are clearly not a vulnerability. But newcomers to security research don’t have a well-written guide that explains it.
Should we pursue a decentralized CVE request process based on crowdsourcing and reputation?
I cannot begin to imagine what this would entail or what you have in mind. On the surface, won’t happen, won’t work.
Hopefully others in the industry chime in on this discussion, and again, I encourage you to take it to the CVE Editorial Board and the VIM list, to solicit feedback from more. I appreciate you bringing this topic up and getting it attention.
Nothing like waking up to a new article purporting to show vulnerability statistics and having someone ask us for comment. But hey, we love giving additional perspective on such statistics, since they are often presented without proper context and disclaimers. This morning, the new article comes from Help Net Security and is titled “SQL injection vulnerabilities surge to highest levels in three years”. It cites “DB Networks’ research”, which did the usual: parsed NVD data. As we all know, that data comes from CVE, a frequent topic of rants on Twitter and occasional blogs. Cliff notes: CVE does not promise comprehensive coverage of public disclosures. They openly admit this, and Steve Christey has repeatedly said that CVE may not be the best for statistics, even as far back as 2006. Early in 2013, Christey again publicly stated that CVE “can no longer guarantee full coverage of all public vulnerabilities.” I bring this up to remind everyone, again, that it is important to add such disclaimers about your data set, and ultimately your published research.
Getting back to the statistics, the second thing I wondered, after their data source, was who DB Networks were and why they were doing this analysis (SQLi specifically). I probably shouldn’t be surprised to learn this comes over a year after they announced an appliance that blocks SQL injection attacks. This too should be disclaimed in any article covering their analysis, as it adds perspective on why they are choosing a single vulnerability type, and may also explain why they chose NVD over other data sources (since it fits more neatly into their narrative). Following that, they paid Ponemon to help them conduct a study on the threat of SQL injection. Or wait, was it two of them, the second with a bent towards retail breaches? Here are the results of the first one, it seems, which again shows that they have a pretty specific bias toward SQL injection.
Now that we know they have a vested interest in the results of their analysis coming out a specific way, let’s look at the results. First, the Help Net Security article does not describe their methodology, at all. Reading their self-written, paid-for news announcement (PR Newswire) about the analysis, it becomes very clear this is a gimmick for advertising, not actual vulnerability statistics research. It’s 2015, years after Steve Christey and I have both ranted about such statistics, and they don’t explain their methodology. This makes it apparent that “CVE abstraction bias” is possibly the biggest factor here. I have blogged about using CVE/NVD as a dataset before, because it contains one of the biggest pitfalls in such statistic generation.
Rather than debunk these stats directly, since it has been done many times in the past, I can say that they are basically meaningless at this point. Even without their methodology, I am sure someone can trivially reproduce their results and figure out if they abstracted per CVE, or per actual SQLi mentioned. As a recent example, CVE-2014-7137 is a single entry that actually covers 54 distinct SQL injection vulnerabilities. If you count just the CVE candidate versus the vulnerabilities that may be listed within them, your numbers will vary greatly. That said, I will assume that their results can be reproduced since we know their data source and their bias in desired results.
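To make the abstraction pitfall concrete, here is a toy sketch. Only CVE-2014-7137 and its count of 54 are real (as noted above); the other IDs are hypothetical single-issue entries, included for illustration.

```python
# Toy illustration of "CVE abstraction bias": the same disclosures counted
# per CVE entry versus per distinct vulnerability. CVE-2014-7137 really does
# cover 54 distinct SQL injection flaws; the other IDs are placeholders.
entries = {
    "CVE-2014-7137": 54,   # one ID covering 54 SQLi vulnerabilities
    "CVE-2014-XXXX": 1,    # hypothetical single-issue entry
    "CVE-2014-YYYY": 1,    # hypothetical single-issue entry
}

per_cve = len(entries)            # a naive NVD parse counts 3 "SQLi vulns"
per_vuln = sum(entries.values())  # per-vulnerability abstraction counts 56
print(per_cve, per_vuln)
```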
With that in mind, let’s first look at what the numbers look like when using a database that clearly abstracts those issues, and covers a couple thousand sources more than CVE officially does:
“SQL Injection Vulnerabilities Surge to Highest Levels in Three Years” is the title of DB Networks’ press release, summarized by Help Net Security as “last year produced the most SQL vulnerabilities identified since 2011 and 104% more than were identified in 2013.” In reality (or at least, using a more comprehensive data set), that isn’t the case: the available statistics show that a) 2014 was not higher than 2011, and b) 2011 doesn’t hold a candle to 2010. No big surprise really, given that we actually do vulnerability aggregation while NVD and DB Networks do not. But really, I digress. These ‘statistics’ are nothing more than a thinly veiled excuse to further advertise themselves. Notice in the press release they go from the statistics that help justify their products right into the third paragraph quoting the CEO being “truly honored to be selected as a finalist for SC Magazine’s Best Database Security Solution.” He follows that up with another line calling out their specific product that helps “database security in some of the world’s largest mission critical datacenters.”
Yep, these statistics are very transparent. They are based on a convenient data source, maintained by an agency (NVD) that doesn’t actually aggregate the information and doesn’t have the experience of its data benefactor (CVE). They are advertised with a single goal in mind: selling their product. The fact that they use the word “research” in the context of generating these statistics is a joke.
As always, I encourage companies and individuals to keep publishing vulnerability statistics. But I stress that it should be done responsibly. Disclaim your data source, explain your methodology, be clear if you are curious about the results coming out one way (bias is fine, just disclaim it), and realize that different data sources will produce different results. Dare to use multiple sources and compare the results, even if it doesn’t fully back your desired opinion. Why? Because if you disclaim your data sources and results, the logical and simple conclusion is that you may still be right. We just don’t have the perfect vulnerability disclosure data source yet. Fortunately, some of us are working harder than others to find that unicorn.
CVE, managed by MITRE, a ‘sole-source’ government contractor that gets as much as one million dollars a year (or more) from the government to run the project, is a confusing entity. Researchers who have reached out to CVE for assignment, or for clarification on current assignments, have gone 10 days without an answer (as of 2014-11-15, late night). Yet look at their actual assignments the past week:
X N 924 Nov 10 email@example.com (22K) [CVENEW] New CVE CANs: 2014/11/10 06:00 ; count=17
X N 931 Nov 11 firstname.lastname@example.org (25K) [CVENEW] New CVE CANs: 2014/11/11 17:00 ; count=32
X N 932 Nov 11 email@example.com (18K) [CVENEW] New CVE CANs: 2014/11/11 18:00 ; count=18
X N 938 Nov 12 firstname.lastname@example.org (7514) [CVENEW] New CVE CANs: 2014/11/12 11:00 ; count=5
X N 962 Nov 13 email@example.com (12K) [CVENEW] New CVE CANs: 2014/11/13 10:00 ; count=9
X N 986 Nov 13 firstname.lastname@example.org (7139) [CVENEW] New CVE CANs: 2014/11/13 19:00 ; count=4
X N 995 Nov 14 email@example.com (7191) [CVENEW] New CVE CANs: 2014/11/14 10:00 ; count=3
X N 1015 Nov 14 firstname.lastname@example.org (6076) [CVENEW] New CVE CANs: 2014/11/14 21:00 ; count=3
X N 1035 Nov 15 email@example.com (5859) [CVENEW] New CVE CANs: 2014/11/15 15:00 ; count=2
X N 1037 Nov 15 firstname.lastname@example.org (9615) [CVENEW] New CVE CANs: 2014/11/15 16:00 ; count=7
X N 1043 Nov 15 email@example.com (9034) [CVENEW] New CVE CANs: 2014/11/15 19:00 ; count=4
X N 1045 Nov 15 firstname.lastname@example.org (7539) [CVENEW] New CVE CANs: 2014/11/15 20:00 ; count=3
X N 1046 Nov 15 email@example.com (5885) [CVENEW] New CVE CANs: 2014/11/15 21:00 ; count=2
Impressive! Until you read between the lines. Monday, 17 entries. Tuesday, which is Microsoft (32) / Adobe (18) release day, and those numbers are obvious… as they only cover those two vendors. Then 5 on Wednesday, 13 on Thursday, 6 on Friday… and then 18 on the weekend? Why is a government contractor, with a long history of not working or answering mails on the weekend, doing what appears to be overtime on a Saturday?
Now compare our side: on the 10th we have 32 entries, the 11th 100 entries, the 12th 92 entries, the 13th 56 entries, the 14th 42 entries, and the 15th 11 entries. That is 109 entries this week from CVE, where 50 of them (almost half) were Microsoft and Adobe, versus 337 entries from us over those same days. That doesn’t count our backfill of historical entries, from those ‘old days’ back in earlier 2014 or 2013, which we are constantly doing.
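As a sanity check, here is a throwaway sketch that tallies the counts from the [CVENEW] subject lines above; the per-day figures are transcribed from the listing, not pulled from the mail archive.

```python
from collections import Counter

# (date, count) pairs transcribed from the [CVENEW] subjects listed above
cvenew = [("Nov 10", 17), ("Nov 11", 32), ("Nov 11", 18), ("Nov 12", 5),
          ("Nov 13", 9), ("Nov 13", 4), ("Nov 14", 3), ("Nov 14", 3),
          ("Nov 15", 2), ("Nov 15", 7), ("Nov 15", 4), ("Nov 15", 3),
          ("Nov 15", 2)]

per_day = Counter()
for day, count in cvenew:
    per_day[day] += count

print(dict(per_day))          # Nov 11 is 50 (MS + Adobe), Nov 15 is 18
print(sum(per_day.values()))  # 109 CVE assignments for the week
```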
Tonight, when matching up the Nov 15 CVE entries, we already had 100% of the CVE assignments. Remind me, where is the value of CVE exactly? They are assigning these identifiers in advance, that is obvious. But most of these disclosures are simple and straightforward; the IDs aren’t being used to coordinate among multiple vendors. The researchers or vendors are including the CVE identifier *before* CVE actually publishes them.
What led me to this post is that CVE is actually working on a weekend, which is very odd. Unless you mail Steve directly, you generally don’t hear back from CVE until later in the week. OSVDB / RBS has outstanding mail to both Steve and CVE regarding previous assignments and other things, unanswered for 10 or more days now. The entire purpose of CVE is to provide this ID for coordination and clarity. When they ignore such mail, especially from a ‘trusted’ source, it speaks poorly of them. Given the level of government funding they receive, how are they not keeping up with disclosures throughout the week, instead turning to a Saturday?
And please remember, the Saturday CVE assignments mentioned above won’t appear on CVE’s site for another day, and won’t be in NVD for at least 24 hours. Once NVD gets them, they won’t have a CVSS score or CPE data for a bit after. By a ‘bit’ I mean between a few hours and a few weeks.
This is fail on top of fail. And your security solutions are built on top of this. Yeah, of course this is a losing battle.
When referencing vulnerabilities in your products, you have a habit of only using an internal tracking number that is kept confidential between the reporter (e.g. ICS-CERT, ZDI) and you. For example, from your HotFix page (that requires registration):
WI2815: Directory Traversal Buffer overflow. Provided and/or discovered by: ICS-CERT ticket number ICS-VU-579709 created by Anthony …
The ICS-CERT ticket number is assigned as an internal tracking ID while the relevant parties figure out how to resolve the issue. Ultimately, that ticket number is not published by ICS-CERT. I have already sent a mail to them suggesting they include it in advisories moving forward, to help third parties match up vulnerabilities to fixes to initial reports. Instead of using that, you should use the public ICS-CERT advisory ID. The details you provide there are not specific enough to know which issue this corresponds to.
In another example:
WI2146: Improved the Remote Agent utility (CEServer.exe) to implement authentication between the development application and the target system, to ensure secure downloading, running, and stopping of projects. Also addressed problems with buffer overrun when downloading large files. Credits: ZDI reports ZDI-CAN-1181 and ZDI-CAN-1183 created by Luigi Auriemma
In this case, these likely correspond to OSVDB 77178 and 77179, but it would be nice to know that for sure. Further, we’d like to associate those internal tracking numbers to the entries but vendors do not reliably put them in order, so we don’t know if ZDI-CAN-1181 corresponds to the first or second.
WI1944: ISSymbol Virtual Machine buffer overflow Provided and/or discovered by: ZDI report ZDI-CAN-1341 and ZDI-CAN-1342
In this case, you give two ZDI tracking identifiers, but only mention a single vulnerability. ZDI has a history of abstracting issues very well. The presence of two identifiers, to us, means there are two distinct vulnerabilities.
This is one of the primary reasons CVE exists, and why ZDI, ICS-CERT, and most vendors now use it. In most cases, these larger reporting bodies will have a CVE number to share with you during the process, or if not, will have one at the time of disclosure.
Like your customers, we appreciate clear information regarding vulnerabilities. Many large organizations will use a central clearing house like ours for vulnerability alerting, rather than trying to monitor hundreds of vendor pages. Helping us understand your security patches in turn helps your customers.
Finally, adding a date the patch was made available will help to clarify these issues and give another piece of information that is helpful to organizations.
Thank you for your consideration in improving your process!
The last few days have seen several vulnerabilities disclosed that include serious gaps in logic with regard to exploitation vectors. What is being called “remote” is not. What is being called “critical” is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the “need for a user to install a malicious app” to then allow a vulnerability to be exploited. Think about it.
We start with an H-Online article titled “Critical vulnerability in Blackberry 10 OS”. First word: critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article’s summary of what is required to exploit this vulnerability:
As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.
The only thing missing from this exploit chain are the proverbial chicken sacrifices at midnight on a full blue moon. Want to get the same result much easier? Find your victim and say “Wow, that is a slick new phone, can I see it?” Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.
#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.
Despite all these references, users are left with either incorrect or very misleading information. First, CVE says “an attacker” instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a “context-dependent” attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality ‘partially’. JVN goes so far to say it can be exploited “over the Internet using packets” with “anonymous or no authentication”.
The reality of these vulnerabilities is that they are not remote. Not in any form under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes ‘remote’. This is absurd. Just because that may be the most probable exploit vector and chaining, does not change the fact that getting a user to install a malicious application is a separate distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal ‘partial’ information from the one vulnerable application.
Let me put it to you in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent, if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote by saying “remote, authentication required” and assuming it can be done via RDP or SSH. To do so, though, devalues the entire purpose of vulnerability classification.
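To put a number on how much this mislabeling matters, here is a small sketch of the CVSSv2 base-score arithmetic, scoring the browser issue above (AC:M/Au:N/C:P/I:N/A:N) once with the network access vector NVD chose, and once as the local issue it really is.

```python
# CVSSv2 base-score formula, with the standard metric weights. Used here to
# show how the access-vector choice alone moves the score for the same flaw.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # access vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # access complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # conf./integrity/availability

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(cvss2_base("N", "M", "N", "P", "N", "N"))  # 4.3, as NVD scored it
print(cvss2_base("L", "M", "N", "P", "N", "N"))  # 1.9, scored as local
```

Same flaw, same impact; changing only the access-vector claim more than doubles the score.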
Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a “crafted application” is required (see IDs below). The only difference is if the local user writes the application (Windows), or gets the user to install the application (Mobile). Either way, that is a local issue.
Readers may recall that I blogged about a similar topic just over a month ago, in an article titled Advisories != Vulnerabilities, and How It Affects Statistics. In this installment, instead of “advisories”, we have “CVEs” and the inherent problems when using CVE identifiers in the place of “vulnerabilities”. Doing so is technically inaccurate, and it negatively influences statistics, ultimately leading to bad conclusions.
NSS Labs just released an extensive report titled “Vulnerability Threat Trends; A Decade in Review, Transition on the Way”, by Stefan Frei. While the report is interesting, and the fundamental methodology is sound, Frei uses a dataset that is not designed for true vulnerability statistics. Additionally, I believe that some factors Frei attributes to trends are incorrect. I offer this blog as open feedback, to bring additional perspective to a realm of vulnerability stats that is a long way from maturity.
Vulnerabilities versus CVE
In the NSS Labs paper, they define a vulnerability as “a weakness in software that enables an attacker to compromise the integrity, availability, or confidentiality of the software or the data that it processes.” This is as good a definition as any. The key point here is a weakness, singular. What Frei fails to point out is that the CVE dictionary is not a vulnerability database in the same sense as many others. It is a specialty database designed primarily to assign a unique identifier to a vulnerability, or a group of vulnerabilities, to coordinate tracking and discussion. While CVE says “CVE Identifiers are unique, common identifiers for publicly known information security vulnerabilities”, it is more important to note the way CVE abstracts, which is covered in great detail. From the CVE page on abstraction:
CVE Abstraction Content Decisions (CDs) provide guidelines about when to combine multiple reports, bugs, and/or attack vectors into a single CVE name (“MERGE”), and when to create separate CVE names (“SPLIT”).
This clearly denotes that a single CVE may represent multiple vulnerabilities. With that in mind, every statistic generated by NSS Labs for this report is inaccurate, and their numbers are not reproducible using any other vulnerability dataset (unless it too is based only on CVE data and does not abstract differently, e.g. NVD). This distinction puts the report’s statements and conclusions in a different light:
As of January 2013 the NVD listed 53,489 vulnerabilities ..
In the last ten years on average 4,660 vulnerabilities were disclosed per year ..
.. with an all-time high of 6,462 vulnerabilities counted in 2006 ..
The abstraction distinction means that these numbers aren’t just technically inaccurate (i.e. terminology), they are factually inaccurate (i.e. actual stats when abstracting on a per-vulnerability basis). In each case where Frei uses the term “vulnerability”, he really means “CVE”. When you consider that a single CVE may cover as many as 66 or more distinct vulnerabilities, it invalidates any statistic generated from this dataset as he did. For example:
However, in 2012 alone the number of vulnerabilities increased again to a considerable 5,225 (80% of the all-time high), which is 12% above the ten-year average. This is the largest increase observed in the past six years and ends the trend of moderate declines since 2006.
Based on my explanation, what does 5,225 really mean? If we agree, for the sake of argument, that CVE averages two distinct vulnerabilities per CVE assignment, that is now over 10,000 vulnerabilities (5,225 × 2 = 10,450). How does that in turn change any observations on trending?
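A trivial sketch of that thought experiment, applying an assumed (not measured) vulnerabilities-per-CVE ratio to the report’s own figures:

```python
# Applies an illustrative vulnerabilities-per-CVE ratio to the CVE counts
# quoted from the report. The 2.0 ratio is assumed for the sake of argument.
cve_counts = {"ten-year average": 4660, "2006 high": 6462, "2012": 5225}
ratio = 2.0

for label, n in cve_counts.items():
    print(label, int(n * ratio))  # e.g. 2012: 10450 instead of 5225
```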
The report’s key findings offer 7 high-level conclusions based on the CVE data. To put all of the above in more perspective, I will examine a few of them and use an alternate dataset, OSVDB, that abstracts entries on a per-vulnerability basis. With those numbers, we can see how the findings stand. NSS Labs report text is quoted below.
The five year long trend in decreasing vulnerability disclosures ended abruptly in 2012 with a +12% increase
Based on OSVDB data, this is incorrect. Both 2009 (7,879) -> 2010 (8,835) as well as 2011 (7,565) -> 2012 (8,919) showed an upward trend.
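The year-over-year changes behind that statement, computed from the OSVDB totals just cited:

```python
# Year-over-year change computed from the OSVDB entry counts cited above.
osvdb = {2009: 7879, 2010: 8835, 2011: 7565, 2012: 8919}

for year in (2010, 2011, 2012):
    prev = osvdb[year - 1]
    print(year, "%+.1f%%" % (100.0 * (osvdb[year] - prev) / prev))
# 2010: +12.1%, 2011: -14.4%, 2012: +17.9% -- two upward jumps, not one
```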
More than 90 percent of the vulnerabilities disclosed are moderately or highly critical – and therefore relevant
If we assume “moderately” means “Medium” criticality, later defined in the report as 4.0 – 6.9, then OSVDB shows 57,373 entries that are CVSSv2 4.0 – 10.0, out of 82,123 total. That is roughly 70%, so the reported 90% is considerably higher than our data shows. Note: we do not have complete CVSSv2 data for 100% of our entries, but we do have it for all entries affiliated with the ones Frei examined, and more. If “moderately critical” and “highly critical” refer to different ranges, then they should be more clearly defined.
It is also important to note that this finding is a red herring, due to the way CVSS scoring works. A remote path disclosure in a web application scores a 5.0 base score (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N). This skews the scoring data considerably higher than many in the industry would agree with, as 5.0 is the same score you get for many XSS vulnerabilities that can have more serious impact.
9 percent of vulnerabilities disclosed in 2012 are extremely critical (with CVSS score>9.9) paired with low attack/exploitation complexity
This is another red herring, because any CVSS 10.0 score means that “low complexity” was factored in. The wording in the report implies that a > 9.9 score could be paired with higher complexity, which isn’t possible. Further, CVSS is scored for the worst case scenario when details are not available (e.g. CVE-2012-5895). Given the number of “unspecified” issues, this may seriously skew the number of CVSSv2 10.0 scores.
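A self-contained check of that claim, using the same CVSSv2 base arithmetic sketched earlier: holding every other metric at its worst case, the access-complexity weight alone caps the score.

```python
# With worst-case impact (C:C/I:C/A:C), AV:N and Au:N, the CVSSv2 base score
# only reaches above 9.9 when access complexity is Low (AC:L).
def base(ac_weight):
    impact = 10.41 * (1 - (1 - 0.660) ** 3)        # C:C/I:C/A:C
    exploitability = 20 * 1.0 * ac_weight * 0.704  # AV:N, Au:N
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * 1.176, 1)

print(base(0.71))  # AC:L -> 10.0
print(base(0.61))  # AC:M -> 9.3
print(base(0.35))  # AC:H -> 7.6
```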
Finally, there was one other element to this report that was used in the overview, and later in the document, that is used to attribute a shift in disclosure trends. From the overview:
The parallel and massive drop of vulnerability disclosures by the two long established purchase programs iDefense VCP and TippingPoint ZDI indicate a transition in the way vulnerability and exploit information is handled in the industry.
I believe this is a case of “correlation does not mean causation”. While these are the two most recognized third-party bug bounty programs around, there are many variables at play here. In the bigger picture, shifts in these programs do not necessarily mean anything. Some of the factors that may have influenced disclosure numbers for those two programs include:
- There are more bug bounty programs available. Some may offer better price or incentive for disclosing through them, stealing business from iDefense/ZDI.
- Both companies have enjoyed their share of internal politics that affected at least one program. In 2012, several people involved in the ZDI program left the company to form their own startup. It has been theorized that since their departure, ZDI has not built the team back up, and that disclosures were affected as a result.
- ZDI had a small bout of external politics, in which one of their most prevalent bounty collectors (Luigi Auriemma) had a serious disagreement about ZDI’s handling of a vulnerability, as relates to Portnoy and Exodus. Auriemma’s shift to disclose via his own company would dramatically affect ZDI disclosure totals alone.
- Both of these companies have a moving list of software that they offer a bounty on. As it changes, it may result in spikes of disclosures via their programs.
Regardless, iDefense and ZDI represent a small percentage of overall disclosures, so it is curious that Frei opted to focus on them so prominently as a reason for changing vulnerability trends without considering other influencing factors. Even during a good year, 2011 for example, iDefense (42) and ZDI (297) together accounted for 339 out of 7,565 vulnerabilities, only ~4.5% of the overall disclosures. There are many other trends that could just as easily explain relatively small shifts in disclosure totals. Making statements about trends in vulnerability disclosure, and how they affect statistics, isn’t something that should be done by casual observers. They simply miss a lot of the low-level details you glean in day-to-day vulnerability handling and cataloging.
To be clear, I am not against using CVE/NVD data to generate statistics. However, when doing so, it is important that the dataset be explained and qualified before going into analysis. The perception and definition of what “a vulnerability” is changes based on the person or VDB. In vulnerability statistics, not all vulnerabilities are created equal.
I’ve written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work, discloses potential bias, caveats, or other issues that should be included by a responsible security professional. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid, if properly disclaimed!
I noticed a Tweet from @SCMagazine about an article titled “The ghosts of Microsoft: Patch, present and future”. The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft’s vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as incorrect.
Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.
It is difficult to tell if Horan means “vulnerabilities” or “patches”, as he appears to use the same word to mean both, when they are quite different. The use of ’83’ makes it very clear, Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction as a single advisory can contain multiple vulnerabilities.
A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. A search for entries with references containing “MS12” (used in their advisory designation) yields 160 results. This made it easy to determine that the number Horan used was inaccurate, or at least his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This includes ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012, as it demonstrates the issues. Hopefully this will give him a few numbers though!
If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVEs versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, we can see heavy bias. We will use the following breakdown of CVSSv2 scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.
| Dataset | Critical | Important | Moderate | Low |
| 2012 Advisories (83) | 35 (42.2%) | 46 (55.4%) | 2 (2.4%) | — |
| 2012 CVE (160) | 100 (62.5%) | 18 (11.3%) | 39 (24.4%) | 3 (1.8%) |
| 2012 Total (176) | 101 (57.4%) | 19 (10.8%) | 41 (23.3%) | 15 (8.5%) |
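A short sketch reproducing the percentages in the table above from the raw counts, using the severity bands defined earlier; the counts themselves are the ones shown in the table.

```python
# Reproduce the severity percentage breakdowns shown in the table above.
datasets = {
    "2012 Advisories": {"critical": 35, "important": 46, "moderate": 2, "low": 0},
    "2012 CVE":        {"critical": 100, "important": 18, "moderate": 39, "low": 3},
    "2012 Total":      {"critical": 101, "important": 19, "moderate": 41, "low": 15},
}

for name, counts in datasets.items():
    total = sum(counts.values())
    parts = ["%s %d (%.1f%%)" % (sev, n, 100.0 * n / total)
             for sev, n in counts.items()]
    print(name, "total=%d:" % total, ", ".join(parts))
```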
It isn’t easy to see the big shifts in totals in a chart, but it is important to establish the numbers involved when displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:
The visual jump in critical vulnerabilities from the first chart to the second two is distinct. In addition, notice the jump in low severity vulnerabilities from the first two charts to the third; they don’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented differently, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these numbers.
In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.
- MS12-080 only contains 1 CVE if you look at the immediate identifiers, but contains 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
- MS12-058 has no immediate CVEs at all! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
- Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
- OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
- Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores with serious errors is low, but they can still introduce a level of error into statistics.
- One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed, and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date will give two different results.
These are just a few ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SCMagazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.