Android versus iOS Security – Not Again…

About two weeks ago, another round of vulnerability stats got passed around. Like others before it, it claims to use CVE to compare Apple iOS versus Android in an attempt to establish which is more secure based on "vulnerability counts". The statistics put forth are basically meaningless, because, like most people using a VDB to generate stats, the authors don't fully understand their data source. This is one type of bias that enters the picture when generating statistics, and one of many points Steve Christey (MITRE/CVE) and I will be making next week at BlackHat (Wednesday afternoon).

As with other vulnerability statistics, I will debunk the latest by showing why the conclusions are not based on a solid understanding of vulnerabilities or vulnerability data sources. The post was published on The Verge, written by 'Mechanicix'. The results match last year's Symantec Internet Security Threat Report (as mentioned in the comments), as well as the results published this year by Sourcefire in their paper titled "25 Years of Security Vulns". In all three cases, the same data set (CVE) and the same rudimentary counting are used to reach the results.

The gist of the finding is that Apple iOS is considerably less secure than Android, as iOS had 238 reported vulnerabilities versus the 27 reported in Android, based on CVE and illustrated through CVEdetails.com.

Total iOS Vulnerabilities 2007-2013: 238
Total Android Vulnerabilities 2009-2013: 27

Keeping in mind those numbers, if you look at the CVE entries that are included, a number of problems are obvious:

  1. The comparison timeframes differ by two years. There are at least three vulnerabilities in the Android SDK reported before 2009, two of which have CVEs (CVE-2008-0985 and CVE-2008-0986).
  2. These totals are based on CVE identifiers, which do not necessarily reflect a one-to-one mapping to vulnerabilities, as CVE documents. You absolutely cannot count CVE identifiers as a substitute for vulnerabilities; they are not the same.
  3. The vulnerability totals are incorrect due to using CVE, a data source that has serious gaps in coverage. For example, OSVDB has 71 documented vulnerabilities for Android, and we do not make any claims that our coverage is complete.
  4. The iOS results include vulnerabilities in WebKit, the framework iOS Safari uses. This is problematic for several reasons.
    1. First, that means Mechanicix is now comparing the Android OS to the iOS operating system and applications.
    2. Second, WebKit vulnerabilities account for 109 of the CVE results, almost half of the total reported.
    3. Third, if they did count WebKit intentionally then the numbers are way off as there were around 700 WebKit vulnerabilities reported in that time frame.
    4. Fourth, the default browser in Android uses WebKit, yet they weren’t counted against that platform.
  5. The results include 16 vulnerabilities in Safari, the default browser, itself (or in WebKit and simply not diagnosed as such).
  6. At least 4 of the 238 are vulnerabilities in Google Chrome (as opposed to WebKit) with no mention of iOS in the CVE.
  7. A wide variety of iOS applications and components are included in the list, including Office Viewer, iMessage, Mail, the Broadcom BCM4325 and BCM4329 Wi-Fi chips, Calendar, FreeType, libxslt, and more.

When you factor in all of the above, Android likely comes out on top in the number of vulnerabilities when comparing just the operating systems. Once again, vulnerability statistics seem simple on the surface. When you consider the points above, and further consider that there are likely more factors that influence vulnerability counts, we see that it is anything but simple.
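
To make the WebKit point concrete, below is a minimal Python sketch of the kind of sanity check anyone generating these numbers should run before publishing. The file name and column names are hypothetical placeholders for whatever CVE export is being used (e.g. a list saved from cvedetails.com); it is an illustration of the counting problem, not a polished tool.

import csv

# Split an "iOS" CVE result set into WebKit-related entries and everything else.
# Assumes a hypothetical CSV export with "cve_id" and "summary" columns.
webkit, other = [], []
with open("ios_cves.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        if "webkit" in row["summary"].lower():
            webkit.append(row["cve_id"])
        else:
            other.append(row["cve_id"])

print("WebKit-related CVEs:", len(webkit))
print("Remaining CVEs:     ", len(other))

Even this crude split shows how much browser-engine issues inflate an "iOS" total, and it still only counts identifiers, not vulnerabilities.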

The curiously creeping value of the iOS vulnerability…

The market for vulnerabilities has grown rapidly over the last five years. While the market is certainly not new, going back well over ten years, more organizations are interested in acquiring 0-day / private vulnerabilities for a variety of needs. These vulnerabilities run the gamut of applications and impacts, and range in price from tens of dollars to $100,000 or more. While such transactions are sometimes public, high-end vulnerabilities that sell for large sums generally are not a matter of public record. That makes it difficult to track actual sale prices to gauge the value of such vulnerabilities.

In the vulnerability marketplace, the seller has the power. If they hold a 0-day vulnerability that is in demand, they can set their own price. For the few vulnerability brokers out there, the perception of vulnerability value is critical to their business. In March 2012, a Forbes piece by Andy Greenberg covered this topic and described an iOS vulnerability that allegedly sold for $250,000.

Even with the $250,000 payout [the Grugq] elicited for that deal, he wonders if he could have gotten more. “I think I lowballed it,” he wrote to me at one point in the dealmaking process. “The client was too happy.”

As expected, there is no validation of the claim of the sale. The price tag comes from the vulnerability broker, who has an interest in making such prices public, even if they are exaggerated. Jump to July 2013, and a New York Times article by Nicole Perlroth and David Sanger makes a vague reference to an iOS vulnerability that sold for $500,000.

Apple still has no such program, but its vulnerabilities are some of the most coveted. In one case, a zero-day exploit in Apple’s iOS operating system sold for $500,000, according to two people briefed on the sale.

Given the vague details, it is fairly safe to assume it references the iOS vulnerability sale from a year earlier. The NY Times article sources many people regarding vulnerability value, including thegrugq on the first page. This means the "two people briefed on the sale" were likely briefed by thegrugq as well. Ultimately, both articles and both figures trace back to the same person, who has a decided interest in publishing high numbers. Without any detail, the journalists could have contacted one or both sources via email, meaning those sources could just as well have been thegrugq himself.

I find it interesting that in the span of 1 year and 4 months, the price of that iOS vulnerability jumped from $250,000 to $500,000. More to the point, the original $250,000 price was well beyond the going rate for vulnerabilities at that time, on any market. Some of us were speculating that a (truly) remote vulnerability in a default Windows installation would go for around $100,000, maybe more. Even if you double our suspected price, it wouldn't surprise me that a nation-state with a budget would purchase at that amount. But an iOS vulnerability, even remote without user interaction, a year ago? That doesn't make sense given the user base and distribution.

Even more interesting, consider that 4 days after the NYTimes article, another outlet was reporting the original $250,000 price.

As I mentioned before, none of this is close to being verified. The only source on record is someone who directly benefits from the perception that the price of that vulnerability is exceedingly high. Creating the marketplace value of vulnerabilities through mainstream media is brilliant on his part, if what I suspect is true. Of course, it also speaks to the state of journalism that seemingly no one tried to verify this beyond word-of-mouth.

Local File Inclusion vs Arbitrary File Access

Notes for this blog have been lingering for over three years now. In the daily grind to aggregate vulnerabilities, the time to write about them gets put on the back burner frequently. Rest assured, this is not a new issue by any means.

Back in the day, we had traversal attacks that allowed an attacker to 'traverse' outside an intended directory to access files or directories that were not meant to be exposed. The most basic example, known to most, is a web application traversal attack such as:

http://[target]/myapp/jericho.php?file=../../../../../../etc/passwd

Making this request would direct the script to traverse outside the web server document root (DOCROOT) to access the system password file (/etc/passwd). For years, these attacks were simply known as "directory traversal" attacks. For a limited traversal, the CVSSv2 score would be 5.0 and the vector would look like (AV:N/AC:L/Au:N/C:P/I:N/A:N). If the application runs with full privileges and can access any file on the system, it would score a 7.8 and look like (AV:N/AC:L/Au:N/C:C/I:N/A:N). Note that such an attack only allows an attacker to read the contents of the file, not write to it or execute it as a script. To help distinguish this, such attacks are usually qualified as "traversal arbitrary file access".
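
For readers who want to check those numbers, here is a minimal Python sketch of the CVSSv2 base equation, limited to the base metrics used in the vectors above. The weights come from the CVSSv2 specification; this is an illustration, not a complete scoring library.

# Minimal CVSSv2 base score calculator (base metrics only), per the CVSSv2 spec.
AV = {"N": 1.0, "A": 0.646, "L": 0.395}    # Access Vector
AC = {"L": 0.71, "M": 0.61, "H": 0.35}     # Access Complexity
AU = {"N": 0.704, "S": 0.56, "M": 0.45}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Confidentiality/Integrity/Availability impact

def cvss2_base(vector):
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 1.176 if impact else 0.0
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

print(cvss2_base("AV:N/AC:L/Au:N/C:P/I:N/A:N"))  # 5.0 - limited traversal, partial file read
print(cvss2_base("AV:N/AC:L/Au:N/C:C/I:N/A:N"))  # 7.8 - full read access to any file on disk

Running it reproduces the 5.0 and 7.8 scores quoted above.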

Local File Inclusion (LFI) attacks go back to around 2003 and often exhibit the same trait as directory traversal attacks, as outlined above. Like the traversal, the attack typically involves a relative (e.g. ../../) or absolute path (e.g. &file=/path/to/file) to call a specific file on the system. The difference is in how the application handles the request. Instead of displaying the contents of the file like above, it will include the file as if it is an executable script. This means that arbitrary code, but limited to what is already on the file system, will be executed with the same privileges as the web application and/or web server. Using a combination of real-world common issues, this can be leveraged into full arbitrary remote code execution. For example, if you can access an incoming directory via FTP to write your own .php file, the local file inclusion vulnerability can be used to call that custom code and execute it.

Visually, these two vulnerabilities may look identical:

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

Despite appearances, these are two very different attacks. If the first is a traversal arbitrary file access issue, the contents of shell.php will be displayed. If the second is a traversal local file inclusion, the contents of shell.php will be processed as PHP code and executed.
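
To make the distinction concrete, here is an intentionally simplified (and intentionally unsafe) Python sketch of the two behaviors; in PHP terms, the same split is roughly readfile() versus include(). The function names are hypothetical and exist only for illustration.

# Illustration only - both functions trust attacker-controlled input on purpose.

def arbitrary_file_access(path):
    # Traversal / arbitrary file access: the file is read and displayed.
    # The attacker learns the contents of shell.php (or /etc/passwd), nothing more.
    with open(path) as fh:
        print(fh.read())

def local_file_inclusion(path):
    # Local file inclusion: the file is pulled in and executed as code, the
    # equivalent of PHP's include(). If the attacker can plant shell.php anywhere
    # on disk (FTP incoming directory, upload folder, log file), this runs it.
    with open(path) as fh:
        exec(fh.read())

# Same input, very different outcome:
#   arbitrary_file_access("../../../../tmp/shell.php")  -> displays the file
#   local_file_inclusion("../../../../tmp/shell.php")   -> executes the file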

Even with this simple concept, more and more researchers are unable to make this distinction. Arbitrary file access and local file inclusion are not only getting blended together, but traversals that allow for file manipulation (e.g. append, delete, overwrite) or even file enumeration (e.g. determine existence of file only) are also getting lumped in.

Wrong:

Specto Local File Inclusion by H4ckCity Security Team gives a PoC of:

http://SERVER/index.php?page=/etc/passwd

This is clearly not a local file inclusion, as the file being requested is a standard text file containing password information. What they show instead is absolute path file disclosure.

OneFileCMS v.1.1.5 Local File Inclusion Vulnerability by mr.pr0n gives a PoC of:

http://TARGET/onefilecms/onefilecms.php?f=../../../../etc/passwd

Again, this calls a text file, this time via a standard directory traversal. If this is really an LFI, then the PoC does not show it.

Pollen CMS 0.6 File Disclosure by MizoZ gives a PoC of:

http://SERVER/[path]/core/lib/readimage.php?image=[LFI]

First, this is a bit suspicious, as the parameter 'image' implies it will handle images such as JPG or PNG. Second, the [LFI] string doesn't show whether it is an absolute path or a traversal. How could the researchers have found it without knowing this? Third, and most important, their disclaimer:

The script only verifies the existence of the given file.

Sorry, not even close to an LFI.

Mobile Devices and Exploit Vector Absurdity

The last few days have seen several vulnerabilities disclosed with serious gaps in logic regarding exploitation vectors. What is being called "remote" is not. What is being called "critical" is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the "need for a user to install a malicious app" to then allow a vulnerability to be exploited. Think about it.

Number One

We start with an H-Online article titled "Critical vulnerability in Blackberry 10 OS". First word: critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially means remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article's summary of what is required to exploit this vulnerability:

As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.

The only thing missing from this exploit chain is the proverbial chicken sacrifice at midnight under a full blue moon. Want the same result with much less effort? Find your victim and say "Wow, that is a slick new phone, can I see it?" Nine times out of ten, they unlock the phone and hand it to you. Less work, same result.

Number Two

There were a few disclosures out of Japan’s JVN system, run by JPCERT. Two examples, both the same fundamental vulnerability, are summarized below:

#1 – CVE-2013-3643 (NVD Entry) – JVN 99813183 / JVNDB-2013-000056
#2 – CVE-2013-3642 (NVD Entry) – JVN 79301570 / JVNDB-2013-000055

#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.

Despite all these references, users are left with either incorrect or very misleading information. First, CVE says "an attacker" instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a "context-dependent" attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality 'partially'. JVN goes so far as to say it can be exploited "over the Internet using packets" with "anonymous or no authentication".

The Reality

The reality of these vulnerabilities is that they are not remote. Not in any form, under any circumstances, that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes 'remote'. This is absurd. Just because that may be the most probable exploit vector and chain does not change the fact that getting a user to install a malicious application is a separate, distinct issue that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal 'partial' information from the one vulnerable application.

Let me put it in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent, if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote by saying "remote, authentication required" and assuming it can be done via RDP or SSH. To do so, though, devalues the entire purpose of vulnerability classification.

Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a "crafted application" is required (see the IDs below). The only difference is whether the local user writes the application (Windows) or gets the user to install the application (mobile). Either way, that is a local issue.

CVE-2013-1334, CVE-2012-1848, CVE-2011-1282, CVE-2010-3942, CVE-2009-1123, CVE-2008-2252

Security, Ethics, and University

In the U.S., you are expected to know and live by certain ethical standards related to school. You are taught early on, for example, that plagiarism is bad. You are taught that school experiments should be done in a safe manner that does not harm people or animals. Beyond that, most colleges and universities maintain a Code of Conduct or a Code of Ethics that applies to both students and faculty. In the security industry, integrity is critical. Part of having integrity is behaving ethically in everything you do. This is important because if a researcher or consultant is questionable or unethical in one part of their life, there is no guarantee they will behave ethically when performing services for a client.

In the last week, we have seen two incidents that call into question whether university students understand this at all. The first was a PhD student from a university in the U.S. who was not pleased that we wouldn't share our entire database with him. While we try our best to support academic research, we do not feel any academic project requires our entire data set. Further, many of the research projects he and his colleagues are working on are funded by the U.S. government, which may have contract language requiring that all data be handed over to them, including ours. Instead of accepting our decision, he said he could just scrape our site and take all of our data anyway. I reminded him that doing so not only violates our license, but also violates his university's code of conduct and jeopardizes any government funding.

The second instance is outlined in more detail below, since a group of three students posted multiple advisories yesterday that call into question their sense of ethics. Note that "responsible" disclosure is a term that was strongly pushed by Scott Culp and Microsoft. His article on the topic appears to have since been removed. The term "responsible" disclosure is biased from the start, implying that anyone who doesn't play by their rules is "irresponsible". Instead, the better term "coordinated disclosure" has been used since. Of course, the time frames involved in coordinated disclosure are still heavily debated and likely will never be agreed on. The time given to a vendor to patch a flaw cannot be a fixed length. A small content management system with an XSS vulnerability can often be patched in a day or a week, whereas an overflow in an operating system library may take months due to compatibility and regression testing. If the vulnerability is in a device that is difficult (or basically impossible) to upgrade, such as SCADA or non-connected devices (e.g. a pacemaker), then extra caution or thought should be given before disclosing it. While no fixed time can be agreed on, most people in the industry know when a researcher did not give a vendor enough time, or when a vendor seems to be taking too long. It isn't science; it is a combination of gut and personal experience.

Yesterday's disclosure of interest is by three students from the European University of Madrid who analyzed IP video cameras as part of the final project for their "Security and Information Technology Master". From their post:

In total, we analyzed 9 different camera brands and we have found 14 vulnerabilities.

**Note that all the analysis we have done has been from cameras found through Google dorks and Shodan, so we have not needed to purchase any of them for our tests. Everything we needed was online.

First, the obvious. Rather than purchasing their own hardware, they used Google and Shodan to find IP cameras deployed by consumers and businesses. These devices did not belong to them, they did not have permission to test them, and they ran the risk of disabling them with their testing. If one of the cameras monitored security for a business and became disabled, it posed an additional risk to the company by creating a gap in their physical security.

Second, given these devices are deployed all over the world, and are traditionally difficult or annoying to upgrade, you might expect the researchers to give the vendors adequate time to verify the vulnerabilities and create a fix. How much time did the vendors get?

Airlive 6 days
Axis 16 days
Brickcom 11 days
Grandstream 11 days for 1 vuln, 0 days for 2 vulns
Samsung 0 days
Sony 17 days
TP-LINK 11 days

Shortly after they posted their advisory, others on the Full Disclosure mail list challenged them as well. For the vendors who received 16 and 17 days, many researchers would consider over two weeks to be adequate. However, for the two vendors that got less than 24 hours' warning before disclosure, that is not considered coordinated by anyone.

Every researcher can handle disclosure how they see fit. Some have not considered the implications of uncoordinated disclosure, often in a hurry to get their advisory out for name recognition or the thrill. Others who have been doing this a long time find themselves jaded after dealing with one too many vendors who were uncooperative, stalled for more than 1,000 days, or threatened a lawsuit. In this case, they are students at a university and likely not veterans of the industry. Despite their own beliefs, one has to wonder if they violated a code of conduct and what their professor will say.

Our Latest Legal Threat

UPDATE: Shortly after the initial draft of this blog was written (but days before it was published), David mailed again, shortly after my reply, to apologize and clear up that no legal threat was intended. Note that his reply was not sent to the same addresses he originally mailed, or the ones that were added in our reply, so it was not immediately seen. He went on to say that he "fired off an email quickly on my own in frustration without talking to anyone before hand or letting anyone else preview it". As such, we have edited this post to mostly redact the company name and to fully redact David's last name. It is not our intent to punish anyone, and we understand and appreciate that such actions are often misunderstood and not intended. We now hope that this blog post can serve as a lesson to everyone, ourselves included, about how emails can be perceived, by both vendors and vulnerability databases.


As most people who follow the OSVDB project know, we strive for the most complete and accurate information about vulnerabilities. We take it very seriously, almost to a fault. We actively seek out information from the community and routinely contact vendors and researchers directly to confirm we have a clear understanding of the information published. When we are provided more clarity, we update our entries without hesitation. However, when we receive an email from a vendor with a "legal issue" in the subject that tells us to change an entry without new evidence, it concerns us, as it goes against the core of the project: providing accurate, detailed, current, and unbiased technical security information.

In keeping with our mission to help educate both vendors and researchers on how best to handle the vulnerability disclosure process, we believe it is in the interest of the community to publish details if a software vendor uses legal action, or the implied threat of legal action, to silence vulnerability information. Typically, when vendors contact us they want an entry removed completely, but that was not the case here. Rather than trying to work with us to ensure the entry is accurate, a large medical vendor sent an email that "suggested" we change published information so that it would no longer be factual.

David, who sent the mail, said he would follow up with us the next day, but did not. As we told him in our Friday reply, we would write a blog post about the incident on Monday to ensure that everyone was made aware of the situation. Below are the two emails exchanged, edited only for formatting. No content has been removed or altered.

We respect vendor concerns about entries, and will flag an entry "Vendor Disputed" immediately when we are contacted. We will then examine their concerns and make changes as appropriate. In this case, the vendor has verified the vulnerability itself, but disputes the access vector. This may not seem like a big deal, but we take the "accurate information" guiding principle very seriously. Our entry currently uses the public CVSSv2 score from NVD, 4.3. The associated CERT/VU scoring has it at 7.4. We believe the score should really be 10.0, because the vulnerability is remotely accessible default hard-coded credentials that allow full access to the database. Changing the access vector from remote to local, as the vendor requested, could result in a score as low as 1.9. Remember, while CVSSv2 has its faults, the base score is, per the specification, "constant with time and across user environments". That means that third-party protection mechanisms like firewalls, routers, or other screening devices are not factored into scoring.
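
To show how much hinges on that one metric, here is a short sketch using the CVSSv2 specification weights. The vectors are our reading of the issue and one plausible local re-scoring, not values published by the vendor.

# CVSSv2 base scores for the two interpretations discussed above (spec weights).
def base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * (1.176 if impact else 0.0), 1)

# Remote, no authentication, complete C/I/A loss via hard-coded database credentials
print(base(av=1.0, ac=0.71, au=0.704, c=0.66, i=0.66, a=0.66))   # 10.0
# The same flaw recast as local, medium complexity, partial confidentiality only
print(base(av=0.395, ac=0.61, au=0.704, c=0.275, i=0.0, a=0.0))  # 1.9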

If an entry contains technically inaccurate information, we are happy to update it immediately, provided there is sufficient evidence. This has been our policy for almost 10 years now, and it will not change. Threatening legal action over something so trivial, without trying to resolve it amicably, seems counterproductive.

From: David
To: OSVDB moderators
Date: Fri, 17 May 2013 12:31:51 -0400
Subject: [OSVDB Mods] Legal issue and web site problem

1) When clicking on comment the web site returns a 500 error. As a result comments are not allowed on VBD items and owners of the software are not able to dispute misrepresentations in the posts.

http://osvdb.org/92817

2) Contains a fundamental misrepresentation of the original CERT posting which we will dispute with any and all means necessary. Please correct the post to indicate that the issue is not remotely exploitable which is clearly evident in the CERT post description, the remediation steps and is evident in the CVSS score itself. Ex: http://www.kb.cert.org/vuls/id/948155 “…the attacker would need network access to the database in order to obtain sensitive patient information.”

Please correct it immediately and ensure any other entities that receive a feed from your site also have corrected this misrepresentation. I will make our security response team aware of this posting and we will follow up with you tomorrow to ensure its corrected.

Thanks,
David

[..]

Please consider the environment before printing this email.

E-mail messages may contain viruses, worms, or other malicious code. By reading the message and opening any attachments, the recipient accepts full responsibility for taking protective action against such code. Henry Schein is not liable for any loss or damage arising from this message.

The information in this email is confidential and may be legally privileged. It is intended solely for the addressee(s). Access to this e-mail by anyone else is unauthorized.

Subsequent to this email, we had two comments left on other entries, meaning the problem with comments returning a 500 error is intermittent. Regardless, email is always a better way to reach us to discuss an issue.

From: Brian Martin
To: David
Cc: Legal @ OSF, OSVDB Moderators
Date: Fri, 17 May 2013 12:16:04 -0500 (CDT)
Subject: Re: [OSVDB Mods] Legal issue and web site problem

David,

On Fri, 17 May 2013, Cross, David wrote:

: 1) When clicking on comment the web site returns a 500 error. As a
: result comments are not allowed on VBD items and owners of the software
: are not able to dispute misrepresentations in the posts.
:
: http://osvdb.org/92817

Please note that in emailing the moderators, you are in fact disputing the
entry. This is a faster and more reliable method of raising a question
with us. During a recent upgrade, the comment functionality broke, and you
are the first to notice. That is why it has remained unfixed, as it is
considered very low priority to us.

: 2) Contains a fundamental misrepresentation of the original CERT
: posting which we will dispute with any and all means necessary. Please
: correct the post to indicate that the issue is not remotely exploitable
: which is clearly evident in the CERT post description, the remediation
: steps and is evident in the CVSS score itself. Ex:
: http://www.kb.cert.org/vuls/id/948155 “…the attacker would need
: network access to the database in order to obtain sensitive patient
: information.”

Between your subject line calling this a “legal issue” and including “any
and all means necessary” in the body, the Open Security Foundation (OSF)
is considering this email a threat of intended legal action and will reply
accordingly. We already strive for accuracy in our data and have a long
history of going out of our way to ensure it, frequently contacting
vendors for additional information, bringing issues to their attention,
and engaging in emails such as this to figure out details.

First, our entry does not misrepresent the CERT posting at all. Looking at
CERT VU 948155, specifically the Solution section:

http://www.kb.cert.org/vuls/id/948155

Restrict Network Access

As a general good security practice, only allow connections from trusted hosts and networks. Restricting access would prevent an attacker from using the hard-coded credentials from a blocked network location.

Do not allow the Dentrix G5 database to be accessed by unauthorized users on an insecure wireless network. If the Dentrix G5 database is accessible from an insecure wireless network, a remote attacker may be able to gain access using the hard-coded credentials.

Further, looking at the CERT page that includes what they call a “vendor
statement”, implying it came from Henry Schein Practice Solutions:

http://www.kb.cert.org/vuls/id/JALR-8ZRHUK

It is important to note, however, that the disclosure of the internal database password only posed a vulnerability for practices whose network was unprotected (i.e. practices who lacked a firewall and/or other basic network safeguards).

Between CERT and your company statement, it is abundantly clear that our
classification of this issue is accurate. In both cases, it explicitly
says that this may be a remote issue, and it relies on having third-party
hardware and software installed to protect the database from a remote
attacker. While most companies would follow these guidelines as part of a
regular security posture, we cannot make that assumption because history
has shown us that companies routinely fail to practice the most basic of
security measures. Our entries are added and updated with _factual
information_ pertaining to the issue. We do not account for network
configurations or the possible presence of third-party devices because
that does not happen 100% of the time.

With that, I have updated the entry to reflect that Henry Schein Practice
Solutions stresses that proper network protection be implemented to help
mitigate this issue. It does not change the fact that this can be remotely
exploited in some circumstances.

: Please correct it immediately and ensure any other entities that receive
: a feed from your site also have corrected this misrepresentation.

Now that the information is updated in our site, anyone viewing or
accessing the information has the latest updates.

: I will make our security response team aware of this posting and we will
: follow up with you tomorrow to ensure its corrected.

Likewise. I have made the other moderators aware of this situation and I
will be authoring a blog post on this entire matter (which will also be
Tweeted to our followers, and included on the ISN mail list that goes out
to ~ 6,000 security professionals), including the implied threat of legal
action to be posted Monday during business hours. We feel it is important
for the industry to know when a vendor uses such tactics in an attempt to
stifle vulnerability disclosure, and to unfairly pressure an organization
into displaying inaccurate information, which you are attempting to do.

Brian Martin
OSF / OSVDB.org

OSVDB Blog Migration

For years, we have used Typo3 for our blog, hosted on one of our servers. It isn't bad software at all; I actually like it. That changes entirely when it sits behind Cloudflare. Despite our server being up and reachable, Cloudflare frequently reports the blog offline. When I am logged in as an administrator and posting a new blog or comment, Cloudflare challenges me by demanding a CAPTCHA, despite no suspicious activity on my part. Having to struggle with a service designed to protect, once it becomes a burden, is bad. Add to that the administrative overhead of managing servers and blog software, and it only takes away from time better spent maintaining the database.

With that, we have migrated over to the managed WordPress offering to free up time and reduce headache. One downside is that Typo3 does not appear to have an export feature that WordPress recognizes. We have backfilled blogs back to early 2007 and will slowly get to the rest. The other downside is that, in migrating, comments left on previous blogs can't be carried over in any semblance of their former selves. Time permitting, we may cut and paste them over as a single new comment to preserve community feedback.

The upside: we're much more likely to resume blogging, and with greater frequency. I currently have 17 drafts going, some dating back a year or more. Yes, the old blogging setup was that bad.

CVSSv2 Shortcomings, Faults, and Failures Formulation

The Open Security Foundation (OSF) and Risk Based Security wrote an open letter to FIRST regarding the upcoming Common Vulnerability Scoring System (CVSS) version 3 proposal. While we were not formally asked to provide input, given our experience managing vulnerability databases and our daily use of CVSS, we felt our feedback would provide valuable insight to improve CVSS in the future.

Some of the areas discussed include:

  • Introducing 4 levels for granularity
  • Better definitions for terminology for more accurate scoring
  • Re-examining the pitfalls of “Access Complexity”
  • Limitations of the current Access Vector breakdown
  • The challenge of scoring authentication
  • And a variety of other considerations to improve vulnerability scoring

Our conclusion is that CVSS needs to be overhauled, as CVSSv2 has too many shortcomings to provide an adequate and useful risk scoring model. You can download the full letter in PDF format.

CVE Vulnerabilities: How Your Dataset Influences Statistics

Readers may recall that I blogged about a similar topic just over a month ago, in an article titled Advisories != Vulnerabilities, and How It Affects Statistics. In this installment, instead of “advisories”, we have “CVEs” and the inherent problems when using CVE identifiers in the place of “vulnerabilities”. Doing so is technically inaccurate, and it negatively influences statistics, ultimately leading to bad conclusions.

NSS Labs just released an extensive report titled "Vulnerability Threat Trends; A Decade in Review, Transition on the Way", by Stefan Frei. While the report is interesting and the fundamental methodology is sound, Frei uses a dataset that is not designed for true vulnerability statistics. Additionally, I believe that some of the factors Frei credits for the trends are incorrect. I offer this blog as open feedback to bring additional perspective to the realm of vulnerability stats, which is a long way from approaching maturity.

Vulnerabilities versus CVE

In the NSS Labs paper, they define a vulnerability as "a weakness in software that enables an attacker to compromise the integrity, availability, or confidentiality of the software or the data that it processes." This is as good a definition as any. The key point here is a weakness, singular. What Frei fails to point out is that the CVE dictionary is not a vulnerability database in the same sense as many others. It is a specialty database designed primarily to assign a unique identifier to a vulnerability, or a group of vulnerabilities, to coordinate tracking and discussion. While CVE says "CVE Identifiers are unique, common identifiers for publicly known information security vulnerabilities", it is more important to note the way CVE abstracts, which is covered in great detail. From the CVE page on abstraction:

CVE Abstraction Content Decisions (CDs) provide guidelines about when to combine multiple reports, bugs, and/or attack vectors into a single CVE name (“MERGE”), and when to create separate CVE names (“SPLIT”).

This clearly denotes that a single CVE may represent multiple vulnerabilities. With that in mind, every statistic generated by NSS Labs for this report is inaccurate, and their numbers are not reproducible using any other vulnerability dataset (unless it too is based solely on CVE data and does not abstract differently, e.g. NVD). This distinction puts the report's statements and conclusions in a different light:

As of January 2013 the NVD listed 53,489 vulnerabilities ..
In the last ten years on average 4,660 vulnerabilities were disclosed per year ..
.. with an all-time high of 6,462 vulnerabilities counted in 2006 ..

The abstraction distinction means that these numbers aren't just technically inaccurate (i.e. terminology); they are factually inaccurate (i.e. actual stats when abstracting on a per-vulnerability basis). In each case where Frei uses the term "vulnerability", he really means "CVE". When you consider that a single CVE may cover as many as 66 or more distinct vulnerabilities, it really invalidates any statistic generated from this dataset as he did. For example:

However, in 2012 alone the number of vulnerabilities increased again to a considerable 5,225 (80% of the all-time high), which is 12% above the ten-year average. This is the largest increase observed in the past six years and ends the trend of moderate declines since 2006.

Based on my explanation, what does 5,225 really mean? If we agree, for the sake of argument, that CVE averages two distinct vulnerabilities per assignment, that is now over 10,000 vulnerabilities. How does that in turn change any observations on trending?
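
For context, the counting exercise behind such reports boils down to something like the following Python sketch. The file name and columns are hypothetical placeholders for a local CVE export (the NVD feeds or any CVE mirror can be reshaped into this form); the point is that every number it prints is a count of identifiers, not vulnerabilities.

import csv
from collections import Counter

# Count CVE identifiers by publication year from a hypothetical local export
# with "cve_id" and "published" (YYYY-MM-DD) columns.
per_year = Counter()
with open("cve_export.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        per_year[row["published"][:4]] += 1

for year in sorted(per_year):
    print(year, per_year[year])

A single MERGEd CVE may hide dozens of distinct issues, so a database that abstracts per vulnerability will produce different totals, and potentially different trends, from the same disclosures.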

The report’s key findings offer 7 high-level conclusions based on the CVE data. To put all of the above in more perspective, I will examine a few of them and use an alternate dataset, OSVDB, that abstracts entries on a per-vulnerability basis. With those numbers, we can see how the findings stand. NSS Labs report text is quoted below.

The five year long trend in decreasing vulnerability disclosures ended abruptly in 2012 with a +12% increase

Based on OSVDB data, this is incorrect. Both 2009 (7,879) -> 2010 (8,835) and 2011 (7,565) -> 2012 (8,919) showed an upward trend.

More than 90 percent of the vulnerabilities disclosed are moderately or highly critical – and therefore relevant

If we assume "moderately" means the report's "Medium" criticality, later defined as 4.0 – 6.9, then OSVDB shows 57,373 entries scored CVSSv2 4.0 – 10.0 out of 82,123 total, roughly 70%. That means 90% is considerably higher than what we show. Note: we do not have complete CVSSv2 data for 100% of our entries, but we do have it for all entries affiliated with the ones Frei examined, and more. If "moderately critical" and "highly critical" refer to different ranges, then they should be more clearly defined.

It is also important to note that this finding is a red herring, due to the way CVSS scoring works. A remote path disclosure in a web application scores a 5.0 base score (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N). This skews the scoring data considerably higher than many in the industry would agree with, as 5.0 is the same score you get for many XSS vulnerabilities that can have more serious impact.

9 percent of vulnerabilities disclosed in 2012 are extremely critical (with CVSS score>9.9) paired with low attack/exploitation complexity

This is another red herring, because any CVSS 10.0 score means that “low complexity” was factored in. The wording in the report implies that a > 9.9 score could be paired with higher complexity, which isn’t possible. Further, CVSS is scored for the worst case scenario when details are not available (e.g. CVE-2012-5895). Given the number of “unspecified” issues, this may seriously skew the number of CVSSv2 10.0 scores.

Finally, there is one other element of this report, used in the overview and later in the document, to attribute a shift in disclosure trends. From the overview:

The parallel and massive drop of vulnerability disclosures by the two long established purchase programs iDefense VCP and TippingPoint ZDI indicate a transition in the way vulnerability and exploit information is handled in the industry.

I believe this is a case of “correlation does not mean causation“. While these are the two most recognized third-party bug bounty programs around, there are many variables at play here. In the bigger picture, shifts in these programs do not necessarily mean anything. Some of the factors that may have influenced disclosure numbers for those two programs include:

  • There are more bug bounty programs available. Some may offer better prices or incentives for disclosing through them, taking business from iDefense/ZDI.
  • Both companies have enjoyed their share of internal politics that affected at least one program. In 2012, several people involved in the ZDI program left the company to form their own startup. It has been theorized that since their departure, ZDI has not built the team back up and that disclosures were affected as a result.
  • ZDI had a small bout of external politics, in which one of their most prolific bounty collectors (Luigi Auriemma) had a serious disagreement about ZDI's handling of a vulnerability, as it relates to Portnoy and Exodus. Auriemma's shift to disclosing via his own company alone would dramatically affect ZDI's disclosure totals.
  • Both of these companies have a moving list of software that they offer a bounty on. As it changes, it may result in spikes of disclosures via their programs.

Regardless, iDefense and ZDI represent a small percentage of overall disclosures, so it is curious that Frei opted to focus on them so prominently as a reason for changing vulnerability trends without considering other influencing factors. Even during a good year, 2011 for example, iDefense (42) and ZDI (297) together accounted for 339 of 7,565 vulnerabilities, only ~4.5% of the overall disclosures. There are many other trends that could just as easily explain relatively small shifts in disclosure totals. Making statements about trends in vulnerability disclosure and how they affect statistics isn't something that should be done by casual observers; they simply miss a lot of the low-level details you glean from day-to-day vulnerability handling and cataloging.

To be clear, I am not against using CVE/NVD data to generate statistics. However, when doing so, it is important that the dataset be explained and qualified before going into analysis. The perception and definition of what “a vulnerability” is changes based on the person or VDB. In vulnerability statistics, not all vulnerabilities are created equal.

Everything is Vulnerable – Even Security Software!

A week ago, I read an interesting blog post by Jeremiah Grossman of WhiteHat Security titled "201x: The Year of the Security Industry Breach", which argues that security software may be the next big target for attackers to focus on.

Some great points are presented, and I especially appreciate how it hammers in the fact that attackers shift focus whenever required, but as Jericho pointed out when we discussed it, "The security industry does not." I find, however, that the blog post is lacking a couple of key points, which caused me to write this follow-up (not a rebuttal – Jericho handles those).

I agree that we have for decades been offered the same solution to all our security problems: Buy more/newer/subscription security software to deal with the threats. It is also certainly installed in abundance.

Security software does present problems, however, and these concerns are not new. Researchers have voiced concerns for years over security software like firewalls and especially anti-virus (AV), pointing out that businesses are adding more (potentially flawed) code to protect themselves. It's a common rule of thumb that the more lines of code, the more vulnerabilities. Reducing attack surface by adding an even greater attack surface is a paradox.

While security software in general presents an interesting target, that does not necessarily mean we will see it receive the same heavy abuse as Java, Internet Explorer, and Flash, which are currently getting hammered by the bad guys.

When the bad guys were exploiting vulnerabilities in Adobe Reader, they were not targeting PDF viewers overall; they did not really care about e.g. Foxit Reader, PDF-XChange, or Sumatra PDF. When Microsoft Office was hot, other office suites like OpenOffice and Corel Office did not feel the burn. And when 0-days are being dropped in Internet Explorer, browsers like Firefox, Opera, and Chrome can relax (relatively speaking) on the sidelines.

These other products are not necessarily getting a pass or receiving only limited attention because they are safer – they may just not have a user base that results in a profitable ROI.

If history serves as a guide, the bad guys do not target a whole class of products, but rather specific products with an extremely high user base that allows widespread attacks. It would, therefore, be more likely to see attacks focusing on products from a specific security vendor. That would, however, require the vendor's products to be installed on a very large scale, and while security software is prevalent, no single vendor dominates the market – especially not with a single product.

Another possibility would be security products starting to use the same codebase, e.g. for file parsing or other common routines. Should security software share the same third-party parsing functionality, those parsers would certainly become of greater interest.

It should also be noted that while security software has not been targeted by the bad guys in the same manner as the previously mentioned products (and currently for good reason), it has still received a fair amount of attention over the years from researchers (myself included).

Looking at the OSVDB vulnerability database, at the time of writing it has 1,919 entries covering vulnerabilities in security software – almost 2% of all our entries (yes, we have a specific classification flag, making it very easy to get a historical overview). These span from 2013 all the way back to 1995, when security software started gaining a foothold. I specifically remember Alex Wheeler doing some serious work in 2005 on the archive parsing functionality of many security vendors' AV solutions.

For the past 5-6 years, Symantec has also had its worries, because it decided to implement very flawed parsers from HP/Autonomy/Verity Keyview in some of its security solutions, including the Mail Security and Data Loss Prevention products. Or take Microsoft, which, on a related note, made the questionable decision of implementing Oracle Outside In parsers in Microsoft Exchange Server 2007 and 2010. A vendor with a history of vulnerabilities that it has worked very hard to overcome deciding to implement a more notoriously insecure vendor's software is certainly interesting.

Does this mean we won’t see any focus on security software in the future? Certainly not. Security software exposes a broad and tantalizing attack surface and is not necessarily built to impress (as Tavis Ormandy recently demonstrated for Sophos’ security solutions). It is an interesting target class of products that will continue to receive attention – and perhaps even an increasing focus once Oracle eventually sorts out Java, though that may take a while.

When the going gets tough, attackers just change to a different target with a better ROI, and it cannot be ruled out that the target ends up being the code actually intended to protect you.

So remember that 201x is already here and that while the bad guys may not go all-in against security solutions, vulnerabilities in these have been found consistently for many years. Consider what security software you really need – and certainly which features you need enabled.
