Author Archive: jerichoattrition

Buying Into the Bias: Why Vulnerability Statistics Suck

Last week, Steve Christey and I gave a presentation at Black Hat Briefings 2013 in Las Vegas about vulnerability statistics. We submitted a brief whitepaper on the topic, reproduced below, to accompany the slides that are now available.


Buying Into the Bias: Why Vulnerability Statistics Suck
By Steve Christey (MITRE) and Brian Martin (Open Security Foundation)
July 11, 2013

Academic researchers, journalists, security vendors, software vendors, and professional analysts often analyze vulnerability statistics using large repositories of vulnerability data, such as “Common Vulnerabilities and Exposures” (CVE), the Open Sourced Vulnerability Database (OSVDB), and other sources of aggregated vulnerability information. These statistics are claimed to demonstrate trends in vulnerability disclosure, such as the number or type of vulnerabilities, or their relative severity. Worse, they are typically misused to compare competing products to assess which one offers the best security.

Most of these statistical analyses demonstrate a serious fault in methodology, or are pure speculation. They use easily available but drastically misunderstood data to craft irrelevant questions based on wild assumptions, while never determining (or even asking the sources about) the limitations of the data. This leads to a wide variety of bias that typically goes unchallenged and ultimately produces statistics that make headlines and, far worse, are used to justify security budgets and spending.

As maintainers of two well-known vulnerability information repositories, we’re sick of hearing about research that is quickly determined to be sloppy after it has been released and gained public attention. In almost every case, the research casts aside any logical approach to generating the statistics. The researchers frequently do not release their methodology, and they rarely disclaim the serious pitfalls in their conclusions. This stems from a serious lack of understanding of the data sources they use and how those sources operate. In short, vulnerability databases (VDBs) are very different and very fickle creatures. They are constantly evolving and see the world of vulnerabilities through very different glasses.

This paper and its associated presentation introduce a framework in which vulnerability statistics can be judged and improved. The better we get at talking about these issues, the better the chances of truly improving how vulnerability statistics are generated and interpreted.

Bias, We All Have It

Bias is inherent in everything humans do. Even the most rigorous and well-documented process can be affected by levels of bias that we simply do not understand are working against us. This is part of human nature. As with all things, bias is present in the creation of the VDBs, how the databases are populated with vulnerability data, and the subsequent analysis of that data. Not all bias is bad; for example, VDBs have a bias to avoid providing inaccurate information whenever possible, and each VDB effectively has a customer base whose needs directly drive what content is published.

Bias comes in many forms that we see as strongly influencing vulnerability statistics, via a number of actors involved in the process. It is important to remember that VDBs catalog the public disclosure of security vulnerabilities by a wide variety of people with vastly different skills and motivations. The disclosure process varies from person to person and introduces bias for sure, but even before the disclosure occurs, bias has already entered the picture.

Consider the general sequence of events that lead to a vulnerability being cataloged in a VDB.

  1. A researcher chooses a piece of software to examine.
  2. Each researcher operates with a different skill set and focus, using tools or techniques with varying strengths and weaknesses; these differences can impact which vulnerabilities are capable of being discovered.
  3. During the process, the researcher will find at least one vulnerability, often more.
  4. The researcher may or may not opt for vendor involvement in verifying or fixing the issue.
  5. At some point, the researcher may choose to disclose the vulnerability. That disclosure will not be in a common format, may suffer from language barriers, may not be technically accurate, may leave out critical details that impact the severity of the vulnerability (e.g. administrator authentication required), may be a duplicate of prior research, or may introduce any number of other problems.
  6. Many VDBs attempt to catalog all public disclosures of information. This is a “best effort” activity, as there are simply too many sources for any one VDB to monitor, and accuracy problems can increase the expense of analyzing a single disclosure.
  7. If the VDB maintainers see the disclosure mentioned above, they will add it to the database if it meets their criteria, which is not always public. If the VDB does not see it, they will not add it. If the VDB disagrees with the disclosure (i.e. believes it to be inaccurate), they may not add it.

By this point, there are a number of criteria that may prevent the disclosure from ever making it into a VDB. Without using the word, the above steps have introduced several types of bias that impact the process. These biases carry forward into any subsequent examination of the database in any manner.

Types of Bias

Specific to the vulnerability disclosure aggregation process that VDBs go through every day, there are four primary types of bias that enter the picture. Note that while each of these can be seen in researchers, vendors, and VDBs, some are more common to one than the others. There are other types of bias that could also apply, but they are beyond the scope of this paper.

Selection bias covers what gets selected for study. In the case of disclosure, this refers to the researcher’s bias in selecting software and the methodology used to test the software for vulnerabilities; for example, a researcher might only investigate software written in a specific language and only look for a handful of the most common vulnerability types. In the case of VDBs, this involves how the VDB discovers and handles vulnerability disclosures from researchers and vendors. Perhaps the largest influence on selection bias is that many VDBs monitor a limited set of disclosure sources. It is not necessary to argue what “limited” means; suffice it to say, no VDB is remotely complete in monitoring every source of vulnerability data that is public on the net. Lack of resources – primarily the time of those working on the database – causes a VDB to prioritize sources of information. With an increasing number of regional or country-based CERT groups disclosing vulnerabilities in their native tongue, VDBs have a harder time processing the information. Each vulnerability that is disclosed but does not end up in the VDB ultimately factors into statistics such as “there were X vulnerabilities disclosed last year”.

Publication bias governs what portion of the research gets published. This ranges from “none”, to sparse information, to incredible technical detail about every finding. Somewhere between selection and publication bias, the researcher will determine how much time they are spending on this particular product, what vulnerabilities they are interested in, and more. All of this folds into what gets published. VDBs may discover a researcher’s disclosure, but then decide not to publish the vulnerability due to other criteria.

Abstraction bias is a term that we crafted to explain the process that VDBs use to assign identifiers to vulnerabilities. Depending on the purpose and stated goal of the VDB, the same 10 vulnerabilities may be given a single identifier by one database, and 10 identifiers by a different one. This level of abstraction is an absolutely critical factor when analyzing the data to generate vulnerability statistics. This is also the most prevalent source of problems for analysis, as researchers rarely understand the concept of abstraction, why it varies, and how to overcome it as an obstacle in generating meaningful statistics. Researchers will use whichever abstraction is most appropriate or convenient for them; after all, there are many different consumers for a researcher advisory, not just VDBs. Abstraction bias is also frequently seen in vendors, and occasionally in researchers, when they disclose one vulnerability multiple times because it affects different software that bundles another vendor’s code.
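
To make this concrete, consider a toy sketch (the identifiers and flaws are invented, not real VDB data) of how two databases with different abstraction policies count the very same disclosure:

    # One advisory describing ten XSS flaws in the same product.
    advisory = [f"XSS via parameter p{i}" for i in range(1, 11)]

    # A "lumping" VDB assigns a single identifier to the whole advisory...
    lumping_vdb = {"VDB-A-0001": advisory}

    # ...while a "splitting" VDB assigns one identifier per flaw.
    splitting_vdb = {f"VDB-B-{i:04d}": [flaw]
                     for i, flaw in enumerate(advisory, 1)}

    # Identical underlying facts, very different "vulnerability counts".
    print(len(lumping_vdb), len(splitting_vdb))  # 1 10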

Measurement bias refers to potential errors in how a vulnerability is analyzed, verified, and catalogued. For example, with researchers, this bias might be in the form of failing to verify that a potential issue is actually a vulnerability, or in over-estimating the severity of the issue compared to how consumers might prioritize the issue. With vendors, measurement bias may affect how the vendor prioritizes an issue to be fixed, or in under-estimating the severity of the issue. With VDBs, measurement bias may also occur if analysts do not appropriately reflect the severity of the issue, or if inaccuracies are introduced while studying incomplete vulnerability disclosures, such as missing a version of the product that is affected by the vulnerability. It could be argued that abstraction bias is a certain type of measurement bias (since it involves using inconsistent “units of measurement”), but for the purposes of understanding vulnerability statistics, abstraction bias deserves special attention.

Measurement bias, as it affects statistics, is arguably the domain of VDBs, since most statistics are calculated using an underlying VDB instead of the original disclosures. Since VDBs are the primary aggregators of vulnerability data, several factors come into play when they perform database updates.

Why Bias Matters, in Detail

These forms of bias can work together to create interesting spikes in vulnerability disclosure trends. To the VDB worker, they are typically apparent and sometimes amusing. To an outsider just using a data set to generate statistics, they can be a serious pitfall.

In August 2008, a single researcher using rudimentary yet effective methods for finding symlink vulnerabilities single-handedly caused the most significant spike in symlink vulnerability disclosures of the past 10 years. Starting in 2012 and continuing up to the publication of this paper, a pair of researchers have significantly impacted the number of disclosures in a single product. Not only has this caused a huge spike in the vulnerability count for that product, it has led to them being ranked as two of the top vulnerability disclosers since January 2012. Later this year, we expect articles to be written regarding the number of supervisory control and data acquisition (SCADA) vulnerabilities disclosed from 2012 to 2013. Those articles will be based purely on vulnerability counts as determined from VDBs, likely with no mention of why the numbers are skewed. One prominent researcher who published many SCADA flaws has changed his personal disclosure policy: instead of publicly disclosing details, he now keeps them private as part of the competitive advantage of his new business.

Another popular place for vulnerability statistics to break down is vulnerability severity. Researchers and journalists like to cite the raw number of vulnerabilities in two products and try to compare their relative security. They frequently overlook the severity of the vulnerabilities and may not note that while one product had twice as many disclosures, a significant percentage of them were low severity. Further, they do not understand how the industry-standard CVSSv2 scoring system works, or the bias that can creep in when using it to score vulnerabilities. A vague disclosure with few actionable details will frequently be scored for the worst possible impact, which also drastically skews the severity ratings.
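
As a rough illustration, here is a minimal sketch of the CVSSv2 base equations (weights taken from the published v2 specification) showing how scoring the same remote flaw worst-case from a vague advisory, rather than at its actual partial impact, doubles the score:

    # CVSSv2 base metric weights (first.org v2 specification).
    AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
    AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
    AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
    CIA = {"N": 0.0, "P": 0.275, "C": 0.66}    # Conf/Integ/Avail impact

    def base_score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
        exploitability = 20 * AV[av] * AC[ac] * AU[au]
        f = 0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

    # Actual impact: partial confidentiality loss only.
    print(base_score("N", "L", "N", "P", "N", "N"))  # 5.0
    # The same flaw scored worst-case from a vague advisory:
    print(base_score("N", "L", "N", "C", "C", "C"))  # 10.0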

Conclusion

The forms of bias outlined in this paper, and how they may impact vulnerability statistics, are just the beginning. For each party involved, for each type of bias, there are many considerations that must be made. Accurate and meaningful vulnerability statistics are not impossible; they are just very difficult to accurately generate and disclaim.

Our 2013 BlackHat Briefings USA talk hopes to explore many of these points, outline the types of bias, and show concrete examples of misleading statistics. In addition, we will show how you can easily spot questionable statistics, and give some tips on generating and disclaiming good statistics.

Seriously RIM? Call it the HackBerry from now on…

Our sponsor Risk Based Security (RBS) posted an interesting blog this morning about Research In Motion (RIM), creator of the BlackBerry device. The behavior outlined in the blog, and in the original blog by Frank Rieger, is shocking to say the least. In addition to the vulnerability outlined, potentially sending credentials in cleartext, this raises the question of legality. Quickly skimming the BlackBerry enterprise end-user license agreement (EULA), there doesn’t appear to be any warning that the credentials are transmitted back to RIM, or that RIM will authenticate to your mail server.

If the EULA does not contain explicit wording that outlines this behavior, it further calls into question the legality of RIM’s actions. Regardless of their intention, whether trying to claim that it is covered in the EULA or making it easier to use their device, this activity is inexcusable. Without permission, unauthorized possession of authentication credentials is a violation of Title 18 USC § 1030, section (a)(2)(C), and potentially other sections depending on the purpose of the computer. Since the server doing this resides in Canada, RIM may be subject to Canadian law, and their activity appears to violate Section 342.1(d). Given the U.S. government’s adoption of BlackBerry devices, if RIM is authenticating to U.S. government servers during this process, this could get really messy.

Any time a user performs an action that would result in sharing that type of information, with any third party, the device or application should give explicit warning and require the user to not only opt-in, but confirm their choice. No exceptions.

Cybercrime Stats: From Bad to Bad

Since vulnerabilities are a cornerstone of computer crime, statistics about it are of interest to us. Statistics on cybercrime have always been dodgy; more so than real-world crime statistics. When your car is broken into or stolen, you know it. More often than not, you will report it to the police. In the computer world, an un-measurable number of intrusions happen every day. The rates of malware infection, DoS attacks, and other virtual crimes are not only difficult, if not impossible, to measure; they potentially go unreported more often than not.

Classically, the only number thrown around regarding damages from cybercrime has been a mythical one trillion dollars. Yes, with a ‘T’, not a ‘B’. That number has been challenged by many in the past, but no one has offered a better number based on anything remotely factual. On July 22, the Center for Strategic and International Studies released a new study, commissioned by McAfee (who previously quoted the trillion-dollar figure), saying that damages are much less. From a Los Angeles Times article on the release:

Cyberattacks may be draining as much as $140 billion and half a million jobs from the U.S. economy each year, according to a new study that splashes water on a previous estimate of $1 trillion in annual losses.

“That’s our best guess,” said James Andrew Lewis, the director of the technology and public policy program at the Center for Strategic and International Studies.

James Andrew Lewis’ comment calling it a “best guess” is not reassuring. The one-trillion-dollar figure cited for all those years was no better than a guess, as the surveys it relied on were far from a solid methodology. Regardless, the figure of $140 billion seems more rational on the surface. Contrasting that is the claim that half a million jobs are “drained” from the U.S. economy each year. How can cybercrime conceivably lead to that? Reading on in the article:

Lewis and co-author Stewart Baker, a distinguished visiting fellow at CSIS, said that they were still working to determine cybercrime’s impact on innovation. They suggested a follow-up report might come out with a bigger number.

But preliminarily, they found U.S. losses to be somewhere between $20 billion to $140 billion, or about 1% of the nation’s GDP. They pegged job losses at 508,000.

“The effect of the net loss of jobs could be small, but if a good portion of these jobs were high-end manufacturing jobs that moved overseas because of intellectual property losses, the effect could be wide ranging,” Lewis said.

Right after the hint of a more rational number, CSIS immediately renders it worthless by saying it is really somewhere between $20 billion and $140 billion. In the world of sanity and statistics, that range is unreasonable. Further, Lewis goes on to say that some of the 508,000 lost jobs are due to “high-end manufacturing jobs moved overseas because of intellectual property losses”. Huh? High-end manufacturing jobs are moving overseas because of corporate budgets more than cybercrime. Such a claim should be backed by a long list of examples showing companies losing intellectual property and then reporting it to law enforcement or their shareholders, as well as in SEC filings.

We moved from the fictional trillion number, to an overly wide range in the tens or hundreds of billions, and got an odd claim of half a million jobs lost due to cybercrime. This new study did little to clear things up.

@Stilgherrian makes another great point in his ZDNet piece, that everyone should take to heart when reading cybercrime statistics:

If we’re killing one cybercrime myth, let’s kill another — one which coincidentally emerged from McAfee — namely that the wealth transfer due to hacking represents some historically-unprecedented economic disaster.

Ultimately, we also have to remember that any cybercrime statistics coming from a company like McAfee are suspect, as such numbers directly benefit a company selling computer security solutions.

Android versus iOS Security – Not Again…

About two weeks ago, another round of vulnerability stats got passed around. Like others before, it claims to use CVE to compare Apple iOS versus Android in an attempt to establish which is more secure based on “vulnerability counts”. The statistics put forth are basically meaningless, because like most people using a VDB to generate stats, they don’t fully understand their data source. This is one type of bias that enters the picture when generating statistics, and one of many points Steve Christey (MITRE/CVE) and I will be making next week at BlackHat (Wednesday afternoon).

As with other vulnerability statistics, I will debunk the latest by showing why the conclusions are not based on a solid understanding of vulnerabilities or vulnerability data sources. The post is published on The Verge, written by ‘Mechanicix’. The results match last year’s Symantec Internet Security Threat Report (as mentioned in the comments), as well as the results published this year by Sourcefire in their paper titled “25 Years of Security Vulns”. In all three cases, they use the same data set (CVE), and do the same rudimentary counting to reach their results.

The gist of the finding is that Apple iOS is considerably less secure than Android, as iOS had 238 reported vulnerabilities versus the 27 reported in Android, based on CVE and illustrated through CVEdetails.com.

Total iOS Vulnerabilities 2007-2013: 238
Total Android Vulnerabilities 2009-2013: 27

Keeping in mind those numbers, if you look at the CVE entries that are included, a number of problems are obvious:

  1. We see that the comparison timeframes differ by two years. There are at least 3 vulnerabilities in Android SDK reported before 2009, two of which have CVEs (CVE-2008-0985 and CVE-2008-0986).
  2. These totals are based on CVE identifiers, which do not necessarily reflect a 1-to-1 vulnerability mapping, as the CVE documentation notes. You absolutely cannot count CVE entries as a substitute for vulnerabilities; they are not the same.
  3. The vulnerability totals are incorrect due to using CVE, a data source that has serious gaps in coverage. For example, OSVDB has 71 documented vulnerabilities for Android, and we do not make any claims that our coverage is complete.
  4. The iOS results include vulnerabilities in WebKit, the framework iOS Safari uses. This is problematic for several reasons (see the arithmetic sketch after this list).
    1. First, that means Mechanicix is now comparing the Android operating system to the iOS operating system plus applications.
    2. Second, WebKit vulnerabilities account for 109 of the CVE results, almost half of the total reported.
    3. Third, if they did count WebKit intentionally, then the numbers are way off, as there were around 700 WebKit vulnerabilities reported in that time frame.
    4. Fourth, the default browser in Android uses WebKit, yet they weren’t counted against that platform.
  5. The results include 16 vulnerabilities in Safari, the default browser, itself (or in WebKit and just not diagnosed as such).
  6. At least 4 of the 238 are vulnerabilities in Google Chrome (as opposed to WebKit) with no mention of iOS in the CVE.
  7. A wide variety of iOS applications are included in the list including Office Viewer, iMessage, Mail, Broadcom BCM4325 and BCM4329 Wi-Fi chips, Calendar, FreeType, libxslt, and more.

When you factor in all of the above, Android likely comes out on top for the number of vulnerabilities when comparing the operating systems. Once again, vulnerability statistics seem simple on the surface. When you consider the points above, and further consider that there are likely more factors that influence vulnerability counts, we see that the exercise is anything but simple.

The curiously creeping value of the iOS vulnerability…

The market for vulnerabilities has grown rapidly in the last five years. While the market is certainly not new, going back well over ten years, more organizations are interested in acquiring 0-day / private vulnerabilities for a variety of needs. These vulnerabilities run the gamut of applications and impacts, and range from tens of dollars to $100,000 or more. While such transactions are sometimes public, high-end vulnerabilities that sell for large sums generally are not a matter of public record. That makes it difficult to track actual sale prices to gauge the value of such vulnerabilities.

In the vulnerability marketplace, the seller has the power. If they hold a 0-day vulnerability that is in demand, they can set their own price. For the few vulnerability brokers out there, the perception of vulnerability value is critical to their business. In March 2013, a Forbes piece by Andy Greenberg covered this topic and told of an iOS vulnerability that allegedly sold for $250,000.

Even with the $250,000 payout [the Grugq] elicited for that deal, he wonders if he could have gotten more. “I think I lowballed it,” he wrote to me at one point in the dealmaking process. “The client was too happy.”

As expected, there is no validation of the claimed sale. The price tag comes from the vulnerability broker, who has an interest in making such prices public, even if they are exaggerated. Jump to July 2013, and a New York Times article by Nicole Perlroth and David Sanger makes a vague reference to an iOS vulnerability that sold for $500,000.

Apple still has no such program, but its vulnerabilities are some of the most coveted. In one case, a zero-day exploit in Apple’s iOS operating system sold for $500,000, according to two people briefed on the sale.

Given the vague details, it is fairly safe to assume that this references the iOS vulnerability sale from a year earlier. The NY Times article sources many people regarding vulnerability value, including thegrugq on the first page. This suggests the “two people briefed on the sale” were likely briefed by thegrugq as well. Ultimately, this means that both articles, and both figures, trace back to the same person, who has a decided interest in publishing high numbers. Without any detail, the journalists could have contacted one or both sources via email, meaning they could just as well have been thegrugq himself.

I find it interesting that in the span of 1 year and 4 months, the price of that iOS vulnerability jumped from $250,000 to $500,000. More to the point, the original $250,000 price is way out of line with vulnerability prices at that time, on any market. Some of us were speculating that a (truly) remote vulnerability in a default Windows installation would go for around $100,000, maybe more. Even if you double our suspected price, it wouldn’t surprise me if a nation-state with a budget paid that amount. But an iOS vulnerability, even remote without user interaction, a year ago? That doesn’t make sense given the user base and distribution.

Even more interesting, consider that 4 days after the NYTimes article, another outlet was reporting the original $250,000 price.

As I mentioned before, none of this is close to being verified. The only source on record is someone who directly benefits from the perception that the price of that vulnerability is exceedingly high. Creating the marketplace value of vulnerabilities through mainstream media is brilliant on his part, if what I suspect is true. Of course, it also speaks to the state of journalism that seemingly no one tried to verify this beyond word-of-mouth.

Local File Inclusion vs Arbitrary File Access

Notes for this blog have been lingering for over three years now. In the daily grind to aggregate vulnerabilities, the time to write about them gets put on the back burner frequently. Rest assured, this is not a new issue by any means.

Back in the day, we had traversal attacks that allowed an attacker to ‘traverse’ outside an intended directory to access a file or directory they should not be able to reach. The most basic example of this, known to most, is a web application traversal attack such as:

http://[target]/myapp/jericho.php?file=../../../../../../etc/passwd

Making this request would direct the script to traverse outside the web server document root (DOCROOT) to access the system password file (/etc/passwd). For years, these attacks were simply known as “directory traversal” attacks. For limited traversals, the CVSSv2 score would be 5.0 and the vector would look like (AV:N/AC:L/Au:N/C:P/I:N/A:N). If the application is running with full privileges and could access any file on the system, it would score a 7.8 and look like (AV:N/AC:L/Au:N/C:C/I:N/A:N). Note that such an attack only allows an attacker to read the contents of the file, not write to it or execute it as a script. To help distinguish this, such attacks are usually qualified as “traversal arbitrary file access”.
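
A quick check of those two scores using the published CVSSv2 base equations (only the confidentiality weight differs between the two vectors):

    # AV:N (1.0), AC:L (0.71), Au:N (0.704); I:N and A:N drop out,
    # so only the confidentiality weight c varies between the vectors.
    def base(c):  # c = 0.275 for Partial, 0.66 for Complete
        impact = 10.41 * (1 - (1 - c))
        exploitability = 20 * 1.0 * 0.71 * 0.704
        f = 0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

    print(base(0.275))  # 5.0 -- limited traversal (C:P)
    print(base(0.66))   # 7.8 -- any file on the system (C:C)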

Local File Inclusion (LFI) attacks go back to around 2003 and often exhibit the same traits as directory traversal attacks, as outlined above. Like the traversal, the attack typically involves a relative path (e.g. ../../) or absolute path (e.g. &file=/path/to/file) to call a specific file on the system. The difference is in how the application handles the request. Instead of displaying the contents of the file like above, it will include the file as if it were an executable script. This means that arbitrary code, limited to what is already on the file system, will be executed with the same privileges as the web application and/or web server. Using a combination of common real-world issues, this can be leveraged into full arbitrary remote code execution. For example, if you can access an incoming directory via FTP to write your own .php file, the local file inclusion vulnerability can be used to call that custom code and execute it.

Visually, these two vulnerabilities may look identical:

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

Despite appearances, these are two very different attacks. If the first is a traversal arbitrary file access issue, the contents of shell.php will be displayed. If the second is a traversal local file inclusion, the contents of shell.php will be processed as PHP code and executed.
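
To illustrate, here is a contrived sketch (the handler names and paths are invented for illustration, and the post’s PHP scenario is transposed to Python) of the two behaviors hiding behind identical-looking requests:

    import os

    DOCROOT = "/var/www/myapp"

    def arbitrary_file_access(user_path):
        # Traversal flaw: "../../../../tmp/shell.php" escapes DOCROOT,
        # but the file's contents are only read and returned.
        with open(os.path.join(DOCROOT, user_path), "rb") as f:
            return f.read()  # attacker sees the contents of shell.php

    def local_file_inclusion(user_path):
        # Inclusion flaw: the same traversal, but the file is executed
        # with the application's privileges (the analogue of include()).
        with open(os.path.join(DOCROOT, user_path)) as f:
            exec(f.read())   # shell.php's payload runs instead of displaying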

Even with this simple concept, more and more researchers are unable to make this distinction. Arbitrary file access and local file inclusion are not only getting blended together, but traversals that allow for file manipulation (e.g. append, delete, overwrite) or even file enumeration (e.g. determine existence of file only) are also getting lumped in.

Wrong:

Specto Local File Inclusion by H4ckCity Security Team gives a PoC of:

http://SERVER/index.php?page=/etc/passwd

This is clearly not a local file inclusion, as the file being requested is the standard text file containing password information. Instead, they show an absolute path file disclosure.

OneFileCMS v.1.1.5 Local File Inclusion Vulnerability by mr.pr0n gives a PoC of:

http://TARGET/onefilecms/onefilecms.php?f=../../../../etc/passwd

Again, this calls a text file, this time via a standard directory traversal. If this is really an LFI, then the PoC does not show it.

Pollen CMS 0.6 File Disclosure by MizoZ gives a PoC of:

http://SERVER/[path]/core/lib/readimage.php?image=[LFI]

First, this is a bit suspicious as the parameter ‘image’ implies it will handle images such as JPG or PNG. Second, the [LFI] string doesn’t show if it is an absolute path or traversal. How could the researcher find it without knowing this? Third, and most important, their disclaimer:

The script only verifies the existence of the given file.

Sorry, not even close to an LFI.

Mobile Devices and Exploit Vector Absurdity

The last few days have seen several vulnerabilities disclosed that include serious gaps in logic with regard to exploitation vectors. What is being called “remote” is not. What is being called “critical” is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the “need for a user to install a malicious app” before a vulnerability can be exploited. Think about it.

Number One

We start with an H-Online article titled “Critical vulnerability in Blackberry 10 OS”. First word: critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article’s summary of what is required to exploit this vulnerability:

As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.

The only thing missing from this exploit chain are the proverbial chicken sacrifices at midnight on a full blue moon. Want to get the same result much easier? Find your victim and say “Wow, that is a slick new phone, can I see it?” Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.

Number Two

There were a few disclosures out of Japan’s JVN system, run by JPCERT. Two examples, both the same fundamental vulnerability, are summarized below:

#1 – CVE-2013-3643 (NVD Entry) – JVN 99813183 / JVNDB-2013-000056
#2 – CVE-2013-3642 (NVD Entry) – JVN 79301570 / JVNDB-2013-000055

#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.

Despite all these references, users are left with either incorrect or very misleading information. First, CVE says “an attacker” instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a “context-dependent” attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality ‘partially’. JVN goes so far as to say it can be exploited “over the Internet using packets” with “anonymous or no authentication”.

The Reality

The reality of these vulnerabilities is that they are not remote. Not in any form under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes ‘remote’. This is absurd. Just because that may be the most probable exploit vector and chaining, does not change the fact that getting a user to install a malicious application is a separate distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal ‘partial’ information from the one vulnerable application.
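
To put numbers on that point, here is a quick re-score of the Galapagos Browser issue above using the published CVSSv2 weights, with only the access vector corrected from Network to Local (my illustrative re-score, not an official one):

    AV = {"L": 0.395, "N": 1.0}  # Access Vector weights (v2 spec)

    def score(av):
        impact = 10.41 * 0.275                       # C:P, I:N, A:N
        exploitability = 20 * AV[av] * 0.61 * 0.704  # AC:M, Au:N
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * 1.176, 1)

    print(score("N"))  # 4.3 -- scored as network, per the NVD vector
    print(score("L"))  # 1.9 -- once "install a malicious app" is local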

Let me put it to you in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent, if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote by saying “remote, authentication required” and assuming it can be done via RDP or SSH. To do so, though, devalues the entire purpose of vulnerability classification.

Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a “crafted application” is required (see IDs below). The only difference is if the local user writes the application (Windows), or gets the user to install the application (Mobile). Either way, that is a local issue.

CVE-2013-1334, CVE-2012-1848, CVE-2011-1282, CVE-2010-3942, CVE-2009-1123, CVE-2008-2252

Security, Ethics, and University

In the U.S., you are expected to know and live by certain ethical standards related to school. You are taught early on that plagiarism is bad, for example. You are taught that school experiments should be done in a safe manner that does not harm people or animals. Despite this, most colleges and universities maintain a Code of Conduct or a Code of Ethics that applies to both students and faculty. In the security industry, integrity is critical. Part of having integrity is behaving ethically in everything you do. This is important because if a researcher or consultant is questionable or unethical in one part of their life, there is no guarantee they will behave ethically when performing services for a client.

In the last week, we have seen two incidents that call into question if university students understand this at all. The first was a PhD student from a university in the U.S. who was not pleased we wouldn’t share our entire database with him. While we try our best to support academic research, we do not feel any academic project requires our entire data set. Further, many of the research projects he and his colleagues are working on are funded by the U.S. government, who may have contract language that means all data gets handed over to them, including ours. Instead of accepting our decision, he said he could just scrape our site and take all of our data anyway. I reminded him that not only does it violate our license, but it violates his university code of conduct and jeopardizes any government funding.

The second instance is outlined in more detail below, since a group of three students posted multiple advisories yesterday that call into question their sense of ethics. Note that “responsible” disclosure is a term that was strongly pushed by Scott Culp and Microsoft. His article on the topic has since been removed, it seems. The term “responsible” disclosure is biased from the start, implying that anyone who doesn’t play by their rules is “irresponsible”. Instead, the better term “coordinated disclosure” has been used since. Of course, the time frames involved in coordinated disclosure are still heavily debated and likely will never be agreed on. The time given to a vendor to patch a flaw cannot be a fixed length. A small content management system with an XSS vulnerability can often be patched in a day or a week, whereas an overflow in an operating system library may take months due to testing for compatibility and regression. If the vulnerability is in a device that is difficult (or basically impossible) to upgrade, such as SCADA or non-connected devices (e.g. a pacemaker), then extra caution or thought should be given before disclosing it. While no fixed time can be agreed on, most people in the industry know when a researcher did not give a vendor enough time, or when a vendor seems to be taking too long. It isn’t science; it is a combination of gut and personal experience.

Yesterday’s disclosure of interest is by three students from the European University of Madrid who analyzed IP video cameras as the final project of their “Security and Information Technology” master’s program. From their post:

In total, we analyzed 9 different camera brands and we have found 14 vulnerabilities.

**Note that all the analysis we have done has been from cameras found through Google dorks and Shodan, so we have not needed to purchase any of them for our tests. Everything we needed was online.

First, the obvious. Rather than purchasing their own hardware, they used Google and Shodan to find these IP cameras deployed by consumers and businesses. These were devices that did not belong to them, that they did not have permission to test, and that they ran the risk of disabling with their testing. If one of the cameras monitored security for a business and became disabled, it posed an additional risk to the company by creating a gap in their physical security.

Second, given these devices are deployed all over the world, and are traditionally difficult or annoying to upgrade, you might expect the researchers to give the vendors adequate time to verify the vulnerabilities and create a fix. How much time did the vendors get?

Airlive: 6 days
Axis: 16 days
Brickcom: 11 days
Grandstream: 11 days for 1 vuln, 0 days for 2 vulns
Samsung: 0 days
Sony: 17 days
TP-LINK: 11 days

Shortly after posting their advisory, others on the Full Disclosure mail list challenged them too. For the vendors who received 16 and 17 days, many researchers would consider over two weeks to be adequate. However, for the two vendors that got less than 24 hours warning before disclosure, that is not considered coordinated by anyone.

Every researcher can handle disclosure how they see fit. Some have not considered the implications of uncoordinated disclosure, often in a hurry to get their advisory out for name recognition or the thrill. Others who have been doing this a long time find themselves jaded after dealing with one too many vendors who were uncooperative, stalled more than 1,000 days, or threatened a lawsuit. In this case, they are students at a university and likely not veterans of the industry. Despite their own beliefs, one has to wonder if they violated a code of conduct and what their professor will say.

OSVDB Blog Migration

For years, we have used Typo3 for our blog, hosted on one of our servers. It isn’t bad software at all; I actually like it. That changes entirely when it sits behind Cloudflare. Despite our server being up and reachable, Cloudflare frequently reports the blog offline. When logged in as an administrator and posting a new blog or comment, Cloudflare challenges me by demanding a CAPTCHA, despite no suspicious activity on my part. Having to struggle with a service that is designed to protect but instead becomes a burden is bad. Add to that the administrative overhead of managing servers and blog software, and it only takes away from time better spent maintaining the database.

With that, we have migrated over to the managed WordPress offering to free up time and reduce headaches. The one downside is that Typo3 does not appear to have an export feature that WP recognizes. We have backfilled blogs to early 2007 and will slowly get to the rest. The other downside is that, in migrating, comments left on previous blogs can’t be migrated in any semblance of their former selves. Time permitting, we may cut/paste them over as single new comments to preserve community feedback.

The upside: we’re much more likely to resume blogging, and with greater frequency. I currently have 17 drafts going, some dating back a year or more. Yes, the old blogging setup was that bad.

CVSSv2 Shortcomings, Faults, and Failures Formulation

The Open Security Foundation (OSF) and Risk Based Security wrote an open letter to FIRST regarding the upcoming Common Vulnerability Scoring System (CVSS) version 3 proposal. While we were not formally asked to provide input, given our expertise in managing vulnerability databases, along with our daily use of CVSS, we felt our feedback would provide valuable insight for improving CVSS in the future.

Some of the areas discussed include:

  • Introducing 4 levels of granularity
  • Better definitions for terminology for more accurate scoring
  • Re-examining the pitfalls of “Access Complexity”
  • Limitations of the current Access Vector breakdown
  • The challenge of scoring authentication
  • And a variety of other considerations to improve vulnerability scoring

Our conclusion points to the need for CVSS to be overhauled as CVSSv2 has too many current shortcomings to provide an adequate and useful risk scoring model. You can download the full letter in PDF format.
