
Local File Inclusion vs Arbitrary File Access

Notes for this blog have been lingering for over three years now. In the daily grind to aggregate vulnerabilities, the time to write about them gets put on the back burner frequently. Rest assured, this is not a new issue by any means.

Back in the day, we had traversal attacks that allowed an attacker to ‘traverse’ outside an intended directory to access a file or directory they were not supposed to reach. The most basic example of this, known to most, is a web application traversal attack such as:

http://[target]/myapp/jericho.php?file=../../../../../../etc/passwd

Making this request would direct the script to traverse outside the web server document root (DOCROOT) to access the system password file (/etc/passwd). For years, these attacks were simply known as “directory traversal” attacks. For limited traversals, CVSSv2 scoring would be 5.0 and look like (AV:N/AC:L/Au:N/C:P/I:N/A:N). If the application is running with full privileges and could access any file on the system, it would score a 7.8 and look like (AV:N/AC:L/Au:N/C:C/I:N/A:N). Note that such an attack only allows an attacker to read the contents of the file, not write to it or execute it as a script. To help distinguish this, such attacks are usually qualified as “traversal arbitrary file access”.
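To make the mechanics concrete, here is a minimal sketch (hypothetical PHP, not taken from any real product) of what such a vulnerable handler looks like, along with one common fix:

<?php
// Hypothetical file viewer, vulnerable to directory traversal.
$docroot = '/var/www/myapp/files';

// Vulnerable: the user-supplied path is concatenated as-is, so a request
// like ?file=../../../../../../etc/passwd walks out of $docroot and the
// password file is displayed. Read-only; nothing is executed.
echo file_get_contents($docroot . '/' . $_GET['file']);

// One common fix: canonicalize the path, then verify it is still in $docroot.
$path = realpath($docroot . '/' . $_GET['file']);
if ($path === false || strpos($path, $docroot . '/') !== 0) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
echo file_get_contents($path);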

Local File Inclusion (LFI) attacks go back to around 2003 and often exhibit the same traits as directory traversal attacks, as outlined above. Like the traversal, the attack typically involves a relative path (e.g. ../../) or an absolute path (e.g. &file=/path/to/file) to call a specific file on the system. The difference is in how the application handles the request. Instead of displaying the contents of the file as above, it will include the file as if it were an executable script. This means that arbitrary code, limited to what is already on the file system, will be executed with the same privileges as the web application and/or web server. Combined with other common real-world issues, this can be leveraged into full arbitrary remote code execution. For example, if you can access an incoming directory via FTP to write your own .php file, the local file inclusion vulnerability can be used to call that custom code and execute it.

Visually, these two vulnerabilities may look identical:

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

Despite appearances, these are two very different attacks. If the first is a traversal arbitrary file access issue, the contents of shell.php will be displayed. If the second is a traversal local file inclusion, the contents of shell.php will be processed as PHP code and executed.
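In code, the difference often comes down to a single function call. A minimal sketch of a hypothetical jericho.php under each behavior:

<?php
// Hypothetical jericho.php -- the same request, two different outcomes.

// Traversal arbitrary file access: shell.php is read and its source code
// is echoed back as text. The attacker learns its contents, nothing more.
echo file_get_contents($_GET['file']);

// Local file inclusion: shell.php is parsed and executed as PHP with the
// web server's privileges. If the attacker previously dropped shell.php
// into /tmp (e.g. via a writable FTP incoming directory), this single
// line turns the LFI into full remote code execution.
include($_GET['file']);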

Even with this simple concept, more and more researchers are unable to make this distinction. Arbitrary file access and local file inclusion are not only getting blended together, but traversals that allow for file manipulation (e.g. append, delete, overwrite) or even file enumeration (e.g. determine existence of file only) are also getting lumped in.

Wrong:

Specto Local File Inclusion by H4ckCity Security Team gives a PoC of:

http://SERVER/index.php?page=/etc/passwd

This is clearly not a local file inclusion as the file being included is the standard text file containing password information. Instead, they show an absolute path file disclosure.

OneFileCMS v.1.1.5 Local File Inclusion Vulnerability by mr.pr0n gives a PoC of:

http://TARGET/onefilecms/onefilecms.php?f=../../../../etc/passwd

Again, calling a text file, this time via a standard directory traversal. If this is really an LFI, then the PoC does not show it.

Pollen CMS 0.6 File Disclosure by MizoZ gives a PoC of:

http://SERVER/[path]/core/lib/readimage.php?image=[LFI]

First, this is a bit suspicious, as the parameter ‘image’ implies it will handle images such as JPG or PNG. Second, the [LFI] string doesn’t show whether it is an absolute path or a traversal; how did the researcher find the issue without knowing this? Third, and most important, their disclaimer:

The script only verifies the existence of the given file.

Sorry, not even close to an LFI.
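Their disclaimer suggests the handler does nothing more than an existence check. A minimal sketch of that behavior (a guess based on their description, not the actual Pollen CMS code):

<?php
// Hypothetical existence-check oracle. The response differs based on
// whether the file exists, but the file is never displayed or executed.
// That is file enumeration, not inclusion.
if (file_exists($_GET['image'])) {
    header('HTTP/1.1 200 OK');
} else {
    header('HTTP/1.1 404 Not Found');
}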

Mobile Devices and Exploit Vector Absurdity

The last few days have seen several vulnerabilities disclosed that include serious gaps in logic with regard to exploitation vectors. What is being called “remote” is not. What is being called “critical” is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the “need for a user to install a malicious app” to then allow a vulnerability to be exploited. Think about it.

Number One

We start with an H-Online article titled “Critical vulnerability in Blackberry 10 OS”. First word, critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article’s summary of what is required to exploit this vulnerability:

As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.

The only thing missing from this exploit chain are the proverbial chicken sacrifices at midnight on a full blue moon. Want to get the same result much easier? Find your victim and say “Wow, that is a slick new phone, can I see it?” Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.

Number Two

There were a few disclosures out of Japan’s JVN system, run by JPCERT. Two examples, both the same fundamental vulnerability, are summarized below:

#1 – CVE-2013-3643 (NVD Entry) – JVN 99813183 / JVNDB-2013-000056
#2 – CVE-2013-3642 (NVD Entry) – JVN 79301570 / JVNDB-2013-000055

#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.

Despite all these references, users are left with either incorrect or very misleading information. First, CVE says “an attacker” instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a “context-dependent” attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality ‘partially’. JVN goes so far as to say it can be exploited “over the Internet using packets” with “anonymous or no authentication”.

The Reality

The reality of these vulnerabilities is that they are not remote. Not in any form, under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes ‘remote’. This is absurd. Even if that is the most probable exploit vector and chaining, it does not change the fact that getting a user to install a malicious application is a separate, distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal ‘partial’ information from the one vulnerable application.
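The access vector is not a cosmetic label; it drives the score. Plugging the CVSSv2 coefficients from the specification into the base equation (a quick sketch, function name mine) shows the gap between scoring this flaw as network versus local:

<?php
// CVSSv2 base score arithmetic; coefficients are from the CVSSv2 spec.
function cvss2_base($av, $ac, $au, $c, $i, $a) {
    $impact         = 10.41 * (1 - (1 - $c) * (1 - $i) * (1 - $a));
    $exploitability = 20 * $av * $ac * $au;
    $f              = ($impact == 0) ? 0 : 1.176;
    return round((0.6 * $impact + 0.4 * $exploitability - 1.5) * $f, 1);
}

// NVD's vector, AV:N/AC:M/Au:N/C:P/I:N/A:N ("network" attacker): 4.3
echo cvss2_base(1.0, 0.61, 0.704, 0.275, 0.0, 0.0), "\n";

// The same flaw scored as local, AV:L/AC:M/Au:N/C:P/I:N/A:N: 1.9
echo cvss2_base(0.395, 0.61, 0.704, 0.275, 0.0, 0.0), "\n";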

Let me put it to you in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote by saying “remote, authentication required” and assuming it can be done via RDP or SSH. Doing so, though, devalues the entire purpose of vulnerability classification.

Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a “crafted application” is required (see IDs below). The only difference is if the local user writes the application (Windows), or gets the user to install the application (Mobile). Either way, that is a local issue.

CVE-2013-1334, CVE-2012-1848, CVE-2011-1282, CVE-2010-3942, CVE-2009-1123, CVE-2008-2252

Security, Ethics, and University

In the U.S., you are expected to know and live by certain ethical standards related to school. You are taught early on that plagiarism is bad, for example. You are taught that school experiments should be done in a safe manner that does not harm people or animals. Beyond this, most colleges and universities maintain a Code of Conduct or a Code of Ethics that applies to both students and faculty. In the security industry, integrity is critical. Part of having integrity is behaving ethically in everything you do. This is important because if a researcher or consultant is questionable or unethical in one part of their life, there is no guarantee they will behave ethically when performing services for a client.

In the last week, we have seen two incidents that call into question whether university students understand this at all. The first was a PhD student from a university in the U.S. who was not pleased we wouldn’t share our entire database with him. While we try our best to support academic research, we do not feel any academic project requires our entire data set. Further, many of the research projects he and his colleagues are working on are funded by the U.S. government, who may have contract language that means all data gets handed over to them, including ours. Instead of accepting our decision, he said he could just scrape our site and take all of our data anyway. I reminded him that doing so not only violates our license, but also violates his university’s code of conduct and jeopardizes any government funding.

The second instance is outlined in more detail below, since a group of three students posted multiple advisories yesterday that call into question their sense of ethics. Note that the idea of “responsible” disclosure is a term that was strongly pushed by Scott Culp and Microsoft. His article on the topic has, it seems, since been removed. The term “responsible” disclosure is biased from the start, implying that anyone who doesn’t play by their rules is “irresponsible”. Instead, the better term “coordinated disclosure” has been used since.

Of course, the time frames involved in coordinated disclosure are still heavily debated and will likely never be agreed on. The time given to a vendor to patch a flaw cannot be a fixed length. A small content management system with an XSS vulnerability can often be patched in a day or a week, while an overflow in an operating system library may take months due to compatibility and regression testing. If the vulnerability is in a device that is difficult (or basically impossible) to upgrade, such as SCADA or non-connected devices (e.g. a pacemaker), then extra caution or thought should be given before disclosing it. While no fixed time can be agreed on, most people in the industry know when a researcher did not give a vendor enough time, or when a vendor seems to be taking too long. It isn’t science; it is a combination of gut and personal experience.

Yesterday’s disclosure of interest is by three students from the European University of Madrid who analyzed IP video cameras as part of the final project for their “Security and Information Technology” master’s program. From their post:

In total, we analyzed 9 different camera brands and we have found 14 vulnerabilities.

**Note that all the analysis we have done has been from cameras found through Google dorks and Shodan, so we have not needed to purchase any of them for our tests. Everything we needed was online.

First, the obvious. Rather than purchasing their own hardware, they used Google and Shodan to find IP cameras deployed by consumers and businesses. These devices did not belong to them, they did not have permission to test them, and their testing ran the risk of disabling them. If one of the cameras monitored security for a business and became disabled, that would create a gap in the company’s physical security, posing further risk.

Second, given these devices are deployed all over the world, and are traditionally difficult or annoying to upgrade, you might expect the researchers to give the vendors adequate time to verify the vulnerabilities and create a fix. How much time did the vendors get?

Airlive 6 days
Axis 16 days
Brickcom 11 days
Grandstream 11 days for 1 vuln, 0 days for 2 vulns
Samsung 0 days
Sony 17 days
TP-LINK 11 days

Shortly after they posted their advisory, others on the Full Disclosure mailing list challenged them as well. For the vendors who received 16 and 17 days, many researchers would consider over two weeks to be adequate. However, for the two vendors that got less than 24 hours of warning before disclosure, that is not considered coordinated by anyone.

Every researcher can handle disclosure how they see fit. Some have not considered the implications of uncoordinated disclosure, often in a hurry to get their advisory out for name recognition or the thrill. Others, who have been doing this a long time, find themselves jaded after dealing with one too many vendors who were uncooperative, stalled more than 1,000 days, or threatened a lawsuit. In this case, they are students at a university and likely not veterans of the industry. Despite their own beliefs, one has to wonder if they violated a code of conduct and what their professor will say.
