Category Archives: Vulnerability Disclosure

The Five High-level Types of Vulnerability Reports

Based on a Twitter thread started by Aaron Portnoy, with a reply from @4Dgifts asking why people would debunk vulnerability reports, I offer this quick high-level summary of what we see and how we handle it.

Note that OSVDB uses an extensive classification system (one that is very close to being overhauled for more clarity and granularity), in addition to CVSS scoring. Part of our classification system allows us to flag an entry as ‘not-a-vuln’ or ‘myth/fake’. I’d like to briefly explain the difference, but also the bigger picture. When we process vulnerability reports, we usually only have time to go through the information disclosed. In some cases we will spend extra time validating or debunking the issue, as well as digging up information the researcher left out, such as vendor URL, affected version, script name, parameter name, etc. That leads to the high-level types of disclosures:

  • Invalid / Not Enough – We are seeing cases where a disclosure doesn’t have enough actionable information. There is no vendor URL, the stated product name doesn’t come up on various Google searches, the proof-of-concept (PoC) provided is only for one live site, etc. If we can’t replicate it or dig up the vendor in five minutes, we have to move on.
  • Site-specific – Some of the disclosures from above end up being specific to one web site. In a few rare cases, they impact several web sites because the companies all use the same web hosting / design shop that re-uses templates. Site-specific issues do not qualify for inclusion in any of the big vulnerability databases (e.g. CVE, BID, Secunia, X-Force, OSVDB). We aggregate vulnerabilities in software and hardware that are available to multiple consumers, on their premises. That means that big hosted offerings like Dropbox, Amazon, or Facebook don’t get included either. OSF maintains a separate project that documents site-specific issues.
  • Vulnerability – There is enough actionable information to consider it valid, and nothing sets off warnings that it may not be a real issue. This is the run-of-the-mill event we deal with in large volumes.
  • Not a Vulnerability – While a valid report, the described issue is just considered a bug of some kind. The most common example is a context-dependent ‘DoS’ that simply crashes the software, such as a media player or browser. The issue was reported to crash the software, so the report is valid. But in ‘exploiting’ the issue, the attacker has gained nothing. They have not crossed privilege boundaries, and the issue can quickly be recovered from. Note that if the issue is a persistent DoS condition, it becomes a valid issue.
  • Myth/Fake – This was originally created to handle rumors of older vulnerabilities that simply were not true. “Do you remember that remote Solaris 2.5 bug in squirreld??” Since then, we have started using this classification more to denote when a described issue is simply invalid. For example, the researcher claims code execution and provides a PoC that only shows a DoS. Subsequent analysis shows that it is not exploitable.
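
For those curious how such flags might be modeled in tracking software, here is a minimal sketch in Python (the names and structure are hypothetical, not OSVDB’s actual schema):

from enum import Enum

class DisclosureType(Enum):
    # Hypothetical flags mirroring the five high-level report types above
    INVALID_NOT_ENOUGH = "invalid/not-enough"  # not actionable, cannot replicate
    SITE_SPECIFIC = "site-specific"            # one web site, out of VDB scope
    VULNERABILITY = "vulnerability"            # valid, run-of-the-mill entry
    NOT_A_VULN = "not-a-vuln"                  # valid report, but only a bug
    MYTH_FAKE = "myth/fake"                    # rumored or demonstrably false

# e.g. a report claiming code execution whose PoC only shows a stability crash
print(DisclosureType.MYTH_FAKE.value)  # myth/fake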

Before you start sending emails: as @4Dgifts reminds us, you can rarely say with 100% assurance that something isn’t exploitable. We understand and agree with that completely. But it is also not our job to prove a negative. If a researcher is claiming code execution, then they must provide the evidence to back their claim: either an additional PoC that is more than a stability crash, or a full explanation of the conditions required to exploit it. Oftentimes when a researcher does this, we see that while it is an issue of some sort, it may not cross privilege boundaries. “So you need admin privs to exploit this…” and “If you get a user to type that shell code into a prompt on local software, it executes code…” Sure, but that doesn’t cross privilege boundaries.

That is why we encourage people like Aaron to help debunk invalid vulnerability reports. We’re all about accuracy, and we simply don’t have time to test and figure out every vulnerability disclosed. If it is a valid issue but requires dancing with a chicken at midnight, we want that caveat in our entry. If it is a code execution issue, but only with the same privileges as the attacker exploiting it, we want to properly label that too. We do not use CVSS to score bogus reports as valid. Instead, we reflect that they do not impact confidentiality, integrity, or availability, which gives them a 0.0 score.
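
For the curious, that 0.0 falls directly out of the CVSSv2 base equations. A quick sketch using the standard CVSSv2 metric weights (the vector AV:N/AC:L/Au:N/C:N/I:N/A:N is just an illustrative example):

# CVSSv2 base score for a no-impact vector such as AV:N/AC:L/Au:N/C:N/I:N/A:N
impact = 10.41 * (1 - (1 - 0.0) * (1 - 0.0) * (1 - 0.0))  # C, I, A all None (0.0)
exploitability = 20 * 1.0 * 0.71 * 0.704                  # AV:N, AC:L, Au:N
f_impact = 0 if impact == 0 else 1.176                    # the f(Impact) term
base = round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)
print(base)  # 0.0 -- with no C/I/A impact, f(Impact) zeroes the score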

The Death and Re-birth of the Full-Disclosure Mail List

After John Cartwright abruptly announced the closure of the Full Disclosure mail list, there was a lot of speculation as to why. I mailed John Cartwright the day after and asked some general questions. In so many words, he indicated it was essentially the emotional wear and tear of running the list. While he did not name anyone specifically, the two biggest factors being speculated were ‘NetDev’, due to years of being a headache, and the more recent thread started by Nicholas Lemonias. Through other channels, not via Cartwright, I obtained a copy of a legal threat made against at least one hosting provider for having copies of the mails Lemonias sent. This mail was no doubt sent to Cartwright among others. As such, I believe this is the “straw that broke the camel’s back”, so to speak. A copy of that mail can be found at the bottom of this post, and it should be a stark lesson that disclosure mail list admins are not only facing threats from vendors trying to stifle research, but now from security researchers as well. This includes researchers who openly post to a list, have a full discussion about the issue, desperately attempt to defend their research, and then change their mind and want to erase it all from public record.

As I previously noted, relying on Twitter and Pastebin dumps is not a reliable alternative to a mail list. Others agree with me, including Gordon Lyon, the maintainer of seclists.org and author of Nmap. He has launched a replacement Full Disclosure list to pick up the torch. Note that if you were previously subscribed, the subscriber list was not transferred; you will need to subscribe to the new list if you want to continue participating. The new list will be lightly moderated by a small team of volunteers. The community owes great thanks to both John and now Gordon for their service in helping to ensure that researchers have an outlet to disclose. Remember, it is a mail list on the surface; behind the scenes, they deal with an incredible number of trolls, headaches, and legal threats. Until you run a list or service like this, you won’t know how emotionally draining it is.

Note: The following mail was voluntarily shared with me and I was granted permission to publish it by a receiving party. It is entirely within my legal right to post this mail.

From: Nicholas Lemonias. (lem.nikolas@googlemail.com)
Date: Tue, Mar 18, 2014 at 9:11 PM
Subject: Abuse from $ISP hosts
To: abuse@

Dear Sirs,

I am writing you to launch an official complaint relating to Data
Protection Directives / and Data Protection Act (UK).

Therefore my request relates to the retention of personal and confidential
information by websites hosted by Secunia.

These same information are also shared by UK local and governmental
authorities and financial institutions, and thus there are growing
concerns of misuse of such information.

Consequently we would like to request that you please delete ALL records
containing our personal information (names, emails, etc..) in whole, from
your hosted websites (seclists.org) and that distribution of our
information is ceased . We have mistakenly posted to the site, and however
reserve the creation rights to that thread, and also reserve the right to
have all personal information deleted, and ceased from any electronic
dissemination, use either partially or in full.

I hope that the issue is resolved urgently without the involvement of local
authorities.

I look forward to hearing from you soon.

Thanks in advance,

*Nicholas Lemonias*

Update 7:30P EST: Andrew Wallace (aka NetDev) has released a brief statement regarding Full Disclosure. Further, Nicholas Lemonias has threatened me in various ways in a set of emails, all public now.

Missing Perspective on the Closure of the Full-Disclosure Mail List

This morning I woke to the news that the Full-Disclosure mail list was closing its doors. Assuming this is not a hoax (we are dangerously close to April 1st) or spoofed mail that somehow got through, there seems to be perspective missing on the importance of this event. Via Facebook posts and Twitter I see casual disappointment, insults that the list was low signal-to-noise, and comments that many had stopped reading it a while back. I don’t begrudge the last comment one bit. The list has certainly had its share of noise, but that is the price we pay as a community and industry for having a better source of vulnerability disclosure. Speaking specifically to mail lists, there were three that facilitated disclosure: Bugtraq, Full-Disclosure, and Open Source Security (OSS). Bugtraq has been around the longest and is really the only alternative to Full-Disclosure (remember that VulnWatch didn’t last, and was ultimately low traffic). OSS is a list that caters to open source software and does not traffic in commercial software. A majority of the posts come from open source vendors (e.g. Linux distributions), the software’s maintainer, etc. It is used as much for disclosure as for coordination between vendors and getting a CVE assigned.

One of the first things that should be said is a sincere “thank you” to John Cartwright for running the list so long. For those of you who have not moderated a list, especially a high-traffic list, it is no picnic. The amount of spam alone makes list moderation a pain in the ass. Add to that the fake exploits, discussions that devolve into insults, and topics on the fringe of the list’s purpose, and sorting out which posts should be allowed becomes more difficult than you would think. More importantly, he has done it in a timely manner for so long. Read that last part again, because it is absolutely critical here. When vulnerability information goes out, it is important that it goes out to everyone equally. Many mails sent to Bugtraq and Full-Disclosure are also sent to other parties at the same time. For example, every day we get up to a dozen mails to the OSVDB Moderators with new vulnerability information, with those lists and other sources (e.g. Exploit-DB, OffSec, 1337day) in the CC. If you use one or a few of those places as your primary source for vulnerability intelligence, you want that information as fast as anyone else. A mail sent on Friday afternoon may hit just one of them before appearing two days later on the rest, due to the sites being run with varying frequency, work schedules, and dedication. Cartwright’s quick moderation made sure those mails went out quickly, often at all hours of the day and over weekends.

While many vulnerability disclosers will send to multiple sources, you cannot assume that every disclosure will hit every source. Some of these sites specialize in one type of vulnerability (e.g. web-based), while some accept most but ignore a subset (e.g. some of the more academic disclosures). Further, not every discloser sends to all of these sources. Many will send to a single mail list (e.g. Bugtraq or FD), or to both of them. This is where the problem arises: the people still posting to the two big disclosure lists are losing the one list that basically guaranteed their work would be posted. Make no mistake, that guarantee did not hold for both lists.

This goes back to why Full-Disclosure was created in the first place (July 11, 2002), just days before Symantec announced they were acquiring SecurityFocus (July 17, 2002). That was not a coincidence. While I can’t put a finger on exactly when Bugtraq changed for the worse, I can assure you it has. Back in 2003, security researchers were noticing curious delays in their information being posted. One company challenged SecurityFocus/Bugtraq publicly, forcing them to defend themselves:

“The problem with SecurityFocus is not that they moderate the lists, but the fact that they deliberately delay and partially censor the information,” said Thomas Kristensen, CTO of Secunia, based in Copenhagen, Denmark. “Since they were acquired by Symantec they changed their policy regarding BugTraq. Before they used to post everything to everybody at the same time. Now they protect the interests of Symantec, delay information and inform their customers in advance.” Wong says there is no truth to these accusations. “The early warnings that our DeepSight customers get come from places like BugTraq and events and incidents that we monitor,” Wong said. “We dont give those alerts [from BugTraq] to our customers any sooner than anyone else gets them.”

Unfortunately for our community, Mr. Wong is absolutely incorrect. I have witnessed this behavior firsthand several times over the years, as have others. From a series of mails in 2006:

* mudge (mudge @ uidzero org) [060120 20:04]:
Actually, this advisory is missing some important information. bugtraq engaged in this prior to the “buy out”. Security Focus engaged in this practice as well where there were some advisories that would go out only to the Security Focus paid private list and not be forwarded to the public list to which they were posted.

On Fri, 20 Jan 2006, H D Moore wrote:
FWIW, I have noticed that a few of my own BT posts will not reach my mailbox until they have already been added to the securityfocus.com BID database. It could be my subscriber position in the delivery queue, but it does seem suspicious sometimes. Could just be paranoia, but the list behavior/delivery delays definitely contribute to it.

In each case, moderators of Bugtraq vehemently denied the allegations. In one case, Al Huger (with Symantec at the time) reminded everyone that the combined lists of SecurityFocus were delivering over 7 million mails a day. That alone can cause delivery issues, of course. On the other hand, Symantec surely has the resources to run a set of mail servers that can churn out mail in such volume and ensure prompt delivery. Jump to more recent times and you can still see incredible delays that have nothing to do with delivery issues. For example, RBS posted an advisory simultaneously to both Bugtraq and Full-Disclosure. Notice that the mail was posted on Sep 10 to Full-Disclosure and Sep 19 to Bugtraq. A nine-day delay in moderating vulnerability information is not acceptable in today’s landscape of threats and bad actors. Regardless of intent, such delays simply don’t cut it.

In addition to these delays, the Bugtraq moderators will sometimes reject a post for trivial reasons, such as “using a real IP address” in an example (one time it was the vendor’s own IP, another time a public IP I control). They rejected those posts while frequently allowing “target.com” in disclosures, despite that being a real company’s domain.

With the death of Full-Disclosure, Bugtraq is now our primary source of vulnerability disclosure in the scope of mail lists, and the only source for vulnerabilities in commercial software (out of scope for OSS). To those who argue that people “use mail a lot less now”, I suggest you look at the volume of Bugtraq, Full-Disclosure, and OSS. That is a considerable number of disclosures made through that mechanism. Another mindset is that disclosing vulnerabilities can be done with a Tweet using a hashtag and a link to Pastebin or another hosting site. To this I can quickly say that you have never run a VDB (and try finding a full set of your original L0pht or @stake advisories; many have largely vanished). Pastebin dumps are routinely removed. Researcher blogs, even those hosted on free services such as WordPress and Blogger, disappear routinely. Worse, vendors that host advisories for their own products will sometimes remove their own historical advisories. The “Tweet + link” method simply does not cut it unless you want vulnerability provenance to vanish in large amounts. It is bad enough that VDBs have to rely on the Internet Archive so often (speaking of, donate to them!), but forcing us to set up a system to mirror all original disclosures is a burden. Last, for those who argue that nothing good is posted to Full-Disclosure, Lucian Constantin points out a couple of good examples to counter the argument in his article on the list closing.

Instead, mail lists provide an open, distributed method for releasing information. As you can see, these lists are typically mirrored on multiple sites as well as in personal collections of incoming email. It is considerably easier and safer to use such a method for vulnerability disclosures going forward. In my eyes, and the eyes of others who truly appreciate what Full-Disclosure has done, the loss of that list is devastating in the short term. Not only will it introduce a small amount of bias in vulnerability aggregation, it will take time to recover from. Even if someone else picks up the torch under the same name, or starts a new list to replace it, it will take time for people to transition.

To conclude, I would also ask that John Cartwright practice full disclosure himself. Shuttering the list is one thing, but blaming the action on an unnamed person with no real details isn’t what the spirit of the list is about. Give us details in a concise and factual manner, so that the industry can better understand what you are facing and what they may be getting into should they opt to run such a list.

More tricks than treats with today’s Metasploit blog disclosures?

Today, Tod Beardsley posted part one and part two of a Metasploit blog series titled “Seven FOSS Tricks and Treats”. Unfortunately, this blog comes with as many tricks as it does treats.

In part one, he gently berates the vendors for their poor handling of the issues. In many cases, the issues are labeled “won’t fix” without an explanation of why. During his berating, he also says “I won’t mention which project … filed the issue on a public bug tracker which promptly e-mailed it back in cleartext”. In part two, the only disclosure timeline that includes a bug report is for Moodle, ticket MDL-41449. If this is the case he refers to, then he should have noted that the tracker requires an account, and that a new account / regular user cannot access this report. Since his report was apparently mailed in the clear, the ticket system mailing it back is not the biggest concern. If this is not the ticket he refers to, then now that the issues are public, the ticket should be included in the disclosure for completeness.

Next, we have the issue of “won’t fix”. The Zabbix, NAS4Free, and arguably OpenMediaVault issues are all intended functionality by the vendor. In each case, they require administrative credentials to use the function being ‘exploited’ by the Metasploit modules. I won’t argue that additional circumstances, such as XSS or default credentials, make exploitation easier, but intended functionality is often a reason a vendor will not “fix” a bug. As you say in part one, a vendor should make this type of functionality, and the dangers involved, very clear. Further, vendors should strive to avoid making it easier to exploit. This means quickly fixing vulnerabilities that may disclose session information (e.g. XSS), and not shipping with default credentials. Only at the bottom of the first post do you concede that they are design decisions. Like you, we agree that admin access to a web interface does not imply the person was intended to have root access on the underlying operating system. In those cases, we consider them vulnerabilities, but flag them as a ‘concern’ and include a technical note explaining why.

One of the most discouraging things about these vulnerability reports is the lack of version numbers. It is clear that Beardsley downloaded the software to test it. Why not include the tested version so that administrators can more easily determine if they may be affected? For example, if we assume that the latest version of Moodle was 2.5.2 when he tested, it is likely vulnerable. This matters because version 2.3.9 does not appear to be vulnerable as it uses an alternate spell check method. This kind of detail is extremely helpful to the people who have to mitigate the vulnerability, and the type of people who use vulnerability databases as much as penetration testers.

Finally, the CVE assignments are questionable. Unfortunately, MITRE does not publish the “CVE ID Reservation Guidelines for Researchers” on their CVE Request Page, instead offering to mail it on request. Publishing it might cut down on improper assignments, and its absence may explain why these CVEs were assigned. When an application has intended functionality that can only be abused by an attacker with administrator credentials, that does not meet the criteria for a CVE assignment. Discussion with CVE over each case would help to ensure assignment is proper (see above re: implied permission / access).

As always, we love seeing new vulnerabilities disclosed and quickly fixed. However, we also prefer to have disclosures that fully explain the issue and give actionable information to all parties involved, not just one side (e.g. penetration testers). Keep up the good work and kindly consider our feedback on future disclosures!

An Open Letter to @InduSoft

InduSoft,

When referencing vulnerabilities in your products, you have a habit of only using an internal tracking number that is kept confidential between the reporter (e.g. ICS-CERT, ZDI) and you. For example, from your HotFix page (which requires registration):

WI2815: Directory Traversal Buffer overflow. Provided and/or discovered by: ICS-CERT ticket number ICS-VU-579709 created by Anthony …

The ICS-CERT ticket number is assigned as an internal tracking ID while the relevant parties figure out how to resolve the issue. Ultimately, that ticket number is not published by ICS-CERT. I have already sent a mail to them suggesting they include it in advisories moving forward, to help third parties match up vulnerabilities to fixes and initial reports. Instead of using that number, you should use the public ICS-CERT advisory ID; the details you provide are not specific enough to know which issue this corresponds to.

In another example:

WI2146: Improved the Remote Agent utility (CEServer.exe) to implement authentication between the development application and the target system, to ensure secure downloading, running, and stopping of projects. Also addressed problems with buffer overrun when downloading large files. Credits: ZDI reports ZDI-CAN-1181 and ZDI-CAN-1183 created by Luigi Auriemma

In this case, these likely correspond to OSVDB 77178 and 77179, but it would be nice to know that for sure. Further, we’d like to associate those internal tracking numbers with the entries, but vendors do not reliably list them in order, so we don’t know if ZDI-CAN-1181 corresponds to the first or the second.

In another:

WI1944: ISSymbol Virtual Machine buffer overflow Provided and/or discovered by: ZDI report ZDI-CAN-1341 and ZDI-CAN-1342

In this case, you give two ZDI tracking identifiers, but only mention a single vulnerability. ZDI has a history of abstracting issues very well. The presence of two identifiers, to us, means there are two distinct vulnerabilities.

This is one of the primary reasons CVE exists, and why ZDI, ICS-CERT, and most vendors now use it. In most cases, these larger reporting bodies will have a CVE number to share with you during the process, or if not, will have one at the time of disclosure.

Like your customers, we appreciate clear information regarding vulnerabilities. Many large organizations will use a central clearing house like ours for vulnerability alerting, rather than trying to monitor hundreds of vendor pages. Helping us to understand your security patches in turn helps your customers.

Finally, adding the date each patch was made available would help to clarify these issues and provide another piece of information that is helpful to organizations.

Thank you for your consideration in improving your process!

howdoireportavuln.com – Good intentions, needs fix-ups though.

Tonight, shortly before retiring from a long day of vulnerability import, I caught a tweet mentioning a web site about reporting vulnerabilities. The domain was created on 15-aug-2013 per whois, and the footer shows it was written by Fraser Scott, aka @zeroXten on Twitter.

http://howdoireportavuln.com/

I love focused web sites that are informative, and make a point in their simplicity. Of course, most of these sites are humorous or parody, or simply making fun of the common Internet user.

This time, the web site is directly related to what we do. I want to be very clear here; I like the goal of this site. I like the simplistic approach to helping the reader decide which path is best for them. I want to see this site become the top result when searching for “how do I disclose a vulnerability?” This commentary is only meant to help the author improve the site. Please, take this advice to heart, and don’t hesitate if you would like additional feedback. [Update: After starting this blog last night, before publishing this morning, he already reached out. Awesome.]


Under the ‘What’ category, there are three general disclosure options:

NON DISCLOSURE, RESPONSIBLE DISCLOSURE, and FULL DISCLOSURE

First, you are missing a fourth option: ‘limited disclosure’. Researchers can announce they have found a vulnerability in given software, state the implications, and be done with it. A public report of code execution in some software will encourage the vendor to prioritize the fix, as customers may begin putting pressure on them, and adding a video showing the code execution reinforces the severity. This type of disclosure often doesn’t help a VDB like ours, because it typically doesn’t include enough actionable information. However, it is one way a researcher can disclose and still protect themselves.

Second, “responsible”? No. The term was possibly coined by Steve Christey and further used by Russ Cooper; it was then polarized by Cooper, as well as Scott Culp at Microsoft (“Information Anarchy”, really?), in a (successful) effort to brand researchers as “irresponsible” if they don’t conform to vendor disclosure demands. The more widely recognized term, and one fair to both sides, is “coordinated” disclosure. Culp’s term forgets that vendors can be irresponsible too, if they don’t prioritize critical vulnerabilities while customers are known to be vulnerable with public exploit code floating about. Since then, Microsoft and many other companies have adopted “coordinated” to refer to the disclosure process.

Under the ‘Who’ category, there are more things to consider:

SEND AN EMAIL

These days, it is rare to see domains supporting the RFC-recommended mail addresses (e.g. security@domain); that is a practice mostly lost to the old days. Telling readers to try the “Contact us” tab/link that invariably shows up on web pages is better. Oh wait, you do that. However, it comes well after the big header reading TECHNICAL SUPPORT, which may throw people off.
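
For reference, a quick sketch of the conventional role addresses a researcher might try first (security@, abuse@, support@, and postmaster@ are among the mailboxes recommended by RFC 2142; whether a given domain actually monitors them is another matter):

# Candidate contact addresses per RFC 2142 role-mailbox conventions
def candidate_contacts(domain):
    return [f"{box}@{domain}" for box in ("security", "abuse", "support", "postmaster")]

print(candidate_contacts("example.com"))
# ['security@example.com', 'abuse@example.com', 'support@example.com', 'postmaster@example.com']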

As a quick side note: “how to notifying them of security issues” is one of many spelling or grammar errors. Please run the text through a basic grammar checker.

Under the ‘How’ category:

STAY ANONYMOUS

This is excellent advice, except for the bit about using Tor, since there are serious questions about its security and anonymity. If researchers are worried, they should look at a variety of options, including using a coffee shop’s wireless, hotel wireless, etc.

BE YOURSELF

This is also a great point, but more to the point, make sure your mail is polite and NOT THREATENING. Don’t threaten to disclose on your own timeline. See how the vendor handles the vulnerability report without any indication that you will disclose it. Give them the benefit of the doubt. If you get hints they are stalling at some point, then gently suggest it may be in the best interest of their customers to disclose. Remind them that vulnerabilities are rarely discovered by a single person and that they can’t assume you are the only one who has found it; you are just the only one who apparently decided to help the vendor.

THE DISCLOSURE

Sure, post to Full-Disclosure, or to other options that may be more beneficial to you. Bugtraq has a history of stronger moderation; they tend to weed out crap. Send it directly to vulnerability databases and let them publish it anonymously. VDBs like Secunia generally validate all vulnerabilities before posting them to their database, which may help you down the road if your intentions are called into question. Post to the OSS-security mail list if the vulnerability is in open source software, so you get the community involved. On that list, getting a CVE identifier and having others verify or sanity-check your findings brings more positive attention to the technical issues instead of the politics of disclosure.

FOR SALE

Using a bug bounty system is a great idea, as it generally keeps the new researcher from dealing with disclosure politics. Let people experienced with the process, who have an established relationship and history with the vendor, handle it. However, don’t steer newcomers to ZDI immediately. In fact, don’t name them specifically unless you have a vested interest in helping them, and if so, state it. Instead, break it down into vendor bug bounty programs and third-party programs, and provide a link to Bugcrowd’s excellent list of current bounty programs.

FINALLY

The fine print, of course. Under CITATIONS, I love that you reference the Errata legal threats page, but this should go much higher on the page. Make sure new disclosers know the potential mess they are getting into; we know people don’t read the fine print. This could also be a good lead-in to using a third-party bounty or vulnerability handling service.

It’s great that you make this easy to share with everyone and their dog, but please consider getting a bit more feedback before publishing a site like this. It appears you did this in less than a day, and an extra 24 hours would have made for a stronger offering. You are clearly eager to make it better; you have already reached out to me, and likely Steve Christey if not others. As I said, with some edits and fix-ups, this will be a great resource.

Local File Inclusion vs Arbitrary File Access

Notes for this blog have been lingering for over three years now. In the daily grind to aggregate vulnerabilities, the time to write about them gets put on the back burner frequently. Rest assured, this is not a new issue by any means.

Back in the day, we had traversal attacks that allowed an attacker to ‘traverse’ outside an intended directory to access a file or directory that was never meant to be exposed. The most basic example, known to most, is a web application traversal attack such as:

http://[target]/myapp/jericho.php?file=../../../../../../etc/passwd

Making this request would direct the script to traverse outside the web server document root (DOCROOT) to access the system password file (/etc/passwd). For years, these attacks were simply known as “directory traversal” attacks. For limited traversals, the CVSSv2 score would be 5.0, with a vector of (AV:N/AC:L/Au:N/C:P/I:N/A:N). If the application is running with full privileges and can access any file on the system, it would score 7.8, with a vector of (AV:N/AC:L/Au:N/C:C/I:N/A:N). Note that such an attack only allows an attacker to read the contents of a file, not write to it or execute it as a script. To help distinguish this, such attacks are usually qualified as “traversal arbitrary file access”.
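
As a sanity check on those two scores, the CVSSv2 base equations can be worked through in a few lines of Python (a sketch using the standard weights from the CVSSv2 specification):

def cvss2_base(av, ac, au, c, i, a):
    # CVSSv2 base score: Impact, Exploitability, and f(Impact) per the spec
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Weights: AV:N = 1.0, AC:L = 0.71, Au:N = 0.704; C/I/A: None = 0.0, P = 0.275, C = 0.660
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.0, 0.0))  # 5.0 -- limited traversal (C:P)
print(cvss2_base(1.0, 0.71, 0.704, 0.660, 0.0, 0.0))  # 7.8 -- any file on disk (C:C)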

Local File Inclusion (LFI) attacks go back to around 2003 and often exhibit the same traits as directory traversal attacks, as outlined above. Like the traversal, the attack typically involves a relative path (e.g. ../../) or absolute path (e.g. &file=/path/to/file) to call a specific file on the system. The difference is in how the application handles the request. Instead of displaying the contents of the file as above, it will include the file as if it were an executable script. This means that arbitrary code, limited to what is already on the file system, will be executed with the same privileges as the web application and/or web server. Combined with other common real-world issues, this can be leveraged into full arbitrary remote code execution. For example, if you can access an incoming directory via FTP to write your own .php file, the local file inclusion vulnerability can be used to call that custom code and execute it.

Visually, these two vulnerabilities may look identical:

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

http://[target]/myapp/jericho.php?file=../../../../tmp/shell.php

Despite appearances, these are two very different attacks. If the first is a traversal arbitrary file access issue, the contents of shell.php will be displayed. If the second is a traversal local file inclusion, the contents of shell.php will be processed as PHP code and executed.
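
To make the behavioral difference concrete, here is a deliberately simplified sketch in Python (PHP’s include() is the classic inclusion case; this toy code and its file names are hypothetical):

import os, tempfile

# Stand-in for an attacker-uploaded script (e.g. written to /tmp via FTP)
tmp = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
tmp.write("print('attacker code running with the web app privileges')")
tmp.close()

def arbitrary_file_access(path):
    # Traversal arbitrary file access: contents are merely read and displayed
    with open(path) as f:
        return f.read()

def local_file_inclusion(path):
    # Local file inclusion: contents are evaluated as code, like PHP's include()
    with open(path) as f:
        exec(f.read())

print(arbitrary_file_access(tmp.name))  # prints the script source as inert text
local_file_inclusion(tmp.name)          # runs the script: code execution
os.unlink(tmp.name)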

Even with this simple concept, more and more researchers are unable to make this distinction. Arbitrary file access and local file inclusion are not only getting blended together, but traversals that allow for file manipulation (e.g. append, delete, overwrite) or even file enumeration (e.g. determine existence of file only) are also getting lumped in.

Wrong:

Specto Local File Inclusion by H4ckCity Security Team gives a PoC of:

http://SERVER/index.php?page=/etc/passwd

This is clearly not a local file inclusion as the file being included is the standard text file containing password information. Instead, they show an absolute path file disclosure.

OneFileCMS v.1.1.5 Local File Inclusion Vulnerability by mr.pr0n gives a PoC of:

http://TARGET/onefilecms/onefilecms.php?f=../../../../etc/passwd

Again, this calls a text file, this time via a standard directory traversal. If this is really an LFI, the PoC does not show it.

Pollen CMS 0.6 File Disclosure by MizoZ gives a PoC of:

http://SERVER/[path]/core/lib/readimage.php?image=[LFI]

First, this is a bit suspicious, as the parameter ‘image’ implies it will handle images such as JPG or PNG. Second, the [LFI] string doesn’t show whether it is an absolute path or a traversal. How could the researcher have found it without knowing this? Third, and most important, their disclaimer:

The script only verifies the existence of the given file.

Sorry, not even close to an LFI.

Security, Ethics, and University

In the U.S., you are expected to know and live by certain ethical standards related to school. You are taught early on, for example, that plagiarism is bad. You are taught that school experiments should be done in a safe manner that does not harm people or animals. On top of this, most colleges and universities maintain a Code of Conduct or a Code of Ethics that applies to both students and faculty. In the security industry, integrity is critical, and part of having integrity is behaving ethically in everything you do. This is important because if a researcher or consultant is questionable or unethical in one part of their life, there is no guarantee they will behave ethically when performing services for a client.

In the last week, we have seen two incidents that call into question whether university students understand this at all. The first involved a PhD student from a university in the U.S. who was not pleased that we wouldn’t share our entire database with him. While we try our best to support academic research, we do not feel any academic project requires our entire data set. Further, many of the research projects he and his colleagues are working on are funded by the U.S. government, which may have contract language requiring that all data gets handed over to them, including ours. Instead of accepting our decision, he said he could just scrape our site and take all of our data anyway. I reminded him that doing so not only violates our license, but also violates his university’s code of conduct and jeopardizes his government funding.

The second instance is outlined in more detail below, since a group of three students posted multiple advisories yesterday that call into question their sense of ethics. Note that the idea of “responsible” disclosure is a term that was strongly pushed by Scott Culp and Microsoft. His article on the topic has since been removed, it seems. The term “responsible” disclosure is biased from the start, implying that anyone who doesn’t play by those rules is “irresponsible”. Instead, the better term “coordinated disclosure” has been used since. Of course, the time frames involved in coordinated disclosure are still heavily debated and likely will never be agreed upon. The time given to a vendor to patch a flaw cannot be a fixed length. A small content management system with an XSS vulnerability can often be patched in a day or a week, while an overflow in an operating system library may take months due to compatibility and regression testing. If the vulnerability is in a device that is difficult (or basically impossible) to upgrade, such as SCADA or non-connected devices (e.g. a pacemaker), then extra caution or thought should be given before disclosing it. While no fixed time can be agreed on, most people in the industry know when a researcher did not give a vendor enough time, or when a vendor seems to be taking too long. It isn’t science; it is a combination of gut and personal experience.

Yesterday’s disclosure of interest is by three students from the European University of Madrid, who analyzed IP video cameras as part of the final project for their “Security and Information Technology Master”. From their post:

In total, we analyzed 9 different camera brands and we have found 14 vulnerabilities.

**Note that all the analysis we have done has been from cameras found through Google dorks and Shodan, so we have not needed to purchase any of them for our tests. Everything we needed was online.

First, the obvious. Rather than purchasing their own hardware, they used Google and Shodan to find IP cameras deployed by consumers and businesses. These were devices that did not belong to them, that they did not have permission to test, and that they risked disabling with their testing. If one of the cameras monitored security for a business and became disabled, it posed an additional risk to the company by creating a gap in their physical security.

Second, given these devices are deployed all over the world, and are traditionally difficult or annoying to upgrade, you might expect the researchers to give the vendors adequate time to verify the vulnerabilities and create a fix. How much time did the vendors get?

  • Airlive: 6 days
  • Axis: 16 days
  • Brickcom: 11 days
  • Grandstream: 11 days for 1 vuln, 0 days for 2 vulns
  • Samsung: 0 days
  • Sony: 17 days
  • TP-LINK: 11 days

Shortly after they posted their advisory, others on the Full Disclosure mail list challenged them as well. For the vendors who received 16 and 17 days, many researchers would consider over two weeks to be adequate. However, for the two vendors that got less than 24 hours warning before disclosure, that is not considered coordinated by anyone.

Every researcher can handle disclosure how they see fit. Some have not considered the implications of uncoordinated disclosure, often in a hurry to get their advisory out for name recognition or the thrill. Others who have been doing this a long time find themselves jaded after dealing with one too many vendors who were uncooperative, stalled for more than 1,000 days, or threatened a lawsuit. In this case, they are students at a university and likely not veterans of the industry. Despite their own beliefs, one has to wonder if they violated a code of conduct, and what their professor will say.

Researcher Security Advisory Writing Guidelines

Researcher Security Advisory Writing Guidelines
Open Security Foundation / OSVDB.org
moderators at osvdb.org

This document has been prepared by the Open Security Foundation (OSF) to assist security researchers in working with vendors and creating advisories. Security advisories help convey important information to the community, regardless of your goals or intentions. While you may have an intended audience in mind as you write an advisory, they will not be the only ones to read it. There is a lot of information that can be included in a properly written advisory, and leaving any out makes your advisory something less than it could be.

The OSF encourages researchers to use this document as a guideline for writing security advisories. We will focus on the content of the advisory, not the style. While there is a logical order of presentation, what ultimately matters is including the necessary information, though some things are most beneficial at the start of an advisory. Remember: more information is better, and including information for other parties ultimately helps more people.

How you disclose a vulnerability is your choice. The debate about “responsible” versus “coordinated” disclosure has raged for over two decades. There is no universal accord on an appropriate period of time for a vendor to reply to a vulnerability report or fix the issue, though it is generally agreed to be more than a day and less than a year. Researchers, we fully encourage you to work with vendors and coordinate disclosure if possible; your goal is to improve security, after all, right? The following material will give you additional information and considerations for this process.

Brian Martin & Daniel Moeller

View the Researcher Security Advisory Writing Guidelines

Fascinating Vulnerability and Glimpse Into 33-Year-Old Pen-Testing

Today, we pushed OSVDB 82447, which covers a backdoor in the Multics operating system. For those not familiar with this old OS, there is an entire domain covering the fascinating history behind the development of Multics. OSVDB 82447 is titled “Multics Unspecified Third-party Backdoor” and gives an interesting insight into backdoors distributed by vendors. In this case, a third party planted it, told the vendor, and Honeywell distributed the operating system anyway. I encourage you to read the full paper by Lieutenant Colonel Roger R. Schell, a member of the tiger team that carried out the attack.

To summarize:

During a US Air Force sanctioned penetration test of mainframe computers, sometime before 1979, the tiger team ended up penetrating a Multics installation at Honeywell. In a later account of what happened, a paper said that the tiger team “modified the manufacturer’s master copy of the Multics operating system itself” and injected a backdoor. The backdoor code was described as small, “fewer than 10 instructions out of 100,000”, and required a password for use. The report continues, saying that even though Honeywell was told the backdoor was there and how it worked, their technicians could not find it. Subsequently, the backdoor was distributed in future installations of Multics.

It would be interesting to know why Honeywell didn’t ask for, or didn’t receive, the specific modified code from the Air Force tiger team, and why they opted to distribute it to customers. Perhaps they thought that if their own technicians couldn’t find the backdoor, no one else could. Even more interesting is why a tiger team was sanctioned to carry out a penetration test that not only gave them access to the “master copy” of Multics, but also allowed them to actually place a backdoor in it. When they heard Honeywell couldn’t find it, why didn’t they insist on ensuring it was removed before installation at customer locations? This brings a new twist to the ethics of penetration testing, at least in a historical context.
