In the U.S., you are expected to know and live by certain ethical standards related to school. You are taught early on that plagiarism is bad, for example, and that school experiments should be done in a safe manner that does not harm people or animals. To reinforce this, most colleges and universities maintain a Code of Conduct or Code of Ethics that applies to both students and faculty. In the security industry, integrity is critical, and part of having integrity is behaving ethically in everything you do. This is important because if a researcher or consultant is questionable or unethical in one part of their life, there is no guarantee they will behave ethically when performing services for a client.
In the last week, we have seen two incidents that call into question whether university students understand this at all. The first involved a PhD student at a U.S. university who was not pleased that we wouldn’t share our entire database with him. While we try our best to support academic research, we do not feel any academic project requires our entire data set. Further, many of the research projects he and his colleagues are working on are funded by the U.S. government, whose contracts may contain language requiring that all data, including ours, be handed over. Instead of accepting our decision, he said he could simply scrape our site and take all of our data anyway. I reminded him that doing so would not only violate our license, but also violate his university’s code of conduct and jeopardize his government funding.
The second instance is outlined in more detail below, since a group of three students posted multiple advisories yesterday that call their sense of ethics into question. Note that “responsible” disclosure is a term that was strongly pushed by Scott Culp and Microsoft. His article on the topic has since been removed, it seems. The term “responsible” disclosure is biased from the start, implying that anyone who doesn’t play by their rules is “irresponsible”. The better term “coordinated disclosure” has been used since. Of course, the time frames involved in coordinated disclosure are still heavily debated and will likely never be agreed on. The time given to a vendor to patch a flaw cannot be a fixed length. A small content management system with an XSS vulnerability can often be patched in a day or a week, while an overflow in an operating system library may take months due to compatibility and regression testing. If the vulnerability is in a device that is difficult (or practically impossible) to upgrade, such as SCADA equipment or a non-connected device (e.g., a pacemaker), then extra caution and thought should be given before disclosing it. While no fixed time can be agreed on, most people in the industry know when a researcher did not give a vendor enough time, or when a vendor seems to be taking too long. It isn’t science; it is a combination of gut and personal experience.
Yesterday’s disclosure of interest comes from three students at the European University of Madrid who analyzed IP video cameras as the final project for their “Security and Information Technology” Master’s program. From their post:
In total, we analyzed 9 different camera brands and we have found 14 vulnerabilities.
Note that all the analysis we have done has been from cameras found through Google dorks and Shodan, so we have not needed to purchase any of them for our tests. Everything we needed was online.
First, the obvious. Rather than purchasing their own hardware, they used Google and Shodan to find IP cameras deployed by consumers and businesses. These devices did not belong to them, they did not have permission to test them, and their testing risked disabling them. If one of the cameras monitored security for a business and became disabled, that created a gap in the company’s physical security and posed further risk.
Second, given these devices are deployed all over the world, and are traditionally difficult or annoying to upgrade, you might expect the researchers to give the vendors adequate time to verify the vulnerabilities and create a fix. How much time did the vendors get?
| Vendor | Time Given |
| --- | --- |
| Grandstream | 11 days for 1 vuln, 0 days for 2 vulns |
Shortly after they posted their advisory, others on the Full-Disclosure mailing list challenged them as well. For the vendors who received 16 and 17 days, many researchers would consider over two weeks adequate. However, for the two vendors that got less than 24 hours’ warning before disclosure, that is not considered coordinated by anyone.
Every researcher can handle disclosure as they see fit. Some have not considered the implications of uncoordinated disclosure, often in a hurry to get their advisory out for name recognition or the thrill. Others who have been doing this a long time find themselves jaded after dealing with one too many vendors who were uncooperative, stalled for more than 1,000 days, or threatened a lawsuit. In this case, they are students at a university and likely not veterans of the industry. Whatever their own beliefs, one has to wonder if they violated a code of conduct, and what their professor will say.
Researcher Security Advisory Writing Guidelines
Open Security Foundation / OSVDB.org
moderators at osvdb.org
This document has been prepared by the Open Security Foundation (OSF) to assist security researchers in working with vendors and creating advisories. Security advisories help convey important information to the community, regardless of your goals or intentions. While you may have an intended audience in mind as you write an advisory, they will not be the only ones to read it. There is a lot of information that can be included in a properly written advisory, and leaving any of it out makes your advisory less than it could be.
The OSF encourages researchers to use this document as a guideline for writing security advisories. We will focus on the content of the advisory, not the style. While there is a logical order of presentation, what ultimately matters is including the necessary information, though some things are most beneficial at the start of an advisory. Remember: more information is better, and including information for other parties ultimately helps more people.
How you disclose a vulnerability is your choice. The debate about “responsible” or “coordinated” disclosure has raged for over two decades. There is no universal accord on what is an appropriate period of time for a vendor to reply to a vulnerability report, or fix the issue, though it is generally agreed to be more than a day and less than a year. Researchers: we fully encourage you to work with vendors and coordinate disclosure when possible; your goal is to improve security, after all, right? The following material will give you additional information and considerations for this process.
Brian Martin & Daniel Moeller
Today, we pushed OSVDB 82447 which covers a backdoor in the Multics Operating System. For those not familiar with this old OS, there is an entire domain covering the fascinating history behind the development of Multics. OSVDB 82447 is titled “Multics Unspecified Third-party Backdoor” and gives an interesting insight into backdoors distributed by vendors. In this case, a third-party planted it, told the vendor, and Honeywell still distributed the operating system anyway. I encourage you to read the full paper by Lieutenant Colonel Roger R. Schell, a member of the tiger team that carried out the attack.
During a U.S. Air Force-sanctioned penetration test of mainframe computers, sometime before 1979, the tiger team ended up penetrating a Multics installation at Honeywell. In a later account of what happened, a paper said that the tiger team “modified the manufacturer’s master copy of the Multics operating system itself” and injected a backdoor. The backdoor code was described as small, “fewer than 10 instructions out of 100,000”, and required a password for use. The report continues, saying that even though Honeywell was told the backdoor was there and how it worked, their technicians could not find it. The backdoor was subsequently distributed in future installations of Multics.
It would be interesting to know why Honeywell didn’t ask for, or didn’t receive, the specific modified code from the Air Force tiger team, and why they opted to distribute it to customers. Perhaps they thought if their own technicians couldn’t find the backdoor, no one else could. Even more interesting is why a tiger team was sanctioned to carry out a penetration test that not only gave them access to the “master copy” of Multics, but why they were allowed to actually place the backdoor there. When they heard Honeywell couldn’t find it, why didn’t they insist on ensuring it was removed before installation at customer locations? This brings a new twist to the ethics of penetration testing, at least in a historical context.
In 2002, iDefense started their Vulnerability Contributor Program. The VCP was created to solicit vulnerability information from the security community and pay researchers for it. By paying up to US$15,000 for a vulnerability or exploit, iDefense proved, after years of debate, that there was a significant market for such information. The VCP also served as a stark reminder that researchers have no obligation to report vulnerabilities to vendors; doing so is a courtesy.
The VCP pays for “actionable research”, meaning exploits in prominent software (e.g., Microsoft, Oracle) and infrastructure devices (e.g., Cisco). With the information in hand, iDefense in turn leverages researchers’ time by notifying their customers as an early-warning system while handling the responsible disclosure of the information to the vendor. This can save a world of time for researchers who are long since tired of the headaches that often come with disclosure.
The list of vulnerabilities disclosed by iDefense is impressive. They attribute the large number of advisories to “250 security researchers worldwide”.
In the past few months, an OSF employee (Nepen) has begun to add creditee information for many vulnerabilities in prominent software. This has resulted in creditee information being added for all of the iDefense vulnerabilities. Using OSVDB, we can now look at their advisories in a new light.
iDefense employees have released 131 advisories, credited to 11 unique researchers and “iDefense Labs”. The VCP program has released 479 advisories, credited to 78 unique researchers and “anonymous”. If we assume the 250-researcher number is an estimate that includes both iDefense and VCP researchers, then 89 researchers are distinct and public. That leaves approximately 161 unique people behind the “anonymous” credit, covering 326 of the 479 advisories released.
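The arithmetic above is simple enough to check; here is a quick sketch, with all figures taken from the paragraph and iDefense’s “250 security researchers worldwide” treated as an estimate:

```python
# Figures tallied from iDefense's public advisories, as stated above.
labs_researchers = 11       # named researchers on iDefense Labs advisories
vcp_researchers = 78        # named researchers on VCP advisories
claimed_total = 250         # iDefense's claimed researcher count (estimate)

vcp_advisories = 479
anonymous_advisories = 326  # VCP advisories credited only to "anonymous"

named = labs_researchers + vcp_researchers
anonymous_people = claimed_total - named

print(named)             # 89 distinct, public researchers
print(anonymous_people)  # ~161 people behind the "anonymous" credit
```

At roughly two advisories per anonymous contributor, the unnamed pool accounts for the bulk of the VCP output.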
Using OSVDB’s new creditee system, we can see a neat timeline of the advisories as related to both iDefense and their VCP:
iDefense VCP (79 researchers, 479 advisories): http://osvdb.org/affiliations/1139-idefense-labs-vcp
iDefense Labs (12 researchers, 131 advisories): http://osvdb.org/affiliations/1091-idefense-labs
This is one of many neat ways to use the enhanced creditee system. Over time, as more information is added to the database, we can begin to look at other researchers and organizations.
Perhaps it is the fine tequila this evening, but I really don’t get how our industry can latch on to the recent ‘Aurora’ incident and try to take Microsoft to task about it. The amount of news on this has been overwhelming, and I will try to very roughly summarize:
- News surfaces Google, Adobe and 30+ companies hit by “0-day” attack
- Google uses this for political overtones
- Originally thought to be an Adobe 0-day; later revealed to be an MSIE 0-day
- Jan 14, confirmed it is MSIE vuln, shortly after dubbed “aurora”
- Jan 21, uproar over MS knowing about the vuln since Sept
Now, here is where we get to the whole forest, trees and some analogy about eyesight. Oh, I’ll warn (and surprise) you in advance, I am giving Microsoft the benefit of the doubt here (well, for half the blog post) and throwing this back at journalists and the security community instead. Let’s look at this from a different angle.
The big newsworthy issue is that Microsoft knew of this vulnerability in September and didn’t issue a patch until late January. What is not clear is whether Microsoft knew it was being exploited. The wording of the Wired article doesn’t make it clear: “aware months ago of a critical security vulnerability well before hackers exploited it to breach Google, Adobe and other large U.S. companies” and “Microsoft confirmed it learned of the so-called ‘zero-day’ flaw months ago”. Errr, nice wording. Microsoft was (technically) aware of the vulnerability before hackers exploited it, but the article doesn’t specifically say whether they KNEW hackers were exploiting it. Microsoft learned of the “0-day” months ago? No, bad bad bad. This takes an over-abused term and makes it even worse. If a vulnerability is found and reported to the vendor before it is exploited, is it still a 0-day (tree, forest, no one there to hear it falling)?
Short of Microsoft admitting they knew it was being exploited, we can only speculate. So, for fun, let’s give them a pass on that one and assume it was like any other privately disclosed bug. They were working it like any other issue, fixing, patching, regression testing, etc. Good Microsoft!
Bad Microsoft! But, before you jump on the bandwagon, bad journalists! Bad security community!
Why do you care that they sat on this one vulnerability for six months? Why is that such a big deal? Am I the only one who missed the articles pointing out that they actually sat on five other code execution bugs for longer? Where was the outpouring of blogs or news articles mentioning that “aurora” was one of six vulnerabilities reported to them during or before September, all in MSIE, all allowing remote code execution (tree, forest, not seeing one for the other)?
| CVE | Reported to MS | Disclosed | Time to Patch |
| --- | --- | --- | --- |
| CVE-2010-0244 | 2009-07-14 | 2010-01-21 | 6 months, 7 days (191 days) |
| CVE-2010-0245 | 2009-07-14 | 2010-01-21 | 6 months, 7 days (191 days) |
| CVE-2010-0246 | 2009-07-16 | 2010-01-21 | 6 months, 5 days (189 days) |
| CVE-2010-0248 | 2009-08-14 | 2010-01-21 | 5 months, 7 days (160 days) |
| CVE-2010-0247 | 2009-09-03 | 2010-01-21 | 4 months, 18 days (140 days) |
| CVE-2010-0249 | 2009-09-?? | 2010-01-14 | 4 months, 11 days (133 days) – approx. |
| CVE-2010-0027 | 2009-11-15 | 2010-01-21 | 2 months, 6 days (67 days) |
| CVE-2009-4074 | 2009-11-20 | 2009-11-21 | 2 months, 1 day (62 days) |
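The “Time to Patch” column is just a date difference, which is easy to verify. A quick sketch using the dates from the table (CVE-2010-0249’s report date is approximate in the original, so it is omitted here):

```python
from datetime import date

# (CVE, reported-to-MS, patched) pairs from the table; the computed day
# counts should match the parenthesized figures in the rightmost column.
cases = [
    ("CVE-2010-0244", date(2009, 7, 14), date(2010, 1, 21)),   # 191 days
    ("CVE-2010-0248", date(2009, 8, 14), date(2010, 1, 21)),   # 160 days
    ("CVE-2010-0247", date(2009, 9, 3), date(2010, 1, 21)),    # 140 days
    ("CVE-2010-0027", date(2009, 11, 15), date(2010, 1, 21)),  # 67 days
]

for cve, reported, patched in cases:
    print(cve, (patched - reported).days)
```

Subtracting two `date` objects yields a `timedelta`, whose `.days` attribute gives the elapsed calendar days.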
Remind me again, why the “Aurora” conspiracy is noteworthy? If Microsoft knew of six remote code execution bugs, all from the September time-frame, why is one any more severe than the other? Is it because one was used to compromise hosts, detected and published in an extremely abnormal fashion? Are we actually trying to hold Microsoft accountable on that single vulnerability when the five others just happened not to be used to compromise Google, Adobe and others?
Going back to the Wired article, in the second-to-last paragraph they say: “On Thursday, meanwhile, Microsoft released a cumulative security update for Internet Explorer that fixes the flaw, as well as seven other security vulnerabilities that would allow an attacker to remotely execute code on a victim’s computer.” Really, Wired? That late in the article, you gloss over “seven other vulnerabilities” that allow remote code execution? And worse, you don’t point out that Microsoft was informed of five of them BEFORE AURORA?
Seriously, I am the first to hold Microsoft’s feet to the fire for bad practices, but this goes beyond even my boundaries. If you are going to take them to task over all this, at least do it right: SIX CODE EXECUTION VULNERABILITIES that they KNEW ABOUT FOR roughly SIX MONTHS. Beating them up over just one is amateur hour in this curmudgeonly world.
Earlier this evening, there was a Twitter debate regarding ISO/IEC 29147, a proposed standard for responsible vulnerability disclosure. @dinodaizovi brought up a fresh angle: the name “responsible disclosure” completely ignores whether the vendor practices “responsible remediation”. That term should be far more central to our minds and discussion. The lack of responsible remediation is why so many researchers are fed up with dealing with vendors, and is one reason some use services like ZDI or iDefense, not just the cash.
The “responsible disclosure” debate is stale for the most part. We’ll never agree on how much time is ‘right’ for a vendor to fix a vulnerability. Some researchers think it’s days; others think weeks or months. In the paraphrased words of a vendor representative on a responsible disclosure panel a few years back, “if I can have a kid in 9 months, I should be able to fix a vulnerability too”. Yet 9 months isn’t reasonable to some vendors like HP, who routinely break the 1,000-day mark, even for simple XSS.
@mckeay brought up another genuinely fresh aspect of the disclosure debate, asking what part consumers play in the disclosure process. While it is a neat angle that most haven’t considered, I believe it is quickly answered: consumers can put financial pressure on vendors that don’t play well with others. In reality, consumers are lazy. It takes more than a few bad acts to get us to spend the time and energy finding a new vendor. Short of truly egregious abuse, nine times out of ten we will not find a new vendor.
Back to @dinodaizovi. He is right: any standard for disclosure should be as centered on the vendor as it is on the researcher. Researchers can easily fall back on RFP’s “rfpolicy” disclosure policy and change X days to something they believe in. The framework is still perfectly valid and outlines the process; the time frames are always up for debate.
What if we carried this one step beyond? How about making the ISO standards apply to any and every vulnerability, regardless of who found it? If BigVendor finds a vulnerability during internal testing and fixes it, don’t consumers have a right to know? When BigVendor says “upgrade to Service Pack 18” and only gives us a reason of “big stability enhancements!!”, shouldn’t we have a right to know those enhancements translate into 17 remotely exploitable vulnerabilities discovered during internal testing and QA? Wouldn’t *that* knowledge be a more significant reason to upgrade and apply the service pack?
I realize it is a pipe dream to think that most vendors would ever offer that level of transparency, even months (years?) after a given issue is fixed. In reality, they are the proverbial large flightless birds who stick their heads in the sand rather than face a difficult situation (yes, real ostriches don’t actually bury their heads). It has been proven countless times that serious vulnerabilities in big vendors’ products (e.g., Microsoft, Apple, Adobe) are being discovered by multiple parties. No one with an inkling of common sense and rational thinking can believe that the ‘bad guys’ aren’t also discovering some of these bugs. We’re long past the point of vendors honestly thinking they can maintain some notion of a reputation for ‘security’. Add it up, and we’re at the point where the big vendors should be disclosing vulnerabilities discovered during their internal QA / SDLC process. The reputation of insecure software really can’t hurt them any more, and transparency is the one thing that could buy back some degree of consumer confidence.
Perhaps now is the time when ‘responsible disclosure’ should apply equally to hackers, security researchers, and vendors, and extend to ‘responsible remediation’. Because really, some 20 years after the disclosure debate got going, do we really need more guidelines for researchers giving away $250/hr consulting work, or for “hackers” posting vulnerabilities as a hobby? Vendors that have tried to label or apply policy to these people were blame-shifting from day one, while never applying the desired policy to themselves.
Vulnerabilities reported ten years ago should have no impact on your customers. If they do, then you are woefully behind, and your customers are desperately hanging on to legacy products, scared to upgrade. Vendors who have kept up on security and adopted a responsible and timely process for handling it: open up your records. Share with the world the vulnerabilities that are ten or more years old. Let the security community get a better picture of the real number of vulnerabilities reported to you, specifically the ones that never appeared in your advisories. This includes off-beat denial-of-service crashes, difficult-to-reproduce memory corruption, silly issues that required some level of access to begin with, and everything else.
Some researchers have begun to do this, sharing more details of older disclosures that had vague details. Simple Nomad posted earlier this year about several old bugs as well as cleared up some confusion (via e-mail) regarding the old Palmetto FTP vulnerabilities.
I know this is a pipe dream, as companies don’t want to admit the number of vulnerabilities that were in their products, even ten years ago. It doesn’t matter that they fought uphill battles to win over the media and consumers with promises of how their software development life cycle matured or how they learned from their past. No vendor will dump hundreds of previously unpublished vulnerabilities on the world. But on the rare chance a vendor realizes that sharing this information can only help their reputation, while contributing to the VDB and metrics communities.. send them in! moderators[at]osvdb.org
This is a question OSVDB moderators, CVE staff and countless other VDB maintainers have asked. Today, Gunter Ollmann with IBM X-Force released his research trying to answer it. Before you read on: I think this research is excellent. The relatively few criticisms I bring up are not the fault of Ollmann’s research or methodology, but of his VDB of choice (and *every* other VDB) not having a complete data set.
Skimming his list, my first thought was that he was missing someone. A quick search of OSVDB shows that Lostmon Lords (aka ‘lostmon’) has close to 350 published vulnerabilities. How could a top-ten list miss someone like this when its #10 only had 147? Read down to Ollmann’s caveat and there is a valid point, but sketchy wording. The data he is using relies on the information being public. As the caveat says, though, “because they were disclosed on non-public lists” implies that the only sources he or X-Force use are mailing lists such as Bugtraq and Full-Disclosure. Back in the day, that was a pretty reliable source for a very high percentage of vulnerability information. In recent years, though, a VDB must look at other sources to get a better picture. Web sites such as milw0rm receive a steady stream of vulnerability information that is frequently not cross-posted to mailing lists. In addition, many researchers (including lostmon) mail their discoveries directly to the VDBs and bypass the public mailing lists. If researchers mail a few VDBs and not the rest, the VDBs must start watching each other. This in turn leads to the “VDB inbreeding” that Jake and I mentioned at CanSecWest 2005, a necessary evil if you want more data on vulnerabilities.
In May of 2008, OSVDB did the same research Ollmann did, and we came up with different results. This was based on the data we had available, which is still admittedly very incomplete (we always need data manglers). So who is right? Neither of us. Well, perhaps he is, perhaps we are, but unfortunately we’re both working with incomplete databases. In my opinion, OSVDB has better coverage of vulnerabilities, while X-Force clearly has better consistency in their data and a fraction of our gaps.
Last, this data is interesting as is, but would be really fascinating if mixed with ‘researcher confidence’ (a big thing of Steve Christey/CVE and myself), in which we track a researcher’s record for accuracy in disclosure. Someone who disclosed 500 vulnerabilities last year with a 10% error rate should not rank above someone who found 475 with a 0% error rate. In addition, as Ollmann’s caveat says, these are pure numbers and do not distinguish hundreds of XSS findings from remote code execution in an operating system’s default-install services. A weight system applied to each vulnerability (e.g., XSS = 3, SQLi = 7, remote code execution = 9), factored into a researcher’s score, could move beyond “who discovered the most” and perhaps start to answer “who found the most respectable vulnerabilities”.
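As a rough sketch of what such a scoring scheme could look like; the weights come from the example in the text, while the function and the researcher figures are hypothetical, for illustration only:

```python
# Hypothetical severity weights, per the example above.
WEIGHTS = {"xss": 3, "sqli": 7, "rce": 9}

def researcher_score(disclosures, error_rate):
    """Weighted score: sum of severity weights for each disclosure,
    discounted by the researcher's historical error rate
    (0.0 = always accurate, 1.0 = always wrong)."""
    raw = sum(WEIGHTS[kind] for kind in disclosures)
    return raw * (1.0 - error_rate)

# Matching the example in the text: 500 disclosures at a 10% error rate
# versus 475 of the same kind at a 0% error rate.
prolific = researcher_score(["xss"] * 500, error_rate=0.10)
careful = researcher_score(["xss"] * 475, error_rate=0.0)
print(prolific, careful)  # the careful researcher ranks higher
```

Under this (made-up) discounting, the accurate researcher with fewer finds outranks the error-prone one, which is exactly the ordering the text argues for.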
Adam Penenberg wrote an article titled “The Black Market Code Industry” for FastCompany in which he details his research on two HP employees who actively sold exploit code in their spare time, at least one of them selling exploits in HP’s own software. According to the article, HP knew about one of the employees at the time and was investigating. While a neat article and a fun read, it left me with a lot more questions that I hope get answered at some point (how about a ‘Part 2’, Adam?).
- Does Rigano still work for HP now that the article has been out a week?
- Did either individual have access to source code to make their exploit writing easier? If so, did they have access to edit source code in any capacity (e.g. backdoors, adding vulnerable code)?
- Did Rigano actually sell his exploits? If so, to whom and for how much? Checking the Full-Disclosure list archives, he appears to have had exploits for IIS 6.0, Firefox 2.x, MSIE 7, SAP, Apache, Microsoft Office and more.
- If Rigano did sell vulnerabilities, did he vet his buyers, or could he have sold them to ‘enemy’ nations or hostile countries (relative terms, I know)?
- Why is the FBI investigating a France based employee of HP?
- Is t0t0 a current employee of HP? If not, did he leave over his exploit-selling activities? The article suggests that HP is aware of one of the two sellers. What do they have to say about this article now?
This blog entry is probably worth many pages of ranting, examining and dissecting the anatomy of a 0-day panic and the resulting fallout. Since this tends to happen more often than some of us care to stomach, I’ll touch on the major points and be liberal in pointing fingers. If you receive the “wag of my finger“, stop being part of the problem and wise up.
I blinked and missed someone disclosing that there was a dreaded 0-day vulnerability in Adobe Flash Player and that it was a big threat. Apparently Symantec noticed that malicious Chinese sites were exploiting Flash and claimed the current 9.0.124.0 release could be successfully exploited. When pressed for details, Symantec backtracked and said they were wrong; it appeared to be the same exploit as the one previously disclosed by Mark Dowd (CVE-2007-0071). Bad Symantec, poor research.
To make matters worse, Symantec then claimed that even though it was an old issue, the “in-the-wild exploit was effective against stand-alone versions of Flash Player 9.0.124.0” and that not all versions had been patched correctly. Way to save face, Ben Greenbaum of Symantec! Oh wait, today he changed his mind and said that Symantec’s claims were based on erroneous conclusions, and that the behavior of Flash on Linux they were observing was intended by Adobe and not proof of a vulnerability. Worse still, Symantec researchers had downloaded the “latest” Flash and found it “vulnerable”, which led to their sky-is-falling panic. Shortly after, they realized they hadn’t downloaded all of the security patches and had been exploiting a known-vulnerable version of Flash. Oops?
Two rounds of hype-driven 0-day threat warnings, and no real new threat. Whew, hopefully Symantec raised their THREATCON to blood red or whatever is appropriate for such 0-day threats. You do monitor that, don’t you?
This fiasco led many news outlets and vendors to issue warnings about the new 0-day threat. Secunia, SecurityFocus/BID, SecurityTracker, CERT, and FrSIRT all released new warnings and created entries in their respective databases as a result. In the VDB world, this is a royal pain-in-the-ass to deal with. Secunia ‘revoked’ their entry, BID ‘retired’ theirs, SecurityTracker flagged theirs a ‘duplicate entry’, FrSIRT ‘revoked’ their entry, and CERT still has it listed.
Fortunately for OSVDB, we were a few hours behind the rest, noticed the discrepancies, and waited for more information. Unfortunately, the rest of the world, including ALL of the VDBs and news outlets listed above (and others), failed miserably to use common sense and a government-funded resource that could have prevented this kind of problem. As of this posting, Secunia, BID, SecurityTracker, FrSIRT, CERT, Dancho, ComputerWorld and eWeek still don’t link to the CVE ID for the vulnerability. Only Adobe’s updated blog entry actually references CVE-2007-0071 (but doesn’t link to it). Secunia links to a previous ID that has seven CVEs associated with it. The original CVE was assigned on 2007-01-04 and published around 2008-04-08, a month and a half prior to this mess.
VDBs, shame on you for adding to the confusion. Symantec, shame on you for crying 0-day when your own engineers screwed up badly. Shame on everyone for not clearing it up fully by linking to the correct CVE entry or their own previous entries.
Before any of you receiving a “wag of the finger” complain, consider the real-world impact of your actions. In this case, only 12 MILLION people ended up seeing a vague warning when they loaded their favorite game. Blizzard included the correct fix information, which was the same as a month or more before, but the sudden ‘security alert’ (extremely rare for them) only prompted their customers to wonder, possibly panic, and definitely kill some demons as a result.