In the U.S., you are expected to know and live by certain ethical standards related to school. You are taught early on, for example, that plagiarism is bad. You are taught that school experiments should be done in a safe manner that does not harm people or animals. Beyond this, most colleges and universities maintain a Code of Conduct or a Code of Ethics that applies to both students and faculty. In the security industry, integrity is critical. Part of having integrity is behaving ethically in everything you do. This is important because if a researcher or consultant is questionable or unethical in one part of their life, there is no guarantee they will behave ethically when performing services for a client.
In the last week, we have seen two incidents that call into question whether university students understand this at all. The first was a PhD student from a university in the U.S. who was not pleased we wouldn’t share our entire database with him. While we try our best to support academic research, we do not feel any academic project requires our entire data set. Further, many of the research projects he and his colleagues are working on are funded by the U.S. government, whose contracts may include language requiring that all data be handed over to them, including ours. Instead of accepting our decision, he said he could just scrape our site and take all of our data anyway. I reminded him that doing so would not only violate our license, but also his university’s code of conduct, and would jeopardize any government funding.
The second instance is outlined in more detail below, since a group of three students posted multiple advisories yesterday that call into question their sense of ethics. Note that “responsible” disclosure is a term that was strongly pushed by Scott Culp and Microsoft. His article on the topic has, it seems, since been removed. The term “responsible” disclosure is biased from the start, implying that anyone who doesn’t play by their rules is “irresponsible”. Instead, the better term “coordinated disclosure” has been used since. Of course, the time frames involved in coordinated disclosure are still heavily debated and likely will never be agreed on. The time given to a vendor to patch a flaw cannot be a fixed length. A small content management system with an XSS vulnerability can often be patched in a day or a week, whereas an overflow in an operating system library may take months due to compatibility and regression testing. If the vulnerability is in a device that is difficult (or practically impossible) to upgrade, such as SCADA or non-connected devices (e.g. a pacemaker), then extra caution or thought should be given before disclosing it. While no fixed time can be agreed on, most people in the industry know when a researcher did not give a vendor enough time, or when a vendor seems to be taking too long. It isn’t science; it is a combination of gut and personal experience.
Yesterday’s disclosure of interest comes from three students at the European University of Madrid who analyzed IP video cameras as part of the final project for their “Security and Information Technology” master’s program. From their post:
> In total, we analyzed 9 different camera brands and we have found 14 vulnerabilities.
>
> Note that all the analysis we have done has been from cameras found through Google dorks and Shodan, so we have not needed to purchase any of them for our tests. Everything we needed was online.
First, the obvious. Rather than purchasing their own hardware, they used Google and Shodan to find IP cameras deployed by consumers and businesses: devices that did not belong to them, that they did not have permission to test, and that they risked disabling with their testing. If one of the cameras monitored security for a business and became disabled, their testing would have created a gap in the company’s physical security as well.
Second, given these devices are deployed all over the world, and are traditionally difficult or annoying to upgrade, you might expect the researchers to give the vendors adequate time to verify the vulnerabilities and create a fix. How much time did the vendors get?
| Vendor | Time given before disclosure |
| --- | --- |
| Grandstream | 11 days for 1 vuln, 0 days for 2 vulns |
Shortly after they posted their advisories, others on the Full Disclosure mailing list challenged them as well. For the vendors that received 16 and 17 days, many researchers would consider over two weeks adequate. However, for the two vendors that got less than 24 hours of warning before disclosure, that is not considered coordinated by anyone.
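For illustration, the disclosure window being debated is just the number of days between the private report to the vendor and the public advisory. A minimal sketch, using made-up dates since the advisories do not state the exact report dates:

```python
from datetime import date

def disclosure_window(reported: date, disclosed: date) -> int:
    """Days a vendor had between receiving the report and public disclosure."""
    return (disclosed - reported).days

# Hypothetical dates for illustration only; the real report dates are not public.
print(disclosure_window(date(2013, 2, 1), date(2013, 2, 18)))   # 17 days
print(disclosure_window(date(2013, 2, 17), date(2013, 2, 18)))  # 1 day
```

The point of the arithmetic is how stark the difference is: 17 days at least gives a vendor a chance to triage and respond; less than a day gives them none.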
Every researcher can handle disclosure how they see fit. Some have not considered the implications of uncoordinated disclosure, often in a hurry to get their advisory out for name recognition or the thrill. Others who have been doing this a long time find themselves jaded after dealing with one too many vendors who were uncooperative, stalled for more than 1,000 days, or threatened a lawsuit. In this case, these are students at a university and likely not veterans of the industry. Whatever their own beliefs, one has to wonder if they violated a code of conduct and what their professors will say.
In a recent discussion on the security metrics mailing list, Pete Lindstrom put forth a rough formula to estimate the number of vulnerabilities that have been discovered versus undiscovered. One of the data points he cited led me to his page on “undercover vulnerabilities”, his term for “0-day” in a certain context. Since the term “0-day” has been perverted to mean many things, he clearly defines his term as:
> Undercover Vulnerability: A vulnerability that was generally unknown (e.g. not published on any lists, not discussed by “above ground” security folks) until it was actively exploited in the wild. The vulnerability was discovered through evidence of tampering or other means, not through the usual bugfinding ritual.
In my reply challenging some of his numbers, I specifically said that “if we consider that your number 20 is off by at least half, and I would personally guess it’s more like a small fraction, how does this change your numbers?” Pete took this in stride and offered to buy me a case of beer if I could find half a dozen that he didn’t have. Not one to pass up free booze and vulnerability research (yes, I’m weird), I spent several hours Friday doing just that. I ended up with 24 vulnerabilities that seemed to match his definition, roughly half of them in his time frame (“in the last two years”).
Pete’s page got me wondering just how many vulnerabilities would be classified as “undercover” by his definition. Further, I thought about another question he asked on his page:
> I am open to suggestions on an easy way to do this with TypePad (TypeLists, maybe?). Else, I’ll just periodically update as new vulns become available.
I cornered our lead developer Dave and said “make it so” while I mailed Pete asking if OSVDB could help in this effort. As a result, we now have a new classification that we call “Discovered in the Wild”, which means the same thing as Pete’s “undercover vulnerability”. I have updated the 20 vulnerabilities listed on his page and added the flag to the ones I researched. This now shows 43 results, which is good progress.
Not content with that, I asked a fellow geek who has a world more experience with IDS, NOC management, and various devices that would be prone to catching such vulnerabilities, “how many do you think were found this way last year?”, to which she replied “at least 50”. So vulnerability researchers and OSVDB contributors, it’s up to you to help out! We’re looking for more instances of vulnerabilities being discovered “in the wild”, being exploited and subsequently disclosed (to a mailing list, the vendor, whatever). Please cite your source as best as possible.
To see what we have so far:
- Under “Vulnerability Classification” and “Disclosure”
- Check “Discovered in the Wild”
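If the flagged entries were ever exported, tallying them is a simple filter on the new classification. A minimal sketch with made-up records; OSVDB’s actual schema and field names will differ:

```python
# Hypothetical records for illustration; OSVDB's real export format differs.
vulns = [
    {"osvdb_id": 101, "discovered_in_wild": True},
    {"osvdb_id": 102, "discovered_in_wild": False},
    {"osvdb_id": 103, "discovered_in_wild": True},
]

# Keep only entries carrying the "Discovered in the Wild" flag.
in_wild = [v["osvdb_id"] for v in vulns if v["discovered_in_wild"]]
print(len(in_wild), in_wild)  # 2 [101, 103]
```

A boolean flag like this is all the classification needs to be queryable, which is why it was quick for us to add.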
Thanks to Pete Lindstrom and the Security Metrics mailing list for the input and great idea for a new classification!