
NTIA, Bug Bounty Programs, and Good Intentions

[Note: This blog had been sitting as a 99% completed draft since early September. I lost track of time and forgot to finish it off then. Since this is still a relevant topic, I am publishing now despite it not being quite as timely in the context of the articles cited in it.]

An article by Kim Zetter in Wired, titled "When Security Experts Gather to Talk Consensus, Chaos Ensues", talks about a recent meeting that tried to address the decades-old problem of vulnerability disclosure.

This article covers a recent meeting organized by the National Telecommunications and Information Administration (NTIA), a division of the US Commerce Department (DoC). The meeting itself is not the main reason I am blogging, though most would assume I would speak up on this topic. I'll give you an odd insight into this from my perspective, then get to the real point. The person who organized it was virtually introduced to me on July 31, 2015. He replied the same day, with a great introduction that showed intent to make this a worthwhile effort:

The US Dept of Commerce is trying to help in the old dance between security researchers and system/SW vendors. We think that folks on most sides want a better relationship, with more trust, more efficiency, and better results. Our goal is to bring the voices together in a “multistakeholder process” to find common ground in an open participant-led forum.

I’ve been trying to reach out to many people in this area, on all sides. I know that you’re both veterans of many discussions like this, and I’d like to chat with you about lessons learned, how things may (or may not) have changed, and generally get your insight into how to make this not suck.

That level of understanding from a U.S. government employee is rewarding and encouraging. But, in the interest of fairness and pointing out the obvious, here is the first thing I asked:

In the past couple of weeks, I have heard mention of the Dept. of Commerce becoming involved in vulnerability disclosure. One thing that is not clear to myself, and many others, is what your intended role or “jurisdiction” (for lack of better words) is?

That is an important question, and one the DoC must consider and understand how to answer.

That question should still be on everyone's minds, as it has not been answered in a factual manner. Good intentions only get you so far. Having another government player in this mess, one with no power, no jurisdiction, and only "the best of intentions", can only muddy the waters at best. The notes of the ~6-hour meeting are now online. I haven't read them, and don't plan to. This is pure academic masturbation that has escaped from academia, and it misses the age-old point that so many refuse to admit: "When the $government outlaws chinchillas, only the outlaws will have chinchillas." Seriously, a stupidly simple concept that is as much the basis for our society as anything else you hold dear.

I'm jumping past that, because a vulnerability tourist said something else that needs addressing. If you haven't seen me use that term, 'vulnerability tourist' refers to someone who dabbles in the bigger world of vulnerability aggregation, disclosure, and the higher-level ideas surrounding them. This doesn't speak to finding vulns (although it helps), disclosing them (although it helps), or aggregating them for long-term analysis (although it helps). It speaks to someone who thinks they are doing good by speaking as some form of expert, but is actually harming the discussion due to a lack of real knowledge on the topic. When someone with next to no experience in those realms speaks on behalf of the industry, it becomes difficult to take them seriously… but unfortunately, journalists do. Especially when it comes to hot topics or the dreaded "0day" vulnerabilities, which happen every day… while the media cherry-picks the occasional one. As usual in our industry, give such a person a bit of rope and you are likely to find them hanging a few days or weeks later.

Focusing on a single aspect of the Wired article:

Members of the audience snickered, for example, when a representative from the auto industry pleaded that researchers should consider “safety” when testing for vulnerabilities. At a bar after the event, some attendees said automakers are the ones who don’t seem concerned about consumer safety when they sell cars that haven’t been pen-tested for vulnerabilities or when it takes them five years to fix a known vulnerability.

And when Corman sought community support for new companies entering the bug bounty arena, some attendees responded with derision. He noted that after United Airlines launched its bug bounty program this year—the first for the airline industry—it suffered backlash from the security community instead of support.

Just Google "volkswagen emissions software" and you will see why the public cannot trust auto manufacturers. That has nothing to do with vulnerability research; at least one automaker manipulated its software to defraud the public, in a way that may hurt the world environment more than we can comprehend. If that isn't enough for you, consider that the same company spent two years doing everything in its power to hide a vulnerability in its software, one that may have allowed criminals to more easily steal consumers' cars.

So first, ask yourself why anyone from the security arena is so gung-ho about supporting the auto industry. Sure, it would benefit our industry, and more importantly benefit the average consumer. Effecting that change would be incredible! But given the almost three decades of disclosure debate centered on questionable dealings between researchers and vendors, it is not a fight you can win quickly. And more importantly, it is not one where you capitulate for your own gain.

Moving past that, the quote from Corman speaks to me personally. When United announced their bug bounty program, it received a lot of media attention, and this is critical to note: a company not known for a bug bounty program, in an industry not known for them, offering perks that were new to the bounty world (i.e. United, airlines, and frequent flier miles). Sure, that is interesting! Unfortunately, none of the journalists who covered the new bounty program read its terms, or if they did, they couldn't comprehend why it was a horrible program that put researchers at great risk. To anyone mildly familiar with bounty programs, it screamed "run, not walk, away…". I was one of the more vocal in criticizing the program on social media. When a new bounty opens up, hundreds of (largely) neophyte researchers flock to it, trying to find the lowest-hanging fruit and get the quick and easy bounties. If you have run a vulnerability reporting program, even one without bounties, you have likely witnessed this (I have). I was also very quick to start reaching out to my security contacts, trying to find back-channel contacts at United to give them feedback on their offering.

United's original bounty program was not just a little misleading; it was entirely irresponsible. It basically said, in layman's terms, "if you find a vuln in our site, report it and you may get up to 1 million airline miles!" It also said you cannot test ANY united.com site, you cannot test our airplanes (on the back of the Chris Roberts / United / FBI drama), our mobile application, or anything else remotely worth testing. The program had a long list of what you could NOT test, and it excluded every single target a malicious hacker would go after. Worse? It did not offer a test/dev network, a test airplane, or any other 'safe' target. In fact, it even excluded the "beta" United site! Uh… what were bounty-seekers supposed to do here? If United thinks that bug bounty seekers read past the "mail bugs to" and "this is the potential bounty" sections, they need to reconsider their bounty program. The original bounty excluded:

“Code injection on live systems”

Bugs on customer-facing websites such as:
united.com
beta.united.com
mobile.united.com

Yet, despite that exclusionary list, they did NOT say what was ALLOWED to be tested. That, in modern security terms, is called a honeypot. Don't even begin to talk about "intent", because in a court of law, with some 17-year-old facing CFAA charges, intent doesn't enter the picture until way too late. The original United program was set up so that they could trivially file a case with the FBI and go after anyone attacking any of their Internet-addressable systems, their mobile apps, or their airplanes. And it appeared shortly after the messy public drama of a white-hat hacker telling the media he had tested the airplane systems and, in so many words, could have done "bad things".

My efforts to reach out to United via back-channels worked. One of their security engineers who is part of the bounty program was happy to open a dialogue with me. See, this is where we get to the bits that are the most important. We traded a few mails in which I outlined my issues with the bounty program and gave them extensive feedback on how to better word it, so that researchers could not only trust the program, but help them with their efforts. The engineer replied quickly, saying they would review my feedback, and I never heard back. That is the norm for me when I reach out and try to help a company. So now, as I go to write this blog, of course I look to see if United revised their bounty program! Let's look at the current United bug bounty offering:

Bugs that are eligible for submission:

Authentication bypass
Bugs on United-operated, customer-facing websites such as:
united.com
beta.united.com
mobile.united.com
mystatus.united.com
smartphone.continental.com
Bugs on the United app
Bugs in third-party programs loaded by united.com or its other online properties

Wow… simply WOW. That is a full 180 from the original program! Not only do they allow testing of the sites they excluded before, they opened up more sites to the program. They better defined which technical attacks and targets are allowed, and the terms are considerably more lenient for testing. I still disagree a bit with a few things that are "not allowed", but I completely understand their reasons for doing so. They have to balance the safety of their systems and customers against the possibility that a vulnerability exists. And they are wise to consider that a vulnerability may exist.

So after all that, let’s jump back to the quote which drew my ire.

And when [Josh] Corman sought community support for new companies entering the bug bounty arena, some attendees responded with derision. He noted that after United Airlines launched its bug bounty program this year—the first for the airline industry—it suffered backlash from the security community instead of support.

This is why a soundbite in a media article doesn't work. It doesn't help vendors, and it doesn't help researchers. It only helps someone who has largely operated outside the vulnerability world for a long time. Remember, "security researcher" is an overly vague term that has many meanings. Nothing about that quote suggests Corman has experience in any of the disciplines that matter in this debate. As a result, his lack of experience shows clearly here. Perhaps it was his transition from being a "DevOps" expert for a couple of years, to being a "Rugged Software" expert, to becoming a "vulnerability disclosure" expert shortly after?

First, United's original bug bounty offering was horrible in every way. There was basically zero merit to it, and it only served to put researchers at risk. Corman's notion that it is 'bad' to scare companies away from that kind of example is irresponsible, and it contradicts his stated purpose with the I Am The Cavalry initiative. While Corman's intentions are good, the delivery simply wasn't helpful to our industry, the automobile industry, or the airline industry. Remember, "the road to hell is paved with good intentions".

Rebuttal: Dark Reading’s “9” Sources for Tracking New Vulnerabilities

Earlier today, Sean Martin published an article on Dark Reading titled "9 Sources For Tracking New Vulnerabilities". Spanning 10 pages, likely for extra ad revenue, the sub-title reads:

Keeping up with the latest vulnerabilities — especially in the context of the latest threats — can be a real challenge.

One would hope this article would help with that challenge, because it most certainly is one. First, a disclaimer: I was involved with OSVDB for roughly 10 years and was the primary curator over that time. Further, I am now involved with Risk Based Security's VulnDB commercial vulnerability database offering. Both are mentioned in the article, so my comments below will most certainly carry some level of bias.

To help readers, Sean Martin writes “In no particular order, here are nine key vulnerability data sources for your consideration.” With that, flip to the next page of the article.

It’s important to understand the source — and backing for your source — to avoid getting left without a solid vulnerability database. A good example is the case where many had to say goodbye to their vulnerability feed when minority-player Open Source Vulnerability Database (OSVDB) was shut down.

“Not having OSVDB any longer, while sad for those that relied on it, may actually reduce the complexity in making sure there is integration across all products, MSSPs, services, and SIEMs,” says Fred Wilmot, chief technology officer at PacketSled.

I am not sure how OSVDB constituted a "minority-player" in any sense of the term, given its broad coverage over more than a decade. While historical entries were often incomplete, the database was commercially maintained from just before January 2012, and the information was still given away for free, despite competing with the company providing the support and updates. Since the quote specifically mentions that OSVDB shut down, which it did on April 5, 2016, it's nice to hear people give belated appreciation to the project. I would argue that OSVDB shutting down does not reduce complexity for anyone knowledgeable about vulnerability disclosure. On the surface, sure! One less set of IDs to integrate across products sounds like a good thing. However, you also have to remember that OSVDB was cataloging thousands of vulnerabilities a year that were not found in the other sources listed in this article. Losing that coverage introduces a complexity that is horrible for companies trying to keep up with vulnerabilities.

Page 3 tells readers about NIST’s National Vulnerability Database (NVD):

NVD is the US government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance. NVD is based on and synchronized with the CVE List (see next slide).

First, since NVD is synchronized with CVE, it is curious that they are listed as separate sources. For those not aware, NVD is a sort of 'value add' to CVE, in that NVD generates CVSS and CPE data for the vulnerabilities cataloged by MITRE for the CVE project. Monitoring NVD means you are already monitoring all of CVE and getting the additional meta-data. It is also important to note that the meta-data work is outsourced to a contractor who employs 'junior analysts' to do it. This becomes apparent if you consume their data and actually look at their CVSS scores over the last ~8 years. Personally, I stopped emailing them corrections many years back due to the volume involved. To this day, you can still often see them scoring Adobe Flash vulnerabilities as CVSSv2 10.0, missing the 'context-dependent' (a.k.a. 'user-assisted') aspect. Accounting for that moves Access Complexity from 'L'ow to 'M'edium per the CVSSv2 scoring guide on FIRST, resulting in a 9.3 score. That seems minor, but it reclassifies a vulnerability from 'Critical' to 'High' for many organizations, and it should make you question their scoring on more complex issues.
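
To make that difference concrete, here is a minimal sketch of the CVSSv2 base score equation as published in the FIRST specification, using the specification's metric values. The function and variable names are mine for illustration, but the arithmetic reproduces the 10.0 versus 9.3 gap described above.

    # CVSSv2 base score equation, per the FIRST CVSSv2 specification.
    def cvss2_base(av, ac, au, conf, integ, avail):
        impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
        exploitability = 20 * av * ac * au
        f_impact = 0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    # Metric values from the CVSSv2 specification.
    AV_NETWORK, AU_NONE, COMPLETE = 1.0, 0.704, 0.66
    AC_LOW, AC_MEDIUM = 0.71, 0.61

    # AV:N/AC:L/Au:N/C:C/I:C/A:C -- how NVD often scores these Flash issues
    print(cvss2_base(AV_NETWORK, AC_LOW, AU_NONE, COMPLETE, COMPLETE, COMPLETE))     # 10.0
    # AV:N/AC:M/Au:N/C:C/I:C/A:C -- with the user-assisted aspect accounted for
    print(cvss2_base(AV_NETWORK, AC_MEDIUM, AU_NONE, COMPLETE, COMPLETE, COMPLETE))  # 9.3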

Page 4 tells us more about CVE and offers some “insight” into it that is horribly wrong:

CVE is a dictionary of publicly known information security vulnerabilities and exposures. CVE’s common identifiers enable data exchange between security products and provide a baseline index point for evaluating coverage of tools and services.

Morey Haber, VP of technology at BeyondTrust, offers these examples:
Scanning tools most commonly use CVEs for classification
SIEM technologies understand their applicability in reporting
Risk frameworks use them as a calculation vehicle for applied risk to the business

First, I cannot share Steve Ragan's recent article, "Over 6,000 vulnerabilities went unassigned by MITRE's CVE project in 2015", enough. Consider just the headline, and then think about the fact that CVE has failed to catalog at least 47,267 vulnerabilities historically. Now, re-read Haber's examples of how CVE is used and consider what kind of Achilles' heel that is for any organization relying on security software built around CVE.

Fred Wilmot’s quote about CVE is what prompted me to write this entire blog. This is so incredibly wrong and misleading:

“Now that you have a common calculator for interoperability among vendors, the fact that CVE is maintained completely transparently to the community is a HUGE pro,” says Fred Wilmot, chief technology officer at PacketSled. “There is no holdout of exploits for vulnerabilities based on financial gain or intent. It’s altruism at its best. The weakness in the CVE comes in the weaponization of that information and the lack of disclosure for profit and activism, as two examples.”

Where to start…

  1. There is no common calculator for “interoperability among vendors” in the context of CVE. That isn’t what CVE is or does.
  2. CVE is most certainly not maintained transparently to the community. It is not maintained transparently to the volunteer Editorial Board (now known simply as the ‘CVE Board’) either. The backroom workings and decisions MITRE makes on behalf of CVE without Board or public input have been documented before. The last decision that lacked any transparency was their recent catastrophic decision to change the CVE format to a new ‘federated’ scheme. If you have any doubt about this being a backroom decision, look at the first reply from CVE Board member Kurt Seifried.
  3. Wilmot’s characterization that CVE is “altruism at its best” also speaks to a lack of knowledge of CVE. While MITRE, the organization that maintains CVE, is technically a not-for-profit organization, they only take non-compete contracts at incredible expense to the U.S. taxpayer. CVE, and a handful of other ‘C’ projects related to information security, bring in considerable money to the company. In 2015, they enjoyed over $1.4 billion in revenue and maintained $788 million in assets. The fact that the contract to maintain CVE is non-compete, and cannot be bid on by companies more qualified to run the project, speaks to where the real interest lies and it isn’t altruistic.
  4. The weakness in CVE is certainly not the "weaponization" of that information. The vast majority of weaponized exploits behind the thousands of data breaches and compromised organizations are functional exploits for vulnerabilities with little technical information made public. For example, phishing attacks that rely on Adobe Reader or Adobe Flash vulnerabilities are usually patched by Adobe eventually, and the disclosure that follows has no technical details. Even if researchers post more details down the road, the entries in CVE are rarely updated to include them.
  5. The last bit of Wilmot’s quote, I will need someone to explain to me. “The weakness in the CVE [..] comes in the lack of disclosure for profit and activism.” I don’t know what that means.

Page 5 tells readers about the CERT Vulnerability Notes Database:

The Vulnerability Notes Database provides information about software vulnerabilities. Vulnerability notes include summaries, technical details, remediation information, and lists of affected vendors. Most Vulnerability notes are the result of private coordination and disclosure efforts. For more comprehensive coverage of public vulnerability reports, consider the National Vulnerability Database (NVD).

“This is nice to have, but it still uses CVEs as reference,” says Fred Wilmot, chief technology officer at PacketSled. “NVD is not nearly as practical to consume directly as CVE — the disclosure form is fine, but why would I go there and not directly to MITRE for CVE establishment first? However, it’s probably a good place to spend time during an investigation.”

The CERT VNDB is not a comprehensive vulnerability database, and it does not aim to be one. As mentioned, their information primarily comes from assisting researchers in coordinating disclosure with a vendor. Since CERT is a CNA, meaning they can assign CVE IDs to the vulnerabilities they coordinate, over 99% of their entries are covered by CVE and thus NVD. Monitoring NVD will get you all of CVE and almost all of CERT VNDB. The few CERT VU entries that do not get CVE IDs assigned before disclosure are rare, and I believe they receive assignments from MITRE shortly after.

Once again, Wilmot speaks about these sources without apparent working knowledge, personifying my term 'vulnerability tourist'. CERT VNDB disclosures appear on their site before they appear in CVE or NVD; in fact, it may be 24-72 hours before they show up there, meaning that while CERT still uses CVEs as a reference, timely vulnerability monitoring may require keeping an eye on CERT directly. Next, Wilmot says "NVD is not nearly as practical to consume directly as CVE", apparently not realizing that NVD makes its data available as XML. While MITRE offers the CVE data in several formats, that doesn't make NVD hard to consume. The most important distinction here is that NVD comes with CPE data where CVE does not. For any medium to large organization, that meta-data is basically mandatory for actually putting the information to use.
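
To illustrate just how consumable NVD's data actually is, here is a minimal sketch that pulls the CVE ID, CVSS score, and CPE names out of one of NVD's XML 2.0 data feeds. The namespace URIs and element paths reflect the 2.0 feed schema as I understand it; treat the exact layout (and the example filename) as an assumption and verify it against the feed you download.

    import xml.etree.ElementTree as ET

    # Namespace URIs used by the NVD 2.0 XML feeds (verify against your download).
    NS = {
        "feed": "http://scap.nist.gov/schema/feed/vulnerability/2.0",
        "vuln": "http://scap.nist.gov/schema/vulnerability/0.4",
        "cvss": "http://scap.nist.gov/schema/cvss-v2/0.2",
    }

    def parse_nvd_feed(path):
        """Yield (cve_id, cvss_score, cpe_names) tuples from an NVD 2.0 feed file."""
        root = ET.parse(path).getroot()
        for entry in root.findall("feed:entry", NS):
            cve_id = entry.get("id")
            score_el = entry.find("vuln:cvss/cvss:base_metrics/cvss:score", NS)
            score = float(score_el.text) if score_el is not None else None
            cpes = [p.text for p in
                    entry.findall("vuln:vulnerable-software-list/vuln:product", NS)]
            yield cve_id, score, cpes

    # Example usage against a downloaded yearly feed (hypothetical filename).
    for cve_id, score, cpes in parse_nvd_feed("nvdcve-2.0-2016.xml"):
        print(cve_id, score, len(cpes))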

Page 6 tells readers about Risk Based Security’s VulnDB offering. The curious bit to me is the quote from Morey Haber:

“VulnDB does not contain audit information, but it is a good source for solutions that need to reference vulnerability information in their products such as firewalls or IDS/IPS and do not want to rely on open source or to build/maintain a library,” said Morey Haber, VP of technology at BeyondTrust.

First, BeyondTrust is not a user of VulnDB, which, unlike CVE/NVD/CERT, is a commercial offering. They did a short-term trial in 2012, early in the offering's life, and opted not to pursue it as a source of vulnerability intelligence. Second, what does "audit information" even mean in the context of a VDB? Audit information about your own environment, maybe? That is something a vulnerability intelligence provider can't possibly deliver. An audit trail is maintained for each vulnerability entry and is available to customers, but I doubt that is what he means; calling out VulnDB on this doesn't make sense, and the other vulnerability sources listed in this article don't maintain such a trail either.

While VulnDB can certainly be used to reference vulnerabilities in security products, as he says, that is just the tip of the iceberg. With over 47,000 vulnerabilities not found in CVE or NVD, the breadth of information is incredible. Further, VulnDB has made a concerted effort for years to track vulnerabilities in third-party libraries, and it builds on the robust meta-data that has been generated for over a decade. Haber's comments do not reflect actual knowledge of the VulnDB offering.

Page 7 tells readers about the DISA IAVA Database And STIGS. Haber gives commentary on this as well:

“IAVA, the DISA-based vulnerability mapping database, is based on existing SCAP sources, and once in a while it contains details for government systems that are not a part of the commercial world,” says Morey Haber, VP of technology at BeyondTrust. “For any vendor doing .gov or .mil work, this reference is a must.”

While some of the IAVA advisories may contain additional detail, it is important to note that they will not provide any vulnerabilities above and beyond CVE/NVD, and their advisories lag well behind the issues being published there. Haber is right that this is a vital resource for .gov and .mil contractors, for several reasons.

Page 8 tells readers about SecurityTracker.com, which is a long-running site that aggregates vulnerabilities to a degree. Haber once again provides commentary:

“The website tends to focus on non-OS vulnerabilities, but they are certainly included in the feed,” says Morey Haber, VP of technology at BeyondTrust. “Infrastructure and IoT tend to make the front page the most, and this site is a good third-party reference for new flaws.”

Actually, they do focus on OS vulnerabilities, as can routinely be seen on their site. As I write this, 2 of the 5 vulnerabilities listed on the front page are in operating systems. The biggest thing to note about this offering is that they just don't aggregate a significant volume of vulnerabilities for public consumption. Their most recent ID is 1037091, meaning they have ~37,000 entries in their database. Note that they operate like CVE, though, in that multiple vulnerabilities can be associated with a single ID. Regardless, their Weekly Vulnerability Summary emails for the past five weeks show their volume: Sep 26 2016 (32 alerts), Oct 3 2016 (17 alerts), Oct 10 2016 (26 alerts), Oct 17 2016 (31 alerts), and Oct 24 2016 (32 alerts). To put that into perspective, VulnDB has averaged 46 new entries a day in 2016, with a single-day high of 224. SecurityTracker may beat CVE in publishing speed for the most part, but they are almost entirely covering entries that have CVE IDs.
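
For anyone who wants to check my math, the back-of-the-envelope comparison from those weekly counts looks like this (a quick sketch using only the numbers cited above):

    # Weekly alert counts from SecurityTracker's last five summary emails.
    weekly_alerts = [32, 17, 26, 31, 32]          # Sep 26 through Oct 24, 2016
    st_per_day = sum(weekly_alerts) / (5 * 7.0)   # ~3.9 alerts/day
    vulndb_per_day = 46                           # VulnDB's 2016 daily average
    print("SecurityTracker: ~%.1f alerts/day" % st_per_day)
    print("VulnDB publishes roughly %.0fx that volume" % (vulndb_per_day / st_per_day))  # ~12x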

Page 9 tells readers about the Open Vulnerability And Assessment Language (OVAL) Interpreter And Repository. This is a curious addition to the article because it is not a source of vulnerability intelligence. Instead, it is a standard for reporting about systems. From the OVAL page:

International in scope and free for public use, OVAL is an information security community effort to standardize how to assess and report upon the machine state of computer systems. OVAL includes a language to encode system details, and an assortment of content repositories held throughout the community.

While OVAL is certainly useful to some organizations, it does not belong in a list of vulnerability sources.

Page 10 tells readers about Information Sharing And Analysis Centers (ISACs). This is another curious addition as ISACs typically trade information on active attacks, threat actors, and which vulnerabilities may be targeted more heavily. They are generally not a source of vulnerabilities in the same context as most of the resources above.

In summary, Sean Martin's article says it will share "9 Sources For Tracking New Vulnerabilities". In reality, based on that quote and context, the article only tells readers about four: CVE/NVD, CERT VNDB, RBS VulnDB, and SecurityTracker. Several of the sources listed are not really for tracking new vulnerabilities; rather, they augment vulnerability intelligence in various ways.