
2013 Superdome Outage a Hack? The Value of Post-Incident Investigations.

As we approach the pinnacle of U.S. sportsball, I am reminded of a scandal from a past Superbowl. No, not the obviously-setup wardrobe malfunction scandal. No, not the one where we might have been subjected to a pre-recorded half-time show. The one in 2013, where hackers, terrorists, or who-knows-what caused the stadium lights to go out for 34 minutes. That day, and in the days after, everyone sure seemed to ‘know’ what happened. Since many were throwing around claims of ‘hacking’ or ‘cyber terrorism’ at the time, this incident caught my attention.

Here’s what we know, with selected highlights:

  • February 3, 2013: Superbowl happened.
  • February 3, 2013: Anonymous takes credit for the blackout.
  • February 3, 2013: Because theories of hacking or terrorism aren’t enough, Mashable comes up with 13 more things that may have caused it.
  • February 4, 2013: A day later, we’re once again reminded that “inside sources” are often full of it. The Baltimore Sun’s initial report claimed a “power-intensive” halftime show might have been a factor.
  • February 4, 2013: The FBI makes a statement saying that terrorism was not a factor.
  • February 4, 2013: We learn that such a failure may have been predicted in 2012.
  • February 4, 2013: Of course the outage doesn’t really matter. A little game delay, and it is a “boon for super bowl ratings“, the most critical thing to the corrupt NFL.
  • February 4, 2013: By this point, people are pretty sure hackers didn’t do it. They probably didn’t, but they could have!
  • February 4, 2013: Oh sorry, it could still be hackers. The Christian Science Monitor actually covers the likely reason, yet that isn’t sexy. A Chinese hacker ploy seems more reasonable to cover…
  • February 4, 2013: Not only Anonymous, but ‘Rustle League’ claimed to hack the super bowl. A day later we learn that notorious Rustle League trolls were … wait for it … trolling.
  • February 5, 2013: Officials at Entergy, who provide power for that property, clearly state: “There was no Internet or remote computer access to the piece of equipment inside the stadium that sensed an abnormality in the electrical system and partially cut power to the Superdome…”
  • February 6, 2013: While the Superdome was not hacked on Sunday, the U.S. Federal Reserve was.
  • February 8, 2013: Multiple sources begin covering the real reason for the Superdome outage.
  • February 8, 2013: We now have a good idea what caused it, but let the blame game begin. Manufacturer error, or user error?
  • March 21, 2013: The official Entergy report is released (PDF), giving a very technical analysis and summary of what happened. Everyone but conspiracy theorists can sleep well.

The reason for this blog is that Chris Sistrunk, a noted SCADA security researcher, pinged me the other day about the report. We were curious if the failure described could be considered a vulnerability by OSVDB standards. After reading the report and asking him several questions, this seems like a simple case of device malfunction / failure. Quoting relevant bits from the report:

During the testing, behavior of the relay was not entirely consistent with the function described in the instruction manual. Under some circumstances, when the current exceeded the trip setting and then decreased below the trip setting even after the timer had expired, the relay did not operate.

This instability was observed on all of the relays tested (during testing by this engineer, ENOI, and others in coordination with S&C at Vault 24 on March 1, 2013), including the subject (Bay 8) relay and two identical (exemplar) relays. Behavior of the device in a manner contrary to the published functionality of the device constitutes a design defect.

Interesting read, and a glimpse into the world of SCADA / ICS. While the notion that the outage was due to hackers made for better headlines, the reality is far more mundane. We could certainly learn from this case, along with thousands of others… but who am I kidding. News covering the mundane and real doesn’t sell.

We’re “critical”, not “immature”.

Recently, we got feedback via Twitter that we come across as “immature”. On the surface, perhaps. Not all of our Tweets are critical of CVE though. I replied pretty quickly that said criticism is also us “pushing for them to improve since so much of the industry relies on them.” When I Tweeted that, a post to the CVE Editorial Board wasn’t public on the web site, so I couldn’t quote it. But it was a great and timely example of one way our team is pushing CVE to improve.

That said, let me better explain our criticism. It isn’t the first time, it won’t be the last time, but I am not sure if some of these thoughts have been published via blog before. Put simply, we would not be so critical of CVE if it wasn’t a crumbling cornerstone of the Information Security industry. Countless organizations and products use CVE as a bible of public vulnerabilities. A majority of security technology, including firewalls, vulnerability scanners, IDS, IPS, and everything else is built on their database. Our industry assumes they are doing their job, doing their best, and cataloging public vulnerabilities. That simply is not the case, and it hasn’t been for more than a year. Everyone uses it as a benchmark, trusts it, and relies on it. Yet no one questions it except us. Last I checked, our profession was built on “trust but verify“. We verify, thus, we’re critical.

That said, we have a long history of sending corrections, feedback, and improvement ideas to all of the major VDBs, including CVE, BID, Secunia, ISS, SecTracker, EDB, PacketStorm, and more. We currently have great relationships with ISS and EDB. We have had a great-to-courteous relationship with CVE. SecTracker has been receptive to our feedback in the past, but given their minimal output we don’t try to provide cross-references to them anymore. PacketStorm is bad about approving our comments on their published disclosures. BID is, and has been, a lost cause for almost a decade. Further, we have decent relationships with US-CERT, IBM PSIRT, CERT (CM), and other companies’ teams. We give continued feedback to some of these organizations, feedback designed to help their process, help their disclosures, and thus help the industry. I mention this because, for the most part, a majority of them are competitors to us. We have a commercial model; it is the only way to fund the database. Despite that, and a decade of disillusionment with the ‘open source’ model, we still give away a significant portion of our data. Until recently, our biggest competitor in licensing our data was ourselves, as potential customers would freely admit they could take our data from osvdb.org. Next to ourselves? CVE and NVD. It is astounding and scary how many companies will pass up superior vulnerability intelligence because what they are using is free. In many cases, they tell their customers they provide the best intelligence, provide the most security, and a wide variety of other platitudes. In reality, they don’t want to spend a sliver of a fraction of their profits to truly help their customers. It is a level of greed and unethical behavior that is absolutely disgusting. If I could expose them, I would, but they fall under dreaded NDAs.

Point is, we strive to push all of these organizations to be better. Jake and I gave a presentation back in 2005 at CanSecWest where we said that VDBs need to evolve. They still do. Ten years later, we’re still pushing them to do so while they resist with every ounce of their being. Fortunately, we have been pushing ourselves to be better during that time, and it shows. We’re no more critical of the other VDBs than we are of ourselves. It doesn’t help us one bit. In fact, it only serves to hurt us. Yet, it is the right thing to do for the industry, so we do it.

SQLi Disclosures and the Last Five Years (Transparent Statistics)

Nothing like waking up to a new article purporting to show vulnerability statistics and having someone ask us for comment. But hey, we love giving additional perspective on such statistics, since they are often presented without proper context and disclaimers. This morning, the new article comes from Help Net Security and is titled “SQL injection vulnerabilities surge to highest levels in three years“. It cites “DB Networks’ research”, which did the usual: parsed NVD data. As we all know, that data comes from CVE, which is a frequent topic of rants on Twitter and the occasional blog. Cliff notes: CVE does not promise comprehensive coverage of public disclosures. They openly admit this, and Steve Christey has repeatedly said that CVE may not be the best for statistics, even as far back as 2006. Early in 2013, Christey again publicly stated that CVE “can no longer guarantee full coverage of all public vulnerabilities.” I bring this up to remind everyone, again, that it is important to add such disclaimers about your data set, and ultimately your published research.
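For perspective, this kind of “research” boils down to a few lines of parsing. Below is a minimal sketch of how one might count SQL injection entries per year from NVD data. It assumes locally downloaded copies of NVD’s JSON data feeds in the 1.1 feed layout; the file names, field paths, and CWE-89 filter are my illustrative assumptions, not DB Networks’ actual methodology.

# Hypothetical sketch (Python): count SQL injection entries per year from NVD data.
# Assumes local copies of NVD JSON feeds in the 1.1 layout; file names, field
# paths, and the CWE-89 filter are illustrative assumptions only.
import json
from collections import Counter

def count_sqli_per_year(feed_paths):
    counts = Counter()
    for path in feed_paths:
        with open(path, encoding="utf-8") as fh:
            feed = json.load(fh)
        for item in feed.get("CVE_Items", []):
            year = item["publishedDate"][:4]
            cwes = [
                desc["value"]
                for ptype in item["cve"]["problemtype"]["problemtype_data"]
                for desc in ptype["description"]
            ]
            if "CWE-89" in cwes:  # CWE-89 is the weakness ID for SQL injection
                counts[year] += 1
    return counts

print(count_sqli_per_year(["nvdcve-1.1-2013.json", "nvdcve-1.1-2014.json"]))

The counting is the trivial part; knowing what the underlying entries actually represent is the hard part, which is the point of everything that follows.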

Getting back to the statistics and DB Networks, the second thing I wondered, after their data source, was who they were and why they were doing this analysis (SQLi specifically). I probably shouldn’t be surprised to learn this comes over a year after they announced they have an appliance that blocks SQL injection attacks. This too should be disclaimed in any article covering their analysis, as it adds perspective on why they chose a single vulnerability type, and may also explain why they chose NVD over other data sources (since it fits more neatly into their narrative). Following that, they paid Ponemon to help them conduct a study on the threat of SQL injection. Or wait, was it two of them, the second with a bent towards retail breaches? Here are the results of the first one, it seems, which again shows that they have a pretty specific bias toward SQL injection.

Now that we know they have a vested interest in the results of their analysis coming out a specific way, let’s look at the results. First, the Help Net Security article does not describe their methodology at all. Reading their self-written, paid-for news announcement (PR Newswire) about the analysis, it becomes very clear that this is an advertising gimmick, not actual vulnerability statistics research. It’s 2015, years after Steve Christey and I have both ranted about such statistics, and they still don’t explain their methodology. This makes it apparent that “CVE abstraction bias” is possibly the biggest factor here. I have blogged about using CVE/NVD as a dataset before, because it contains one of the biggest pitfalls in such statistics generation.

Rather than debunk these stats directly, since that has been done many times in the past, I can say that they are basically meaningless at this point. Even without their methodology, I am sure someone could trivially reproduce their results and figure out if they abstracted per CVE, or per actual SQLi mentioned. As a recent example, CVE-2014-7137 is a single entry that actually covers 54 distinct SQL injection vulnerabilities. If you count just the CVE entry versus the vulnerabilities that may be listed within it, your numbers will vary greatly. That said, I will assume that their results can be reproduced, since we know their data source and their bias in desired results.
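To make that abstraction pitfall concrete, here is a hypothetical sketch of the two counting approaches. The 54 figure for CVE-2014-7137 comes from the example above; the other record IDs and counts are made up purely for illustration.

# Hypothetical illustration (Python) of CVE abstraction bias in counting.
# Each record is (CVE ID, number of distinct SQLi flaws the entry covers).
records_2014 = [
    ("CVE-2014-7137", 54),  # one CVE entry abstracting 54 distinct SQL injections
    ("CVE-2014-XXXX", 1),   # hypothetical, typical single-flaw entry
    ("CVE-2014-YYYY", 1),   # hypothetical, typical single-flaw entry
]

per_cve_entry = len(records_2014)                    # counts CVE entries: 3
per_vulnerability = sum(n for _, n in records_2014)  # counts actual flaws: 56

print(f"per-CVE: {per_cve_entry}, per-vulnerability: {per_vulnerability}")

Depending on which of those two numbers an analysis reports, a year-over-year “surge” can look radically different, which is exactly why the methodology has to be disclosed.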

With that in mind, let’s first look at what the numbers look like when using a database that clearly abstracts those issues, and covers a couple thousand sources more than CVE officially does:

[Chart: sqli-five-year-view, a five-year view of SQL injection disclosures by year (OSVDB data)]

“SQL Injection Vulnerabilities Surge to Highest Levels in Three Years” is the title of DB Networks’ press release, and is summarized by Help Net Security as “last year produced the most SQL vulnerabilities identified since 2011 and 104% more than were identified in 2013.” In reality (or at least, using a more comprehensive data set), we see that isn’t the case, since the available statistics a) don’t show 2014 surpassing 2011, and b) show 2011 not holding a candle to 2010. No big surprise really, given that we actually do vulnerability aggregation while NVD and DB Networks do not. But really, I digress. These ‘statistics’ are nothing more than a thinly veiled excuse to further advertise themselves. Notice in the press release they go from the statistics that help justify their products right into the third paragraph, quoting the CEO being “truly honored to be selected as a finalist for SC Magazine’s Best Database Security Solution.” He follows that up with another line calling out their specific product that helps “database security in some of the world’s largest mission critical datacenters.”

Yep, these statistics are very transparent. They are based on a convenient data source, maintained by an agency that doesn’t actually aggregate the information and doesn’t have the experience of its data benefactor (CVE). They are advertised with a single goal in mind: selling their product. The fact they use the word “research” in the context of generating these statistics is a joke.

As always, I encourage companies and individuals to keep publishing vulnerability statistics. But I stress that it should be done responsibly. Disclaim your data source, explain your methodology, be clear if you are curious about the results coming out one way (bias is fine, just disclaim it), and realize that different data sources will produce different results. Dare to use multiple sources and compare the results, even if it doesn’t fully back your desired opinion. Why? Because if you disclaim your data sources and results, the logical and simple conclusion is that you may still be right. We just don’t have the perfect vulnerability disclosure data source yet. Fortunately, some of us are working harder than others to find that unicorn.

Microsoft’s latest plea for CVD is as much propaganda as sincere.

Earlier today, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for “better coordinated vulnerability disclosure“.

Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft’s transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which is a benefit to both Microsoft and their customers world-wide. This post is not meant to undermine their efforts at large, rather to point out that, since day one, propaganda has remained a valuable tool for the company. I will preface this with a reminder that this is not a new issue. I have personally blogged about this as far back as 2001, after Scott Culp (of Microsoft at the time) wrote a polarizing piece about “information anarchy” that centered around disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn’t endear them to the researchers providing free vulnerability information to them. Despite that, it took almost ten years for Microsoft to drop the term “responsible” disclosure (also biased against researchers) in favor of “coordinated” disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially in the last three to five years. But… it is on the back of a confrontational policy toward researchers.

Reading yesterday’s blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these if you aren’t a masochist who spends most of their waking time buried in vulnerability aggregation and related topics.

In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).

Not sure I have seen “CVD” as a formal initialism until now, which is interesting. After trying to brand “information anarchy” and pushing the “responsible disclosure” term, good to see you embrace a better term.

Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

And this line, early on in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed their ‘ivory’ tower, so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward a vast majority of the vulnerabilities reported to you. Microsoft has done well in creating a culture of “report vulnerabilities to us for free for the honor of being mentioned in one of our advisories”. And I get that! Being credited in a Microsoft advisory is its own form of advertising for a researcher’s talent. However… the researchers who chase that honor are a minority in the greater picture.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.

Oh sorry, let me qualify: your black-and-white tower. This absolutely does work for some vendors, especially those who have a poor history of dealing with vulnerability reports. You may not have been one of them for the last 10 years, but you once were. Back in the late ’90s, Microsoft had a reputation for being horrible when dealing with researchers. No vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.

It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.

This is a great point. But, let’s read on and offer some context using your own words…

Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.

Wait, if only a very small amount of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix… why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why do they care if you clearly state that coordination doesn’t matter, and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.

CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.

And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January Patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ‘rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures, and should it perhaps invest in a solution for such vulnerability intelligence? Yes, this is a blatant sales opportunity, but they are desperately begging for it given this statement. =)

[Update: Apparently Microsoft is unhappy over Issue 123, which was auto-published on January 11, as opposed to Issue 118, linked above, which was auto-published on December 29. So they are correct about the two days, but it is curious that they aren’t complaining about 118 at the same time, when both are local privilege escalation vulnerabilities.]

One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is), and that not every vulnerability is equal (also true), but he glosses over the fact that Google is in the same boat they are. A little over four years ago, the Google security team posted a blog talking about “rebooting” responsible disclosure, and said this:

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

To be fair, Google also did not publish a timeline of any sort with this disclosure. We don’t know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn’t mention January specifically and only asked for more time, maybe it was fair you kept to your schedule. One of the two parties should publish all of the correspondence now. What’s the harm, the issue is public! Come on… someone show their cards, prove the other wrong. Back to Microsoft’s blog…

What’s right for Google is not always right for customers.

This is absolutely true. But you forgot the important qualifier: what is right for Microsoft is not always right for customers.

For example, look at CVE-2010-3889 (heavily referenced), aka “Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation”. This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer as to whether this is even patched, four years later. That CVE identifier doesn’t seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear facility go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give them credit for, and this is a dupe of CVE-2010-2743? Why does it take a third party four years to figure this out while writing a blog on a whim?

It is a zero sum game where all parties end up injured.

What does this even mean, other than propaganda? It is rarely, if ever, a case where “all parties” are injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, usually that party is not ‘injured’ in doing so. That is simple fact.

Betz’ blog goes on to quote the Microsoft CVD policy which states:

Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.

Perhaps you should qualify that statement, as US-CERT has a 45-day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:

Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.

Note that it does not include a qualifier for “the vendor asks for more time”. That is the United States government saying a vendor gets 45 days to patch, with rare exception. Oh wait, Mr. Betz, before you go quoting “changes to core operating system components”, I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 MB binary (on Windows 7) has, all by itself, caused a lot of grief for Windows users. Given that history, you cannot say that changes to that file meet the US-CERT criteria.

Finally, this isn’t the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card and Google certainly seems more aggressive in their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…

Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple would get blindsided in this disclosure, and it would not be ‘coordinated’ or ‘responsible’, and would qualify as ‘information anarchy’ as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.

Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While it may have been fixed in the base code, it can easily take somewhere between months and a decade for vendors to learn about and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.

Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure, going to a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal right?

Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would disagree strongly. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the ones you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.

Finally, do I even need to go into the absolute mess that you call the “Advanced Notification Service” (ANS)? In case readers aren’t aware, this is not a single program. This is several different programs with various names, like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program was changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of which was founded on Microsoft giving advanced and sometimes detailed vulnerability information to questionable companies that may not be friendly parties.

The entire notion of “coordinated” disclosure went out the window as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities, before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and speaks strongly against any intent you outline in yesterday’s blog post.

While Microsoft has taken great effort to improve their security process, it is disingenuous to call this anything but propaganda.