Tag Archives: Microsoft

The Duality of Expertise: Microsoft

The notion of expertise in any field is fascinating. It touches so many aspects of human nature and perception. For example, two people in the same discipline, each with the highest honors academia can grant, can still have very different expertise within that field. Society and science have advanced so that we no longer have just “science” experts, and medical doctors can specialize to extreme degrees. Within Information Security, we see the same: there are experts in penetration testing, malware analysis, reverse engineering, security administration, and more.

In the context of a software company, especially one that does not specifically specialize in security (and one it is trivial to argue was late to the security game), you cannot shoehorn them into any specific discipline or expertise. We can all agree there is an incredible level of expertise across a variety of disciplines within Microsoft. So when Microsoft releases yet another report that speaks to vulnerability disclosures, the only thing I can think of is duality; especially in the context of a report that mixes material they are uniquely qualified to speak on with a topic that pre-dates Microsoft and that, to some degree, they are not qualified to speak on.

A Tweet from Carsten Eiram pointed me to the latest report and brought up the obvious fact that it seemed to be way off when it comes to vulnerability disclosures.

[Screenshot: Carsten Eiram’s tweet about the MS SIR]

The “MS SIR” he refers to is the Microsoft Security Intelligence Report, Volume 21 which covers “January through June, 2016” (direct PDF download).

It’s always amusing to me that you get legal disclaimers in such analysis papers before you even get a summary of the paper:

[Screenshot: the legal disclaimer from the Microsoft SIR]

Basically, the takeaway is that they don’t stand behind their data. Honestly, the fact that I am blogging about this suggests that is a good move, and that they should not. The next thing that is fascinating is that it was written by 33 authors and 14 contributors. Since you don’t know which of them worked on the vulnerability section, we get to hold them all accountable. Either they can break it down by author and section, or they all signed off on the entire paper. Sorry, those are the joys of academic papers!

After the legal disclaimers, you start to get the analysis disclaimers, which are more telling to me. Companies love to blindly throw legal disclaimers on anything and everything (e.g. I bet you still get legal disclaimers in the footers of emails you receive, where they have no merit). When a company starts to explain its results via disclaimers while not actually including its methodology, anyone reading the report should be concerned. From their “About this report” section:

This volume of the Microsoft Security Intelligence Report focuses on the first and second quarters of 2016, with trend data for the last several quarters presented on a quarterly basis. Because vulnerability disclosures can be highly inconsistent from quarter to quarter and often occur disproportionately at certain times of the year, statistics about vulnerability disclosures are presented on a half-yearly basis.

This is a fairly specific statement that presents as fact that vulnerability trends vary by quarter (they do!), but it potentially ignores the fact that they can also vary by half-year or year. We have seen that impact not only a year, but the comparison to every year prior (e.g. Will Dormann in 2014 and his Tapioca project). Choosing a ‘quarter’ or ‘half-year’ does not demonstrate experience in aggregating vulnerabilities; both are rather arbitrary and short time-frames. Focusing on a quarter can easily ignore some of the biases that impact vulnerability aggregation, as outlined in Steve Christey’s and my talk titled “Buying Into the Bias: Why Vulnerability Statistics Suck” (PPT).

Jumping down to the “Ten years of exploits: A long-term study of exploitation of vulnerabilities in Microsoft software” section, Microsoft states:

However, despite the increasing number of disclosures, the number of remote code execution (RCE) and elevation of privilege (EOP) vulnerabilities in Microsoft software has declined significantly.

Doing a title search of Risk Based Security’s VulnDB for “microsoft windows local privilege escalation” tells a potentially different story. While 2015 is technically lower than 2011 and 2013, it is significantly higher than 2012 and 2014. I can’t say for certain why these dips occur, but they are very interesting.

[Chart: five years of Microsoft Windows local privilege escalation disclosures, per VulnDB]

Thousands of vulnerabilities are publicly disclosed across the industry every year. The 4,512 vulnerabilities disclosed during the second half of 2014 (2H14) is the largest number of vulnerabilities disclosed in any half-year period since the Common Vulnerabilities and Exposures system was launched in 1999.

This quote from the report explicitly shows serious bias in their source data, and further shows that they do not consider their wording. It would be a bit more accurate to say “The 4,512 vulnerabilities aggregated by MITRE during the second half of 2014…” The simple fact is, a lot more than 4,512 vulnerabilities were disclosed during that time. VulnDB aggregated 8,836 vulnerabilities in that same period, which is itself less than the 9,016 vulnerabilities it aggregated in the second half of 2015. Microsoft also doesn’t mention that the second half of 2014 is when the aforementioned Will Dormann released the results of his ‘Tapioca’ project, totaling over 20,500 vulnerabilities, only 1,384 of which received CVE IDs. Why? Because CVE basically said “it isn’t worth it”, and they weren’t the only vulnerability database to do so. With all of this in mind, Microsoft’s comment about the second half of 2014 becomes a lot more complicated.

The information in this section is compiled from vulnerability disclosure data that is published in the National Vulnerability Database (NVD), the US government’s repository of standards-based vulnerability management data at nvd.nist.gov. The NVD represents all disclosures that have a published CVE (Common Vulnerabilities and Exposures) identifier.

This is a curious statement, since CVE is run by MITRE under a contract from the Department of Homeland Security (DHS), making it a “US government repository” too. More importantly, NVD is essentially a clone of CVE that wraps additional metadata around each entry (e.g. CPE, CWE, and CVSS scoring). This also reminds us that they opted to use a limited data set, one that is well known in the Information Security field as being woefully incomplete. So even a company as large as Microsoft, with expected expertise in vulnerabilities, opts to use a sub-par data set that drastically influences statistics.

Figure 23. Remote code executable (RCE) and elevation of privilege (EOP) vulnerability disclosures in Microsoft software known to be exploited before the corresponding security update release or within 30 days afterward, 2006–2015

The explanation for Figure 23 is problematic in several ways. Does it cherry-pick RCE and EOP while ignoring context-dependent (aka user-assisted) issues? Or does it represent all Microsoft vulnerabilities? This is important to ask, as most web browser exploits are considered context-dependent and are coveted by the bad guys. This could be Microsoft conveniently leaving out a subset of vulnerabilities that would make the stats look worse. Next, looking at 2015 as an example from their chart, they say 18 vulnerabilities were exploited and 397 were not. Of the 560 Microsoft vulnerabilities aggregated by VulnDB in 2015, 48 have a known public exploit. Rather than check each one to determine the time from disclosure to exploit publication, I’ll ask a more important question: what is the provenance of Microsoft’s exploitation data? That isn’t something CVE or NVD track.

Figure 25 illustrates the number of vulnerability disclosures across the software industry for each half-year period since 2H13

Once again, Microsoft fails to use the correct wording. This is not the number of vulnerability disclosures; it is the number of disclosures aggregated by MITRE/CVE. Here is their chart from the report:

[Chart: Figure 25 from the Microsoft SIR, industry-wide vulnerability disclosures per half-year since 2H13]

Under the chart they claim:

Vulnerability disclosures across the industry decreased 9.8 percent between 2H15 and 1H16, to just above 3,000.

As mentioned earlier, since Microsoft is using a sub-par data set, I feel it is important to see what this chart would look like using more complete data. More importantly, watch how it invalidates their claim of an industry decrease of 9.8 percent between 2H15 and 1H16: RBS data shows the drop is closer to 18%.

[Chart: VulnDB vs NVD aggregated disclosures per half-year]
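The arithmetic behind claims like this is simple enough to script. A minimal sketch, with two inferred figures: the report implies roughly 3,380 NVD-based disclosures in 2H15 (since 1H16 was “just above 3,000” after a claimed 9.8 percent drop), and an ~18% VulnDB-based drop from 9,016 in 2H15 implies roughly 7,400 in 1H16. Both inferred numbers are back-of-envelope approximations, not published counts.

    def pct_drop(prev: int, cur: int) -> float:
        """Percentage decrease from one half-year period to the next."""
        return (prev - cur) / prev * 100

    # NVD/CVE-based totals as implied by the report (approximations).
    print(f"NVD-based drop:    {pct_drop(3380, 3050):.1f}%")   # ~9.8%

    # VulnDB aggregated 9,016 in 2H15; the 1H16 figure is inferred from
    # the ~18% drop, not a published count.
    print(f"VulnDB-based drop: {pct_drop(9016, 7400):.1f}%")   # ~17.9%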

I have blogged about vulnerability statistics, focusing on these types of reports, for quite a while now. And yet, every year we see the exact same mistakes made by just about every company publishing statistics on vulnerabilities. Remember: unless they are aggregating vulnerabilities every day, they lack a serious understanding of the data they work with.

March 19, 2017 Update – Carsten Eiram (@carsteneiram) pointed out that the pattern of local privilege escalation numbers actually follow an expected pattern with knowledge of researcher activity and trends:

In 2011, Tarjei Mandt while he was at Norman found a metric ton of LPEs in win32k.sys as part of a project.

In 2013, it was Mateusz Jurczyk’s turn to hit win32k.sys, focusing on a bug class he dubbed “double-fetch” (he is currently starting that project up again to see if, with tweaks, he can find more vulns).

And in 2015, Nils Sommer (reported via Google P0) hit win32k.sys again, along with a few other drivers, and churned out a respectable number of LPE vulnerabilities.

So 2012 and 2014 represent “standard” years while 2011, 2013, and 2015 had specific high-profile researchers focus on Windows LPE flaws via various fuzzing projects.

So the explanation is the one we almost always see: vulnerability disclosures and statistics are incredibly researcher-driven, based on which product, component, or vulnerability type a researcher or group of researchers decides to focus on.

Microsoft’s latest plea for CVD is as much propaganda as sincere.

Yesterday, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for “better coordinated vulnerability disclosure”.

Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft’s transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which is a benefit to both Microsoft and their customers world-wide. This post is not meant to undermine their efforts at large, but rather to point out that, since day one, propaganda has remained a valuable tool for the company. I will preface this with a reminder that this is not a new issue. I personally blogged about this as far back as 2001, after Scott Culp (Microsoft at the time) wrote a polarizing piece about “information anarchy” that centered around disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn’t endear them to the researchers providing free vulnerability information to them. Despite that, it took almost ten years for Microsoft to drop the term “responsible” disclosure (also biased against researchers) in favor of “coordinated” disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially in the last three to five years. But… it is on the back of a confrontational policy toward researchers.

Reading yesterday’s blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these if you aren’t a masochist who spends most of your waking time buried in vulnerability aggregation and related topics.

In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).

I am not sure I had seen “CVD” as a formal initialism until now, which is interesting. After trying to brand “information anarchy” and pushing the “responsible disclosure” term, it is good to see you embrace a better term.

Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

And this line, early on in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed their ‘ivory tower’, so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward the vast majority of vulnerabilities reported to you. Microsoft has done well in creating a culture of “report vulnerabilities to us for free for the honor of being mentioned in one of our advisories”. And I get that! Being credited in a Microsoft advisory is its own advertisement of researcher talent. However… the researchers chasing that honor are a minority in the greater picture.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.

Oh sorry, let me qualify: your black and white tower. This absolutely does work for some vendors, especially those with a poor history of dealing with vulnerability reports. You may not have been one of them for the last 10 years, but you once were. Back in the late ’90s, Microsoft had a reputation for being horrible when dealing with researchers: no vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.

It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.

This is a great point. But, let’s read on and offer some context using your own words…

Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.

Wait; if only a very small amount of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix, why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why would they care, if you clearly state that coordination doesn’t matter and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.

CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.

And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January Patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ‘rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures, and should perhaps invest in a solution for such vulnerability intelligence? Yes, a blatant sales opportunity, but they are desperately begging for it given this statement. =)

[Update: Apparently Microsoft is unhappy over Issue 123, which was auto-published on January 11, as opposed to Issue 118 linked above, auto-published on December 29. So they are correct on two days, but it is curious they aren’t complaining about 118 at the same time, when both are local privilege escalation vulnerabilities.]

One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is), and that not every vulnerability is equal (also true), but he glosses over the fact that Google is in the same boat. A little over four years ago, the Google security team posted a blog talking about “rebooting” responsible disclosure and said this:

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

To be fair, Google also did not publish a timeline of any sort with this disclosure. We don’t know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn’t mention January specifically and only asked for more time, maybe it was fair that you kept to your schedule. One of the two parties should publish all of the correspondence now. What’s the harm? The issue is public! Come on… someone show their cards and prove the other wrong. Back to Microsoft’s blog…

What’s right for Google is not always right for customers.

This is absolutely true. But you forgot the important corollary: what is right for Microsoft is not always right for customers.

For example, look at CVE-2010-3889 (heavily referenced), aka “Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation”. This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer as to whether this is even patched, four years later. That CVE identifier doesn’t seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear facility go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give them credit for, and this is a dupe of CVE-2010-2743? Why does it take a third party four years to figure this out while writing a blog on a whim?

It is a zero sum game where all parties end up injured.

What does this even mean, other than propaganda? It is rarely, if ever, the case that “all parties” end up injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, that researcher usually is not ‘injured’ in doing so. That is a simple fact. (And strictly speaking, “zero sum” means one party’s gain is another’s loss; a situation where everyone loses is not zero sum.)

Betz’ blog goes on to quote the Microsoft CVD policy which states:

Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.

Perhaps you should qualify that statement, as US-CERT has a 45 day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:

Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.

Note that it does not include a qualifier of “the vendor asks for more time”. That is the United States government saying a vendor gets 45 days to patch, with rare exceptions. Oh wait, Mr. Betz, before you go quoting “changes to core operating system components”, I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 meg binary (on Windows 7) alone has caused a lot of grief for Windows users. Given that history, you cannot say that changes to that file meet the US-CERT criteria.

Finally, this isn’t the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card, and Google certainly seems more aggressive in their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…

Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit, as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple was blindsided by this disclosure; it would not be ‘coordinated’ or ‘responsible’, and it would qualify as ‘information anarchy’, as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.

Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While an issue may be fixed in the base code, it can easily take anywhere from months to a decade for downstream vendors to learn about the fix and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.

Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure by going to a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal, right?

Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would strongly disagree. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the number you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.

Finally, do I even need to go into the absolute mess that you call the “Advanced Notification Service” (ANS)? In case readers aren’t aware, this is not a single program. This is several different programs with various names, like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program was changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of this was founded on Microsoft giving advanced, and sometimes detailed, vulnerability information to questionable companies that may not be friendly parties.

The entire notion of “coordinated” disclosure went out the window, as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and it speaks strongly against any intent you outlined in yesterday’s blog post.

While Microsoft has taken great effort to improve their security process, it is disingenuous to call this anything but propaganda.

Advisories != Vulnerabilities, and How It Affects Statistics

I’ve written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work or discloses the potential bias, caveats, and other issues that a responsible security professional should include. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid, if properly disclaimed!

I noticed a Tweet from @SCMagazine about an article titled “The ghosts of Microsoft: Patch, present and future”. The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft’s vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as incorrect.

Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.

It is difficult to tell if Horan means “vulnerabilities” or “patches”, as he appears to use the same word to mean both, when they are quite different. The use of ‘83’ makes it very clear: Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction, as a single advisory can contain multiple vulnerabilities.

A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. A search for entries with references to “MS12” (used in their advisory designations) returns 160 results. This made it easy to determine that the number Horan used was inaccurate, or at least that his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means including ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012, as it demonstrates the issues. Hopefully this will give him a few numbers though!
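To make the advisory-versus-vulnerability counting difference concrete, here is a minimal sketch; the bulletin-to-CVE mapping below is illustrative, not a complete or verified 2012 data set.

    # Each Microsoft bulletin can cover several CVE identifiers, so counting
    # bulletins and counting vulnerabilities give very different totals.
    bulletins = {
        "MS12-001": ["CVE-2012-0001"],
        "MS12-004": ["CVE-2012-0003", "CVE-2012-0004"],
        "MS12-010": ["CVE-2012-0011", "CVE-2012-0012",
                     "CVE-2012-0155", "CVE-2012-0171"],
    }

    advisory_count = len(bulletins)
    cve_count = sum(len(cves) for cves in bulletins.values())

    print(advisory_count, "advisories but", cve_count, "vulnerabilities")
    # -> 3 advisories but 7 vulnerabilities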

If we look at the 2012 Microsoft advisories versus the 2012 Microsoft CVE entries versus the 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.

Base Source            Critical      Important     Moderate     Low
2012 Advisories (83)   35 (42.2%)    46 (55.4%)    2 (2.4%)     0 (0.0%)
2012 CVE (160)         100 (62.5%)   18 (11.3%)    39 (24.4%)   3 (1.8%)
2012 Total (176)       101 (57.4%)   19 (10.8%)    41 (23.3%)   15 (8.5%)
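For those wanting to reproduce this kind of breakdown, here is a minimal sketch using the CVSS bins defined above; the score list at the bottom is a stand-in, not the actual 2012 data.

    from collections import Counter

    def severity(cvss: float) -> str:
        """Map a CVSS base score to the bins used above."""
        if cvss >= 9.0:
            return "critical"
        if cvss >= 7.0:
            return "important"
        if cvss >= 4.0:
            return "moderate"
        return "low"

    def breakdown(scores):
        """Return {severity: (count, percent)} for a list of CVSS scores."""
        counts = Counter(severity(s) for s in scores)
        total = len(scores)
        return {sev: (n, round(100 * n / total, 1)) for sev, n in counts.items()}

    # Stand-in scores for illustration only.
    print(breakdown([10.0, 9.3, 7.2, 6.8, 4.3, 2.6]))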

It isn’t easy to see the big shifts from the totals in a table, but it is important to establish the numbers involved when displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:

[Pie charts: severity breakdown for 2012 advisories, 2012 CVE entries, and 2012 total vulnerabilities]

The visual jump in critical vulnerabilities from the first chart to the second two is distinct. In addition, notice the jump from the first two charts to the third with regard to low severity vulnerabilities, which didn’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented differently, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these numbers.

In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.

  • MS12-080 only contains 1 CVE if you look at the immediate identifiers, but it also covers 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
  • MS12-058 has no immediate CVEs at all! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
  • Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
  • OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
  • Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores that have serious errors are low, but they can still introduce a level of error into statistics.
  • One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was discovered just two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for statistic generation: vendors generally release advisories when their investigation of the issue has completed and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities using disclosure date versus solution date will give two different results (see the sketch after this list).
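As a minimal sketch of that last point, here are hypothetical entries modeled on the CVE-2012-4792 situation (disclosed in late December, patched the following January); counting by disclosure date versus solution date yields different yearly totals.

    from datetime import date

    # Hypothetical entries; identifiers and dates are stand-ins.
    vulns = [
        {"id": "EXAMPLE-1", "disclosed": date(2012, 12, 29), "solution": date(2013, 1, 8)},
        {"id": "EXAMPLE-2", "disclosed": date(2012, 6, 12),  "solution": date(2012, 6, 12)},
    ]

    by_disclosure = sum(1 for v in vulns if v["disclosed"].year == 2012)
    by_solution   = sum(1 for v in vulns if v["solution"].year == 2012)
    print(f"2012 count by disclosure date: {by_disclosure}")  # 2
    print(f"2012 count by solution date:   {by_solution}")    # 1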

These are just a few of the ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SCMagazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.

August 2012, A Few Small Updates

Our dev team tackled some of the ticket backlog on the OSVDB project. While many changes are ‘behind the scenes’ and only affect the daily manglers, there are a few that are helpful to anyone using the database:

  • Metasploit links have been fixed. At some point, the Metasploit project changed the URL scheme for their search engine. Our outgoing links stopped matching the format and resulted in landing at the main search page. We now use the new URL scheme, so links from OSVDB will directly load the Metasploit module again.
  • Microsoft changed their URL scheme yet again. Our links to MS bulletins were redirecting, sometimes 2 or 3 times on Microsoft’s side. It’s cool that they kept up the redirects, but our links have been updated to be more efficient and land without the 30x magic.
  • Immunity CANVAS references have been added. In our quest to add as much vulnerability information to each entry, we have used Immunity’s API to pull in data about their exploit availability. While it is a commercial offering, such exploit frameworks are invaluable to pen-testing teams, as well as administrators that mitigate based on the availability of exploits. An example of an OSVDB entry with a CANVAS reference is OSVDB 60929.
  • Continued backfilling; we have still been pushing to backfill vulnerability data from prior years, focusing on 2011 currently. The data is coming from a variety of sources including bug trackers, changelogs, and Exploit-DB. We have been working with EDB so that each site has a more thorough cross-reference available. The EDB team has been outstanding to work with and continues to show diligence in their data quality and integrity. Moving forward, we will continue to focus on more vulnerability data imports and more information backfill.
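For the curious, detecting redirecting references like the ones mentioned above is straightforward to script. A minimal sketch using the Python requests library; this is illustrative, not our actual tooling, and the example URL is just a stand-in.

    import requests

    def check_reference(url: str) -> None:
        """Follow a reference URL and report any 30x redirect hops."""
        # Some servers mishandle HEAD requests; swap in requests.get if needed.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        hops = [r.status_code for r in resp.history]
        if hops:
            print(f"{url} -> {resp.url} (redirects: {hops})")
        else:
            print(f"{url} OK ({resp.status_code})")

    # Hypothetical example of a bulletin link that may bounce through redirects.
    check_reference("https://technet.microsoft.com/en-us/security/bulletin/ms12-001")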

Microsoft, Aurora and Something About Forest and Trees?

Perhaps it is the fine tequila this evening, but I really don’t get how our industry can latch on to the recent ‘Aurora’ incident and try to take Microsoft to task over it. The amount of news on this has been overwhelming, and I will try to very roughly summarize:

Now, here is where we get to the whole forest, trees and some analogy about eyesight. Oh, I’ll warn (and surprise) you in advance, I am giving Microsoft the benefit of the doubt here (well, for half the blog post) and throwing this back at journalists and the security community instead. Let’s look at this from a different angle.

The big issue that is newsworthy is that Microsoft knew of this vulnerability in September and didn’t issue a patch until late January. What is not clear is whether Microsoft knew it was being exploited. The wording of the Wired article doesn’t make it clear: “aware months ago of a critical security vulnerability well before hackers exploited it to breach Google, Adobe and other large U.S. companies” and “Microsoft confirmed it learned of the so-called ‘zero-day’ flaw months ago”. Errr, nice wording. Microsoft was (technically) aware of the vulnerability before hackers exploited it, but the article doesn’t specifically say whether Microsoft KNEW hackers were exploiting it. Microsoft learned of the “0-day” months ago? No, bad bad bad. This is taking an over-abused term and making it even worse. If a vulnerability is found and reported to the vendor before it is exploited, is it still 0-day (tree, forest, no one there to hear it falling)?

Short of Microsoft admitting they knew it was being exploited, we can only speculate. So, for fun, let’s give them a pass on that one and assume it was like any other privately disclosed bug. They were working it like any other issue, fixing, patching, regression testing, etc. Good Microsoft!

Bad Microsoft! But, before you jump on the bandwagon, bad journalists! Bad security community!

Why do you care that they sat on this one vulnerability for six months? Why is that such a big deal? Am I the only one who missed the articles pointing out that they actually sat on five other code execution bugs for longer? Where was the outpouring of blogs or news articles mentioning that “Aurora” was one of six vulnerabilities reported to them during or before September, all in MSIE, and all allowing remote code execution (tree, forest, not seeing one for the other)?

CVE             Reported to MS   Disclosed    Time to Patch
CVE-2010-0244   2009-07-14       2010-01-21   6 months, 7 days (191 days)
CVE-2010-0245   2009-07-14       2010-01-21   6 months, 7 days (191 days)
CVE-2010-0246   2009-07-16       2010-01-21   6 months, 5 days (189 days)
CVE-2010-0248   2009-08-14       2010-01-21   5 months, 7 days (160 days)
CVE-2010-0247   2009-09-03       2010-01-21   4 months, 18 days (140 days)
CVE-2010-0249   2009-09-??       2010-01-14   4 months, 11 days (133 days) – approx.
CVE-2010-0027   2009-11-15       2010-01-21   2 months, 6 days (67 days)
CVE-2009-4074   2009-11-20       2009-11-21   2 months, 1 day (62 days)
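The day counts above are easy to verify. A minimal sketch using dates from the table; the patch date for most entries is the January 21 bulletin release.

    from datetime import date

    def days_to_patch(reported: date, patched: date) -> int:
        """Days between the vendor being notified and the fix shipping."""
        return (patched - reported).days

    print(days_to_patch(date(2009, 7, 14), date(2010, 1, 21)))  # 191
    print(days_to_patch(date(2009, 8, 14), date(2010, 1, 21)))  # 160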

Remind me again why the “Aurora” conspiracy is noteworthy? If Microsoft knew of six remote code execution bugs, all from the September time-frame or earlier, why is one any more severe than the others? Is it because one was used to compromise hosts, then detected and published in an extremely abnormal fashion? Are we actually trying to hold Microsoft accountable for that single vulnerability when the five others just happened not to be used to compromise Google, Adobe, and others?

Going back to the Wired article, in the second-to-last paragraph they say: “On Thursday, meanwhile, Microsoft released a cumulative security update for Internet Explorer that fixes the flaw, as well as seven other security vulnerabilities that would allow an attacker to remotely execute code on a victim’s computer.” Really, Wired? That late in the article, you gloss over “seven other vulnerabilities” that allow remote code execution? And worse, you don’t point out that Microsoft was informed of five of them BEFORE AURORA?

Seriously, I am the first one to hold Microsoft over the flames for bad practices, but that goes beyond my boundaries. If you are going to take them to task over all this, at least do it right. SIX CODE EXECUTION VULNERABILITIES that they KNEW ABOUT FOR SIX MONTHS. Beating them up over just one is amateur hour in this curmudgeonly world.

It’s patch xxxday!

A while back, Microsoft announced they were moving to release patches on the second Tuesday of each month, lovingly called Patch Tuesday. Soon after, Oracle announced that they too would move to scheduled patch releases, on the Tuesday closest to the 15th day of January, April, July, and October. Now, Cisco has announced they are moving to scheduled patches on the fourth Wednesday of March and September of each calendar year.
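These schedules are easy to compute. A minimal sketch for Microsoft’s cadence (the other vendors’ rules are analogous):

    import calendar
    from datetime import date

    def second_tuesday(year: int, month: int) -> date:
        """Microsoft's Patch Tuesday: the second Tuesday of a given month."""
        tuesdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                    if d.weekday() == calendar.TUESDAY and d.month == month]
        return tuesdays[1]

    print(second_tuesday(2008, 3))   # 2008-03-11, matching the list below
    print(second_tuesday(2008, 10))  # 2008-10-14, the Microsoft/Oracle overlap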

In an attempt to make life easier on administrators and to help avoid installing patches every few days, these scheduled releases now have organizations enjoying life between monster patches.

Mar 11 – Microsoft
Mar 26 – Cisco
Apr 8 – Microsoft
Apr 15 – Oracle
May 13 – Microsoft
June 10 – Microsoft
July 8 – Microsoft
July 15 – Oracle
August 12 – Microsoft
September 9 – Microsoft
September 24 – Cisco
October 14 – Microsoft, Oracle
November 11 – Microsoft
December 9 – Microsoft

As you can see, October 14 promises to be a lot of fun for companies running Oracle products on Microsoft systems. While the scheduled dates look safe, I can’t wait until we see the “perfect storm” of vendor patches.

The Perfect Patch Storm

Steven Christey of CVE recently commented on the fact that Microsoft, Adobe, Cisco, Sun, and HP all released multi-issue advisories on the same day (Feb 13). My first reaction was to come up with an amusing graphic depicting this perfect storm. Lacking any graphic editing skills and having too much cynicism, I instead wonder: are these the same vendors that continually bitch about irresponsible disclosure “hurting their customers”?

These same customers are now subjected to patches from at least five major vendors on the same day. In some IT shops, this is devastating, and difficult to manage and recover from. If a single patch has problems, it forces the entire upgrade schedule to come to a halt until the problem can be resolved. If these vendors cared for their customers like they pretend to when someone releases a critical issue without vendor coordination, they would consider staggering the patches to help alleviate the burden on their beloved customers.

reply: Microsoft: Responsible Vulnerability Disclosure Protects Users

Microsoft: Responsible Vulnerability Disclosure Protects Users
http://www2.csoonline.com/exclusives/column.html?CID=28071
By Mark Miller, Director, Microsoft Security Response Center

Responsible disclosure, reporting a vulnerability directly to the vendor and allowing sufficient time to produce an update, benefits the users and everyone else in the security ecosystem by providing the most comprehensive and highest-quality security update possible.

Provided “sufficient time” doesn’t drag out too long; otherwise the computer criminals (who are also in the ‘security ecosystem’) benefit greatly from responsible disclosure too.

From my experience helping customers digest and respond to full disclosure reports, I can tell you that responsible disclosure, while not perfect, doesn’t increase risk as full disclosure can.

Except “your experience” wouldn’t take full disclosure cases into account appropriately. Look at some of the vulnerabilities reported in Windows, Real, Novell, and other big vendors’ products. Notice that in more and more cases, the vendor acknowledges multiple researchers who found the issues independently. That is proof that multiple people know about vulnerabilities pre-disclosure, be it full or responsible. If a computer criminal has such vulnerability information, and it remains unpatched for a year while the vendor produces “the most comprehensive and highest-quality security update possible”, then the risk is far worse than anything in the responsible disclosure cases your experience encompasses.

Vendors only take these shortcuts because we have to, knowing that once vulnerability details are published the time to exploit can be exceedingly short-many times in the range of days or hours.

See the “proof” I mention above. If vendors are going to move along with their heads in the sand, pretending that a single person holds the vulnerability or exploit details and that they alone control the disclosure, the vendors are naive beyond imagination.

The security researcher community is an integral part of this change, with Microsoft products experiencing approximately 75 percent responsible disclosure.

I’d love to see the chart showing issues in Microsoft products (as listed in OSVDB), the relevant dates (disclosed to vendor, patch date, public disclosure), and the resulting statistics. My gut says it would be less than 75%.
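If one had such a data set, the statistic itself is trivial to compute. A minimal sketch with stand-in records and field names (not an actual OSVDB export); “coordinated” here simply means the vendor was notified before the public disclosure date.

    from datetime import date

    # Hypothetical records; None means the vendor was never notified privately.
    issues = [
        {"vendor_notified": date(2006, 3, 1), "public": date(2006, 6, 13)},
        {"vendor_notified": None,             "public": date(2006, 7, 2)},
        {"vendor_notified": date(2006, 9, 9), "public": date(2006, 9, 9)},
    ]

    coordinated = sum(
        1 for i in issues
        if i["vendor_notified"] and i["vendor_notified"] < i["public"]
    )
    print(f"{100 * coordinated / len(issues):.0f}% coordinated")  # 33%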

McAfee: Microsoft patches 133 Critical/Important Vulns in 2006

http://www.avertlabs.com/research/blog/?p=153

McAfee is reporting that Microsoft patched 133 Critical/Important vulnerabilities in 2006. They also compare this number against previous years, presumably to demonstrate that security isn’t getting better at Microsoft.

Oracle RDBMS vs Microsoft SQL Server

http://www.databasesecurity.com/dbsec/comparison.pdf

Introduction

This paper will examine the differences between the security posture of Microsoft’s SQL Server and Oracle’s RDBMS based upon flaws reported by external security researchers and since fixed by the vendor in question. Only flaws affecting the database server software itself have been considered in compiling this data so issues that affect, for example, Oracle Application Server have not been included. The sources of information used whilst compiling the data that forms the basis of this document include:

The Microsoft Security Bulletins web page
The Oracle Security Alerts web page
The CVE website at Mitre.
The SecurityFocus.com website

A general comparison is made covering Oracle 8, 9 and 10 against SQL Server 7, 2000 and 2005. The vendors’ flagship database servers are then compared.

[..]