Earlier today, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for “better coordinated vulnerability disclosure”.
Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft’s transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which is a benefit to both Microsoft and their customers worldwide. This post is not meant to undermine their efforts at large, but rather to point out that, just as on day one, propaganda is still a valuable tool for the company. I will preface this with a reminder that this is not a new issue. I have personally blogged about this as far back as 2001, after Scott Culp (Microsoft at the time) wrote a polarizing piece about “information anarchy” that centered around disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn’t endear them to the researchers providing free vulnerability information to them. Despite that, it took almost ten years for Microsoft to drop the term “responsible” disclosure (also biased against researchers) in favor of “coordinated” disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially in the last three to five years. But… it is on the back of a confrontational policy toward researchers.
Reading yesterday’s blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these unless you are a masochist who spends most of your waking time buried in vulnerability aggregation and related topics.
In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).
Not sure I have seen “CVD” as a formal initialism until now, which is interesting. After trying to brand “information anarchy” and pushing the “responsible disclosure” term, good to see you embrace a better term.
Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.
And this line, early on in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed their ‘ivory’ tower, so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward the vast majority of vulnerabilities reported to you. Microsoft has done well in creating a culture of “report vulnerabilities to us for free for the honor of being mentioned in one of our advisories”. And I get that! Being credited in a Microsoft advisory is advertising in itself for researcher talent. However… you are talking about a minority of researchers in the greater picture, the ones who chase that honor.
Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.
Oh sorry, let me qualify, your black and white tower. This absolutely does work for some vendors, especially those who have a poor history in dealing with vulnerability reports. You may not be one of them for the last 10 years, but you once were. Back in the late ’90s, Microsoft had a reputation for being horrible when dealing with researchers. No vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.
It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.
This is a great point. But, let’s read on and offer some context using your own words…
Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.
Wait, if only a very small amount of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix… why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why do they care if you clearly state that coordination doesn’t matter, and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.
CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.
And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ’rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures and should perhaps invest in a solution for such vulnerability intelligence? Yes, blatant sales opportunity, but they are desperately begging for it given this statement. =)
[Update. Apparently Microsoft is unhappy over Issue 123 which was auto-published on January 11, as opposed to Issue 118 linked above auto-published on December 29. So they are correct on two days, but curious they aren’t complaining over 118 at the same time when both are local privilege escalation vulnerabilities.]
One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is), and that not every vulnerability is equal (also true), but he glosses over the fact that Google is in the same boat. A little over four years ago, the Google security team posted a blog talking about “rebooting” responsible disclosure and said this:
As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.
To be fair, Google also did not publish a timeline of any sort with this disclosure. We don’t know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn’t mention January specifically and only asked for more time, maybe it was fair that you kept to your schedule. One of the two parties should publish all of the correspondence now. What’s the harm? The issue is public! Come on… someone show their cards, prove the other wrong. Back to Microsoft’s blog…
What’s right for Google is not always right for customers.
This is absolutely true. But you forgot the important qualifier: what is right for Microsoft is not always right for customers.
For example, look at CVE-2010-3889 (heavily referenced), aka “Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation”. This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer as to whether this is even patched, four years later. That CVE identifier doesn’t seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear power plant go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give them credit for, and this is a dupe of CVE-2010-2743? Why does it take a third party four years to figure this out while writing a blog on a whim?
It is a zero sum game where all parties end up injured.
What does this even mean, other than propaganda? It is rarely, if ever, a case where “all parties” are injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, usually that party is not ‘injured’ in doing so. That is simple fact.
Betz’ blog goes on to quote the Microsoft CVD policy which states:
Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.
Perhaps you should qualify that statement, as US-CERT has a 45 day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:
Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.
Note that it does not qualify “the vendor asks for more time”. That is the United States government saying a vendor gets 45 days to patch, with rare exception. Oh wait, Mr. Betz, before you go quoting “changes to core operating system components”, I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 meg binary (on Windows 7) alone has been the cause of a lot of grief for Windows users. Given that history, you cannot say that changes to that file meet the US-CERT criteria.
Finally, this isn’t the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card and Google certainly seems more aggressive in their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…
Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple would get blindsided in this disclosure, and it would not be ‘coordinated’ or ‘responsible’, and would qualify as ‘information anarchy’ as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.
Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While it may have been fixed in the base code, it can easily take somewhere between months and a decade for vendors to learn about and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.
Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure, going to a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal right?
Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would disagree strongly. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the ones you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.
Finally, do I even need to go into the absolute mess that you call the “Advanced Notification Service” (ANS)? In case readers aren’t aware, this is not a single program. This is several different programs with various names like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program got changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of which was founded on Microsoft giving advanced and sometimes detailed vulnerability information to questionable companies, which may not be friendly parties.
The entire notion of “coordinated” disclosure went out the window as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities, before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and speaks strongly against any intent you outline in yesterday’s blog post.
While Microsoft has taken great effort to improve their security process, it is disingenuous to call this anything but propaganda.
I’ve written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained in a way that shows how they were derived. In short, no one shows their work or discloses the potential bias, caveats, and other issues that a responsible security professional should include. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid, if properly disclaimed!
I noticed a Tweet from @SCMagazine about an article titled “The ghosts of Microsoft: Patch, present and future”. The article is by Alex Horan, security strategist, CORE Security and discusses Microsoft’s vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as being incorrect.
Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.
It is difficult to tell if Horan means “vulnerabilities” or “patches”, as he appears to use the same word to mean both, when they are quite different. The use of ‘83’ makes it very clear that Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction, as a single advisory can contain multiple vulnerabilities.
A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. Doing a search for references containing “MS12” (used in their advisory designations) returns 160 results. This made it easy to determine that the number Horan used was inaccurate, or at least that his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012, as it demonstrates the issues. Hopefully this will give him a few numbers though!
If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVE versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.
| | Critical | Important | Moderate | Low |
|---|---|---|---|---|
| 2012 Advisories (83) | 35 (42.2%) | 46 (55.4%) | 2 (2.4%) | – |
| 2012 CVE (160) | 100 (62.5%) | 18 (11.3%) | 39 (24.4%) | 3 (1.8%) |
| 2012 Total (176) | 101 (57.4%) | 19 (10.8%) | 41 (23.3%) | 15 (8.5%) |
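The bucketing behind those percentages is mechanical. As a minimal sketch, the CVSS bands quoted above translate directly to a small function; note the score list here is hypothetical sample data, not the actual 2012 dataset:

```python
def severity(score):
    """Map a CVSS base score to the bands used in this post:
    9-10 critical, 7-8.9 important, 4-6.9 moderate, 0-3.9 low."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "important"
    if score >= 4.0:
        return "moderate"
    return "low"

def breakdown(scores):
    """Return {band: (count, percent)} for a list of CVSS scores."""
    counts = {"critical": 0, "important": 0, "moderate": 0, "low": 0}
    for s in scores:
        counts[severity(s)] += 1
    total = len(scores)
    return {band: (n, round(100.0 * n / total, 1)) for band, n in counts.items()}

# Hypothetical sample, for illustration only.
sample = [10.0, 9.3, 7.5, 6.8, 4.0, 2.1]
print(breakdown(sample))
```

The point of the table is that the *same* bucketing function, fed three different score sets (advisories, CVEs, total disclosures), yields three very different distributions.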
It isn’t easy to see the big shifts from the raw totals alone, but it is important to establish the numbers involved before displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:
The visual jump in critical vulnerabilities from the first chart to the second two is distinct. In addition, notice the jump from the first two charts to the third in regard to the low severity vulnerabilities, which didn’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented differently, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these vulnerabilities.
In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.
- MS12-080 only lists 1 CVE among its immediate identifiers, but contains 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
- MS12-058 has no immediate CVEs at all! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
- Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
- OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
- Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores that have serious errors is low, but they can still introduce a level of error into statistics.
- One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed, and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date will give two different results.
These are just a few ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SCMagazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.
Perhaps it is the fine tequila this evening, but I really don’t get how our industry can latch on to the recent ‘Aurora’ incident and try to take Microsoft to task about it. The amount of news on this has been overwhelming, and I will try to very roughly summarize:
- News surfaces Google, Adobe and 30+ companies hit by “0-day” attack
- Google uses this for political overtones
- Originally thought to be Adobe 0-day, revealed it was MSIE 0-day
- Jan 14, confirmed it is MSIE vuln, shortly after dubbed “aurora”
- Jan 21, uproar over MS knowing about the vuln since Sept
Now, here is where we get to the whole forest, trees and some analogy about eyesight. Oh, I’ll warn (and surprise) you in advance, I am giving Microsoft the benefit of the doubt here (well, for half the blog post) and throwing this back at journalists and the security community instead. Let’s look at this from a different angle.
The big issue that is newsworthy is that Microsoft knew of this vulnerability in September, and didn’t issue a patch until late January. What is not clear, is if Microsoft knew it was being exploited. The wording of the Wired article doesn’t make it clear: “aware months ago of a critical security vulnerability well before hackers exploited it to breach Google, Adobe and other large U.S. companies” and “Microsoft confirmed it learned of the so-called ‘zero-day’ flaw months ago”. Errr, nice wording. Microsoft was aware of the vulnerability (technically), before hackers exploited it, but doesn’t specifically say if they KNEW hackers were exploiting it. Microsoft learned of the “0-day” months ago? No, bad bad bad. This is taking an over-abused term and making it even worse. If a vulnerability is found and reported to the vendor before it is exploited, is it still 0-day (tree, forest, no one there to hear it falling)?
Short of Microsoft admitting they knew it was being exploited, we can only speculate. So, for fun, let’s give them a pass on that one and assume it was like any other privately disclosed bug. They were working it like any other issue, fixing, patching, regression testing, etc. Good Microsoft!
Bad Microsoft! But, before you jump on the bandwagon, bad journalists! Bad security community!
Why do you care that they sat on this one vulnerability for six months? Why is that such a big deal? Am I the only one who missed the articles pointing out that they actually sat on five code execution bugs for longer? Where was the outpouring of blogs or news articles mentioning that “aurora” was one of six vulnerabilities reported to them during or before September, all in MSIE, all allowing remote code execution (tree, forest, not seeing one for the other)?
| CVE | Reported to MS | Disclosed | Time to Patch |
|---|---|---|---|
| CVE-2010-0244 | 2009-07-14 | 2010-01-21 | 6 months, 7 days (191 days) |
| CVE-2010-0245 | 2009-07-14 | 2010-01-21 | 6 months, 7 days (191 days) |
| CVE-2010-0246 | 2009-07-16 | 2010-01-21 | 6 months, 5 days (189 days) |
| CVE-2010-0248 | 2009-08-14 | 2010-01-21 | 5 months, 7 days (160 days) |
| CVE-2010-0247 | 2009-09-03 | 2010-01-21 | 4 months, 18 days (140 days) |
| CVE-2010-0249 | 2009-09-?? | 2010-01-14 | 4 months, 11 days (133 days) – approx |
| CVE-2010-0027 | 2009-11-15 | 2010-01-21 | 2 months, 6 days (67 days) |
| CVE-2009-4074 | 2009-11-20 | 2009-11-21 | 2 months, 1 day (62 days) |
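For anyone who wants to check my math, the day counts above are plain date arithmetic; a quick sketch using the first two rows as examples:

```python
from datetime import date

def days_to_patch(reported, patched):
    """Days between the date a bug was reported to the vendor
    and the date a patch shipped."""
    return (patched - reported).days

# CVE-2010-0244: reported 2009-07-14, fixed on the 2010-01-21 patch release.
print(days_to_patch(date(2009, 7, 14), date(2010, 1, 21)))  # → 191

# CVE-2010-0248: reported 2009-08-14, fixed the same day.
print(days_to_patch(date(2009, 8, 14), date(2010, 1, 21)))  # → 160
```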
Remind me again, why the “Aurora” conspiracy is noteworthy? If Microsoft knew of six remote code execution bugs, all from the September time-frame, why is one any more severe than the other? Is it because one was used to compromise hosts, detected and published in an extremely abnormal fashion? Are we actually trying to hold Microsoft accountable on that single vulnerability when the five others just happened not to be used to compromise Google, Adobe and others?
Going back to the Wired article, they say in the second-to-last paragraph: “On Thursday, meanwhile, Microsoft released a cumulative security update for Internet Explorer that fixes the flaw, as well as seven other security vulnerabilities that would allow an attacker to remotely execute code on a victim’s computer.” Really, Wired? That late in the article, you gloss over “seven other vulnerabilities” that would allow remote code execution? And worse, you don’t point out that Microsoft was informed of five of them BEFORE AURORA?
Seriously, I am the first one to hold Microsoft over the flames for bad practices, but that goes beyond my boundaries. If you are going to take them to task over all this, at least do it right. SIX CODE EXECUTION VULNERABILITIES that they KNEW ABOUT FOR SIX MONTHS. Beating them up over just one is amateur hour in this curmudgeonly world.
A while back, Microsoft announced they were moving to release patches on the second Tuesday of each month, lovingly called Patch Tuesday. Soon after, Oracle announced that they too would be moving to scheduled releases of patches on the Tuesday closest to the 15th day of January, April, July and October. Now, Cisco has announced they are moving to scheduled patches on the fourth Wednesday of the month in March and September of each calendar year.
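Those three cadences are simple calendar rules, which is why the collisions below are easy to predict. A quick sketch of the rules as stated (nothing vendor-specific here, just date arithmetic):

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (Mon=0 .. Sun=6) in a month."""
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7  # days until the first such weekday
    return first + timedelta(days=offset + 7 * (n - 1))

def patch_tuesday(year, month):
    """Microsoft: second Tuesday of every month."""
    return nth_weekday(year, month, 1, 2)

def cisco_day(year, month):
    """Cisco: fourth Wednesday of the month (March and September)."""
    return nth_weekday(year, month, 2, 4)

def oracle_cpu(year, month):
    """Oracle: Tuesday closest to the 15th (January, April, July, October)."""
    mid = date(year, month, 15)
    offset = (1 - mid.weekday()) % 7
    if offset > 3:          # nearest Tuesday is behind us, not ahead
        offset -= 7
    return mid + timedelta(days=offset)

print(patch_tuesday(2008, 10), oracle_cpu(2008, 10))  # both land on 2008-10-14
```

Run over a calendar year, these three functions reproduce the collision list below, including the October 14 pile-up.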
In the attempt to make life easier on administrators and help avoid installing patches every few days, these scheduled releases are now causing organizations to enjoy life between monster patches.
Mar 11 – Microsoft
Mar 26 – Cisco
Apr 8 – Microsoft
Apr 15 – Oracle
May 13 – Microsoft
Jun 10 – Microsoft
Jul 8 – Microsoft
Jul 15 – Oracle
Aug 12 – Microsoft
Sep 9 – Microsoft
Sep 24 – Cisco
Oct 14 – Microsoft, Oracle
Nov 11 – Microsoft
Dec 9 – Microsoft
As you can see, October 14 promises to be a lot of fun for companies running Oracle products on Microsoft systems. While the scheduled dates look safe, I can’t wait until we see the “perfect storm” of vendor patches.
Steven Christey of CVE recently commented on the fact that Microsoft, Adobe, Cisco, Sun and HP all released multi-issue advisories on the same day (Feb 13). My first reaction was to come up with an amusing graphic depicting this perfect storm. Due to not having any graphic editing skills and too much cynicism, I now wonder if these are the same vendors that continually bitch about irresponsible disclosure and it “hurting their customers”?
These same customers are now being subjected to patches for at least five major vendors on the same day. In some IT shops, this is devastating and difficult to manage and recover from. If a single patch has problems it forces the entire upgrade schedule to come to a halt until the problem can be resolved. If these vendors cared for their customers like they pretend to when someone releases a critical issue w/o vendor coordination, then they would consider staggering the patches to help alleviate the burden it causes on their beloved customers.
Microsoft: Responsible Vulnerability Disclosure Protects Users
By Mark Miller, Director, Microsoft Security Response Center
Responsible disclosure, reporting a vulnerability directly to the vendor and allowing sufficient time to produce an update, benefits the users and everyone else in the security ecosystem by providing the most comprehensive and highest-quality security update possible.
Provided “sufficient time” doesn’t drag out too long, else the computer criminals (who are also in the ‘security ecosystem’) benefit greatly from responsible disclosure too.
From my experience helping customers digest and respond to full disclosure reports, I can tell you that responsible disclosure, while not perfect, doesn’t increase risk as full disclosure can.
Except “your experience” wouldn’t take full disclosure cases into account appropriately. Look at some of the vulnerabilities reported in Windows, Real, Novell and other big vendors. Notice that in more and more cases, we’re seeing the vendor acknowledge multiple researchers who found the issues independently. That is proof that multiple people know about vulnerabilities pre-disclosure, be it full or responsible. If a computer criminal has such vulnerability information that remains unpatched for a year due to the vendor producing “the most comprehensive and highest-quality security update possible”, then the risk is far worse than the responsible disclosure your experience encompasses.
Vendors only take these shortcuts because we have to, knowing that once vulnerability details are published the time to exploit can be exceedingly short, many times in the range of days or hours.
See the “proof” I mention above. If vendors are going to move along with their heads in the sand, pretending that there is a single person with the vulnerability or exploit details, and pretending that they alone control the disclosure, the vendors are naive beyond imagination.
The security researcher community is an integral part of this change, with Microsoft products experiencing approximately 75 percent responsible disclosure.
I’d love to see the chart showing issues in Microsoft products (as listed in OSVDB), relevant dates (disclosed to vendor, patch date, public disclosure) and the resulting statistics. My gut says it would be less than 75%.
McAfee is reporting that Microsoft patched 133 Critical / Important vulnerabilities in 2006. They also compare this number against previous years to presumably demonstrate that security isn’t getting better at Microsoft.
This paper will examine the differences between the security posture of Microsoft’s SQL Server and Oracle’s RDBMS based upon flaws reported by external security researchers and since fixed by the vendor in question. Only flaws affecting the database server software itself have been considered in compiling this data so issues that affect, for example, Oracle Application Server have not been included. The sources of information used whilst compiling the data that forms the basis of this document include:
The Microsoft Security Bulletins web page
The Oracle Security Alerts web page
The CVE website at Mitre.
The SecurityFocus.com website
A general comparison is made covering Oracle 8, 9 and 10 against SQL Server 7, 2000 and 2005. The vendors’ flagship database servers are then compared.
Microsoft is finding themselves under increasing pressure to release fixes for critical vulnerabilities. This week, Microsoft broke from tradition again and opted to release an early fix for a critical Internet Explorer vulnerability. Since we’ve seen other critical vulnerabilities come up before this one, some of which were being exploited in the wild, why the change of policy? One factor that might be influencing this decision is the sudden availability of third-party patches. Back in March, eEye released an unofficial patch for the MSIE createTextRange() flaw which drew criticism and contempt from Microsoft. Windows/IE users were under no pressure to use the patch, but it gave some an alternative to disabling Active Scripting entirely.
This time around, we’re seeing multiple third parties come up with alternative patches that may help some companies while they wait for Microsoft to officially fix a vulnerability. This week the Internet Explorer setSlice vulnerability is being exploited in the wild, with more than two weeks before Microsoft possibly releases a patch for it. With this recurring trend of critical vulnerabilities going unpatched for “too long”, a group of security professionals has created a new response team called ZERT to help consumers. Determina has also released a patch for the setSlice vulnerability, giving consumers even more choices in helping to mitigate the threat while waiting for Microsoft to patch.
With more and more third party patches available, will it pressure Microsoft to step up and break the monthly patch cycle more often? Will they realize that making patches available for critical vulnerabilities being exploited in the wild, even if not fully tested, is a better option than consumers finding themselves under the control of computer criminals and botnets? After all, we know that Microsoft is perfectly capable of producing fast patches when they think it is a serious issue.