We know SCADA is virtual Swiss cheese, ready to be owned if someone can reach a device. We have preached airgaps for decades, since before we even knew how bad the software was. Back then the reasoning was simply, “this is so critical, it has to be separate!”
The last five years have proven how bad it is, with the rise of disclosed SCADA vulnerabilities. Sure, we can overlook the bad coding, the proprietary protocols, the lack of any evidence of an SDLC, and the incredible amount of time it can take to patch. For some silly reason we put up with “forever-day bugs” because something is so critical it supposedly can’t be rebooted (forgetting how absurd that design choice is). But what if we go a step beyond that?
The ICS-CERT advisory 14-084-01, released yesterday, on vulnerabilities in Festo products is a good reminder of just how bad the problem is, and how much deeper it goes. First, the product has a backdoor in the FTP service allowing unauthenticated access (CVSSv2 9.3), which can allow a remote attacker to crash the device or execute arbitrary code. Second, the device is vulnerable because it bundles the 3S CoDeSys Runtime Toolkit, which does not require authentication for administrative functions (CVSSv2 10.0), and contains a directory traversal flaw that allows file manipulation leading to code execution (CVSSv2 10.0). Those two issues were reported in January of 2013, making this advisory, as it relates to Festo products, over a year late.
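For readers unfamiliar with the class of bug, a directory traversal flaw like the CoDeSys one boils down to a file service joining attacker-supplied paths onto its root without checking the result. The following is a minimal, hypothetical sketch (the `ROOT` path, function names, and layout are illustrative, not Festo’s or 3S’s actual code) contrasting the vulnerable pattern with a mitigated one:

```python
import os.path

ROOT = "/srv/ftp"  # hypothetical service root, for illustration only

def naive_resolve(requested: str) -> str:
    # Vulnerable pattern: blindly joining user input onto the root.
    # A request for "../../etc/passwd" walks right out of ROOT.
    return os.path.normpath(os.path.join(ROOT, requested))

def safe_resolve(requested: str) -> str:
    # Mitigated pattern: normalize first, then verify the result
    # still lives under ROOT before touching the filesystem.
    candidate = os.path.normpath(os.path.join(ROOT, requested))
    if os.path.commonpath([ROOT, candidate]) != ROOT:
        raise PermissionError("traversal attempt rejected")
    return candidate
```

With `naive_resolve`, a request for `../../etc/passwd` resolves to `/etc/passwd`; with `safe_resolve`, the same request is rejected while legitimate filenames under the root still work.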
So we have a vendor backdoor, unauthenticated administrator access, and a traversal flaw that would bypass authentication, had there been any, to gain privileges. Realistically, what types of organizations does this potentially impact? From the ICS-CERT advisory:
This product is used industrywide as a programmable logic controller with inclusion of a multiaxis controller for automated assembly and automated manufacturing. Identified customers are in solar cell manufacturing, automobile assembly, general assembly and parts control, and airframe manufacturing where tolerances are particularly critical to end product operations.
Now to dig the hole deeper. Under the “Mitigation” section, we see how serious Festo considers these vulnerabilities. Paraphrased from two lines in the advisory:
Festo has decided not to resolve these vulnerabilities, placing critical infrastructure asset owners using this product at risk … because of compatibility reasons with existing engineering tools.
The two 3S CoDeSys vulnerabilities have a fix available and just need to be integrated into the Festo products. What does “compatibility with existing engineering tools” really mean in the context of software? The ICS-CERT advisory also says:
According to the Festo product web page, other products are using newer versions of CoDeSys software and may not be vulnerable to the CoDeSys vulnerability, but this has not been evaluated by the researcher.
The researcher already spent time finding the issues, reporting them to a coordinating body, and following coordinated disclosure practices. Expecting them to also evaluate which products are not vulnerable is ridiculous. This is a case of the vendor just being lazy and irresponsible.
Here we have a company that makes vulnerable critical components, ones that affect our infrastructure and directly impact our safety, yet refuses to fix them. Why is this allowed to exist in our society?
Our sponsor Risk Based Security (RBS) posted an interesting blog this morning about Research In Motion (RIM), creator of the BlackBerry device. The behavior outlined in that blog, and in the original post by Frank Rieger, is shocking to say the least. In addition to the vulnerability outlined, potentially sending credentials in cleartext, it raises the question of legality. Quickly skimming the BlackBerry enterprise end-user license agreement (EULA), there doesn’t appear to be any warning that the credentials are transmitted back to RIM, or that RIM will authenticate to your mail server.
If the EULA does not contain explicit wording that outlines this behavior, it raises questions about the legality of RIM’s actions. Regardless of their intention, whether claiming it is covered in the EULA or that it makes the device easier to use, this activity is inexcusable. Unauthorized possession of authentication credentials, without permission, is a violation of 18 U.S.C. § 1030(a)(2)(C), and potentially other sections depending on the purpose of the computer. Since the server doing this resides in Canada, RIM may also be subject to Canadian law, and this activity appears to violate Section 342.1(d). Given the U.S. government’s adoption of BlackBerry devices, if RIM is authenticating to U.S. government servers during this process, this could get really messy.
Any time a user performs an action that would result in sharing that type of information, with any third party, the device or application should give explicit warning and require the user to not only opt-in, but confirm their choice. No exceptions.
The last few days have seen several vulnerabilities disclosed with serious gaps in logic regarding exploitation vectors. What is being called “remote” is not. What is being called “critical” is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the “need for a user to install a malicious app” before a vulnerability can be exploited. Think about it.
We start with an H-Online article titled “Critical vulnerability in Blackberry 10 OS”. First word: critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article’s summary of what is required to exploit this vulnerability:
As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.
The only thing missing from this exploit chain are the proverbial chicken sacrifices at midnight on a full blue moon. Want to get the same result much easier? Find your victim and say “Wow, that is a slick new phone, can I see it?” Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.
#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.
Despite all these references, users are left with either incorrect or very misleading information. First, CVE says “an attacker” instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a “context-dependent” attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate complexity and no authentication. NVD also says this affects confidentiality ‘partially’. JVN goes so far as to say it can be exploited “over the Internet using packets” with “anonymous or no authentication”.
The reality of these vulnerabilities is that they are not remote. Not in any form under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes ‘remote’. This is absurd. Just because that may be the most probable exploit vector and chaining, does not change the fact that getting a user to install a malicious application is a separate distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal ‘partial’ information from the one vulnerable application.
Let me put it in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into “remote, authentication required” by assuming they can be exploited via RDP or SSH. Doing so, though, devalues the entire purpose of vulnerability classification.
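The scoring distinction is not academic: the CVSSv2 base-score formula makes the access vector a direct multiplier on exploitability. A minimal sketch of the published CVSSv2 base equations (metric weights are from the CVSSv2 specification) shows how much the number moves when the same flaw is scored as network-accessible versus local:

```python
# CVSSv2 base-score metric weights, per the CVSSv2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # Conf/Integ/Avail impact

def cvss2_base(vector: str) -> float:
    """Score a short-form vector like 'AV:N/AC:M/Au:N/C:P/I:N/A:N'."""
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# NVD's vector for the WebView issue, scored as network access:
print(cvss2_base("AV:N/AC:M/Au:N/C:P/I:N/A:N"))  # 4.3
# The same flaw scored honestly as local access:
print(cvss2_base("AV:L/AC:M/Au:N/C:P/I:N/A:N"))  # 1.9
```

Flipping AV:N to AV:L more than halves the score, which is exactly why treating “user installs a malicious app” as a remote vector inflates these issues.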
Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a “crafted application” is required (see IDs below). The only difference is if the local user writes the application (Windows), or gets the user to install the application (Mobile). Either way, that is a local issue.
Steven Christey of CVE recently commented on the fact that Microsoft, Adobe, Cisco, Sun and HP all released multi-issue advisories on the same day (Feb 13). My first reaction was to come up with an amusing graphic depicting this perfect storm. Lacking any graphic-editing skills, and having too much cynicism, I instead wonder: are these the same vendors that continually bitch about irresponsible disclosure “hurting their customers”?
These same customers are now being subjected to patches from at least five major vendors on the same day. In some IT shops this is devastating, difficult to manage, and hard to recover from. If a single patch has problems, it forces the entire upgrade schedule to a halt until the problem can be resolved. If these vendors cared for their customers like they pretend to when someone releases a critical issue without vendor coordination, they would consider staggering the patches to alleviate the burden on their beloved customers.
Courtesy of Juha-Matti Laurio at the Securiteam Blogs:
Since 5th December we have seen three separate, serious vulnerabilities in Microsoft Word:
[Disclosed - original reference - CVE name
Affected products and product versions]
Tue 5th Dec – MS Security Advisory #929433 – CVE-2006-5994 and FAQ
Word 2003/2002/2000, Word 2004/v. X for Mac, Works 2006/2005/2004, Word Viewer 2003
Sat 9th Dec – MSRC Blog entry 10th Dec – CVE-2006-6456
Word 2003/2002/2000, Word Viewer 2003
Tue 12th Dec – Fuzzing list posting – CVE-2006-6561
Word 2003/2002/2000, Word 2004/v. X for Mac, Word Viewer 2003, OpenOffice.org 2/1.1.3, AbiWord 2.2
Of course, vulnerabilities in Word (and other MS Office components) are not new, but this recent wave demonstrates (yet again) just how bad the software industry can be and how security was never a consideration during the original design. Hopefully the recent buzz will finally make Microsoft spend serious time auditing its other big business applications, like Visio and Project.
When reading various security resources, it constantly amuses me that they all seem to ignore the obvious conclusion, offering short-sighted ‘solutions’ instead. “Don’t open [filetype] from untrusted people.” We’ve seen this in the past with ‘executables’ to help stop trojan attacks, ‘gif/jpg/bmp’ to stop various overflows and code-execution bugs in image-processing software, ‘excel’ files after a small wave of vulnerabilities was found in MS Excel, and now ‘word’ documents. The people giving this advice are in many cases security professionals, and they all seem to forget that a fundamental component of security is trust. In short, quit specifying whichever file format is the craze of the day. “Don’t open ANY file from untrusted people.”
Microsoft is finding itself under increasing pressure to release fixes for critical vulnerabilities. This week, Microsoft broke from tradition again and opted to release an early fix for a critical Internet Explorer vulnerability. Since we’ve seen other critical vulnerabilities come up before this one, some of which were being exploited in the wild, why the change of policy? One factor that might be influencing this decision is the sudden availability of third-party patches. Back in March, eEye released an unofficial patch for the MSIE createTextRange() flaw, which drew criticism and contempt from Microsoft. Windows/IE users were under no pressure to use the patch, but it gave some an alternative to disabling Active Scripting entirely.
This time around, we’re seeing multiple third parties come up with alternative patches that may help some companies while they wait for Microsoft to officially fix a vulnerability. This week the Internet Explorer setSlice vulnerability is being exploited in the wild, with more than two weeks to go before Microsoft may release a patch for it. With this recurring trend of critical vulnerabilities going unpatched for “too long”, a group of security professionals has created a new response team called ZERT to help consumers. Determina has also released a patch for the setSlice vulnerability, giving consumers even more choices for mitigating the threat while waiting for Microsoft to patch.
With more and more third party patches available, will it pressure Microsoft to step up and break the monthly patch cycle more often? Will they realize that making patches available for critical vulnerabilities being exploited in the wild, even if not fully tested, is a better option than consumers finding themselves under the control of computer criminals and botnets? After all, we know that Microsoft is perfectly capable of producing fast patches when they think it is a serious issue.
While digging around the usual sources of vulnerability information tonight, I ran into this sequence of links trying to find where an underlying vulnerability really was:
At this point, the sux0r release was linked two steps back to Snoopy, via MagpieRSS. This leads me to stress the value of vendors including such details in their release notes and changelogs. It can save people a lot of time when trying to figure this stuff out. Also attached to the same original vulnerability:
Obviously, most people in the security industry who read Bugtraq or Full-Disclosure as their only sources of vulnerability information didn’t see all of this. Unless they are as deranged and anal-retentive as I am, or monitor several vulnerability databases, they may have missed the fact that several software packages had a fairly serious vulnerability. This is a good example of the value-add that some vulnerability databases offer through their follow-up research and organization.
I also have to wonder if the authors of sux0r know that one of the packages they use, in turn, uses other packages. This makes me wonder how many layers deep some software goes these days, and whether the authors of these packages fully grasp the web of code and dependencies being created. Imagine having a really accurate mapping of such relationships and integration that would let us see just how far one vulnerability can spread into different codebases. A while back, I mentioned how this would be incredibly helpful to vulnerability databases in some cases. Imagine this same type of system linking software package integration and dependencies: when a given package is found to contain a vulnerability, you could instantly know that it likely affects seven other software distributions, all of which need to upgrade their dependencies or fix the issue themselves. I know, a pipe dream, but still a nice thought!
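The pipe dream above is, at its core, a reverse-dependency graph plus a transitive walk. A toy sketch: the Snoopy → MagpieRSS → sux0r chain is from this post, while the other package names here are purely hypothetical placeholders to show the fan-out:

```python
from collections import deque

# Toy map: package -> packages that bundle or depend on it.
# Snoopy -> MagpieRSS -> sux0r is the real chain from the post;
# "phpThumb" and "SomePortal" are hypothetical examples.
USED_BY = {
    "Snoopy": ["MagpieRSS", "phpThumb"],
    "MagpieRSS": ["sux0r", "SomePortal"],
    "phpThumb": [],
    "sux0r": [],
    "SomePortal": [],
}

def affected_by(package):
    """Return everything that transitively ships the vulnerable package."""
    seen, queue = set(), deque([package])
    while queue:
        for dependent in USED_BY.get(queue.popleft(), []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

With a real, well-maintained map, `affected_by("Snoopy")` would hand a vulnerability database the full list of downstream distributions to chase the moment the upstream flaw is published.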
Lance James of Secure Science Corporation posted an advisory detailing a serious flaw in the Fedex/Kinkos ExpressPay smart card payment system. A knowledgeable attacker with relatively minor resources can abuse the system to defraud the company. In response to the advisory, Fedex/Kinkos replied to them saying:
“Our analysis shows that the information in the article is inaccurate and not based on the way the actual technology and security function. Security is a priority to FedEx Kinko’s, and we are confident in the security of our network in preventing such illegal activity.”
Secure Science replied with an image of a receipt showing that it can be done. In case that wasn’t enough for some skeptics, they also released a video showing the abuse in action. Hopefully this will encourage Fedex/Kinkos to change their stance and retract the comment about their confidence in the security of their network/technology. This whole incident reminds me of the l0pht’s catchy slogan: “Making the theoretical practical since 1992”.
From time to time, vendors will contact OSVDB to notify us of solutions to vulnerabilities included in the database. These are almost always very professional mails, usually polite, and sometimes include all the details we need/want. These mails may say something along the lines of “we have fixed this issue” which prompts us to ask if it is a patch, upgrade or workaround. Other times they are very descriptive and provide all the information we need to update our entry, add more detail and provide the best information to our users and their customers.
Every once in a while, we get a real winner. On Dec 29, 2005, Global I.S. S.A. contacted us regarding entry 21429, saying “This vulnerability has been addressed.” Within minutes I replied asking if this was in the form of an upgrade or patch but did not hear back from them. On Jan 2, 2006, they contacted us again asking “This is our second request for a change. Is anybody home?” So they didn’t receive my initial reply I assumed (nor did they acknowledge my second reply), but that isn’t what grabbed me. The rest of their mail did:
The vulnerability you refer to has been resolved.
For security we do not release the nature of the solution/s.
It is criminally negligent to publish hacks on the web without first notifying the author.
Let us know if you have a question.
On top of the veiled legal threat (which I love!), their comment that they do not release the nature of the solution is baffling, even more so that they do this “for security”. Vendors, take note: the one time you want to be completely open and honest with information is when it comes to solutions for vulnerabilities. Withholding information, or making it unclear and confusing, only contributes to insecurity: customers don’t know the extent of the issue, nor how to easily mitigate the vulnerability.
Something led you to the product that ended up on your systems. Be it a feature, a look, ease of use, or price, it was a driving force in your decision. Changing to a different product isn’t easily done, especially if your current solution is heavily integrated or your customers/users are familiar with it. Besides, what other product can fill your needs that doesn’t have vulnerabilities of its own? Look at the number of vulnerabilities released, along with the diversity of the products affected. Whether it is no-name freebies or million-dollar commercial installations, every package seems to have vulnerabilities that would drive you back to where you started.
Offering a “solution” of “Use another product” doesn’t seem very intuitive, logical, or helpful to customers.