Security of Mashup Applications for Enterprises Part III
In the final article in this series, we move from content isolation and validation to an examination of how modern browsers protect your mashup’s users. By the end of this article, you’ll understand some of the safety measures that browsers provide against malicious input from mashup providers and see how a strategy that incorporates all the elements I’ve covered leads to a more secure mashup.
The Window into Your World
In the previous articles, I described building a mashup application for the fictional Contoso company. This mashup creates a world in which data from various locations interacts to create a sum greater than its parts. All the components run inside a Web browser window that users interact with to gain access to this world. These windows help protect users the same way that real windows protect us from the cold, UV light, theft, and other conditions. And just like with real windows, the level of protection you get depends on quality—in this case, the quality of your browser.
Let’s examine three protections that browsers offer to mashups that require very little or no interaction from users. This ambient protection doesn’t need a user to activate it. It’s ready when it’s needed.
Seeds in Your Heap
Imagine in our mashup application that the Physical Security department’s JSONP travel-alert data feed is compromised. The attacker who compromised the feed embedded a heap spray that will be delivered when the JSONP callback executes to return the JSON object. This attack vector could spell disaster for your users if their browser is not protecting them.
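To see why a compromised JSONP feed is so dangerous, compare consuming a feed as data with consuming it as code. The sketch below (Python, with made-up feed contents) shows that a strict JSON parser rejects a JSONP-style payload outright; a browser executing the JSONP callback gives you no such refusal:

```python
import json

# A well-formed travel-alert record parses cleanly as data.
safe_feed = '{"alert": "Avoid downtown after 10pm", "level": 2}'
record = json.loads(safe_feed)

# A JSONP-style payload is executable code, not data. Parsing it as
# JSON fails loudly instead of silently running attacker logic --
# the same reason consuming raw JSON is safer than executing a
# JSONP callback from a feed you no longer trust.
jsonp_payload = 'handleAlert({"alert": "x"}); attackerCode();'
try:
    json.loads(jsonp_payload)
    parsed = True
except ValueError:
    parsed = False
```

The feed contents and callback name here are invented for illustration; the point is only the data-versus-code distinction.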
Features like the new Enhanced Protected Mode in Internet Explorer 10 make heap spraying significantly more difficult by injecting more randomness into how data is stored on the heap. Address Space Layout Randomization (ASLR) assigns random memory addresses to applications, removing the predictability that a heap spray relies on. High Entropy Address Space Layout Randomization (HEASLR) strengthens this protection by drawing those random addresses from the vastly larger address space of a 64-bit process, which makes a heap spray much harder to execute successfully: you will probably run out of memory on the machine before filling up the heap. Other memory protection mechanisms, such as Data Execution Prevention/No Execute (DEP/NX), add to the power of ASLR. You can learn more about DEP/NX at EricLaw’s IEInternals blog, which has an excellent, in-depth discussion about how this mechanism works and what protection it offers. Internet Explorer 10 in the Metro-style interface runs in Enhanced Protected Mode by default, providing HEASLR to users without any extra configuration.
Developers Declare, Browsers Listen
As developers build their Web applications, they generally know how their sites should behave. Whether from use cases, requirements documentation or just their imagination, developers have a vision of how the application is supposed to be used. For the sample mashup application, we know what functionality we want to deliver because the business requested it. If we know what we want to have happen, we can infer what we don’t want to have happen. Examples of what we don’t want include malicious content making our application appear as though the user is interacting with the system when the user is actually directing input to another location—an attack known as clickjacking.
One new feature in many modern browsers is the use of declarative security. The underlying concept is that developers know what they want their applications to do, and these features define the boundaries of the application for the browser. Sandboxing, which I described in Part 1, is an example of declarative security. You declare that an iframe should be sandboxed and then permit specific behaviors. X-FRAME-OPTIONS is another example of declarative security that browsers permit. By using the X-FRAME-OPTIONS header’s value in a page response, you can tell the browser:
- To refuse to render the page in any frame at all (DENY value).
- That only pages with the same origin can frame the page (SAMEORIGIN value).
- That only the single origin specified can frame the page (ALLOW-FROM value).
This header was introduced in Internet Explorer 8, but it has since been adopted by numerous other browsers, including Firefox, Chrome and Safari.
Implementing the X-FRAME-OPTIONS header is as simple as adding a response header to your page. How you do this depends on your development technology, but a great example for ASP.NET can be found on the SDL Blog. The end result should be the following header and value:
X-FRAME-OPTIONS: DENY | SAMEORIGIN | ALLOW-FROM origin
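As a minimal sketch of assembling that value (Python, with a hypothetical helper name of my own), the three policies might be validated and built like this before being attached to a response:

```python
def frame_options_header(policy, origin=None):
    """Build an X-FRAME-OPTIONS header value.

    policy: "DENY", "SAMEORIGIN" or "ALLOW-FROM" (origin required
    for ALLOW-FROM). Raises ValueError on anything else, so a typo
    fails at build time rather than silently shipping no protection.
    """
    if policy in ("DENY", "SAMEORIGIN"):
        return policy
    if policy == "ALLOW-FROM":
        if not origin:
            raise ValueError("ALLOW-FROM requires an origin")
        return "ALLOW-FROM " + origin
    raise ValueError("unknown X-FRAME-OPTIONS policy: " + policy)
```

However you emit the header in your framework of choice, failing loudly on an unrecognized value is the safer design: an unknown policy is ignored by browsers, which leaves the page frameable.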
In Part 1 I discussed the types of pages you don’t want to include in CORS (cross-origin resource sharing), such as a configuration or a login page. That rule is inverted here. Here, you want to be sure that pages that require an “authentic click” (as Eric Lawrence says) include the X-FRAME-OPTIONS header so that they cannot be framed or redressed. This applies to pages that perform sensitive actions such as financial transactions and interaction with personal health records.
As always, use this mechanism as part of your defense-in-depth strategy. While it’s a great step forward, X-FRAME-OPTIONS is not a cure-all. There are techniques that can be used to circumvent this header, such as using a proxy that strips headers, but that does not mean you should dismiss it. A great defense occurs in layers, and this can be an easy layer to add because your browser does most of the hard work.
Filters and Auditors
Every browser has some form of protection against one of the most popular Web attacks, the dreaded cross-site scripting (XSS) attack. Protection against XSS has become a key feature for browsers, but is it always the same? Let’s look at the similarities and differences in Internet Explorer and WebKit to understand what these major browsers offer in different ways.
Internet Explorer 8 introduced numerous security features, one of which was the XSS Filter. This filter is targeted at mitigating Type 1 XSS (aka reflected XSS). Each request and response is channeled through the XSS Filter, which searches for patterns in the request or response that match XSS attacks. When an attack pattern or signature is discovered, the offending script is neutered in the response before it is passed on to the browser for rendering. (You can find all the details here.) Internet Explorer does not prompt the user or request any information; it simply shuts down the attack. Once the response output is rendered, the XSS Filter has completed its job, and the user continues on his or her merry way without knowledge of what just happened.
When a developer needs functionality that’s disabled by the filter, Internet Explorer allows you to disable the filter by adding the X-XSS-Protection header to a page with the value of 0 (zero). This is another example of declarative security, this time disabling a feature, which gives developers control over Internet Explorer’s behavior so that existing applications are not broken by this feature.
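The opt-out itself is nothing more than an extra response header. A hedged sketch, again with a hypothetical helper name, of what the page would send in each state:

```python
def xss_protection_header(enabled):
    """Build the X-XSS-Protection response header.

    "0" declaratively disables Internet Explorer's XSS Filter for
    this page; "1" (the default behavior) leaves it enabled. Only
    opt out on pages whose legitimate functionality the filter
    breaks -- the header applies per response, not site-wide.
    """
    return {"X-XSS-Protection": "1" if enabled else "0"}
```

Because the header is evaluated per page, you can scope the opt-out narrowly to the one page that needs it while the rest of the application keeps the filter’s protection.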
The XSS Auditor is built in to the WebKit rendering engine, which is the basis for Chrome and Safari. Because this feature lives inside the rendering engine itself, XSS attacks can be caught prior to execution but after rendering to the page. This approach is different from the Internet Explorer model, which disables the script prior to rendering. Which is better? That’s a good question, and it’s been discussed in other posts. The auditor works by checking each script prior to execution and determining whether the script existed in the request that generated the page. A match results in a blocked script.
Note that the WebKit implementation simply blocks the script; it does not disable it. Both solutions result in similar outcomes for the user (mostly). The difference is that the XSS Filter in Internet Explorer alters the output rendered on the page, while the WebKit version only blocks the script.
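The matching logic both engines rely on can be sketched crudely in Python (a toy `is_reflected` check of my own; real filters normalize many more encodings and apply additional heuristics):

```python
from urllib.parse import unquote_plus

def is_reflected(script_text, request_query):
    """Toy version of the reflected-XSS check: flag a script whose
    source text also appears verbatim in the decoded request that
    produced the page."""
    return script_text in unquote_plus(request_query)

# A script echoed straight back from the query string is flagged ...
query = "q=%3Cscript%3Ealert(1)%3C%2Fscript%3E"
reflected = is_reflected("<script>alert(1)</script>", query)

# ... while a script the page legitimately ships is not.
legit = is_reflected("renderDashboard();", query)
```

This also illustrates why the protection targets reflected XSS specifically: stored XSS arrives from the server without ever appearing in the request, so request/response matching cannot see it.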
Two Approaches, One Goal
Both browsers target the same goal: protecting users from reflected XSS. The browsers’ implementations differ in when they address the issue (before or after rendering). Along these lines, both browsers have their XSS protection feature enabled by default so that the user does not have to do anything but enjoy the added security. Just as with every other security feature we’ve discussed, these filters can be circumvented. Make them a part of your defense-in-depth strategy for maximum coverage.
Mashups: You Secure the Data of Others
In the world of mashup applications, you are delivering other people’s content to your users. As users interact with your application, they will perceive security incidents as an issue in your system, not as part of the data from your providers. You must build multiple layers of defense into your application—from data isolation to strict validation—and rely on the muscle behind your browser. Building a strategy that incorporates all these elements will lead to a more secure mashup and protect your users from other people’s data.
About the Author
Tim Kulp leads the development team at FrontierMEDEX in Baltimore, Maryland. You can find Tim on his blog at http://seccode.blogspot.com or the Twitter feed @seccode, where he talks code, security and the Baltimore foodie scene.