Can Client-Side JavaScript Protect Itself?

Security researcher Mario Heiderich (also creator of the HTML5 Security Cheatsheet and lead developer of PHPIDS) has been posting some interesting cross-site scripting challenges that highlight aspects of security on the client side. The most recent, called XSSMe², involved a page with a reflected XSS vulnerability that let anyone inject arbitrary HTML – no filters applied by the server. The goal? Retrieve a particular piece of data, originally stored in document.cookie, without any user interaction. I say “originally” because the page included JavaScript that attempted to lock down access to the data by removing it from document.cookie and hiding it unless it was retrieved by a user click. The protective code evolved as bypasses were found, with several tricks employed along the way.

One trick was to hide the variable in a closure. In JavaScript, every function has its own local scope. If you define a variable inside a function, that variable is distinct from one defined in the global scope. In effect, the variable is hidden from code executing in the global scope, though the function can provide a gatekeeper method to access it. Consider this block of code:

// Store the secret, then lock it away inside a closure.
document.cookie = "secret";

var Safe = function() {
    var cookie = document.cookie;  // captured in this function's local scope
    this.get = function(magicWord) {
        if (magicWord === "please") {
            return cookie;         // the closure still sees the local variable
        }
        return null;
    };
};
window.Safe = new Safe();

// Clear document.cookie; only the closure holds the secret now.
document.cookie = "";

alert(document.cookie);     // shows nothing
alert(Safe.get(""));        // null – wrong magic word
alert(Safe.get("please"));  // "secret"

The first alert shows nothing – document.cookie has been set to an empty string. The second alert returns only null, thanks to the if statement in the definition of Safe.get. But with the third alert, the statement return cookie gets executed – and because that statement is in the function’s local scope, it returns the cookie variable defined in that scope, which is “secret”. This is the concept of a closure: the function’s local variable lives on, still bound to the context in which it was defined.

Initially, this may seem to be a good defense against cross-site scripting, since the power of XSS comes from all of a page’s scripts executing in the same scope. But as entries in the challenge demonstrated, an attacker has many resources for turning a page’s own machinery against its defenses. For instance, the challenge included code that released the secret value only in response to a mouse click initiated by the user. It verified that last part by checking the isTrusted property on the event, which indicates whether the click came from a script or from the user.
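As a rough sketch, a gate like that might look like the following – note that button and reveal are hypothetical names for illustration, not the challenge’s actual code:

// Hypothetical sketch of an isTrusted gate. "button" and "reveal"
// are made-up names for illustration, not the challenge's actual code.
button.addEventListener("click", function(event) {
    if (event.isTrusted) {  // true only for genuine user-initiated events
        reveal(Safe.get("please"));
    }
}, false);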

But in JavaScript, objects inherit their properties from a model object called a prototype. If you change a particular prototype, every object that inherits from it picks up the changes you made. In this case, changing the isTrusted property of the mouse event prototype to always be true meant that spoofed clicks generated automatically by a script would fool the protective code and retrieve the secret value.
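Here is a minimal sketch of that bypass, assuming an engine that lets scripts shadow isTrusted on the prototype, as browsers did at the time (modern browsers define isTrusted as an unforgeable property on each event instance, so this particular trick no longer works):

// Poison the prototype so every event claims to be user-initiated.
// Assumes an engine where isTrusted can be shadowed, as in 2011.
Object.defineProperty(MouseEvent.prototype, "isTrusted", {
    get: function() { return true; }
});

// Dispatch a synthetic click; the isTrusted gate now passes.
var evt = document.createEvent("MouseEvents");
evt.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0,
                   false, false, false, false, 0, null);
button.dispatchEvent(evt);  // same hypothetical "button" as before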

With each new bypass, Mario updated the code with new protections. Eventually, he created a Firefox-specific version that essentially rewrote the entire page to get rid of the original Document Object Model and all its loopholes. If you’re interested in other bypass techniques and the challenge’s implications for client-side filtering, researcher Krzysztof Kotowicz has an excellent write-up that covers more details. But the challenge is also worth studying as a way of understanding more about web scripting and XSS. I certainly learned more about closures and event spoofing by tackling the puzzle, and it illustrates the difficulty of protecting against code running in the same origin and the same scope. We may be moving towards DOM features that provide enough security to block even client-side attacks, but for now, an untrusted script has myriad ways of overcoming client-side protections.


Posted October 13 2011

Apricorn Aegis Padlock Review

Disclaimer: I requested and received an evaluation version of the Apricorn Aegis Padlock. I was sent the 250GB AES-256 version, and I need to return it to the company in 30 days.

This is a pretty sweet hard drive, but there are a few annoyances that could be improved upon. There were some things I was unable to test, owing to the limited time I could devote to the review, the need to return the drive in working condition, and my lack of access to the specialized hardware required for timing attacks.

The drive is FIPS 197 validated – in other words, its AES implementation has been validated against the NIST standard.

You can check out Apricorn’s site for the specs and details, but what you see on the product page is pretty much what you get. The drive draws power from your USB port, so you’ll need a port that can supply enough of it; the drive comes with an adapter cable (one USB connector splitting into two) for cases where a single port isn’t sufficient. I had no power issues on my MacBook Air, but I did on my office desktop, since all of its USB ports were already taken – a problem easily solved with a powered USB hub.



Posted August 15 2011

Crockford’s History of JavaScript

Ever wonder how we came to have the technologies and programming languages we use today? Yahoo’s senior JavaScript architect Douglas Crockford gave a presentation in early 2010 that traces the developments that brought us the beloved and hated language powering client-side web behavior. The video is nearly two hours long and only the first in a series on JavaScript, but Crockford relates many interesting stories about the history of computing and notes patterns in how technology tends to develop. Check it out if you want to learn more about the background of that quirky yet powerful bit of tech we call JavaScript:

Crockford on JavaScript: The Early Years


Posted June 28 2011

Product Review: The hiddn Crypto Adapter Offers Secure USB Storage

Recently I had the chance to test out a clever little device called the hiddn Crypto Adapter. Made by Norway-based High Density Devices, the adapter looks somewhat like a miniature desk calculator with a USB port instead of a display, but its simple appearance belies some powerful functionality: transparent, real-time encryption of USB drives with two-factor authentication.

The adapter essentially acts as a proxy between your computer and a USB drive, meaning it needs no software, has no operating system requirement, and works with everything from a flash memory stick to an external hard drive. All communication with the USB device is encrypted on the fly using 256-bit AES via a certified FIPS 140-2 Level 3 crypto module, but the key isn’t stored on the drive: at the front of the hiddn adapter is a smart card slot.

When you insert a smart card, you have to enter the corresponding PIN code to use it. (After three unsuccessful attempts, the card becomes locked until a longer PUK code is given.) The device does not appear as an active USB device in the OS until a card is verified, and becomes “unplugged” when the card is removed. The encryption key (or half of it in split-key mode) stays on the smart card, making an encrypted drive unusable without it.

Setting up and operating the hiddn system is very straightforward. You connect it to your computer with a USB cable, plug a drive into the top USB port, insert your smart card, and then enter your PIN. From there, the experience is no different than using a USB drive normally – there’s not even a difference in speed.

When I first connected an unencrypted drive on a Windows machine, it appeared as an unformatted drive. After formatting, it behaved just as it would when plugged in directly. (A few times, when I didn’t “eject” the drive first or entered a bad PIN, I had to reconnect the adapter to get Windows to recognize the drive – but those were minor issues.) Trying to use the drive without the hiddn adapter after it had been encrypted brought up another prompt to format – Windows could tell there was a volume, but it was completely unreadable.

After using the hiddn Crypto Adapter for a short time, I started wondering why no one had thought of it before – or at least why I’d never heard of it before. It’s a great tool for anyone wanting a no-hassle way to encrypt removable storage. The only potential drawback is pricing: two adapters and two sets of pre-configured smart cards can run almost $900. High Density Devices offers several packages of units and cards, ranging from one of each to ten, as well as an enterprise key management system for creating new cards. But while some users may find hiddn too expensive for personal use, its flexibility, ease of use, and high security make for a combination that’s hard to beat.


Posted June 2 2011

A Cloud of Suspicion…

Cloud computing services may be permeating nearly every facet of our networked world, but in sharing our data with the companies that provide these resources, what do we do about trust? Data in the cloud is vulnerable unless it’s protected somehow, and without that protection, a service becomes far less useful to the people who need it.

Not all services are affected equally, however, and some are hardly affected at all. Protecting certain data fields stored in a distributed online database, for example, can be as routine as applying strong encryption. More delicate services may not be so flexible…

How do you force a cloud image editor to encrypt your image data on its end? Or force an online word processor to encrypt your latest holiday shopping list? Without the cooperation of the service providers, the only solution is a customized technical workaround – colloquially known as a hack.

An example of precisely this kind of workaround was outlined in this paper (pdf) by Yan Huang and David Evans. In it, they describe a method (and a working example) that lets a user work with Google Docs while maintaining both confidentiality and integrity.

It works through some clever applications of incremental encryption, data structuring, and indexing that transparently handle all of the security operations. And although it interferes with some functionality, it stands as an example of the kind of solution needed to shine some light on the shadier corners of the cloud.
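To make the flavor of the “incremental” idea concrete, here is a toy sketch – emphatically not the paper’s actual scheme – that splits a document into fixed-size chunks and encrypts each one independently, so a small edit only re-encrypts the chunk it touches. It uses the standard Web Crypto API (crypto.subtle):

// Toy illustration only – not the Huang/Evans construction.
const CHUNK = 1024;  // characters per chunk

async function encryptChunk(key, text) {
    const iv = crypto.getRandomValues(new Uint8Array(12));  // fresh nonce per chunk
    const data = new TextEncoder().encode(text);
    const ct = await crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key, data);
    return { iv: iv, ciphertext: ct };
}

async function encryptDocument(key, doc) {
    const chunks = [];
    for (let i = 0; i < doc.length; i += CHUNK) {
        chunks.push(await encryptChunk(key, doc.slice(i, i + CHUNK)));
    }
    return chunks;  // editing chunk k means re-encrypting only chunks[k]
}

// Usage: generate a key, encrypt, then sync the chunks to the cloud.
crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false,
                          ["encrypt", "decrypt"])
    .then(function(key) { return encryptDocument(key, "my holiday shopping list"); })
    .then(function(chunks) { console.log(chunks.length + " encrypted chunk(s)"); });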


Posted May 11 2011

Google Now Offering Bounties for Web App Bugs

Back in January, Google announced it would pay between $500 and $1,337 for bugs in its Chromium web browser code, provided the discoverer first reported them privately and followed certain conditions. Since then, the company has handed out quite a few bounties to security researchers who found problems.

Now, Google has expanded the program by offering similar bounties for vulnerabilities in their web-based applications. Hackers who find issues such as HTML injection or cross-site request forgery in important Google services can now report them and possibly qualify for rewards ranging from $500 to $3,133.70. As with the Chromium bounties, bug hunters have to follow a few rules and conditions, such as giving Google some time to fix the issue before public disclosure.

Given the success of the Chromium bounties, it’s likely this new experiment will be beneficial both for security researchers and Google’s users. It certainly adds an interesting new twist to the debate over how to handle outside bug discoveries – perhaps we’ll see more companies offering such compensation in the future.


Posted November 5 2010