I often talk about cross-site scripting (XSS), and that’s partly because I think it’s a pretty interesting type of vulnerability that many developers tend to overlook. It can be quite dangerous, but can also be quite misunderstood. For one thing, the name can be misleading: exploiting XSS does not always involve scripting, and the proliferation of web technologies has taken XSS issues beyond the browser.
One example of script-less cross-site scripting affected some high-profile MySpace users in 2007. Attackers were able to inject HTML into celebrity MySpace pages, but the service filtered out typical <script> payloads. Seemingly innocent <a> links were allowed through, however, and a bit of CSS was enough to create an invisible link that covered the entire page. In this case, clicking anywhere on an infected profile led to a malware download.
This attack could be one of the first prominent cases of clickjacking, though the term is usually applied to attacks that hijack clicks with malicious inline frames (iframes). Allowing <iframe> elements in user-controlled HTML opens up a range of issues more broadly known as UI redressing. For instance, an iframe that covers the entire page could render a fake login form that appears to be legitimate given the site’s address, leading to a powerful phishing attack. Frames and forms can also be used to bypass CSRF protections.
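One common server-side defense against framing-based UI redressing is to refuse to be framed at all. A minimal stdlib-only sketch of the idea (the handler name and page body are illustrative, not from any particular site):

```python
# Sketch: a handler that sends anti-framing headers with every response,
# so browsers refuse to render the page inside an <iframe>.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>This page refuses to be framed.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Legacy header, still widely honored:
        self.send_header("X-Frame-Options", "DENY")
        # Modern equivalent via Content Security Policy:
        self.send_header("Content-Security-Policy", "frame-ancestors 'none'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Attach to an HTTPServer instance to actually serve requests.
```

Of course, headers like these only protect your own pages from being framed elsewhere; they do nothing about attacker-controlled iframes injected into pages that allow arbitrary HTML.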
Of course, you can sometimes launch simple CSRF attacks using only images. If the “src” attribute of an <img> element points to another page, the browser will still issue a GET request to that page when it tries to load the image. Without proper CSRF protections, such an attack may be possible without XSS to begin with. But images can also be a source of information leakage or tracking, since GET requests to a malicious server will likely include a “Referer” header.
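A minimal sketch of what those “proper CSRF protections” can look like: tie a secret token to the user’s session and require it on state-changing requests. A forged <img>-triggered request can’t read or supply the token, so it fails. Function names here are illustrative:

```python
# Sketch: session-bound anti-CSRF tokens via HMAC.
import hmac
import secrets

# Per-deployment secret (an assumption; real apps load this from config).
SECRET_KEY = secrets.token_bytes(32)

def issue_token(session_id: str) -> str:
    # Bind the token to the session so a token stolen from one
    # session can't be replayed in another.
    return hmac.new(SECRET_KEY, session_id.encode(), "sha256").hexdigest()

def verify_token(session_id: str, token: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(issue_token(session_id), token)
```

The server would embed the token in its own forms and reject any state-changing request that doesn’t echo it back. (Keeping state changes off GET requests entirely is the other half of the defense.)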
Alternate labels, such as “HTML injection” or “web content injection,” have been proposed to describe cross-site scripting, but the established term is likely here to stay. Still, remember that protecting against XSS does not simply mean blocking script tags, and keep in mind the power of XSS when integrating web technologies with other platforms.
SSL certificates have been in the news lately (again), and there’s a huge uproar. Is SSL still OK? Is PKI dead?
While most people understand the technical side of PKI, I’ve found that the “soft”, or what I call the “political” side, is not as well understood.
Anyone can set up the technical infrastructure to become a CA – but what makes the Root CAs found in your browser special? And as a corollary, how do you get into that select list? Each company has its own official method of determining which CAs make it into its list of Trusted Roots. Mozilla clearly outlines their requirements on their wiki, and Microsoft has a program for inclusion. In general, there are a few technical requirements, plus “the audit” – usually a WebTrust audit. I’ve audited CAs (not WebTrust audits), and what you look for is compliance with the stated policies. However, the stated policies might not be the best option for a Root CA. WebTrust just requires that
“Subscriber information was properly authenticated (for the registration activities performed by ABC-CA).
The integrity of keys and certificates it manages is established and protected throughout their life cycles.” (http://www.webtrust.org/item27804.pdf)
So, what is “protected”, and what is “properly authenticated”? That’s left up to the CA to decide. As long as the CP and CPS cover what the CA is doing and how they’re doing it, and the auditor thinks it’s “protected” or “properly authenticated”, it’ll pass the audit. Generally, once the audit is passed, getting into the operating systems and browsers is almost trivial – just a paperwork exercise.
In the case of DigiNotar, we can assume they had a recent audit (not necessarily WebTrust) because they’re in Windows and Firefox (NSS), and the auditor felt they were “secure” enough. Something went wrong, though, in the subscriber identity proofing process (if it even happened). The CA is just a tool; it can enforce some policies, but not all – it has no clue that the people requesting the certificate for *.google.com are not really Google. The RA function checks that and then instructs the CA to issue the certificate(s). If the RA function was bypassed (say, by an intrusion into the CA), the CA will do as it’s told and issue the certificates.
Ideally, CAs have an off-line root CA – no network connection, generally powered off, and able to be brought online only by the folks who control it. This is the Root CA that ships in operating systems and browsers. That Root can then revoke its sub-CA certificates, and life moves on (except for folks who now have to get a new certificate); most people won’t even notice. When an on-line root is compromised, revocation is a much bigger deal for everyone involved – patches are issued, instructions go out on how to delete or distrust it, and so on.
Most people blindly trust the Root CAs in their browser/operating system – have you looked at the list in your OS/browser of choice? Do you know anything about these CAs or are you trusting the folks at Mozilla, Apple, and Microsoft to tell you if they’re to be trusted?
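If you’re curious, you can dump the roots your own TLS stack trusts. With Python’s ssl module, something like this works (what ends up in the default store is platform-dependent – the system store on some OSes, OpenSSL’s CA bundle on others):

```python
# Sketch: enumerate the CA certificates in the default trust store.
import ssl

# create_default_context() loads the platform's default CA certs
# when no cafile/capath/cadata is given.
ctx = ssl.create_default_context()

for ca in ctx.get_ca_certs():
    # 'subject' is a tuple of relative distinguished names,
    # each a tuple of (attribute, value) pairs.
    subject = dict(rdn[0] for rdn in ca["subject"])
    print(subject.get("organizationName", "?"),
          "-", subject.get("commonName", "?"))
```

Scrolling through that output is a quick way to meet the CAs you’ve been trusting all along without knowing it.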
Sometimes it can be a daunting task to keep up with computer security best practices, especially when it comes to prevention. There is an almost unlimited number of things to take into account, not to mention significant decisions on which risks you need to address and which aren’t worth the effort. In addition, many different people have many different ideas about what’s important when it comes to baseline mitigation. This may explain why there are so many sources on the topic, often with different core focuses in mind. For example, Cisco’s Network Security Baseline is geared towards networking configuration, while the PCI-DSS regulations are focused on the technology surrounding credit/debit cards. The truth is that no one set of general rules will ever be ideal for all scenarios; in most cases, the best-fitting strategy would be a custom solution.
However, even an imperfect solution can be useful. This week I came across this list of 35 general mitigation strategies suggested by the Australian DSD (they’re sorta like the NSA). Many of these paint with a wide brush (patch all the things!), but some are directed at specific applications of technology and software. The approach is very proactive in targeting the most widely used components of modern attack vectors. On their website, DSD makes the claim that implementing the top 4 suggested strategies would have prevented 85% of the incidents they responded to in 2010. A bold claim (assisted by wide scopes):
- Update and patch Adobe products, Microsoft products, and Java
- Update and patch your OS
- Be stingy with administrator/superuser access
- Whitelist your programs
I’m sure that taking these steps can eliminate much of the low-hanging fruit, and doing all 35 would probably eliminate even more. Even if all 35 aren’t ideal for every scenario, the list is still all-around decent computer security advice, and a great reference when fleshing out a custom security policy for mitigating attacks. The rest of the list can be found here (pdf).
The term LDAPS is used among security folks and developers pretty indiscriminately. The general gist is that the LDAP connection is encrypted between the client and server via SSL/TLS – with a lot of hand waving involved. But there is actually a slight difference in how SSL and TLS are negotiated over LDAP. TLS can be negotiated over the standard port 389, rather than the port 636 we normally associate with SSL connections – although for the sake of convention, encrypted LDAP is generally done over port 636 as well.
LDAPS comes from LDAPv2 (retired in 2003), where the SSL negotiation takes place before any commands are sent from the client to the server. With a TLS connection, the connection is established in the clear before any commands are sent – and the first command is StartTLS, which tells the server to renegotiate the connection, this time using TLS for encryption and authentication.
Despite the two methods being technically different, the term LDAPS is generally used to mean either one.
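For the curious, that StartTLS command is just an LDAP extended operation with the OID 1.3.6.1.4.1.1466.20037, sent in the clear as the client’s first message. A rough Python sketch of the bytes that go over port 389 (it uses short-form BER lengths only, so it’s valid only for small messages like this one):

```python
# Sketch: build the BER-encoded LDAPMessage for a StartTLS
# extended request (RFC 4511).
STARTTLS_OID = b"1.3.6.1.4.1.1466.20037"

def ber(tag, payload):
    # Short-form BER length octet; fine for payloads under 128 bytes.
    assert len(payload) < 128
    return bytes([tag, len(payload)]) + payload

def starttls_request(message_id=1):
    msg_id = ber(0x02, bytes([message_id]))  # INTEGER messageID
    name = ber(0x80, STARTTLS_OID)           # [0] requestName (LDAPOID)
    ext_req = ber(0x77, name)                # [APPLICATION 23] ExtendedRequest
    return ber(0x30, msg_id + ext_req)       # SEQUENCE LDAPMessage

print(starttls_request().hex())
```

If the server answers with a success ExtendedResponse, both sides then perform the TLS handshake on the same socket, and everything after that is encrypted.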
My last post on the topic of S/MIME on iOS 5 got a lot of helpful comments from readers that filled in the gaps left by Apple’s current lack of documentation on this topic. The previous article is still the best place for information on how to set up your device to use S/MIME. This post has more information on actually using S/MIME for encrypting email messages.
There’s a setting I missed in the previous post that was pointed out by a commenter. After getting iOS 5 on the device and putting your certificates on there, you need to edit your email settings. Go to Settings->Mail, Contacts, Calendars->Your email account->Account->Advanced. Scroll down to the S/MIME section and turn on S/MIME. (Note that this wasn’t required in order to read S/MIME encrypted email.) Enabling S/MIME causes two new options to appear, Sign and Encrypt; selecting these tells your iOS device to try to sign and/or encrypt each outgoing message. Make sure you enable the Encrypt option at this point so your device attempts to encrypt outgoing messages whenever possible.