Low Orbit Ion Cannon – A Very Simple Tool for Broad Distribution

So, last night I downloaded a version of the Low Orbit Ion Cannon, the traffic-generation tool which Anonymous has been using to attack various websites. The version I acquired, from SourceForge, was not one which had been modified for use by Anonymous – it didn’t have the “Hive” function which allows it to be controlled remotely. I should mention that although it was originally written by Praetox, and many versions available for download still carry Praetox branding, Praetox no longer supports the code, nor is it in any way affiliated with Anonymous.

It’s not really a terribly complicated tool. All it does is flood out requests in one of three ways: HTTP requests, TCP packets, or UDP packets. It lets the user specify the target by URL or IP address, the timeout, the port number, the number of threads used, and the attack mode – HTTP, TCP, or UDP. If using HTTP, the user can specify the subsite, and if using TCP or UDP, the payload can be given. There’s also a slider for the speed – though no information on what the actual bandwidth will be – and a checkbox for whether or not to wait for a reply. With these parameters set, the user need only hit a button entitled “IMMA CHARGIN MAH LAZER” and watch the status across the bottom.

It’s not a very sophisticated tool; it doesn’t have anything to help it get past even rudimentary countermeasures. Given that it was written as a load-testing tool, that’s hardly surprising. What it lacks in sophistication, though, it makes up for in simplicity. This is a tool which is simple, intuitive, and effective. In terms of usability, a great many professional developers could stand to learn from it. It can be used with virtually no networking knowledge, and given that it’s being handed out to people with virtually no networking knowledge, that’s not a bad fit.

LOIC isn’t exactly a major threat to a large website. As is the nature of DoS attacks, it simply uses brute force to flood a site. Smaller servers can readily be overwhelmed, of course, but this isn’t a new issue. That being said, LOIC has proven remarkably effective even though it is hamstrung both by its simplicity and by the steps users must take to preserve their anonymity while using it. So long as groups like Anonymous retain a use for such a tool, newer versions can be expected. While they may have newer tricks, they’ll likely remain behind the curve technologically, preferring to keep the same simple usability which allows LOIC to be wielded by so many people.


Posted March 24 2011

A Non-Technical Guide to Understanding the Fraudulent Comodo Certificates Story

Over the last few months, many people have talked about using HTTPS with sites such as Facebook and Twitter. The technology came up often after the release of Firesheep, which allowed Wi-Fi users to hijack the sessions of other users who accessed these sites without HTTPS.

Part of the technology behind HTTPS is certificates – small electronic files that help your browser ensure it’s connecting to a trusted site and protect the connection from eavesdropping or tampering. For instance, when you visit https://www.google.com, the Google server presents a certificate that lets your browser know it’s connecting to Google and not an impostor.

But how does your browser know that the certificate itself isn’t from an impostor? Each browser maintains a list of certificate authorities, or CAs – special organizations whose main purpose is issuing certificates for all those HTTPS websites. These CAs may also vouch for other authorities, creating a hierarchy of trust. If you access a site whose certificate was not issued by one of these authorities, or has been marked by one of them as revoked, you’ll get an error or warning about a certificate problem. Ideally, all of the authorities are trustworthy and only issue certificates for reputable websites.
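
To make this concrete, here is a minimal sketch of my own (not part of any browser or CA product) that connects to a site over TLS and prints which authority issued its certificate, using nothing but Python’s standard library. The hostname is just an example.

import socket
import ssl

hostname = "www.google.com"
context = ssl.create_default_context()  # loads the system's trusted CA list

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # subject/issuer come back as tuples of (name, value) pairs
        subject = dict(item[0] for item in cert["subject"])
        issuer = dict(item[0] for item in cert["issuer"])
        print("Site:     ", subject.get("commonName"))
        print("Issued by:", issuer.get("commonName"))

If the certificate isn’t vouched for by one of the trusted authorities, the connection above fails with a certificate error before any data is exchanged – which is exactly the check the fraudulently issued certificates would have passed.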

Unfortunately, the current reality is less than ideal, and attacks can happen. Yesterday, a blog post from the Tor Project detailed research showing that two major browsers had quietly added code which blocked a few specific certificates. These certificates were issued by an authority in a hierarchy controlled by Comodo, who released a statement today providing a bit more information on what happened.

According to Comodo, attackers were able to access the account of a user who helped manage one of the servers for issuing certificates. They then created their own certificates for verifying websites from Google, Yahoo, Skype, and others. These fraudulent certificates could be used to make a user’s browser think it was connecting to legitimate sites when actually communicating with a malicious site.

Comodo stated that much of the attack appears to have come from Iran and said they believe it to be state-driven, but many details are still unknown at this point, and the situation calls into question several aspects of Comodo’s security policies. In the meantime, you should make sure you’re using the latest version of a modern browser, such as Chrome or Firefox, and avoid connecting to untrusted networks. The fraudulent certificates that have already been identified will be blocked by an updated browser, and we’ll have to wait and see whether more fallout results from the attack.


Posted March 23 2011

Did Comodo violate its own practices?

Earlier today, news began to spread about an exploited certification authority (CA) spotted in the wild. The Tor Project blog has an excellent write-up on how they detected the presence of patches blocking particular SSL certificates and worked backwards to determine that a Comodo issuer had been compromised. The folks at Tor suppose (rightly) that if people who monitor the patches for Firefox and Chrome hadn’t noticed, this entire incident might have been swept under the rug. Since then, Comodo has come clean with an incident report which describes in detail the certificates that were issued and even states:

 All of the above leads us to one conclusion only:- that this was likely to be a state-driven attack.

I am not as convinced – I think the state-driven claim may have been made more to deflect interest and speculation away from Comodo’s own poor management. I would also think that a state-driven attack would involve more than a simple username and password.

Yes, Comodo notes in a separate blog post that the compromise stemmed from the theft of the username and password of a registration authority (RA) account. I was shocked to find out that their registration authority users are able to log in with just a username and password rather than a more secure method of login (for example, public key infrastructure (PKI) login with a smart card). I took a look at the Comodo Certification Practice Statement (CPS) and found that “Trusted roles” (section 3.10.1) should in fact require more. The CPS states (for Trusted personnel): “Identification is via a username, with authentication requiring a password and digital certificate.”

Of course, my first issue is with the semantics of that statement. Presenting a digital certificate does not authenticate anything, because digital certificates are public information; one must prove possession of the private key corresponding to the certificate in order to be authenticated.
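
As a rough illustration (my own sketch, with made-up file names and hostname – this is not Comodo’s actual RA interface), here is what certificate-based client login looks like in practice: the client has to load its private key alongside the certificate, and the TLS handshake forces it to prove possession of that key rather than merely hand over the public certificate.

import socket
import ssl

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# Loading the certificate alone is not enough: the private key is required,
# and during the handshake the client proves possession of it by signing data.
context.load_cert_chain(certfile="ra_user_cert.pem", keyfile="ra_user_key.pem")

with socket.create_connection(("ra.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="ra.example.com") as tls:
        print("Authenticated TLS session established:", tls.version())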

My second issue is that it is not clear in the CPS whether an RA would actually be a “Trusted role” or not. In section 3.9.3 they indicate the following:

All personnel in trusted positions handle all information in strict confidence. Personnel of RA/LRAs especially must comply with the requirements of the English law on the protection of personal data.

To me, this reads that personnel of RA/LRAs are “personnel in trusted positions” and therefore should qualify for the “Trusted role” in their CPS, which would have required certificate-based login. Unfortunately, I cannot find any more definitive statements in the CPS that would put the RA into or out of the “Trusted role” as defined.

Ultimately, I hope this compromise will push Comodo to improve their practices and update their policies. Most organizations that run a PKI (whether internal or external) know that the RA should always be considered a trusted role. The RA’s role is to direct the actions of the CA, the entity that issues the certificates and certificate status information. These certificates, in turn, allow us to trust transactions between parties (such as SSL sessions). If the RA is not trusted, then nothing in the PKI should be.


Posted March 23 2011

The Case for OAuth 1.0a

Open Authorization (OAuth), the authorization standard centered around the granting of permissions and the exchange of access tokens, has slowly gained more widespread use as a result of its adoption as an API authorization system by large web services (Google, Facebook, and Twitter all embrace some version of OAuth). Although OAuth 2.0 probably won’t look much different from 1.0a to end users (if they even notice), most of its improvements seem to be aligned with the needs of a rapidly expanding apps market. This is not a bad thing: when implemented correctly, OAuth can certainly improve security, so naturally there is an interest in simplifying things for both users and developers.

But this simplification comes partly from dropping message signatures (used to protect requests over unsafe channels) in favor of relying on adequate transport-layer security. Check the flows.

For comparison, OAuth 1.0a contains 2 main flows:

3-Legged: A complex dance by which an application (SuperApp) asks a server (Twitter) if it can act on your behalf (post some stuff).

2-Legged: A slightly-less-complex dance by which you already gave the application permission to post some stuff, and so it does it whenever it wants.

Both of these flows rely upon the security provided by the message signatures to avoid sharing the important secrets. And if OAuth 2.0 doesn’t provide an option for message-signing, there will always be some applications where OAuth 1.0a has a logistical advantage (especially considering the TLS requirement).
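
To show what those signatures buy you, here is a rough sketch of the OAuth 1.0a HMAC-SHA1 signing step (my own illustration; the keys, token, endpoint, and parameter values are made up). The point is that the secrets never travel with the request – only a signature derived from them does.

import base64
import hashlib
import hmac
from urllib.parse import quote


def percent_encode(value):
    # RFC 3986 percent-encoding, as required by the OAuth 1.0a spec
    return quote(value, safe="")


def sign_request(method, url, params, consumer_secret, token_secret=""):
    # Normalize the parameters: encode, sort, and join as key=value pairs
    normalized = "&".join(
        "%s=%s" % (percent_encode(k), percent_encode(v))
        for k, v in sorted(params.items())
    )
    # Signature base string: METHOD & encoded URL & encoded parameter string
    base_string = "&".join([method.upper(), percent_encode(url),
                            percent_encode(normalized)])
    # Signing key: consumer secret and (possibly empty) token secret
    key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


params = {
    "oauth_consumer_key": "superapp-key",   # made-up example values
    "oauth_nonce": "abc123",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1300000000",
    "oauth_token": "user-token",
    "oauth_version": "1.0",
    "status": "post some stuff",
}
print(sign_request("POST", "https://api.twitter.com/1/statuses/update.json",
                   params, "consumer-secret", "token-secret"))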

Sure, v2.0 might define some useful flows for specific things, and help improve HTTPS adoption, but OAuth 1.0a is a bit more well-rounded overall.


Posted March 15 2011

When Good AntiVirus Goes Bad

Yesterday, I started getting a bunch of warnings from the anti-virus program I’ve got installed on my Mac – F-Secure Mac Protection Technology Preview. Since I wasn’t doing anything out of the ordinary or performing any “suspicious” behavior, this was a surprise to me. (Especially considering I had only received one virus alert from the software in the last 3 months.) Below is a screenshot I grabbed shortly after this began.

Every time I loaded a web page in my browser, a bunch of files would be detected and automatically removed by the software. If I restarted the Google Chrome browser, the anti-virus deleted a file critical enough to cause Chrome to crash. Within about 20 minutes I had disabled the software and then set about trying to report it as a problem. (Notably, this software does not have an option in the user interface to disable the anti-virus capability. You must run a very obscure command: sudo launchctl unload -w /Library/LaunchDaemons/com.f-secure.fsavd.plist)

What happened in this case is that the F-Secure beta software had a false-positive error, causing most if not all files to be flagged as having a virus. The F-Secure software automatically sends files to the trash when a virus is encountered and only provides the above notification window. There is no quarantine, and there is no way to restore files that are deleted.

What is notable is that I didn’t follow standard procedure. Normally, when a user encounters a virus warning, the first thing they do is scan all their files. Since I immediately had a hunch that the software was just broken and disabled it, I saved myself a lot of trouble. Take a look at the pain being experienced by some of the folks in the forum posts:

I scanned my whole system and now I’ve got 90 000 files in the trash. I’m really waiting for an automated solution for this… To me this is a critical situation.

As one of the forum members noted, this is the worst possible scenario for an anti-virus software maker. While F-Secure has posted a fix along with an apology, they have not yet answered my fairly critical question in the forum: how do I tell whether the fix has been applied? They also don’t yet have any way to help users restore the files accidentally deleted by this error. Based on my experience, I don’t think I’ll be able to give this software a second chance. Can you suggest alternatives?


Posted March 15 2011

Lessons from the Fukushima Nuclear Accident

Unless you’ve been living under a rock for the past week, you undoubtedly know that Japan was rocked a few days ago by an 8.9-magnitude earthquake (the 3rd largest in the past decade and in the top 10 overall – also check out the NYT’s before-and-after shots) and a subsequent tsunami that dramatically compounded the ill effects of the disaster. Coming out of that incident, one of the most hyped “news” items has been the aftermath at the Fukushima nuclear power generation facility. It turns out (unsurprisingly) that much of this coverage has been faulty, inappropriately throwing around talk of “meltdowns” when, in fact, things are under control.

For a great, detailed description of the entire incident, check out Barry Brook’s post “Fukushima Nuclear Accident – a simple and accurate explanation” over on the Brave New Climate blog. It’s an excellent discussion of the accident, which highlights several salient points that can be directly applied to information security and information risk management (also see this post, which dispels one inaccuracy in Brook’s post – there is not, in fact, a “core catcher” installed – and provides even greater assurance that things are well in hand).

Specifically, there are 5 take-away points to consider:
(more…)


Posted March 14 2011