Since people kept asking questions every time I posted my previous article, I wrote up the following as a Facebook comment and figured it deserved reposting here. Note that there's an article in our archives which is similar but not as specific as this one. Get ready for your cryptography lesson.
A hash is a one-way function. This means that given some input, it creates some seemingly random output. It is one-way in that you can’t do math on the output to get back to the input.
So, "abc" -> (hash function) -> A9993E364706816ABA3E25717850C26C9CD0D89D
and there’s no way to get “abc” back from that nasty string.
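The digest above is, in fact, the SHA-1 hash of "abc" (the same unsalted algorithm LinkedIn was using); a quick sketch with Python's standard hashlib module:

```python
import hashlib

# Hashing is one-way: computing the digest is easy, inverting it is not.
digest = hashlib.sha1(b"abc").hexdigest()
print(digest)  # a9993e364706816aba3e25717850c26c9cd0d89d
```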
UNLESS you have taken the time to generate what's called a rainbow table. Hackers (and presumably the NSA) run exhaustively through all possible strings and generate the hashes for all of them. Once you have a rainbow table, all you have to do is look through it and see if the hash is already in there. Several such tables are freely available online.
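As a toy sketch of the idea (this is a plain precomputed lookup table; real rainbow tables chain and compress hashes to trade computation for storage, but the end result is the same kind of lookup):

```python
import hashlib
from itertools import product

# Precompute hashes for every lowercase string up to 3 characters
# (a toy universe; real tables cover billions of candidates).
alphabet = "abcdefghijklmnopqrstuvwxyz"
table = {}
for length in range(1, 4):
    for chars in product(alphabet, repeat=length):
        candidate = "".join(chars)
        table[hashlib.sha1(candidate.encode()).hexdigest()] = candidate

# "Reversing" a stolen hash is now a single dictionary lookup.
stolen_hash = hashlib.sha1(b"abc").hexdigest()
print(table.get(stolen_hash))  # abc
```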
Generating rainbow tables is time-consuming, though. If you can generate 100,000 hashes a second, it would take 69 years to generate the rainbow table containing all possible 8-character alphanumeric passwords (or 69 computers one year, etc.). It would take 171 billion years to create the complete set of 12-character passwords made from all printable characters. That's not terribly practical.
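The arithmetic behind those figures checks out (assuming 62 alphanumeric characters, 95 printable ASCII characters, and 100,000 hashes per second):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
RATE = 100_000  # hashes per second

# 8-character alphanumeric passwords: 62 possible characters per position.
years_8_alnum = 62**8 / RATE / SECONDS_PER_YEAR
print(f"{years_8_alnum:.0f} years")        # ~69 years

# 12-character passwords over all 95 printable ASCII characters.
years_12_printable = 95**12 / RATE / SECONDS_PER_YEAR
print(f"{years_12_printable:.2e} years")   # ~1.7e11, i.e. ~171 billion years
```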
So, a cryptographic salt is a sprinkling of additional information added before hashing. LinkedIn, for example, would take your password of "abc" and add some random data to it, perhaps "fM3sTe4d". So instead of hashing "abc" it hashes "fM3sTe4dabc" and gets 5646e9ce061dee933de87fc4d6
What the salt does is make rainbow tables ineffective. As mentioned earlier, it's impractical to make rainbow tables for longer, random inputs. I have seen tables of all English words up to 12 characters, and I know someone's working on a table of all 9-character printable inputs… but every additional character multiplies the work of the rainbow table creators by 60-95 times.
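A minimal sketch of the salt-then-hash scheme described above; note this mirrors the article's construction for illustration, while modern practice also layers a deliberately slow password hash (bcrypt, scrypt) on top of salting:

```python
import hashlib
import os

def hash_password(password: str):
    """Generate a random salt and hash salt+password, as in the article."""
    salt = os.urandom(8)  # unique per user, stored alongside the digest
    digest = hashlib.sha1(salt + password.encode()).hexdigest()
    return salt, digest

def check_password(password: str, salt: bytes, digest: str) -> bool:
    """Re-hash the attempt with the stored salt and compare."""
    return hashlib.sha1(salt + password.encode()).hexdigest() == digest

salt, digest = hash_password("abc")
print(check_password("abc", salt, digest))  # True
# A rainbow table built over bare passwords is now useless: the attacker
# would need a separate table for every possible salt value.
```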
Salting is good, and cryptographers and cryptographic developers have known this for years. Unfortunately, lots of non-cryptographic developers try to develop cryptography, and things fall apart.
Bottom line for everyone: the longer and more random your password, the less likely it is that anyone will ever be able to crack it. And using a different password for each account helps contain the damage when any one of them is breached. (See this XKCD.)

Posted June 10 2012
Around this time of year, many people receive new devices and gadgets as gifts, and some of those gadgets turn out to be smart phones. But smart phone security is very tricky to pin down, as there are multiple vendors and platforms to take into consideration, not to mention the speed at which smart phone technology is evolving. So when I came across this Top 10 iPhone Security Tips whitepaper (pdf), I was glad to see it target a specific platform. However, after reading through it, I think that many of the things McAfee points out also apply to a Droid or BlackBerry. And so, by stripping away the platform-specific details, we arrive at a pretty decent list of things a new smart phone owner can do to achieve some basic smartphone security:
- Enable passcode/lock
- Erase all data before a return, repair, or resale
- Regularly update firmware
- Don’t run shady apps
- Take advantage of the web browser’s security
- If you’re not using it, disable it
- Secure that email
- Use a phone tracker
Mobile phones have had passcode capabilities for a long time. Make sure you're using them, since a passcode lock is often the first line of defense.
If you will no longer be the owner in possession of the device, it’s best to erase everything you can first. Everything. If you can do a factory reset, do so, because your phone constantly records information and there is always some data that isn’t easily found, let alone purged.
I'm guilty of not doing this: sometimes the update notification will sit around for a week before I finally give it permission to run. But this is one of the easier things to do, since it's mostly automatic.
Just like with a personal computer, if you run unknown or untrusted applications, you substantially increase your chances of getting got. So if you don’t want to get got, be prudent about what apps you run on your device.
For smartphones with native web browser apps, be sure to use the security features to clear caches and stored passwords when it’s necessary. Just because a web browser is on a mobile device doesn’t mean it’s a security lightweight. Check out the “settings” or “options” to see just how much your mobile phone web browser can do to help you out.
I’m also guilty of leaving stuff running unnecessarily. Be careful about leaving debug mode enabled, Bluetooth and wifi on, etc. Generally speaking, the more doors you leave unlocked, the lighter you sleep at night. Turning off unused services when they aren’t needed is a good habit to form, even outside the realm of security.
In addition to providing native web browser apps, many smartphones also come bundled with a native email app. Check the settings for these apps to take advantage of any security features they’re offering (such as SSL/TLS).
The GPS can be bad for privacy if you are reckless with it. However, it can also be a powerful tool to help you recover a lost or stolen device. I believe the iPhone 4 has a built-in device-finding service (complete with a remote wipe). But even if you have a different smartphone, there is almost certainly an app that provides some remote tracking for lost devices (e.g., the Where's My Droid app for Android).
This certainly isn't a comprehensive list, but it should be enough to get both new and old smartphone users thinking about general mobile device security in a healthy way.

Posted December 16 2011
I'm not above whoring out a few select email addresses for personal gain, or even promoting a certain company by liking or retweeting something if it will benefit me more than the effort required, but I always stay wary of most of these promotions. I mean, who doesn't like free money?
I received an email the other day supposedly sponsored by a reputable programmer-related site. What it entailed was signing up for a big vendor’s developer program. If I did so, they would send me a $15 gift certificate to one of the major online retailers. I’m trying to keep all parties in this matter anonymous simply because I do not want to promote anything involved in this so-called promotion, and the actual parties involved are irrelevant. The email went something like this:
Happy Holidays Developers!
Get a $15 [online retailer] Gift Certificate by joining the [vendor] Developer Program (no charge!)
Thanks to your [programmer site] participation, here’s all you have to do!
1. Visit The [hyperlink to vendor site] and register at no cost!
2. [vendor] will send you a validation email: confirm your registration following the URL provided in the email which will prompt you to choose a password
3. Once you have chosen a password, [vendor] will then send you a password reset email: forward the password reset email and the sign up email address used to [promotional site email]
4. Once verified on our end, a gift certificate will be sent to you promptly after the program ends!
Hurry! This is limited to the first 600 respondents, one per person.
For full terms and conditions please visit [marketing link to promotional site]
Step 3 is the one that caught my eye here. You want me to forward you an email sent to me that allows me to reset my password? By doing this I would essentially be sending the promoter an email containing a link with an embedded token that lets them authenticate as me and then change my password, gaining access to my account at this vendor site. Mind you, this isn't exactly a critical account. But still, this is very poor security practice.
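To see why this is so dangerous, here is a hypothetical sketch of how password-reset links typically work; every name and URL below is invented for illustration. The token in the link is the only proof of identity the server checks, so whoever holds the email holds the account:

```python
import secrets
from typing import Optional

# Hypothetical server-side state: pending reset tokens mapped to accounts.
pending_resets = {}

def send_reset_email(account: str) -> str:
    """Generate an unguessable token and embed it in the emailed reset link."""
    token = secrets.token_urlsafe(32)
    pending_resets[token] = account
    return f"https://vendor.example/reset?token={token}"

def reset_password(token: str, new_password: str) -> Optional[str]:
    """Anyone presenting a valid token may set a new password."""
    account = pending_resets.pop(token, None)  # tokens are single-use
    if account is None:
        return None  # invalid or already-used token
    # ...store new_password for the account here...
    return account  # the caller now controls this account

link = send_reset_email("alice@example.com")
token = link.split("token=")[1]
# Whoever receives (or is forwarded!) the link controls the account:
print(reset_password(token, "attacker-chosen-password"))  # alice@example.com
```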
So, what's to be learned from this? Pay attention to what's being asked of you. If it seems slightly out of the ordinary, it probably is. Inboxes are being filled with more and more spam these days; some makes it through, and some even seems legitimate. It's up to users to educate themselves on how to detect and avoid these situations. In closing, I'll leave you with a list of things you can do to help protect yourself.
- If it seems too good to be true, it probably is. So use common sense people!
- Do not click on links in emails – period! Just because it says it’s a link to SiteA doesn’t mean it’s actually going there.
- Enable spam controls on your email client – if you’re using Outlook, Thunderbird, or even Gmail’s web interface – they are all pretty good at detecting what may or may not be spam.
- Use multiple email addresses, or use Gmail's '+' addressing (e.g., yourname+somelist@gmail.com) or mailnull, to help sort mailing list email and to see which of your addresses are being shared with others.
- Do not load images by default or at all.
- Do not enable scripting at all!
These are just the tip of the iceberg, but you get the idea. Help protect yourself and you’ll be helping to protect all of us.
Posted December 16 2011
One of the biggest complaints I've had with VMware vSphere and VMware ESX/ESXi over the last few years is that managing my virtual machines from my Mac was a hassle. The VMware management utilities are all Windows-only, and even the few web-based tools either do not work or are extremely limited on a Mac. While it isn't perfect yet, VMware vSphere 5 has made it so you can actually do just about anything you need to from a Macintosh computer; you just need to clear a few hurdles.
To enable the administration of your various virtual machines, storage, clusters, datacenters, and the like, you can now use the vSphere 5 Web Client. Before it can be used, it must be authorized; the best instructions I found for this are here. Follow the steps in the “Authorizing the vSphere Web Client (Server)” section. This is a one-time configuration necessary to enable the vSphere Web Client.
Once authenticated, you will see something that looks very similar to the Windows-based vSphere Client running in your browser.
This will satisfy most of your management needs, but it leaves out an all-important capability: the ability to remotely view the console of your systems. There's a Console button, but it won't work on a Mac. Once you've installed a machine, you can typically enable some sort of remote desktop capability in the operating system, but what do you do before then? If you're running Windows, you use the vSphere Client and open a console, but on a Mac, you're out of luck. Right? Wrong.
There is an under-documented feature of vSphere that lets you open VNC connections directly to the console of a virtual machine. To set this up, we first have to allow the incoming connections on your vSphere server, as vSphere 5 has an integrated firewall. This is the one step for which you will actually need the Windows vSphere Client; everything else can be done using the Web Client. This step needs to be executed once for each vSphere or ESXi host running virtual machines you want to access over VNC.
In the Windows vSphere Client, choose the host you wish to enable VNC connections on. Choose the Configuration tab and on the left choose Security Profile. On the right, next to Firewall, click Properties…. As VMware does not include VNC as a protocol, it is not listed as an available option; however, the ports allowed by the gdbserver protocol will suit our purposes. Check the box next to gdbserver. (It is also wise to highlight the gdbserver line, click the Firewall… button, and lock down where these VNC connections will be allowed to come from; I restricted ours to our intranet.) Click OK and you've now opened the incoming ports to be used for VNC.
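If you have SSH access to the host, the same firewall change can likely be made with esxcli; these are the ESXi 5 firewall commands as I understand them, so double-check them against your version before relying on them:

```shell
# Enable the gdbserver ruleset (the ports we are borrowing for VNC)
esxcli network firewall ruleset set --ruleset-id gdbserver --enabled true

# Optionally restrict which source addresses may connect
esxcli network firewall ruleset set --ruleset-id gdbserver --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id gdbserver --ip-address 192.168.1.0/24
```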
Finally, enabling VNC access to the console machines is a matter of setting advanced configuration parameters on each virtual machine, which can only be done when the virtual machine is off. To open up the advanced configuration:
- In the Windows vSphere client, choose the machine, click Edit Settings…, click the Options tab, choose Advanced->General on the left, and click Configuration Parameters… on the right.
- In the Web client, choose the machine, click Edit Settings… under the VM Hardware section, click VM Options, click Advanced, and click Edit Configuration….
In both cases, you now want to add three rows by clicking the Add Row button:
|RemoteDisplay.vnc.enabled||TRUE|
|RemoteDisplay.vnc.port||5900-5999 are the "standard" ports; choose one different from the other VMs on the host.|
|RemoteDisplay.vnc.password||the VNC password used to access the VNC session; only the first 8 characters are used by the VNC protocol, and its encryption is weak at that. Don't rely on this for security.|
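These parameters simply end up in the virtual machine's .vmx file, so with the VM powered off you could equivalently add lines like the following (the port and password shown are placeholders):

```
RemoteDisplay.vnc.enabled = "TRUE"
RemoteDisplay.vnc.port = "5901"
RemoteDisplay.vnc.password = "s3cret"
```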
Once you've added these rows and clicked OK, you can use a VNC client to connect to the console of the machine. Power up the machine, and then using Finder on the Mac, choose Go->Connect to Server (or hit Command-K), and type the following:
vnc://<ip or name of esxi host>:<port chosen in configuration settings>/
and click Connect. You will be prompted for your password, and depending on your client and version of OS X you may receive a warning that keystroke encryption is not enabled. Accept the warning, and you will see the console of the virtual machine! (And note, since Macs don't already use the three-finger salute, you can safely press Ctrl-Alt-Del in that VNC window to log into Windows systems!)
Once you’ve installed the operating system of choice, and enabled that OS’ remote desktop capability, you may want to disable this VNC access. Just shut down the VM, go back into the advanced options and change the RemoteDisplay.vnc.enabled setting to false.
Hopefully at some point soon, VMware will ship a true web-based console application (one which doesn't require host-specific plugins to be installed) to go with their nice new web client. Until then, this is a reasonable workaround for accessing virtual machines from a Mac.

Posted November 29 2011
A few years ago, a friend of mine served in Afghanistan. It was, as he described it, a long and mostly dull duty. When not busy with soldierly duties, he wrote on his blog and took pictures, often of the rather picturesque – to those who didn’t have to traverse it – scenery. At one point, however, he was informed that these landscape pictures were, in fact, an operational security violation. Not the ones taken in-camp, but the gorgeous panoramas of Afghani mountains and valleys. The theory was that, using those pictures, insurgents could find their position. My friend’s response was succinct: “I think they already know about the mountains, sir.”
In a previous job, I was charged with creating the security documentation for a particular government system, including the disaster recovery plan. That plan necessarily had to include the power requirements for the system. However, with a certain amount of digging, I discovered that by the standards to which I would be held, the simple fact that the servers used either 110V or 220V power was considered “secure unclassified information” and my report would require rather cumbersome treatment. Mind, what put it over the top was not that the servers required 110V, or that the servers required 220V, but simply that the servers might require one or the other. Or, in other words, that the servers required electricity in the same fashion as every other standard server. The bleedingly, patently, absurdly obvious. But that fact was somehow important for security.
There is a certain tendency, with respect to security, to classify, render confidential, or otherwise obscure every piece of information. I cannot count how many times I have heard “we can’t tell you what kind of encryption we use – that would make it insecure!” or some other variant. Indeed, there is a certain value to hiding some seemingly obvious pieces of information – the number of servers, the ports being used, the location of a datacenter in a building. These are not without purpose. There is no sense in making an intruder’s job any easier, and great value in making it as trudgingly difficult and annoying for them as possible.
But this must be tempered with a modicum of sense. In risk assessment terms, this means examining a piece of information and determining what level of risk it exposes. There is no sense in restricting the fact that servers run off of electricity; an intruder knows that – it’s not something that takes much knowledge to figure out. There’s no sense in hiding the fact that a base which is in contact with the local population can see the mountains – the insurgents know that. These are obvious things.
And there’s an important psychological component there. By trying to secure patently obvious things, security by obscurity (already a bad idea) becomes security of absurdity. The very concept of security becomes eroded. Yes, it’s easier to treat all information as secure, but the end users won’t view it that way. What they’ll see – correctly – is a security posture which has gone amok and which is not connected to the reality of their work. And they’ll start ignoring it because it’s ridiculous. And then they’ll be ignoring actually sensible security; they’ve lost confidence in the directives and the purpose behind them. And then you have a problem.
The point is to maintain a real connection with the people who have to implement security directives. As I've said before, their job is not to keep your infrastructure secure – their job is, well, their job. To keep people following secure processes, they have to be invested. They have to be able to understand why they're doing these things. You have to acknowledge that they know the mountains are there, and work within that reality.

Posted October 31 2011
A lot of what we security professionals do includes protecting information from being compromised (especially personal information). The shift towards more profit-driven computer crime has happened swiftly over the last decade, so it should come as no surprise that there is a booming underground market centered almost entirely around compromised financial and personal data. And, on the other end of the spectrum, we have laws and regulations to help minimize the leakage of this data in the first place. Plenty of research and documentation exists for the many ways we try to protect information, but there isn’t much (public) info on the underground market populated by the attackers and their associates who trade in illegally-gotten information. So, how do someone’s bank account credentials grease the wheels of this unique ecosystem?
The Underground Economy: Priceless (pdf) touches on the subject in a great amount of detail, even explaining the importance of reputation and the lengths people go to in order to avoid prosecution.
Essentially, public and private servers host communities of individuals who offer their services for a fee. Maybe one person will help someone else cash out an entire bank account (for a 50% cut). And maybe another person will deliver ill-purchased goods to a safe location (for a 30% stake). In the mix are also those who initially did the work (or wrote the code) to capture the information, as well as people who specialize in forging IDs, curious researchers, law enforcement… the list goes on. Compromised financial data seems to lead to a very deep chain of events that attracts many people with varying skillsets, most of whom are simply offering to perform the same hustle(s) over and over. It is a system where both information and skills are bartered/exchanged and high risk is accepted for high returns on investment.
But not all participants are highly skilled– there should be some low-hanging fruit in there too, right? Surely, there are people who aren't as cautious or who miscalculate their risk of exposure, yet we still have trouble keeping up with even a fraction of the online fraud. While I'm glad we are focusing efforts on preventing information from being compromised in the first place, I feel like there is a growing opportunity to focus a lot more research on thwarting these high-risk behaviors directly. Sometimes you have to treat both the symptom and the cause.

Posted October 7 2011