A few years ago, a friend of mine served in Afghanistan. It was, as he described it, a long and mostly dull deployment. When not busy with soldierly duties, he wrote on his blog and took pictures, often of the rather picturesque – to those who didn’t have to traverse it – scenery. At one point, however, he was informed that these landscape pictures were, in fact, an operational security violation. Not the ones taken in-camp, but the gorgeous panoramas of Afghan mountains and valleys. The theory was that, using those pictures, insurgents could find the unit’s position. My friend’s response was succinct: “I think they already know about the mountains, sir.”
In a previous job, I was charged with creating the security documentation for a particular government system, including the disaster recovery plan. That plan necessarily had to include the power requirements for the system. With a certain amount of digging, however, I discovered that, by the standards to which I would be held, the simple fact that the servers used either 110V or 220V power was considered “sensitive unclassified information,” and that my report would accordingly require rather cumbersome handling. Mind, what put it over the top was not that the servers required 110V, or that they required 220V, but simply that they might require one or the other. In other words, that the servers required electricity in the same fashion as every other standard server. The bleedingly, patently, absurdly obvious. But that fact was somehow important for security.
There is a certain tendency, where security is concerned, to classify, render confidential, or otherwise obscure every piece of information. I cannot count how many times I have heard “we can’t tell you what kind of encryption we use – that would make it insecure!” or some variant thereof. To be sure, there is real value in hiding some seemingly trivial pieces of information – the number of servers, the ports in use, the location of a datacenter within a building. There is no sense in making an intruder’s job any easier, and great value in making it as grindingly difficult and annoying for them as possible.
But this must be tempered with a modicum of sense. In risk assessment terms, that means examining each piece of information and asking how much risk its disclosure actually creates. There is no sense in restricting the fact that servers run on electricity; an intruder knows that – it takes no special insight to figure out. There is no sense in hiding the fact that a base in regular contact with the local population can see the mountains – the insurgents know that. These are obvious things.
And there’s an important psychological component here. By trying to secure the patently obvious, security by obscurity (already a bad idea) becomes security by absurdity. The very concept of security erodes. Yes, it’s easier to treat all information as sensitive, but the end users won’t view it that way. What they’ll see – correctly – is a security posture run amok, disconnected from the reality of their work. And they’ll start ignoring it, because it’s ridiculous. And then they’ll be ignoring the genuinely sensible security measures too, having lost confidence in the directives and the purpose behind them. And then you have a problem.
The point is to maintain a real connection with the people who have to follow security directives. As I’ve said before, their job is not to keep your infrastructure secure – their job is, well, their job. To keep people following secure processes, they have to be invested in them; they have to understand why they’re doing these things. You have to acknowledge that they know the mountains are there, and work within that reality.