One of the irritating problems I have to deal with as a developer is the fact that I don’t really get to make any decisions. Sure, I can make design decisions regarding the implementation of code modules, but when it comes to post-deployment issues, especially regarding security, all I can do is make some suggestions to managers and hope for the best. For example, in the extremely unlikely event that I find a security hole in one of my impeccably designed modules, I can write up a summary of the bug, the mitigating factors, the risks, and the level of effort it would take to fix it. But it’s not really up to me whether it winds up getting fixed or not.

A large factor in this is that the people who make decisions about patching security vulnerabilities, at least in custom applications, have very little at stake when a breach happens. If a mechanic puts bad lug nuts on a tire and the tire falls off, clearly it’s the mechanic’s fault. If management decides that a security hole would cost too much to patch and a breach occurs, it’s never management’s fault: it’s the intruder’s fault, or the developer’s fault, or it was just a calculated risk that turned out poorly. Also, managers are just as human as developers; decisions about application maintenance can also be affected by fear of changing a “functioning” application, by laziness, or by being too busy with other things to fix something that isn’t broken.

Wouldn’t it be great if there were some accountability for everyone involved in the maintenance of an application in the case of a security breach? Unfortunately, because the outcome of an intrusion usually doesn’t directly affect the people who could have prevented it, there’s not much incentive for vigilance or threat of punishment for inattention. There are far too many ways to pass blame around, and not nearly enough ways to encourage people to consider security as essential as functionality.

One thought on “Insecurity through Management”

  1. RaSchi says:

    I think what you’re looking for is the concept known in economics as “internalization of external effects.” A manufacturing company that saves on filters and waste disposal can produce more cheaply, but pollutes the environment. This is called an external effect, because society at large (external to the company) suffers the cost. A company that does spend on filtering, etc. has a competitive disadvantage due to higher costs, and thus there is no economic incentive to care about the environment. Society as a whole can create such an incentive by, for example, introducing penalties for environmental pollution. This mechanism is called internalizing the external effect, because now the effect of having to pay a penalty is internal to the company, and the size of the penalty should at least level out the competitive disadvantage that the environment-friendly company had had.

    There’s a book called “Geekonomics – The Real Cost of Insecure Software” (http://www.geekonomicsbook.com/) which claims to apply this concept to software security. I haven’t read the book yet; it just arrived from Amazon the other day, and as soon as I’ve finished my current reading, it’s going to be the next one.
