Meet Cormac Herley, a security researcher at Microsoft Corp. (Nasdaq: MSFT), who recently dug into why users reject sound security advice.
Herley argues that most security advice offers users a poor cost-benefit tradeoff, which leads them to reject it. He offers an economics-based analysis of three pieces of well-known security advice: password rules, recognizing phishing sites by reading URLs, and heeding certificate errors. And he argues that users believe the effort required to comply with best practices is more onerous than the actual risk of being compromised.
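The economics can be illustrated with back-of-the-envelope arithmetic. The sketch below is mine, not Herley's, and every figure in it (population size, wage, victim rate, loss per victim) is an illustrative assumption:

```python
# Illustrative cost-benefit sketch of a piece of security advice,
# in the spirit of Herley's analysis. All figures are assumptions.

users = 200_000_000          # user population following the advice
minutes_per_day = 1          # assumed daily effort to comply
hourly_wage = 15.0           # assumed value of a user's time, USD/hour

# Annual cost of compliance across the whole population
compliance_cost = users * minutes_per_day / 60 * hourly_wage * 365

# Expected annual loss the advice prevents (assumed rate and loss)
victim_rate = 0.005          # fraction of users compromised per year
loss_per_victim = 1_000.0    # assumed average loss per incident, USD
expected_loss_averted = users * victim_rate * loss_per_victim

print(f"Compliance cost:       ${compliance_cost:,.0f}")
print(f"Expected loss averted:  ${expected_loss_averted:,.0f}")
# Under these assumptions the population spends far more time-cost
# complying than the advice could ever save, so rejecting it is
# economically rational from the user's point of view.
```

Even one minute a day, valued modestly, adds up to billions across a large user base; the advice has to avert losses on the same scale to pay for itself.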
In his conclusion, Herley writes that "users are never offered security, either on its own or as an alternative to anything else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips. These sometimes carry vague and tentative suggestions of reduced risk, never security."
This made me think about the practice of cyber security today. I can open any book in my library on information security and see the same recommendations found in any company's security policies. But we still have issues with rootkits and phishing. We push our developers and vendors to deliver solutions with sound security built in, but we still get products that fall short from a security standpoint.
OWASP has published its Top 10 for 2004 and 2007, plus a 2010 release candidate, and cross-site scripting, injection attacks, and broken authentication appear on every list. The PCI Security Standards Council offers plenty of best practices as advice, but we still see spectacular breaches such as the one at Heartland Payment Systems. Is it that no one cares? Or is the effort of compliance greater than the cost of the incident?
I agree with Herley's assessment. Take password complexity, for example. If the number of passwords we need is small (one to three), complexity may not be a burden. But we access many Websites that require credentials, and as the number grows, the effort (cost) of remembering multiple hard passwords far outweighs the user's risk of compromise. And if a keylogger is involved, all that complexity is wasted anyway.
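A rough sketch makes the tradeoff concrete. The entropy figures below are upper bounds for truly random passwords (user-chosen passwords carry far less), and the character-set sizes are my own illustrative choices:

```python
import math

# Rough entropy (bits) of a random password drawn uniformly from a
# character set. Real user-chosen passwords are far weaker than this.
def entropy_bits(charset_size: int, length: int) -> float:
    return length * math.log2(charset_size)

simple = entropy_bits(26, 8)     # 8 lowercase letters
complex_ = entropy_bits(94, 8)   # 8 chars from full printable ASCII

print(f"simple 8-char:  {simple:.1f} bits")
print(f"complex 8-char: {complex_:.1f} bits")
# The per-password security gain from complexity rules is modest, but
# the user's memorization cost scales with every additional site that
# demands a unique complex password -- and a keylogger captures either
# password just as easily, erasing the gain entirely.
```

The per-site gain is around 15 bits under these assumptions, while the user's burden multiplies with every site; against a keylogger, both numbers are irrelevant.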
What, exactly, is the risk? Are we talking about a corporate financial system, or a Website for viewing photos from the family reunion? The difference is between compromised photos of Uncle Joe and a breached strategic enterprise system that could cost someone a job.
And what is the impact? Look at the number of botted machines on the 'Net. The owners of those machines generally don't know they are botted, and they experience no loss other than a poorly performing PC. According to a report in the Boston Globe, a review of TJX's compliance with PCI Security Standards revealed that the company met only three of the 12 requirements. The Electronic Transactions Association noted that TJX was aware of these deficiencies in 2004 and took no action to correct them. My guess is the cost-benefit tradeoff seemed acceptable to TJX at the time. The company probably has a different opinion today.
We as cyber-security professionals must be cognizant of real risks when we give advice or set policy. We must also communicate those risks clearly, in some manner other than FUD. Users have a sense of how much risk they are willing to assume; only when the risk exceeds that threshold will they comply.
— Bruce Kaalund is the cyber security group leader for a large telecommunications company.