Why Cyber Risk Is Different
In a recent interview, cybersecurity guru Bruce Schneier made the case that many organizations will never improve their cybersecurity until there is a compelling incentive for them to change their posture. He used air travel, automobiles, food safety, and pharmaceuticals as examples of industries that all experienced decades of crashes and safety issues until public pressure resulted in government intervention and safety regulations.
He argued that fundamental change only comes when there is external pressure for it. Those other industries could not change themselves, and neither will cybersecurity; we now need that same shift in information technology. He went on to state that we should not have to be cybersecurity experts in order to use IT infrastructure safely, and that efforts like end-user training are misguided. We don't need to be food safety experts to enjoy a meal at a restaurant or to board a plane, so why should we need that expertise for information technology? He equated security training for users to blaming users for bad technology design.
His argument appears compelling. We advise users not to plug in USB drives, not to click on links, and not to open attachments. Yet that is precisely how these things were designed to be used. So is this a design failure or a failure on the user's part? Strong safety regulation appears to have worked elsewhere, so why not in cybersecurity? It's a persuasive argument, but it's incomplete and will not achieve the predicted results.
Let's start with the concept of market failure, which is usually the main driver of regulation. It does seem true that there is no market-based incentive to solve cybersecurity issues. Take hyperscale cloud providers' security as an example. The share prices and market shares of Microsoft, Amazon and Google seem completely impervious to what feels like an endless parade of horror stories about unforeseen security flaws.
Apparently, their customers are not demanding more secure products and services. If anything, customers appear content to pay more just to use those services more securely. These vendors monetize their own insecurity. This is the equivalent of paying extra at a restaurant to avoid food poisoning. Would you be happy if business class tickets came with more safety features than economy class tickets? Would you pay more for seatbelts and lifejackets on an airplane? It's inarguable that more regulation is worth discussing. However, there are some important considerations before demanding it.
When regulating industries like pharmaceuticals, restaurants or airlines, the dominant adversary is Mother Nature. This is why, for the most part, it is possible to assume that the safety features and regulations we prescribe today will still protect us in the future. Automobiles feel a bit more like cybersecurity because there is a larger human "adversary" element to account for. User training makes sense in the context of cars: drivers need to be trained to use cars safely. Much like a USB stick or a URL, there are safe and unsafe ways of using parts of an automobile such as the steering wheel or the accelerator. We rightly hold drivers accountable, and we blame, fine and even convict them for unsafe operation like speeding.
User training clearly works in many contexts, but the dirty secret in cybersecurity is that most security awareness training is largely ineffective. The type of training matters, and it matters even more when the ground shifts under the user base. A great example of this happened during the early stages of the pandemic in 2020. Mimecast researchers found that end users' propensity to fall for cyberattacks rose to three times the previous year's baseline. Yet customers who used Mimecast Awareness Training were 5.2x less likely to fall prey to these threats than organizations with no awareness training. One might assume this would be true for all types of user training, but the efficacy of other vendors' awareness training programs ranged from limited impact to none at all. This emphasizes a clear need for the right kind of cybersecurity training: training that results in higher retention and engagement is effective in helping people and businesses prepare against potential threats.
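To put those multipliers in perspective, here is a back-of-the-envelope sketch in Python. The 2% baseline click rate is my own hypothetical figure, chosen purely for illustration; only the 3x and 5.2x multipliers come from the research cited above.

```python
# Illustrative arithmetic only: the 2% baseline click rate is a hypothetical
# figure; the 3x and 5.2x multipliers are the ones quoted above.
baseline_click_rate = 0.02                 # assumed pre-pandemic click rate
untrained_rate = baseline_click_rate * 3   # propensity roughly tripled in 2020
trained_rate = untrained_rate / 5.2        # trained users were 5.2x less likely to fall for it

print(f"Untrained users: {untrained_rate:.1%} of lures succeed")
print(f"Trained users:   {trained_rate:.1%} of lures succeed")
```

Whatever the true baseline, the relative gap is the point: the same shift in the threat landscape produces very different outcomes depending on whether, and how, users were trained.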
This example also highlights that while the threat landscape for cars, food, air travel, and pharmaceuticals is relatively constant, for information technology it evolves constantly, and sometimes in large Cambrian leaps. Even for automobiles, the type, scale and frequency of adversarial tactics from other drivers is usually easy to learn and remains relatively stable over a driver's lifetime. By contrast, with information technology, adversaries are constantly evolving their tactics. Any form of regulation or user training inevitably goes stale and declines in efficacy over time.
So, am I giving a free pass to large vendors by suggesting that regulation will be ineffective? In part yes, because it depends on the type of regulation proposed. To be fair to Bruce Schneier, he never outlined the type of regulation he had in mind, but if we assume it is simply a more enforceable extension of existing frameworks and risk management approaches, then it will inevitably be ineffective. Maybe not at first, but gradually the threat landscape will shift faster than the regulatory and compliance landscape. That's because regulation tends to lag technological advances, and adversaries evolve their tactics faster than security standards can change. Large vendors and their clients may achieve technical compliance but will inevitably become exposed to new forms of attack.
Big vendors already adhere to a raft of security standards. It's a well-worn discussion in information security that compliance won't guarantee effective cybersecurity. Business forces, like internal business risk analysis, drive organizations to align their security posture with required practice rather than secure practice. There are already reasonable information security standards and regulations in many countries. It makes sense for a business to adhere to local, national and industry regulations, and large vendors know this and usually align well with these standards. But there is a large degree of theatre here.
Most of these standards are based on some form of risk management approach. There are numerous frameworks and control assessments that support cybersecurity risk management. These frameworks, such as ISO 27005, CRAMM, MAGERIT, EBIOS, NIST SP 800-30, ISO 27001, CCRA 2009 and the CSA CCM 2016, have extensive catalogues of threats, assets, and controls and provide detailed guidelines for enhancing an organization's security posture.
What is missing is the adoption of approaches that combine the above methods with adversarial risk analysis. Almost no framework accounts for the changing nature of the risk. This is doubly true for the laws and regulations that are based on these frameworks and assessments. The anticipation of what the adversary might do next month or next year is missing from most risk analyses.
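To illustrate the gap, here is a deliberately simplified sketch of the difference between a static likelihood-times-impact score and one that projects the likelihood forward as adversaries adopt a technique. None of the numbers or the growth model come from the frameworks above; they are assumptions chosen only to show the shape of the idea.

```python
# Deliberately simplified contrast between a static likelihood x impact score
# and one that projects likelihood forward as adversaries adopt a technique.
# Every number below (likelihood, impact, growth rate) is hypothetical.

def static_risk(likelihood: float, impact: float) -> float:
    """Classic likelihood x impact score, assessed once and left unchanged."""
    return likelihood * impact

def adversarial_risk(likelihood: float, impact: float,
                     monthly_adoption_growth: float, months_ahead: int) -> float:
    """Same score, but with likelihood compounded forward to anticipate
    how widely attackers may be using the technique in the future."""
    projected = min(1.0, likelihood * (1 + monthly_adoption_growth) ** months_ahead)
    return projected * impact

impact = 0.8              # hypothetical business impact on a 0-1 scale
likelihood_today = 0.05   # hypothetical chance of the attack succeeding today

print(static_risk(likelihood_today, impact))                  # frozen at assessment time
print(adversarial_risk(likelihood_today, impact, 0.15, 12))   # same threat, 12 months out
```

A control that looks like a rounding error in the static score can dominate the register a year later once adversary adoption is factored in, which is exactly the dynamic that static frameworks and the regulations built on them tend to miss.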
But regulation is not a dead idea. One area where regulation could make a huge difference is the problem of unaddressed known vulnerabilities. This is a clear market failure. All too often we hear that a vulnerability has been brought to the attention of a mega vendor, only to discover that their chosen response is to delay or do nothing. This could be forgiven in a context where limited resources are already committed to fixing other problems. But in the main, their responses can be characterized as steering development resources towards building products and features that generate revenue. A vendor can sell a new product but cannot easily sell a patch. Regulation aimed at disincentivizing these practices would help. Customers making product purchases contingent on the development of key patches would also help.
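As a rough illustration of what such a rule might look like in practice, here is a small sketch that flags disclosed vulnerabilities still unpatched past a hypothetical 90-day deadline. The window and the sample records are assumptions for illustration, not a reference to any existing regulation.

```python
from datetime import date

# Hypothetical remediation rule: a fix must ship within 90 days of disclosure.
# The window and the sample records below are purely illustrative.
REMEDIATION_WINDOW_DAYS = 90

disclosed = [
    {"id": "VULN-001", "disclosed": date(2023, 1, 10), "patched": date(2023, 2, 1)},
    {"id": "VULN-002", "disclosed": date(2023, 3, 5),  "patched": None},
]

def overdue(vuln, today=date(2023, 9, 1)):
    """True if the vulnerability is still unpatched past the remediation window."""
    if vuln["patched"] is not None:
        return False
    return (today - vuln["disclosed"]).days > REMEDIATION_WINDOW_DAYS

print([v["id"] for v in disclosed if overdue(v)])   # -> ['VULN-002']
```

The mechanism is trivial; the hard part is attaching a consequence to the overdue list, whether a regulatory penalty or a customer walking away from a renewal.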
While I agree with Bruce Schneier that cybersecurity won't improve until there is a compelling incentive to change, I don't believe this will come from government or industry regulation alone. We need a combination of factors. Regulation is a necessary but not sufficient condition, and the type of regulation matters. Forcing vendors to patch known vulnerabilities within a set timeframe is one type of regulation I would like to see. But because cybersecurity revolves around human adversaries, we also need to train humans to use IT safely. Again, the type of training matters. And we need to measure risk appropriately. The type of risk analysis matters.
Cybersecurity can be improved. But doing so requires an understanding that checkbox compliance merely measures required practice, and this is not the same as secure practice. It requires building and maintaining a human firewall to ensure safe use of technology. But it also requires us, as customers, governments and citizens, to demand that all vendors make their products as secure as possible and that known weaknesses are not ignored in the pursuit of vendor revenue. In short, it is a complex adaptive system: an arms race against an ecosystem of adversaries that requires adversarial risk analysis to determine the right level of investment, and it needs to be constantly attended to and updated.
I am confident we can get there, so that in the future every industry will be as protected as possible and will know how to remain protected against ever-escalating risk.
I think there is also an important contextual issue with regard to the end result of the potential cyber risk. With cars, food, air travel, and pharmaceuticals we have a sense of actual personal harm, but for cyber risk it is more nebulous. It is portrayed with the usual image of a hooded "hacker" in a dark room with a laptop, against a background of some random JavaScript. When you combine that with a corporate culture of passing the risk-mitigation hot potato between the IT team, the vendor and the end users, it's easy to see why it gets so little attention outside the cyber industry bubble. If clicking on a "bad" URL caused someone's connected pacemaker to suddenly stop working, or took remote control of a Tesla and targeted pedestrians, it would get more attention.
What's said here is all true, and all important. It did make me think of what a boss of mine said many years back: the only thing that will change behaviour is pain. Little kids will play with fire no matter what you tell them until they burn their fingers. Adults will plug in USB sticks until they're fired. Mimecast's user training probably works because those who fall for the fake phishing mails get embarrassed in front of IT (and/or their peers). There is also the issue of "my problem, your problem, our problem". A business decision maker will only spend money on security assessments and mitigation if a security problem is their problem, that is, if they will feel actual pain for not taking it seriously. As long as it's "your problem" (the IT guys who will be hung out to dry after a hack) or "our problem" (the faceless incarnation of their employer that will need to pay a fine), they will punt any hard decisions, like spending proper money on security remediation.