NHS ransomware attack: who is to blame?

The second week of May turned out to be quite rich in cyber security news. On Friday, while the concerned public was still discussing the clever techniques used by the new French president's IT team to deal with Russian government hackers, another major story topped the bill – tens of thousands of workstations, including those belonging to UK NHS trusts, were taken down by the WannaCry ransomware. By Sunday, the virus had affected over 200,000 workstations around the world, and could have affected many more had it not been quasi-magically brought to a halt by an anonymous UK security researcher.

The attack sparked a lively discussion in the cyber security community and beyond about the reasons that made the attack possible and so devastating. Many fingers pointed at Microsoft Windows, naming vulnerabilities in the OS as the primary reason for the attack. The spiciest commentators even called for banning Windows in favour of other, 'more secure', operating systems.

Frankly speaking, this came as a bit of a surprise to me. While there was indeed a time when Microsoft's OS was a target of heavy criticism for its insecurity, the company eventually did a really good job of catching up with industry best practices, and has incorporated a decent set of security features into Windows XP SP2 and every version since.

While there is little doubt that vulnerabilities in the corporation's products were used by the malware to spread itself around, it is important not to confuse the reason for the attack – the cyber criminals who created the malware and launched it for easy gain – with the means they used to conduct it.

Consider chickenpox. This highly contagious virus spreads from person to person through the air, by coughing and sneezing. Still, no one of sound mind views human beings as the cause of the disease. Instead, the cause of chickenpox is the varicella zoster virus, while human beings merely play the role of carriers. Returning to cyberspace, carrier is the role Windows played in the WannaCry case.

Modern software products are extremely sophisticated systems. We should admit that at the current stage of human development it is nearly impossible to create a perfect, bug-free software product. The chance of a compound system built from a number of interconnected components having an exploitable security flaw is therefore very close to 100%. Virtually every software product can be used to perform a successful attack of one kind or another. The likelihood of a product being used as a tool for a virus attack depends not so much on its origin, vendor, or underlying technology as on its popularity and its capability to spread the virus as quickly as possible. No doubt that if Windows suddenly disappeared, with its market share taken over by OS X, the next virus would target Apple's platform.
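For a back-of-the-envelope feel for that figure, assume each of n components independently carries some small probability p of hiding an exploitable flaw; the chance that the compound system has at least one flaw is then 1 - (1 - p)^n. A minimal Python sketch (the 2% per-component figure is purely illustrative, and real components are of course neither independent nor identical):

```python
# Illustrative arithmetic only: if each of n independent components has a
# hypothetical probability p of containing an exploitable flaw, the chance
# that the compound system has at least one flaw is 1 - (1 - p)^n.

def p_system_flawed(p_component: float, n_components: int) -> float:
    """Probability that at least one of n components is flawed."""
    return 1 - (1 - p_component) ** n_components

# Even a modest 2% per-component chance approaches certainty at scale:
for n in (10, 100, 500):
    print(f"{n} components -> {p_system_flawed(0.02, n):.4f}")
    # 10 -> 0.1829, 100 -> 0.8674, 500 -> 0.9999
```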

And one more consideration.

Undoubtedly, the most efficient method of preventing a chickenpox infection is to stop communicating with other people entirely. Surprisingly, no one does that. People continue going out on Friday nights, risking catching chickenpox from someone in the bar. Parents continue taking their children to school, risking them getting the disease and spreading it to their siblings and to the parents themselves.

Why? Because common sense is not only about avoiding risk. In fact, avoidance is just one of [at least] four methods of dealing with risk, the other three being reduction, retention, and transfer. When mitigating the risk of catching chickenpox, people normally stick to retention ("ah, whatever!") or reduction (vaccination), but extremely rarely to avoidance.

The same approach translates to cyberspace. There is not much point in turning your computer into a stronghold by protecting it from all possible attacks. You can spend a fortune and still never reach the 100% protection bar.

Instead, think wider and try to mitigate the risk in a wiser way. It is very likely – see above – that one day or another you will get ransomware or some other kind of cyber scum on your PC. So instead of trying to avoid the unavoidable, why not treat the unfortunate event as if it were set to happen tomorrow, and employ preventive countermeasures, such as data backups and an emergency response plan, before it's too late?
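To make the backup part of that advice concrete, here is a minimal sketch of a timestamped offline backup in Python. The paths are placeholders, and a real emergency response plan would also cover off-site copies, retention rules and, crucially, regular restore drills:

```python
# A minimal sketch of the "assume it happens tomorrow" mindset: a timestamped
# offline backup. Paths below are placeholders for illustration only.

import tarfile
from datetime import datetime
from pathlib import Path

def back_up(source: str, dest_dir: str) -> Path:
    """Pack `source` into a timestamped .tar.gz archive under `dest_dir`."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    archive = Path(dest_dir) / f"backup-{datetime.now():%Y%m%d-%H%M%S}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    return archive

if __name__ == "__main__":
    # Hypothetical source and destination; point these at real locations.
    print(back_up("/home/user/documents", "/mnt/offline-drive/backups"))
```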

So who is to blame in the end?

The question becomes much simpler if we extend common crime-handling practices to cyberspace. Information and computer systems are business assets. If a computer system becomes unavailable due to a ransomware attack, the attack obviously constitutes criminal damage at the very least. If the unresponsiveness of the system makes it impossible for the business to conduct its operations, the legal treatment of both the attacker and the victim will depend on the criticality of the business and its responsibilities under the relevant acts of law.

Those will obviously differ for a corner shop and a hospital, with the latter being legally obliged to ensure the continuity of its operations through most emergencies. Just as hospitals deal with potential power outages by installing emergency generators, they must employ appropriate security measures to minimise the impact of cyber attacks on their operations – including their choice of a secure operating system and any add-on protection methods. It is the hospitals' legal obligation to make sure they continue to operate at an appropriate level if their systems go down due to an accidental crash or a deliberate attack. As for the attackers, in the case of a large-scale hospital system crash they will most likely not get away with mere criminal damage, with escalated charges ranging from causing harm to health to manslaughter.

In this sense, cyber attacks are no different to traditional crimes against businesses and citizens. While their arena, methods, tools, and scale differ greatly from those of traditional crimes, the principal relationships between the subjects and objects of crime, as well as their legal treatment, largely remain the same.

Good News

One of the English translations of Victor Hugo's words On résiste à l'invasion des armées; on ne résiste pas à l'invasion des idées reads as 'No army can stop an idea whose time has come.' In our case, the army is even going to help promote such an idea instead of resisting it.

The Atlantic Council is set to host a discussion long awaited by me and a solid crowd of experts in information security, business continuity, and cyber risk management. The Cyber Risk Wednesday: Software Liability discussion will take place on November 30th in Washington, DC.

The discussion will be dedicated to the difficult question of increasing the liability of software vendors for defects in their products, and of ways to trade it off against economic factors. Taking into account the extent to which software, in a variety of forms, permeates the most intimate aspects of our lives (such as a smart house running a hot tub for you), as well as the extent to which we trust software to manage our lives for us (letting it run driverless cars and smart traffic systems), the question of liability is vital – primarily as an incentive for vendors to employ proper quality assurance and quality control processes. That's why I wholly welcome the Atlantic Council's initiative, and truly hope that it will help raise awareness of the problem and give a push to a wider public discussion of it.

On security perimeters

It is my humble point of view that the 'security perimeter' concept, widely used by security professionals and laymen alike, does more harm than good. There are many reasons why, but the main one is that the concept gives a false sense of security.

If you ask an average person what a security perimeter is, they will probably tell you something like 'it is a warm and cosy place where I can relax and have my cake while everyone outside is coping with the storm.'

The problem is that this is not entirely so. Contrary to popular belief, security risks don't go away when you are inside the perimeter. Instead, they transform – they change their sources, targets, and shapes – but they are still there, waiting for the right moment to strike. What is particularly bad is that those risks are often overlooked by security staff, who concentrate only on the risks posed by the hostile outside environment (the storm), but not on the 'safe' environment inside the perimeter (say, a stray cherry stone in the cake that might cause the man to choke to death).

The chaos at JFK is a good illustration of this point (well, not for its participants). For sure, the area of the supposed shooting was viewed by security people as belonging to the security perimeter (and an extremely well-protected one – I bet it's nearly impossible to get into the area even with a fake gun, let alone a real one). They probably believed that as long as the borders of the perimeter were protected up to eleven, they didn't need to care about anything happening inside it. Indeed, they might have done a great job of protecting the passengers from gunfire, but they overlooked an entirely different type of risk – one which, happily, didn't cause any casualties.

That's why any security perimeter (in the sense of a 'straightforward defence facility') should be viewed not as a security perimeter, but rather as a transition point from one security setting to another. The inner setting is in no way more secure than the outer one – and sometimes it can even be more dangerous (imagine there's no one around to help the choking man deal with the stone). Thinking this way helps to form a clearer picture of the variety of risks targeting each particular security setting, and to come up with appropriate countermeasures.

Cloud services as a security risk assessment instrument

One of the hidden gems of cloud computing platforms (how many more of them are out there?) is the possibility of performing quite accurate quantitative assessments of the risks to security systems.

The strength of a big share of information security measures rests on the computational complexity of attacks on the underlying security algorithms. To name a few: you need to factor a 2048-bit integer to crack an RSA-2048 key, you need to make an average of 2^127 tries to recover a 128-bit AES key, you need to iterate over 20 million dictionary passwords to find the one matching a hash, and so on – I'm sure you've got the idea. All of these tasks require enormous amounts of time and computational resources, and the unavailability of those resources to the vast majority of potential attackers is the cornerstone of modern data security practices. This hasn't changed much for the last several decades – yet something around it has.
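To put rough numbers on those claims, here is a tiny sketch; both throughput figures are hypothetical placeholders rather than benchmarks:

```python
# Back-of-the-envelope work factors for the attacks mentioned above.
# Both throughput figures below are hypothetical, not real benchmarks.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# AES-128: an average of 2^127 tries, assuming 10^9 key checks per second.
aes_years = 2 ** 127 / 10 ** 9 / SECONDS_PER_YEAR
print(f"AES-128 brute force: ~{aes_years:.2e} years")  # ~5.4e21 years

# 20M dictionary passwords against a deliberately slow password hash,
# assuming 10^4 hash computations per second.
dict_hours = 20_000_000 / 10 ** 4 / 3600
print(f"Dictionary attack: ~{dict_hours:.1f} hours")   # ~0.6 hours
```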

In ye goode olde days, a security architect had to rely on some really vague recommendations when deciding which security parameters to employ, recommendations that often sounded more like voodoo predictions than a well-defined, formally justified methodology. Those from NIST, for example, essentially say: 'if you want your data to be secure up until 2030, protect it with 128-bit AES.' Hmm, okay. And what are the chances of my data being cracked by 2025? By 2035? What if the data I encrypt is really valuable – would it be worthwhile for the attacker to bend over backwards and try to crack the key well before 2030? What price would they have to pay to do that, and what are their chances of success?

The rise of cloud computing platforms brought a great deal of certainty to the table. With commercial cloud platforms available, one can estimate the cost of breaking a computation-dependent security scheme remarkably accurately. Back in 2012, the cost of breaking a scheme could hardly be estimated at all. It was believed that the NSA probably had the power to break 1024-bit RSA, and that a big enough hacker group could probably break SHA-1 with little effort. Probably.

Everything is different today. Knowing the required durability of the security system they need to deploy or maintain, and being aware of the computational effort needed to break it, a security architect can estimate the ceiling of the price an attacker would have to pay to conduct a successful attack on the system – in dollars and cents.

To obtain that estimate, the security architect would create a scalable cloud application that emulates the attack – e.g. by iterating over those 20 million passwords in a distributed manner. They would then work closely with the cloud service provider to figure out the price of running that application in the cloud, which will be a function of the system's security parameters and the amount of time needed to conduct the attack. Having built the price function, they would be able to make a justified and informed decision about the security parameters to employ, balancing the attack's duration and cost against whatever benefits the attacker would gain from a successful attack. This is a huge step forward in the field of security risk assessment, as it allows the strengths and weaknesses of a security system to be described in well-defined 'I know' terms rather than 'I feel' ones, and the system to be viewed from a business-friendly 'profit and loss' perspective rather than an enigmatic 'vulnerabilities and their exploitation' one.
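Here is a minimal sketch of what such a price function might look like, assuming hypothetical per-instance throughput and hourly price figures; a real assessment would take both from the provider's benchmarks and price schedule:

```python
# A sketch of the price function described above. All figures passed to it
# below are hypothetical placeholders.

def attack_cost_usd(search_space: int,
                    tries_per_sec_per_instance: float,
                    price_per_instance_hour: float) -> float:
    """Upper bound on the attacker's total cloud bill for exhausting
    `search_space` candidate keys or passwords."""
    instance_hours = search_space / tries_per_sec_per_instance / 3600
    return instance_hours * price_per_instance_hour

def attack_hours(search_space: int,
                 tries_per_sec_per_instance: float,
                 instances: int) -> float:
    """Wall-clock duration with `instances` machines working in parallel."""
    return search_space / (tries_per_sec_per_instance * instances) / 3600

# Example: 20 million dictionary passwords against a slow password hash,
# at 10,000 tries/second per instance and $0.50 per instance-hour.
print(f"Total cost: ${attack_cost_usd(20_000_000, 10_000, 0.50):.2f}")
print(f"Time on 100 instances: {attack_hours(20_000_000, 10_000, 100):.4f} h")
```

Note that the total bill is independent of how many instances run in parallel: the attacker can trade time for instances, but not money, which is exactly what makes the estimate a reliable ceiling.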

It is worth mentioning that a good security architect would then monitor any changes affecting the cost of breaking the system, including changes in cloud providers' SLAs and price schedules, and be prepared to make the necessary amendments to the risk figures and the security plan. With computation prices going down all the time, reviewing the risks periodically is vital to guaranteeing the continued security of the system.