When the theft is inevitable

The hack of the Equifax data centre, followed by Yahoo’s revelation that all 3bn of its user accounts had been exposed (as opposed to the 1bn reported earlier), once again drew attention to the exposure of the private and personal data retained by global information aggregators. Due to the enormous amounts of information they hold about you, me, and millions of others, they are quite a catch for cyber criminals. As the number and sophistication of attacks like the one that targeted Equifax will undoubtedly keep increasing in the near future, so will the chance of your personal data ending up in the hands of criminals.

While there is little we can do about Equifax and their security competencies, we certainly can do a lot more about the platforms and services within our control. I am not talking about social networks here; surprisingly, the very fact that we understand the risks they pose to our privacy helps us exercise some form of self-moderation when sharing our private details through them.

Such institutions as banks, insurance companies, online retailers, payment processors, and major cross-industry service providers like BT, the NHS, or the DVLA – especially those under KYC or AML compliance obligations – hold enormous amounts of information about their customers, often without the customers realising it. The scope and value of this information extend far beyond payment card details. A hacker who gains access to a customer database held by any of these companies would almost certainly obtain an unconditional capability to impersonate any customer at any security checkpoint that does not require their physical presence (such as a telephone banking facility or a login form on a website). For example, they could order a new credit card for themselves through your online banking account, or buy goods on Amazon in your name – goods you would never see.

This means we may soon face an even steeper rise in the number of identity thefts and related fraud offences, and the Equifax precedent shows that we should take reasonable steps to protect ourselves despite all the security assurances given to us by the information custodians. While in most cases we can’t influence what details online aggregators keep or what security methods they employ, we can choose to strengthen the security checkpoints instead: by tightening identity checks, limiting the levels of access they grant us, and monitoring them for any suspicious activity.

Employing two-factor authentication is one of the best ways to tighten identity checks. If an online service offers it, use it. Even if an attacker manages to use your stolen identity to change your password through the legitimate password recovery procedure, they will be unable to sign in without access to your second factor.
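To illustrate how the second factor works: most authenticator apps generate time-based one-time passwords (TOTP, RFC 6238), which can be sketched with nothing but the Python standard library. The secret used in the test below is the RFC’s published test key, not a real one.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The server and your phone share the secret, and the code changes every 30 seconds, so a stolen password alone is not enough to sign in.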

Limiting access levels is primarily about setting artificial limits on the actions that you – or an impostor – can perform with your account. These include maximum amounts of money that can be spent in a day or month, hours of the day during which the account may be accessed, permitted locations, and so on. Many online services support such limitations, and it’s wise to use them. This is mainly a corrective facility that helps minimise your losses should your account get hacked.
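As a toy model of such limits (every name and threshold here is invented for illustration, not any real bank’s API):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccountLimits:
    # Illustrative, self-imposed restrictions on a single account.
    max_daily_spend: float = 500.0            # currency units per day
    allowed_hours: range = range(7, 23)       # usable 07:00-22:59 only
    allowed_countries: frozenset = frozenset({"GB"})

def transaction_allowed(limits, amount, spent_today, when, country):
    """Return True only if the transaction stays within every limit."""
    return (spent_today + amount <= limits.max_daily_spend
            and when.hour in limits.allowed_hours
            and country in limits.allowed_countries)
```

A transaction attempted at 3 a.m. from an unexpected country fails the check even if the impostor holds perfectly valid credentials.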

Monitoring is about setting up e-mail or text notifications that inform you about activity, usual and unusual, around your account. Having a notification system in place is often the fastest way to discover that your account has been hacked. Checking the consistency of your account data manually from time to time helps a lot too.

Finally, it is always a good idea to follow the principle of least disclosure. If a service doesn’t ask you for some details, or allows you not to answer – don’t give those details away just because you can. It invariably turns out that the less a service knows about you, the better it is for you. Likewise, if you are offered a choice between providing less sensitive and more sensitive details, choose wisely. For example, setting up a recurring payment to be collected by direct debit is safer than having it charged monthly to a credit card.

To summarise:

1. Most online services suck at security; expect your details to be stolen one day.

2. Minimise the impact of the prospective theft by securing your sign-ins, limiting legitimate access, and setting up access monitoring.

3. Don’t give your personal information away unless required/forced to do so.

Equifax hacked

Equifax says that personal details it held for 143 million U.S. consumers have been stolen by hackers.

We are obviously going to see more of this in the near future. Large personal data aggregators – Equifax, Experian, global banks, and large healthcare service providers – are among the most attractive targets for data thieves. Unlike social media services like Facebook, which typically use complicated and highly distributed systems to store and access user account data, smaller aggregators like credit agencies or banks use far less sophisticated databases, making their data much easier to steal.

But what is more important, the theft implies a worrying conclusion: any personal data that used to play the role of a virtual ‘fingerprint’ by being strongly and privately bound to a particular person stops being one. Our social security numbers, mobile service providers, and monthly spending cannot be relied upon any more – at least not to the extent they used to be. This hack is a precursor of a forthcoming fundamental change in the whole ecosystem of authenticating and identifying citizens based on their personal data.

Queen Elizabeth running Windows XP: how big is the issue?

Britain’s Largest Warship Uses Windows XP And It’s Totally Fine, says Michael Fallon, UK Defence Secretary. So is it really – is it OK to run a nearly twenty-year-old operating system on a strategic warship?

Unfortunately, what we know so far is far too little to come up with any justified answers. The statement as it is put in the media (‘the ship runs on Windows XP’) is utterly vague, unprofessional, and misleading. A warship like the Queen Elizabeth has hundreds of different subsystems responsible for tasks of greater or lesser importance. The first thing to identify, therefore, is the level of involvement of Windows XP in the general routine of operating the warship.

In other words, are those XP machines responsible for crew entertainment? Storing/accessing the logbook? Managing aircraft flight schedules? Tuning up engines? Transmitting cryptograms to the on-shore facilities?

Are they connected to the warship’s local network? To the Internet? If they are, do they have the latest IPS software installed? Any firewalls? Any certified firewalls?

What kind of software is run on those machines? Who can access them? What tasks are they able to perform?

Only after answering these and other similar questions would it be possible to establish whether those XP machines present any risk to the operation of the warship, and the extent of that risk. Anything else is no different from speculating that your neighbour is an extremist just because you once saw them through their kitchen window holding a big butcher’s knife.

And, by the way, it’s not only about Windows XP’s vulnerability to WannaCry or any other form of malware. Beyond that, many of the security technologies built into Windows XP are simply outdated. One eloquent example: the most recent version of TLS, the main communication security protocol, that Windows XP supports natively is 1.0 – a version officially retired a year ago. This means that any protected communications the warship transmits from its XP machines would not actually be protected, and could easily be eavesdropped on by third parties.
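For comparison, on a modern platform refusing retired protocol versions is a one-line policy. For instance, with Python’s ssl module a client can be configured never to negotiate anything older than TLS 1.2:

```python
import ssl

# Build a client-side context that refuses TLS 1.0/1.1 outright,
# so a connection can never be downgraded to a retired protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

An OS whose native stack tops out at TLS 1.0 has no such knob to turn; the only fix is upgrading the platform itself.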

And yet, none of that would matter if those Windows XP machines are only used by the crew to kick ass in Call of Duty in their free time, of course.

NHS ransomware attack: who is to blame?

The second week of May turned out to be quite rich in cyber security news. On Friday, while the concerned public were still discussing the clever techniques used by the new French president’s IT team to deal with Russian government hackers, another major story topped the bill – tens of thousands of workstations, including those belonging to UK NHS trusts, were taken down by the WannaCry ransomware. By Sunday, the virus had managed to affect over 200,000 workstations around the world, and could have affected many more had it not been quasi-magically brought to a halt by an anonymous UK security researcher.

The attack sparked a lively discussion, in the cyber security community and beyond, about the reasons that made the attack possible and so devastating. Many fingers pointed at Microsoft Windows, naming vulnerabilities in the OS as the primary reason for the attack. The spiciest commenters even called for banning Windows in favour of other, ‘more secure’, operating systems.

Frankly speaking, this came as a bit of a surprise to me. While there was indeed a time when Microsoft’s OS drew heavy criticism for its insecurity, the guys eventually managed to do a really good job of catching up with industry best practices, and incorporated a decent set of security features into Windows XP SP2 and every version since.

While there is little doubt that vulnerabilities in the corporation’s products were used by the malware to spread itself around, it is important not to confuse the reason for the attack – the cyber criminals creating the malware and launching it for easy gain – with the means they used to conduct it.

Consider chickenpox. This highly contagious virus spreads from one person to another through the air by coughing and sneezing. Still, no one of sound mind views human beings as the reason for the disease. Instead, the cause of chickenpox is named as the varicella zoster virus, while human beings are assigned the role of virus carriers. Back in cyberspace, virus carrier is the role played by Windows in the WannaCry case.

Modern software products are extremely sophisticated systems. We should admit that at the current stage of human development it is nearly impossible to create a perfect, bug-free software product. The chance of a compound system, built up of a number of interconnected components, having an exploitable security flaw is therefore very close to 100%. Virtually every software product can be used to perform a successful attack of one kind or another. The likelihood of a product being used as a tool for a virus attack depends not so much on its origin, vendor, or underlying technology, as on its popularity and its capability to spread the virus as quickly as possible. No doubt if Windows suddenly disappeared and its market share were taken over by OS X, the next virus we saw would target Apple’s platform.

And one more consideration.

Undoubtedly, the most efficient method of preventing chickenpox infection is to stop interacting with other people entirely. Surprisingly, no one uses it. People continue going out on Friday nights, risking catching chickenpox from someone in the bar. Parents continue taking their children to school, risking them getting the disease and spreading it to their siblings and to the parents themselves.

Why? Because common sense is not only about avoiding risk. In fact, avoidance is just one of [at least] four methods of dealing with risk, with reduction, retention, and transfer being the other three. When mitigating the risk of getting chickenpox, people normally stick to retention (“ah, whatever!”) or reduction (vaccination), but extremely rarely to avoidance.

The same approach transfers to cyberspace. There is not much need to make your computer a stronghold by protecting it from all possible attacks. You can spend a fortune and still never reach the 100% protection bar.

Instead, think wider, and try to mitigate the risk in a wiser way. It is very likely – see above – that one day or another you will get ransomware or some other kind of cyber scum on your PC. So instead of trying to avoid the unavoidable, why not treat the unfortunate event as if it were set to happen tomorrow, and employ preventive countermeasures, such as data backups and an emergency response plan, before it’s too late?

So who is to blame in the end?

The question becomes much simpler if we extend common crime-handling practices to cyberspace. Information and computer systems are business assets. If a computer system becomes unavailable due to a ransomware attack, the attack obviously constitutes criminal damage at the very least. If the system’s unresponsiveness makes it impossible for the business to conduct its operations, the legal treatment of both the attacker and the victim would depend on the criticality of the business and its responsibilities under the relevant acts of law.

Those will obviously differ for a corner shop and a hospital, with the latter being legally in charge of ensuring the continuity of its operations through most emergencies. Just as hospitals deal with potential power outages by installing emergency generators, they must employ appropriate security measures to minimise the impact of cyber attacks on their operations – including their choice of a secure operating system and any add-on protection methods. It is hospitals’ legal obligation to make sure they continue to operate at an appropriate level if their systems go down due to an accidental crash or a deliberate attack. As for the attackers, they will most likely not get away with mere criminal damage in the case of a large-scale hospital system crash, with escalated charges ranging from causing harm to health to manslaughter.

In this sense, cyber attacks are no different to traditional crimes against businesses and citizens. While their arena, methods, tools, and scale are very different to those of traditional crimes, the principal relationships between the subjects and objects of crime, as well as their legal treatment, largely remain the same.

On security perimeters

It is my humble point of view that the ‘security perimeter’ concept, used widely by security professionals and men in the street alike, does more harm than good. There are many reasons why, but the main one is that the concept gives a false sense of security.

If you ask an average person what a security perimeter is, they will probably tell you something like ‘it is a warm and cosy place where I can relax and have my cake while everyone outside is coping with the storm.’

The problem is that it is not entirely so. Contrary to popular belief, security risks don’t go away when you are inside the perimeter. Instead, they transform – they change their sources, targets, and shapes – but they are still there, waiting for the right moment to strike. What is particularly bad is that those risks are often overlooked by security staff, who concentrate only on the risks posed by the hostile outside environment (the storm) – but not on the ‘safe’ environment inside the perimeter (say, an odd cherry stone in the cake that might cause the man to choke to death).

The chaos at JFK is a good illustration of this point (well, not for its participants). For sure, the area of the supposed shooting was viewed by security people as belonging to the security perimeter (and an extremely well-protected one – I bet it’s nearly impossible to get into the area even with a fake gun, not to say a real one). They probably believed that as long as the borders of the perimeter are protected up to eleven, they don’t need to care about anything happening inside it. Indeed, they might have done a great job of protecting the passengers from gunfire, but they overlooked an entirely different type of risk – one which, happily, didn’t cause any casualties.

That’s why any security perimeter (in the sense of a ‘straightforward defence facility’) should be viewed not as a security perimeter, but rather as a transition point from one security setting to another. The inner setting is in no way more secure than the outer one – and sometimes it can be even more dangerous (imagine there’s no one inside to help the choking man deal with the stone). Thinking this way helps build a clearer picture of the variety of risks targeting each particular security setting, and come up with appropriate countermeasures.

Cloud services as a security risk assessment instrument

One of the hidden gems of cloud computing platforms (how many more of them are out there?) is the possibility of performing quite accurate quantitative assessments of risks to security systems.

The strength of a large share of information security measures rests on the computational complexity of attacks on the underlying security algorithms. To name a few: you need to factor a 2048-bit integer to crack an RSA key; you need an average of 2^127 tries to recover an AES-128 encryption key; you need to iterate over 20 million dictionary passwords to find the one matching a hash – I’m sure you’ve got the idea. All of these tasks require enormous amounts of time and computational resources, and the unavailability of those resources to the vast majority of potential attackers is the cornerstone of modern data security practices. This hasn’t changed much for the last several decades – yet something around it has.
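The dictionary-password example above can be sketched in a few lines (the target hash and word list are invented for illustration):

```python
import hashlib

def crack_hash(target_hex, dictionary):
    """Brute-force a password hash by hashing every dictionary candidate."""
    for candidate in dictionary:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None  # search space exhausted without a match
```

The attacker’s bill is, in essence, the number of times this loop body runs multiplied by the price of the compute time it consumes.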

In ye goode olde days, a security architect had to rely on some really vague recommendations when deciding which security parameters to employ – recommendations that often sounded more like voodoo predictions than a well-defined, formally justified methodology. The ones from NIST, for example, literally say: ‘if you want your data to be secure up until 2030, protect it with 128-bit AES’. Hmm, okay. And what are the chances of my data being cracked by 2025? By 2035? What if the data I encrypt is really valuable – would it be worthwhile for the attacker to jump through hoops and try to crack the key well before 2030? What price would they have to pay to do that, and what are their chances of success?

The rise of cloud computing platforms brought a great deal of certainty to the table. With commercial cloud platforms available, one can estimate the cost of breaking a computation-dependent security scheme with unbelievable accuracy. Back in 2012, the cost to a potential attacker of breaking a scheme could hardly be estimated at all. It was believed that the NSA probably had the power to break 1024-bit RSA, and that a hacker group big enough could probably break SHA-1 with little effort. Probably.

Everything is different today. Knowing the required durability of the security system they need to deploy or maintain, and being aware of the computational effort needed to break it, a security architect can estimate the ceiling of the price the attacker would need to pay to conduct a successful attack on the system – in dollars and cents.

To obtain that estimate, the security architect would create a scalable cloud application that emulates the attack – e.g. by iterating over those 20 million passwords in a distributed manner. They would then work with the cloud service provider to figure out the price of running that application in the cloud, which will be a function of the system’s security parameters and the amount of time needed to conduct the attack. Having built the price function, they would be able to make a justified and informed decision about the security parameters to employ, balancing the attack’s duration and cost against any benefits the attacker would gain from a successful attack. This is a huge step forward in the field of security risk assessment, as it allows one to describe the strengths and weaknesses of a security system in well-defined ‘I know’ terms rather than ‘I feel’ ones, and to view the system from a business-friendly ‘profit and loss’ perspective as opposed to an enigmatic ‘vulnerabilities and their exploitation’ one.
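A minimal sketch of such a price function, assuming the workload scales linearly across rented nodes (all figures hypothetical):

```python
def attack_cost(search_space, guesses_per_second_per_node, price_per_node_hour):
    """Ceiling on what an attacker pays to exhaust the whole search space."""
    node_hours = search_space / guesses_per_second_per_node / 3600.0
    return node_hours * price_per_node_hour

# e.g. 20 million passwords at 1,000 checks/s on a node costing $0.50/hour:
# attack_cost(20_000_000, 1_000, 0.50) -> about $2.78
```

Note that the total cost is independent of how many nodes the attacker rents: more nodes finish sooner but cost proportionally more per hour, which is exactly why the figure works as a ceiling rather than a duration estimate.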

It is worth mentioning that a good security architect would then monitor any changes around the cost of breaking the system, including changes in cloud providers’ SLAs and price schedules, and be prepared to amend the risk figures and the security plan accordingly. With computation prices going down all the time, periodic risk reviews are vital to the continued security of the system.