Skill vs Technology: a zero-sum game?

Last week I came across two peculiar stories about the role technology plays in the evolution of the civil aviation industry. At first glance the stories were barely related to each other – had I come across them at different points in time, I would probably never have spotted the connection between them – but luckily I was still thinking about the first story when I bumped into the second one, and the immediate realisation of the scale of the apparent trend made quite an impression on me.

That first story was about the role of technology in the crash of Air France transatlantic flight 447 back in 2009. The primary conclusion of the investigation, which the article elaborates on, is that the pilots had become so accustomed to flying with the assistance of the autopilot that they were completely lost when faced with the need to fly the aircraft manually. They had no understanding of the situation whatsoever, as they lacked the hands-on feel for flying the aircraft at cruise altitudes – something normally handled by the autopilot. On top of that, the autopilot, designed with intelligence and pilot-friendliness in mind, didn't warn the pilots that the aircraft was approaching a complete stall, having interpreted the too-sharply plummeting speed as a sign of a probable false alarm.

Confused and lost, the pilots applied several corrective actions to get the aircraft back on course. Unfortunately, owing to their lack of situational awareness, those actions proved fatal. The A330 lost its airspeed and crashed into the ocean, killing all 228 people on board. Ironically, the investigation showed that had the pilots not intervened, AF447 would have continued at its cruise altitude as intended, even with the autopilot switched off.

The second story I read was far more upbeat, describing the prospective relocation of the London City Airport air traffic control tower from the airfield itself to a small place called Swanwick, Hampshire, some 80 miles away. Specifically, twelve HD screens and a high-bandwidth communication link are going to replace the existing control tower, and are claimed to provide far better insight into the landings and take-offs performed at the airport, along with a number of augmented-reality perks. LCY's experience is then expected to be picked up by other airports around the country, effectively turning air traffic control tower operations into an outsourced business.

What impressed me most about these two articles was that, despite being barely related per se, they are essentially telling the same story: the story of skills typically attributed to humans being taken over by technology. It's just that the first article tells us about the end of that story, while the second one is at its very beginning.

Just as advances in technology such as the glass cockpit and overly smart autopilots led to pilots losing their grip on manual flying, switching to an augmented HD view of the runway will inevitably lead to air traffic control operators losing their basic skills, like tuning binoculars or assessing meteorological conditions by a dozen nearly subconscious cues. The trained sharpness of their eyes, now backed by HD zoom, will most certainly diminish. Sooner or later, the operators will be unable to manage the runway efficiently without the assistance of technology.

And this is the challenge we are going to face in the near future. The more human activities typically referred to as 'acquired skills' are taken over by technology and automation, the less capable of those skills we are going to be ourselves. If a muscle isn't constantly trained, it atrophies. If a musician stops playing regularly, she eventually loses her ability to improvise. If a cook stops dedicating time to cooking, his food loses its character, even when cooked from the same quality ingredients and in the same proportions.

And that's not necessarily bad. As technology inevitably makes its way into our lives, taking over those of our skills which it can perform better than we do, there is no reason not to embrace it – but embrace it thoughtfully, realising the consequences of losing our grip on them. Remember that we have already lost a great deal of our skills to the past. Your great-grandfather was very likely to have been particularly good at fox hunting, your grandad probably performed much better than you at fly fishing, and certainly a far wider proportion of the population could ride a horse two centuries ago than today. Those skills were taken away from us by technology and general obsolescence, but do we really need them today?

What we do need, though, is a clear understanding of the consequences of sharing activities we are used to doing with technology, and to be prepared to observe a steep decline in the quality of our own hands-on skills as technology gradually takes them over. Understanding that problem alone, and taking our imperfect human nature as it is, will most certainly help us manage the risks around technological advances more efficiently.

(pictured: a prototype of the digital control room at NATS in Swanwick, credit: NATS)

Detective story. Almost a classic.

When we are away, our house is looked after by security cameras. Whenever a camera detects motion in its view, it captures a set of photos and sends them to a dedicated mailbox. This setup adds to our peace of mind about the safety of the house while we are away, and comes with a nice bonus: random shots of the cat wandering around the house.

Our last trip added a bit of action to the scheme. On the morning of the second day I woke up to find ~200 camera e-mails in my inbox (the cat's portraits typically account for 5-8). "Gotcha!", I rubbed my hands. But I was too quick. All the 200+ photos, apart from 2-3 that actually captured the cat, were quite boring and very similar to one another: an empty room with some blurred spots in the centre. And no sign of burglars.


And that was only the beginning. Hour after hour, camera e-mails continued to come in, one a minute. Finally, I gave up and went back to business as usual. This decision proved tactically correct, as every morning since, I woke up to find yet another 200-300 new camera e-mails in my inbox. Every morning I opened 2-3 of them at random, observed the empty room and the spots, and went on with my business. At the time, I didn't pay attention to the fact that the messages only came in when it was night time in the camera's time zone – a fact that turned out to be of great significance.

I only managed to get back to this avalanche of alerts well after I returned home. My findings turned out to be quite amusing.

In one of the rooms monitored by a camera, a flying insect had found itself shelter. When the lights went low in the evening, the camera switched from daytime to infrared mode, turning on a dim reddish backlight. Apparently, the bug was attracted to this backlight and began to flutter around the camera. The camera detected the bug's motion and, in full accordance with its setup, activated the shutter and dispatched the pictures where instructed. During daylight the camera turned back into a simple piece of furniture, the insect lost interest in it, and the flow of e-mails stopped for a while – only for the cycle to start over at dusk.

But that's not the end of the story. To send out the photos, the cameras use a dedicated e-mail address on my hosting account. To prevent this account from being abused by spammers, the number of messages that can be sent through it is capped at 300 per day. The bug was apparently in darn good shape, as it managed to consume the whole message allowance well before noon – after which the mail server stopped accepting further messages from the cameras until the start of the next day. This meant that had the hypothetical burglars planned their dark affairs for the afternoon, they could have avoided the scrutiny of the cameras and made off without being noticed – all due to some tiny bug in the system (*).

The moral of this fable is,

(1) no matter how good at risk assessment you are, there will always be an unaccounted-for bug whose fluttering will reduce all your mitigations to a joke;

(2) sometimes the measures you expect to protect you (I'm speaking of my outgoing e-mail limits) may turn against you;

(3) (the most important of all!) leave much less food for your cats than you normally do when you go away, so they have an incentive to hunt for any nonsense fluttering around your cameras!

(*) They actually couldn’t – you don’t think that some levitating invertebrate would just knock my whole security system down, do you?

When the theft is inevitable

The hack of the Equifax data centre, followed by Yahoo's revelation that all 3bn of its user accounts had been exposed (in contrast to the 1bn reported before), once again drew attention to the exposure of our private and personal data retained by global information aggregators. Due to the enormous amounts of information they hold about you, me, and millions of others, they are quite a catch for cyber criminals. As the number of attacks similar to the one that targeted Equifax, and their level of sophistication, will undoubtedly increase in the near future, so will the chance of your personal data ending up in the hands of criminals.

While there is little we can do about Equifax and their security competencies, we certainly can do a lot more about the platforms and services within our control. I am not talking about social networks here; surprisingly, the fact that we understand the risks they pose to our privacy helps us exercise some form of self-moderation when sharing our private details through them.

Such institutions as banks, insurance companies, online retailers, payment processors, and major cross-industry service providers like BT, the NHS, or the DVLA – especially those under KYC or AML compliance obligations – hold enormous amounts of information about their customers, often without the customers realising it. The scope and value of this information extends far beyond payment card details. A hacker who gains access to a customer database held by any of those companies would almost certainly obtain the unconditional capability to impersonate any customer at any security checkpoint that does not require their physical presence (such as a telephone banking facility or a login form on a web site). For example, they could order a new credit card for themselves through your online banking account, or buy goods on Amazon in your name – goods you'll never see.

This means that we may soon face an even steeper rise in the number of identity thefts and related fraud offences, and the Equifax precedent shows that we should take reasonable steps to protect ourselves, despite all the security assurances given by the information custodians. While in most cases we can't influence online aggregators as to what details they keep and what security methods they employ, we can choose to strengthen the security checkpoints instead: by tightening identity checks, limiting the levels of access they grant us, and monitoring them for suspicious activity.

Employing two-factor authentication is one of the best ways of tightening identity checks. If an online service offers it, use it. Even if an attacker manages to use your stolen identity to change your password through the legitimate password recovery procedure, they will be unable to sign in without access to your second factor.
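For the curious, the one-time codes produced by most authenticator apps follow two short public standards: HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). A minimal Python sketch of how such a second factor is computed (the secret below is the RFC test key, not one you would ever use for real):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: the counter is simply the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test secret; counter 0 is a published test vector.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The point is that the code changes every 30 seconds and is derived from a secret that never leaves your device, which is why a stolen password alone is not enough to sign in.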

Limiting access levels is primarily about setting artificial limits on the actions that you – or an impostor – can perform with your account. These include maximum amounts of money that can be spent in a day or month, hours of the day during which the account may be accessed, permitted locations, and so on. Many online services support such limitations, and it's wise to use them. This is mainly a corrective facility that helps minimise your losses should your account get hacked.

Monitoring is about setting up e-mail or text notifications that inform you of activity around your account, both usual and unusual. Having a notification system in place is often the fastest way to discover that your account has been hacked. Checking the consistency of your account data manually from time to time helps a lot too.

Finally, it is always a good idea to follow the principle of least disclosure. If a service doesn't ask you for some details, or allows you not to answer – don't give the details away just because you can. It invariably turns out that the less a service knows about you, the better for you. Likewise, if you are offered a choice between providing less safe and safer details, choose wisely. For example, setting up a recurring payment collected by direct debit is safer than having it charged monthly to a credit card.

To summarise the above,

1. Most online services suck at security; expect your details to be stolen one day.

2. Minimise the impact of the prospective theft by securing your sign-ins, limiting legitimate access, and setting up access monitoring.

3. Don’t give your personal information away unless required/forced to do so.

Equifax hacked

Equifax says that personal details it held for 143 million U.S. consumers have been stolen by hackers.

We are obviously going to see more of this in the near future. Large personal data aggregators like Equifax, Experian, global banks and large healthcare service providers are among the most attractive targets for data thieves. Unlike social media services like Facebook, which typically use complicated and highly distributed systems to store and access user account data, smaller aggregators like credit agencies or banks use far less sophisticated databases, making their data much easier to steal.

But more importantly, the theft carries a worrying implication: any personal data that played the role of a virtual 'fingerprint' by being strongly and privately bound to a particular person stops being one. Our social security numbers, mobile service providers and monthly spending cannot be relied upon any more – at least not to the extent they used to be. This hack is a precursor of a fundamental change in the whole ecosystem of authenticating and identifying citizens based on their personal data.

Queen Elizabeth running Windows XP: how big is the issue?

Britain's Largest Warship Uses Windows XP And It's Totally Fine, says Michael Fallon, UK Defence Secretary. So is it really – is it OK to run a nearly twenty-year-old operating system on a strategic warship?

Unfortunately, what we know so far is far too little to come up with any justified answers. The statement as put by the media ('the ship runs on Windows XP') is utterly vague, unprofessional, and misleading. A warship like Queen Elizabeth has hundreds of different subsystems responsible for tasks of greater or lesser importance. The first thing to identify, therefore, is the level of involvement of Windows XP in the general routine of operating the warship.

In other words, are those XP machines responsible for crew entertainment? Storing/accessing the logbook? Managing aircraft flight schedules? Tuning up engines? Transmitting cryptograms to the on-shore facilities?

Are they connected to the local warship’s network? To the Internet? If they are, do they have the latest IPS software installed? Any firewalls? Any certified firewalls?

What kind of software is run on those machines? Who can access them? What tasks are they able to perform?

Only after answering these and other similar questions would it be possible to establish whether those XP machines present any risk to the operation of the warship, and the extent of that risk. Otherwise it would be no different from speculating that your neighbour is an extremist just because you once saw them through their kitchen window with a big slaughter knife.

And, by the way, it's not only about Windows XP's vulnerability to WannaCry or any other form of malware. Beyond that, many of the security technologies built into Windows XP are simply outdated. An eloquent example: the most recent version of the main communication security protocol, TLS, that Windows XP supports natively – 1.0 – was officially retired a year ago. This means that any 'protected' communications the warship transmits from its XP machines would effectively be unprotected, and could easily be eavesdropped on by third parties.
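To illustrate how far the rest of the world has moved on, a modern TLS client can be configured to refuse outright the legacy protocol versions that XP's native stack is limited to. A minimal sketch using Python's ssl module (Python 3.7+):

```python
import ssl

# A modern client context; refuse the legacy versions (TLS 1.0/1.1)
# that Windows XP's native stack is limited to.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any peer still speaking only TLS 1.0 would now fail the handshake
# instead of silently negotiating a retired protocol.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

In other words, a growing share of well-configured servers will simply stop talking to an XP-era client at all, regardless of malware concerns.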

And yet, all of that wouldn’t make any sense if those Windows XP machines are only used by the crew to kick ass in Call of Duty in their free time, of course.

NHS ransomware attack: who is to blame?

The second week of May turned out to be quite rich in cyber security news. On Friday, while the concerned public were still discussing the clever techniques used by the new French president's IT team to deal with Russian government hackers, another major story topped the bill – tens of thousands of workstations, including those belonging to UK NHS trusts, were taken down by the WannaCry ransomware. By Sunday, the virus had affected over 200,000 workstations around the world, and could have affected many more had it not been quasi-magically brought to a halt by an anonymous UK security researcher.

The attack sparked a lively discussion, in the cyber security community and beyond, about the reasons that made it possible and so devastating. Many fingers pointed at Microsoft Windows, naming vulnerabilities in the OS as the primary reason for the attack. The spiciest commentators even called for banning Windows in favour of other, 'more secure', operating systems.

Frankly speaking, this came as a bit of a surprise to me. While there had indeed been a time when Microsoft's OS was heavily criticised for its insecurity, the guys eventually managed to do a really good job of catching up with industry best practices, and incorporated a decent set of security features into Windows XP SP2 and every version since.

While there is little doubt that vulnerabilities in the corporation's products were used by the malware to spread itself around, it is important not to confuse the reason for the attack – the cyber criminals who created the malware and launched it for easy gain – with the means they used to conduct it.

Consider chickenpox. This highly contagious virus spreads from one person to another through the air by coughing and sneezing. Still, no one of sound mind views human beings as the cause of the disease. Instead, the cause of chickenpox is the varicella zoster virus, while human beings are assigned the role of carriers. Getting back to cyber space, the carrier is exactly the role played by Windows in the WannaCry case.

Modern software products are extremely sophisticated systems. We should admit that at the current stage of human development it is nearly impossible to create a perfect, bug-free software product. The chance of a compound system, built of a number of interconnected components, having an exploitable security flaw is therefore very close to 100%. Virtually every software product can be used to perform a successful attack of one kind or another. The likelihood of a product being used as a tool for a virus attack depends not so much on its origin, vendor, or technology, as on its popularity and its capability to spread the virus as quickly as possible. No doubt that if Windows suddenly disappeared, with its market share taken over by OS X, the next virus we saw would target Apple's platform.

And one more consideration.

Undoubtedly, the most effective method of preventing chickenpox infection is to stop communicating with other people entirely. Surprisingly, no one does that. People continue going out on Friday nights, risking catching chickenpox from someone in the bar. Parents continue taking their children to school, risking them getting the disease and spreading it to their siblings and the parents themselves.

Why? Because common sense is not only about avoiding risk. In fact, avoidance is just one of [at least] four methods of dealing with risk, with reduction, retention, and transfer being the other three. When mitigating the risk of catching chickenpox, people normally stick to retention ("ah, whatever!") or reduction (vaccination), but extremely rarely to avoidance.

The same approach translates to cyber space. There is not much need to make your computer a stronghold protected from all possible attacks. You can spend a fortune and still never reach the 100% protection bar.

Instead, think wider, and try to mitigate the risk in a wiser way. It is very likely – see above – that one day or another you will get ransomware or some other kind of cyber scum on your PC. So instead of trying to avoid the unavoidable, why not treat the unfortunate event as if it is set to happen tomorrow, and employ preventive countermeasures, such as data backups and an emergency response plan, before it's too late?

So who is to blame in the end?

The question becomes much simpler if we extend common crime-handling practices to cyber space. Information and computer systems are business assets. If a computer system becomes unavailable due to a ransomware attack, the attack obviously constitutes an act of criminal damage at the very least. If the unresponsiveness of the system makes it impossible for the business to conduct its operations, the legal treatment of both the attacker and the victim would depend on the criticality of the business and its responsibilities under the relevant acts of law.

Those will obviously differ for a corner shop and a hospital, with the latter being legally in charge of ensuring the continuity of its operations through most emergencies. Just as hospitals deal with potential power outages by installing emergency generators, they should employ appropriate security measures to minimise the impact of cyber attacks on their operations – including their choice of a secure operating system and any add-on protection methods. It is the hospitals' legal obligation to make sure they continue to operate at an appropriate level if their systems go down due to an accidental crash or a deliberate attack. As for the attackers, in the case of a large-scale hospital system crash they will most likely not get away with a mere criminal damage charge, with escalated charges ranging from causing harm to health up to manslaughter.

In this sense, cyber attacks are no different to traditional crimes against businesses and citizens. While their arena, methods, tools, and scale are very different to those of traditional crimes, the principal relationships between the subjects and objects of crime, as well as their legal treatment, largely remain the same.

Writing passwords down without writing them down

Whether it is acceptable to write your passwords down has been a debatable question for ages. Like any other eternal question, it doesn't have a one-size-fits-all answer, with many factors affecting the final decision for every particular password. What we should admit, though, is that there are situations where writing a password down is hard to avoid, if possible at all. This is partly due to the myriad passwords we need to access different services, and the ever-increasing requirements for their length and complexity. In most scenarios, the two most important rules of thumb are that

using a complex password and writing it down is safer than using a simple one and not,

and

using different passwords and writing them down is safer than remembering a single password, however complex, and using it throughout.

And as long as we have to write our passwords down, it is quite important to do it right. This is because the passwords you write down are subject to a totally different scope of threats compared to the passwords you remember. While the passwords you keep in your memory are normally cracked by high-speed automated tools that use dictionaries to work through millions of candidates per second, the passwords you write down are likely to be found, stolen and used by humans. This, on the one hand, makes them somewhat easier to protect ('we are all humans', in the end), and on the other hand, means the protection needs to be really smart ('the computer is incredibly fast, accurate, and stupid. Man is unbelievably slow, inaccurate, and brilliant').

Largely, there are three general rules to follow when writing down your passwords. None of them is a must, and some may appear too complicated, yet the more of them you manage to follow, the safer your passwords will be.

The applicability of these rules is not limited to passwords – writing down your card details or any other sensitive information is subject to the same threats, and the same rules can be used to protect them.

The first and foremost rule,

Aim to only write passwords down where absolutely necessary.

There are plenty of ways to keep passwords secure without writing them down in plain form. Use password managers or built-in browser facilities to remember the passwords for you, and protect them with a sound master password. If unsure, back up the master password on a piece of paper using the rules given below. Generally, if there is a choice between writing your password down in plain form in an electronic document on your computer or smartphone, or on a piece of paper, choose the paper.

Hide as many facts as you can.

Don't indicate anywhere around it that it's a password. This applies both to passwords written on paper and to those saved on your computer or smartphone as a note or file. Don't name the file 'My Passwords.doc', don't place it in an 'Important Stuff' folder, and so on. Keep it alongside your normal work documents, or in a similar-looking folder on a shelf. If you need to keep your passwords in the cloud, mix them up with some unrelated stuff. A friend of mine writes her passwords on an old newspaper, takes a picture of her cat playing with it, and saves the picture together with the rest of her photos – making it look like 'yet another' innocent picture of her pet rather than an important password document.

Don't write down usernames. Normally you would only use a few usernames across different online resources, the majority of them being your e-mail address. Try remembering them instead of writing them down. By omitting the usernames, you make it harder for the villain to make use of the password.

Don't mention the service to which the password belongs. If you follow the first rule, you will only have a few passwords written down. Invent a system for indicating which password belongs to which resource, such as using multi-coloured sheets or sorting the passwords alphabetically. If absolutely necessary, use hints and associations instead of resource names.

Use multiple dimensions. Mix real and fake passwords. Write passwords on different media, and use different pieces of multi-dimensional information to put a password together. Another friend of mine writes his passwords down on old business cards, and secretly uses letters and digits from the names and telephone numbers printed on the cards as parts of the passwords.

Finally,

Become a cryptography enthusiast – encipher your password!

When it comes to amateur encryption, most of us would probably recall the technique used by the American mafia in the Dancing Men story of the Sherlock Holmes series. The reality is that the mafia's cipher was not very good: it was a basic one-round substitution cipher with a constant key, far too easy to break, and, more importantly, a nightmare to use (go and try to remember an alphabet of 26 similar-looking shapes!) No wonder Sherlock managed to crack it right away.

Our goal here is to invent something more sophisticated yet easier to use for your passwords – and it's not as difficult as you might think. Still, it is an important step, and contributing some quality brainwork will help make sure your passwords are safe. What you need is a two-way transformation that allows you to alter your passwords before writing them down, and to reconstruct them when you need to use them.

You can use the following techniques, combine them, or invent your own:

Change occasional letters. Avoid well-known schemes, such as replacing l with 1, E with 3, or B with 8 – everyone knows them. Invent your own. One option is to take a random word in which no letter appears more than once, and use that word as the transformation mechanism. For example, the word ALMOST would instruct us to replace all A's with L, all L's with M, all M's with O, all O's with S, all S's with T, and all T's with A. To get the password back, you follow the reverse procedure, replacing all A's with T, all T's with S, and so on.
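As a sanity check of the ALMOST trick, here is a small Python sketch of the same transformation (the key word and the sample password are just examples of mine):

```python
# The 'ALMOST' trick: each letter of the key word maps to the next one,
# and the last maps back to the first (A→L→M→O→S→T→A).
def make_maps(key: str):
    key = key.lower()
    forward = {a: b for a, b in zip(key, key[1:] + key[0])}
    backward = {b: a for a, b in forward.items()}
    return forward, backward

def transform(text: str, mapping: dict) -> str:
    # Letters outside the key word pass through unchanged.
    return "".join(mapping.get(c, c) for c in text)

fwd, back = make_maps("ALMOST")
hidden = transform("salmon", fwd)   # what you would actually write down
print(hidden)                       # → tlmosn
print(transform(hidden, back))      # → salmon
```

Note that the two directions are genuinely different mappings, which is exactly why the scheme is easy for you to reverse but not obvious to someone who finds the note.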

Change the order of letters – e.g. by swapping letters in odd positions with those in even positions, or reversing the word as a whole.

Add a random prefix or suffix – but make sure you remember how many characters you have added, and where.
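These last two tricks can be combined; a small Python sketch (the prefix 'q7', its length, and the sample password are arbitrary choices of mine, not a recommendation):

```python
# Swap each pair of adjacent letters; applying it twice restores the original.
def swap_pairs(s: str) -> str:
    chars = list(s)
    for i in range(0, len(chars) - 1, 2):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def disguise(password: str, prefix: str = "q7") -> str:
    # Reorder the letters, then prepend a throwaway prefix.
    return prefix + swap_pairs(password)

def recover(written: str, prefix_len: int = 2) -> str:
    # Strip the prefix, then swap back.
    return swap_pairs(written[prefix_len:])

print(disguise("hunter2"))   # → q7uhtnre2
print(recover("q7uhtnre2"))  # → hunter2
```

The disguised form is what goes on paper; only someone who knows both the prefix length and the reordering rule can get the real password back.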

Sometimes it helps to write down a hint reminding you of the kind of changes applied to the password, as long as the hint doesn't give them away.

All in all, the exact technique to use depends on the criticality of the password you need to hide. If you are only hiding your TV PIN from your kids, a simple letter substitution would probably work just fine (though I personally would never underestimate the intelligence of kids!) If it's the Paypal password you keep in your wallet, you obviously need a more sophisticated technique.

Now you know enough to start writing your passwords down securely. Please keep in mind, however, that the less information you give away, in any form, the safer you are – so only use the techniques described above when you have no other choice.

Good News

One of the English translations of Victor Hugo's words On résiste à l'invasion des armées; on ne résiste pas à l'invasion des idées reads as No army can stop an idea whose time has come. In our case, the army is even going to help promote such an idea rather than resist it.

The Atlantic Council is set to host a discussion long awaited by me and by a solid crowd of experts in information security, business continuity and cyber risk management. The Cyber Risk Wednesday: Software Liability discussion will take place on 30 November in Washington, DC.

The discussion will be dedicated to the difficult question of increasing the liability of software vendors for defects in their products, and of trading it off against economic factors. Given the extent to which software, in a variety of forms, infiltrates the innermost aspects of our lives (such as a smart house running a hot tub for you), as well as the extent to which we trust software to manage our lives for us (letting it run driverless cars and smart traffic systems), the question of liability is vital – primarily as a trigger for vendors to employ proper quality assurance and quality control processes. That's why I wholly welcome the Atlantic Council's initiative, and truly hope that it will help raise awareness of the problem and give a push to a wide public discussion of it.

On security perimeters

It is my humble opinion that the 'security perimeter' concept, used widely by security professionals and men in the street alike, does more harm than good. There are many reasons why, but the main one is that the concept gives a false sense of security.

If you ask an average person what a security perimeter is, they will probably tell you something like ‘it is a warm and cozy place where I can relax and have my cake while everyone outside is coping with the storm.’

The problem is that this is not entirely so. Contrary to popular belief, security risks don't go away when you are inside the perimeter. Instead, they transform – they change their sources, targets and shapes – but they are still there, waiting for the right moment to strike. What is particularly bad is that those risks are often overlooked by security staff, who concentrate only on the risks posed by the hostile outside environment (the storm) – not on the 'safe' environment inside the perimeter (say, a stray cherry stone in the cake that might cause the man to choke to death).

The chaos at JFK is a good illustration of this point (well, not for its participants). For sure, the area of the supposed shooting was viewed by security people as belonging to the security perimeter (and an extremely well-protected one – I bet it's nearly impossible to get into the area even with a fake gun, not to mention a real one). They probably believed that as long as the borders of the perimeter were protected up to eleven, they didn't need to care about anything happening inside it. Indeed, they might have done a great job of protecting the passengers from gunfire, but they overlooked an entirely different type of risk – one which, happily, didn't cause any casualties.

That's why any security perimeter (in the sense of a 'straightforward defence facility') should be viewed not as a security perimeter, but as a transition point from one security setting to another. The inner setting is in no way inherently more secure than the outer one – and sometimes it can even be more dangerous (imagine there's no one inside to help the choking man deal with the stone). Thinking this way helps build a clearer picture of the variety of risks targeting each particular security setting, and to come up with appropriate countermeasures.

Now official: SMS is not a viable second authentication factor

A lot has been said on this topic, but now it’s official: SMS is not a viable second authentication factor in 2FA.

However, as I wrote earlier, it's not specifically the text messages that are the primary source of trouble for phone-based authentication, but rather the whole authentication model relying solely on mobile phone activity. Still, it's great that NIST is aware of the problem and is taking steps towards improving the security of 2FA.