Facepalm

Facebook, once again, shows that it prefers to learn from its own mistakes rather than someone else’s. This time it’s about storing passwords in plain text: a textbook piece of security negligence that Equifax, Adobe, and Sony have all stepped on at different times.

And this really doesn’t help in building confidence in the social network. We entrust them with our most personal information, and they don’t give a damn about keeping it safe.

“We have found no evidence to date that anyone internally abused or improperly accessed them,” said Pedro Canahuati, Facebook’s vice president of engineering, security, and privacy. Given all the recent breaches in this company’s security, I can’t help translating this into human language as “we didn’t bother to put any access control or audit mechanisms in place, so whoever saw your passwords, there is no (and can’t be any) real evidence of it.”
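
For contrast, the textbook alternative to plain-text storage has been standard practice for decades: a salted, deliberately slow hash, so that even a leaked database doesn’t reveal the passwords themselves. A minimal sketch using Python’s standard library scrypt (the cost parameters and 16-byte salt here are illustrative, not a vetted policy):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted scrypt hash; the salt is stored alongside the digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # → True
print(verify_password("guess", stored))                         # → False
```

The point is that the server never needs to keep the password itself: verification only ever compares freshly computed digests.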

Just a couple of days ago I was asked to send money via Facebook’s payment service. In the middle of the payment process I realised it was not possible to make the payment – which would have been a one-off for me – without letting Facebook remember either my card or PayPal details. I stopped, closed the Facebook tab, and paid with a different method. Glad I did.

Picture credit: Alex E. Proimos

North Korean hacker held responsible for major attacks

Last Thursday the US Department of Justice accused a North Korean hacker, Park Jin Hyok, of hacking Sony Pictures in 2014, stealing $80m from the Bangladesh Central Bank in 2016, launching the WannaCry malware in 2017, and conducting a series of attacks on Lockheed Martin.

This is undoubtedly quite an impressive list of achievements for a single individual, especially taking into account the education and career opportunities his country of residence is capable of providing.

Yet, there are two more thoughts about this matter that come to my mind.

First, it looks like a vast share of North Korean state hacking efforts has been concentrated in the hands of a single individual or a small group of individuals. Not only is it quite amusing to compare that, effectively, family business to the anthill-like underground syndicate – a good few orders of magnitude bigger – carefully instilled in our heads by a crowd of politicians and journalists; it is also intriguing to speculate whether we shall see the fall of the North Korean hacking programme now that Park is out of the game.

Second, all the above attacks have become known to us because they involved a straightforward and noticeable loss for their victims. However, a successful attack doesn’t always imply immediate and tangible damage. Many hacks are performed by ‘black hat’ and ‘white hat’ enthusiasts for all sorts of self-satisfaction; a lot are only performed to plant a time bomb and trigger it after a while; finally, a huge share of attacks target private and sensitive data stored on the victims’ premises, only to sell it later anonymously on the dark web. All such attacks rarely get publicity. The bottom line here is, if a small group of NK hackers could do damage on that scale, what is the actual potential of all the hackers out there?

Land of Ears

Has it ever occurred to you that, since not so long ago, at any given moment we are being constantly listened to by dozens and hundreds of electronic ears? I am primarily talking not about our very own personal buddies Alexa and Siri, but rather about a diversity of less identifiable smart devices and apps belonging to our friends, co-workers, fellow commuters, and occasional passers-by. Talking about a trip to Spain over lunch with your friend, only to find numerous ads for Andalucian seaside villas in your news feed the very next day, has become the new norm and doesn’t surprise us any more. But talking about the same trip with your phones off and still getting those villas popping up in your search results – is that something we can expect in the near future?

Voice and speech recognition systems are only getting better and cleverer, employing cutting-edge AI and machine learning technologies. The apps that listen to us and the massive data centres behind them are not the primitive pattern-based monkeys they used to be. They are learning to understand more languages and more intonations; they are learning to identify the mood and the reasoning of the speaker; they are learning to decode Yorkshire dialect and Cockney slang. And given enough storage capacity, they can easily hold voice patterns for every person they are aware of – and use that database to identify unknown speakers on any recordings they get their hands on.

What this ultimately means for you is that a random okaygoogle on your fellow tube traveller’s phone can overhear you chatting with your girlfriend, identify both of you by the patterns of your voices, recognise what you were talking about, and use that information to spam you.

This indicates a major paradigm shift. For many years, we used to be in control of most information flows between people and computer systems. We encrypted sensitive information. We restricted access to critical computer systems. We assigned labels to data to separate sensitive material from unclassified, and controlled the transition from one classification level to another. We could control every milestone of the data we possessed, from its genesis through its useful lifetime to its disposal. We could apply verifiable security measures to each such milestone and be sure that our data remained adequately protected at all times.

Today, we are not in control anymore. Our information is slipping away from our hands, being captured all the time without us even knowing about it. You can assign the highest sensitivity level to your strategic roadmaps and encrypt them in transit and at rest, but you will never know whether your coffee maker made a note of your private telephone call with your CFO about them this morning. You can turn Siri off and even throw away your phone, but you won’t have the slightest idea of what sorts of smart TVs, fitness trackers, or Furby Booms are going to be quietly recording your verbal interactions, or when. Walls are growing their ears.

And this means that we should be more careful about what we say out loud, not only in private but in public places as well – especially in public places. Anything you say may be used against you. Old-school methods, like going out into the fields to discuss a confidential matter tête-à-tête, might well get a new lease of life. Sign languages will gain huge popularity in the near future, too.

Not sure about you, but I’m definitely signing up for the course.

(Don’t) Delete Facebook

Everyone’s so agitated about Cambridge Analytica and #deletefacebook, as if they hadn’t been warned about this stuff for over a decade or so. The easiest way to conceal information that makes you vulnerable (whatever that is for anyone) is as plain as – surprise! – not giving it away.

It is quite amusing that Facebook and Mark Zuckerberg personally have ended up the biggest victims of the scandal. Not Cambridge Analytica, not Alex the intriguer, not the ‘I-did-nothing-wrong’ Alex. No, it’s Facebook.

Sorry guys, but Facebook’s role in this story is as pure as a drop of water. Facebook, openly and honestly, offers you a stage and a loudspeaker. It doesn’t force you into using them to reveal your secrets. It doesn’t force you into using them at all. It’s your choice whether to use the stage, what exactly to shout into the loudspeaker – and what not to.

This is a good point to recall the next time an unknown app asks your permission to access your contacts, mailbox, or news feed. You probably don’t share everything you write on your page with your mum, so why should you share it with some s̶u̶s̶p̶i̶c̶i̶o̶u̶s̶ ̶l̶a̶d̶s̶ ̶i̶n̶ ̶g̶r̶a̶y̶ ̶h̶o̶o̶d̶i̶e̶s̶ respectable company from Cambridge?

Picture credit: https://www.freeimages.com/photo/the-missing-delete-button-1455215

(A Belated) Christmas Story

Last Christmas, I had a quite bizarre targeting experience.

Just a few days before Christmas, during my final gift hunt, I visited a large department store to buy a present for a member of my family. I am not very good at shopping, yet I was lucky enough to find the ideal option – one I immediately knew they would love – take it to the cash desk, and leave the store quickly.

Later that day, I started seeing new ads in my social network feeds. The strangest thing about them was that they were promoting the exact same brand that I had chosen for my family member a few hours earlier.

It’s no secret that we are all being tracked 24/7 by data aggregators, with every single step we make being monitored, recorded, and used later to sell us goods ‘carefully picked’ for us. Every one of us has come across that annoying situation a thousand times, when our browsing experience all of a sudden becomes flooded with ads for goods and brands we searched for a couple of days before (and some of us may even have ended up with a spoiled surprise if they shared their device with a girlfriend). And with voice assistants coming into play, it is enough just to mention a brand aloud while your phone is nearby to be caught.

But in this case everything was utterly different.

– It was quite unusual for me to buy a product of this particular kind and brand. To be exact, I had never bought anything similar in the past.

– It was a purely offline purchase, and a nearly random choice. No preliminary research. No shopping around for different brands. Saw it, liked it, bought it.

– The purchase was made at a large department store, which made tracking the chosen brand by location next to impossible. The location service on my phone was off anyway.

– Needless to say that my voice assistant was, and always is, off.

After spending quite a bit of time trying to figure out the source of the leak, I can say for sure that the only link between me and the purchase was the debit card I used to pay for the gift. And this raises two uncomfortable questions: plainly, ‘who?’ and ‘how?’

The transaction involves the shop, which sells me the item, and the bank, which charges my card. These two business operations, per se, are not connected to each other – the shop has no access to my card details, and the bank has no access to the contents of the till receipt. Yet, the data aggregator needs both pieces of information to know about the purchase!

Obviously, the shop leaked my basket. But how did the aggregator manage to set it off against my identity?

There are only two possible mechanisms to do that without violating payment card industry regulations, and, frankly, I don’t know which one is worse to admit to.

The first is that the bank communicates all my transactions to the aggregator. This is easy to do technically, and by signing up with your bank you effectively allowed them to treat your account as they want (re-read your current account use policy if you don’t believe me). It is easy for the aggregator to establish a correspondence between the basket and the card by the amount, the time of purchase, and the merchant. An implication of this version is that your income, financial position, and spending habits are known to a much bigger crowd, and at a much greater level of detail, than you supposed. And considering the recent leaks from Equifax, that crowd grows to a truly enormous size.

The second mechanism clears the bank and assumes that the aggregator uses a few pieces of intelligence they hold about us to establish the match. This is much, much harder, but not impossible. With each purchase you make, the shop gets hold of a tiny piece of your card details (as a general rule, the last four digits of the card number – you will often find them printed on your shop receipts). They can send this little piece to the aggregator together with the contents of your basket. The aggregator would then add this piece of evidence to the huge neural network in which they store all sorts of information about us – our addresses, recent purchases, spending habits, and shopping routes – accumulated over the years. The network would then use all its knowledge about the people whose card numbers end in the same four digits as those provided by the shop to assign a probability to the assumption that the purchase in question was made by each particular person on that list. The person with the highest probability, or maybe a few of them, would then be selected as a target for the next campaign from that brand.

In either case, it’s not good. The main issue is that, one way or another, we don’t know the depth of the aggregators’ knowledge about us. And when you don’t know the rules of the game, you start suspecting the worst.

And who knows whether the worst you are suspecting – that is, the worst as you know or assume it – is the worst worst there is.

N. B. If I start seeing ads from that brand in the near future again, I will know for sure it has nothing to do with the bank.

Picture credit: pxhere.com

Nowhere to hide. How technology is taking control of our private lives.

Last week Gizmodo published an entertaining story of a part-time sex worker, Leila, who found herself in distress after spotting a few of her ‘secret life’ encounters among the friend suggestions on her ‘public life’ Facebook account, despite doing her very best to keep the two identities apart. While Facebook is traditionally reluctant to reveal its friend-candidate selection criteria (just as it is about nearly all its social algorithms), it is quite clear that the location service activated on her ‘public life’ phone played no small role. Reports from many Facebook users suggest that random members of the public whom they met a few times on the train on their way to work often ended up on their suggested friends list. This research supports the point too.

This is hardly something we haven’t come across before; everyone knows that services like Facebook collect our location information to ‘provide us with a better user experience’, as every one of us most certainly acknowledged after looking through the social network’s usage policy (just kiddin’ mate, of course no one in their sound mind ever did that). However, Leila’s experience helped draw attention to the hidden implications of giving this powerful right to Facebook, and gave us yet another reason to rethink our whole approach to offline privacy. Indeed, every single connected person out there leaves an enormous trail of evidence. Twenty years ago each of us was a creeping pinhead-sized point on the map. Today, we look more like endlessly unwinding balls of yarn, with the escaping thread being our ever-expanding online footprint. Needless to say, it is very easy to get to the ball by following the thread, and it is very easy for someone to pull the whole thread up once they have caught the ball.

Clearly, the only way to stay incognito is to get rid of the thread. This can be harder than you might think at first, and switching the location service off might not be enough. A Nectar card you scanned at the petrol station in the morning, an Uber that takes you home after a night out, and a YouTube video you watched over a Costa WiFi all give away your location throughout the day without any assistance from the location service. No one can tell where this data travels next. Besides being used for sending you straightforward marketing materials or tailoring the price you pay for a service, it may easily be aggregated by third-party businesses (just have another quick look through that usage policy) and then used for any imaginable purpose.

To get rid of your data trail, you not only need to disable the standard ‘senses’ of your phone (location, WiFi connectivity, camera, and mic) to stop direct information collection. You should also walk through the list of your apps and assess each and every one of them critically against its capability to distinguish your identity from others’. Typically this would mean that you have some form of an account with it. You never know what information about you, and when, an app may be collecting and accumulating in its knowledge base. Too many pieces of seemingly harmless or even anonymised details can be put together to establish the identities of specific people with sufficient probability – just as famously happened with NYC celebrities’ taxi rides.

But that’s not it either, and the final aspect is a real toughie. The issue is that certain types of apps collect more than enough information about us to make assumptions about our identities based on surprisingly indirect facts. This is predominantly a capability of highly diversified collections of apps from the same vendor that offer a number of services of different kinds.

Suppose you plan to take your cat to a vet for the first time. You open Google search and look for any vets in your area. Once you are satisfied with your choice, you ring them up, arrange an appointment, and add a timed entry to your Google calendar. If Google is really lucky that day, you also use their Maps service to find out the best route to get there.

After a few days you’re off to the vet. Now, if you use one of Google’s services anonymously or from your second phone over the vet’s WiFi network while waiting to be asked in, Google can make an assumption, based on the knowledge they already hold about the ‘known you’ (a selection of places you are likely to be at this time and day, or even the exact place if you had used Maps), that this anonymous person is likely to be you. They may not be certain about it at first (meaning they would assign a lesser weight to this assumption and probably ignore it this time – while still keeping a note of it somewhere), but after one or two coincidences of this kind they will have evidence of sufficient weight to associate the anonymous surfer with your known identity. Neural networks are particularly good at tracing and aggregating large arrays of data to identify higher-level relationships between seemingly unrelated facts.
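
The ‘lesser weight at first, confident after a couple of coincidences’ behaviour is essentially cumulative evidence. A toy sketch of the idea – the weights, threshold, and combination rule are all invented for illustration, not anything Google has disclosed:

```python
class IdentityLinker:
    """Accumulate evidence that an anonymous session belongs to a known identity."""
    THRESHOLD = 0.9   # hypothetical confidence needed to declare a match

    def __init__(self):
        self.belief = {}   # (anon_id, known_id) -> accumulated belief

    def observe(self, anon_id, known_id, weight):
        """Fold in one coincidence, treating observations as independent evidence."""
        key = (anon_id, known_id)
        prior = self.belief.get(key, 0.0)
        # combine independent pieces of evidence: 1 - (1-a)(1-b)
        self.belief[key] = 1 - (1 - prior) * (1 - weight)
        return self.belief[key] >= self.THRESHOLD

linker = IdentityLinker()
# First coincidence (right place, right time): noted, but below threshold.
print(linker.observe("anon-surfer", "known-you", 0.7))   # → False
# Second coincidence: belief rises to 1 - 0.3 * 0.3 = 0.91, match declared.
print(linker.observe("anon-surfer", "known-you", 0.7))   # → True
```

Note that a rejected match is never thrown away: the weight stays in the table, waiting for the next coincidence to push it over the line.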

This means that protecting your privacy is not an occasional or one-off activity, not something you can enable when you need protection and disable when you don’t. If you have reasons to split between two or more personalities – and sex workers are not the only or even the widest social group here; most politicians and showbiz celebrities have very similar issues – the task of keeping your privacy should become a strategy with clearly identified goals and conditions, and a well-defined process that fulfils and supports it.

And there’s definitely more of this to come. Wearables and IoT stuff, which are only taking their first steps into the ‘big Internet crowd’, will add heavily to this world of glass. The rise of data mining and neural networks will very soon make it simple to conduct high-quality automated research based on indistinct and incomplete information. So it’s a good moment to stop reading, go outside, look around, and breathe in the air of freedom without the risk of being noticed by anyone – or anything. The chances are very high that your kids will only dream of the times when privacy was achievable so easily.

(Picture credits: many thanks for the playing kittens to Stuart Rankin)

When the theft is inevitable

The hack of the Equifax data centre, followed by Yahoo’s revelation that all 3bn of its user accounts had been exposed (in contrast to the 1bn reported before), once again drew attention to the exposure of our private and personal data retained by global information aggregators. Due to the enormous amounts of information they hold about you, me, and millions of others, they are quite a catch for cyber criminals. As the number of attacks similar to the one that targeted Equifax, and their level of sophistication, will undoubtedly increase in the near future, so will the chance of your personal data ending up in the hands of criminals.

While there is little we can do about Equifax and its security competencies, we certainly can do a lot more about platforms and services within our control. I am not talking about social networks here; surprisingly, the fact that we understand the risks they pose to our privacy helps us perform some form of self-moderation when sharing our private details through them.

Such institutions as banks, insurance companies, online retailers, payment processors, and major cross-industry service providers like BT, the NHS, or the DVLA, especially those under KYC or AML compliance obligations, hold enormous amounts of information about their customers, often without the customers realising it. The scope and value of this information extends far beyond payment card details. A hacker who gains access to a customer database held by any of those companies would almost certainly obtain an unconditional capability to impersonate any customer at any security checkpoint that does not require their physical presence (such as a telephone banking facility or a login form on a web site). For example, they could order a new credit card for themselves through your online banking account, or buy goods on Amazon in your name – and you’d never see any of it.

This means that we may soon face an even steeper rise in the number of identity thefts and related fraud offences, and the Equifax precedent shows that we should take reasonable steps to protect ourselves from them despite all the security assurances given to us by the information custodians. While in most cases we can’t influence online aggregators as to what details they keep and what security methods they employ, we can choose to strengthen the security checkpoints instead: by tightening identity checks, limiting the levels of access they grant us, and monitoring them for any suspicious activity.

Employing two-factor authentication is one of the best ways to tighten identity checks. If an online service offers it, use it. Even if an attacker manages to use your stolen identity to change your password through the legitimate password recovery procedure, they will be unable to sign in without access to your second factor.
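
For the curious, the most common second factor – the six-digit code from an authenticator app – is simply the time-based one-time password algorithm of RFC 6238. A minimal sketch in Python (SHA-1 variant, six digits, 30-second step), checked against the RFC’s own test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at time 59
# yields "94287082" for eight digits, hence "287082" for the usual six.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # → 287082
```

Since the code is derived from a shared secret and the current time, a stolen password alone gets the attacker nowhere: they would also need the device holding the secret.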

Limiting access levels is primarily about setting artificial limits on the actions that you – or an impostor – can perform with your account. These include maximum amounts of money that can be spent in one day or month, hours of the day during which the account may be accessed, permitted locations, and so on. Many online services support such limitations, and it’s wise to use them. This is mainly a corrective facility that helps minimise your losses should your account get hacked.
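
Under the hood such limits boil down to simple rules evaluated before each transaction goes through. A toy sketch with a purely hypothetical policy (a 500 daily cap and a 07:00–23:00 access window – not any real bank’s API):

```python
from datetime import datetime

# Hypothetical self-imposed policy, illustrative values only.
LIMITS = {"daily_max": 500.0, "allowed_hours": range(7, 23)}

def allowed(amount, spent_today, when):
    """Reject a transaction that would breach the daily cap or the time window."""
    if spent_today + amount > LIMITS["daily_max"]:
        return False                      # corrective: caps the impostor's damage
    if when.hour not in LIMITS["allowed_hours"]:
        return False                      # blocks night-time raids on the account
    return True

print(allowed(120.0, 450.0, datetime(2018, 3, 1, 12, 0)))  # → False (over the cap)
print(allowed(40.0, 450.0, datetime(2018, 3, 1, 2, 0)))    # → False (outside hours)
print(allowed(40.0, 450.0, datetime(2018, 3, 1, 12, 0)))   # → True
```

The value of such rules is precisely that they apply to the legitimate owner and the impostor alike: whoever holds the credentials, the damage per day is bounded.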

Monitoring is about setting up e-mail or text notifications that inform you about any activity, usual and unusual, around your account. Having a notification system in place is often the fastest way to discover that your account has been hacked. Checking the consistency of your account data manually from time to time helps a lot too.

Finally, it is always a good idea to follow the principle of least disclosure. If a service doesn’t ask you for some details, or allows you not to answer – don’t give the details away just because. It invariably turns out that the less a service knows about you, the better it is for you. Likewise, if you are offered a choice between providing less safe and safer details, choose wisely. For example, setting up a recurring payment to be collected by direct debit is safer than having it charged monthly to a credit card.

To summarise the above:

1. Most online services suck at security; expect your details to be stolen one day.

2. Minimise the impact of the prospective theft by securing your sign-ins, limiting legitimate access, and setting up access monitoring.

3. Don’t give your personal information away unless required/forced to do so.

Equifax hacked

Equifax says that personal details it held for 143 million U.S. consumers have been stolen by hackers.

We are obviously going to see more of this in the near future. Large personal data aggregators like Equifax, Experian, global banks, and large healthcare service providers are among the most attractive targets for data thieves. Unlike social media services like Facebook, which typically use complicated and highly distributed systems to store and access user account data, smaller aggregators like credit agencies or banks use far less sophisticated databases, making their data much easier to steal.

But what is more important, the theft implies a worrying conclusion: any personal data that played the role of a virtual ‘fingerprint’ by being strongly and privately bound to a particular person stops being one. Our social security numbers, mobile service providers, and monthly spending cannot be relied upon any more – at least not to the extent they used to be. This hack is a precursor of a forthcoming fundamental change in the whole ecosystem of authenticating and identifying citizens based on their personal data.

Antivirus 2017: Security with a hint of surveillance

Modern antivirus tools use controversial techniques to monitor the user’s HTTPS traffic, which may affect the security of the whole system and put the user’s privacy at risk. What powers should be deemed acceptable for tools protecting us from malware, and where is the red line?

Read my new article “Antivirus 2017: Security with a hint of surveillance” in the fresh issue of (IN)SECURE Magazine.