Moby-Dick; or, the Threat

Norwegian fishermen caught a white beluga whale carrying a harness with surveillance equipment attached to it. Marine experts believe the whale had been trained by the Russian navy before escaping from its base in Murmansk and heading west through the waters of the Arctic Ocean.

I doubt the whale had anything to do with the Russian navy, for a number of reasons (and not because of the ‘St Petersburg’ label on its harness, which, despite its absurdity, rather counts towards the opposite), but, really, there is nothing that would have prevented the navy from being the actual origin of the animal. For many years the Russian military has been experimenting with training marine mammals to guard its bases in the Arctic, not to mention that one of its first initiatives in Ukrainian Crimea after temporarily anschluß’ing the peninsula in 2014 was restoring a long-abandoned Soviet dolphin training facility in Sevastopol.

What’s worth noting about this curious incident is that we have got used to believing that attacks, intrusions, and security compromises originating from man-made sources normally rely on man-made technologies. The Norwegian story illustrates that it is a mistake to underestimate the risks posed by nature’s own creations, in particular because of their natural ability to disguise themselves, and because of our own, very human, propensity to think of ourselves as being above nature and, conversely, of nature as being well beneath us.

Trained animals, while probably among the most significant, are not the only man-aided source of security threats with their origins in the natural environment. There are geological threats: man-provoked floods, rainfalls, earthquakes, and tsunamis. There are biological threats: deliberately inflicted invasions of vermin, planted insect-borne diseases, and the spread of weed species capable of taking over large areas of land. Such threats are very hard to recognise, very hard to investigate, and very hard to mitigate.

Apart from the direct risks of proactive exploitation of geological and biological opportunities, nature opens up a huge number of covert channels which can be used to spy on an opponent’s activities. For example, excessive waste from a well-concealed military base can lead to an increase in the population of foxes and other scavengers in the surrounding area. However small such deviations may be, modern monitoring and data mining facilities are likely to be capable of detecting them. Modern AI (let’s just call it that) is exceptional at detecting and matching patterns, and nature provides countless opportunities for it to learn what the normal state of things should look like – and what it should not.
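Just to make the ‘detecting small deviations’ idea concrete, here is a rough sketch (entirely invented, not tied to any real system) of how a monitoring pipeline could flag an unusual rise in scavenger sightings against a learned baseline; the numbers and the threshold are purely illustrative.

```python
import statistics

# Hypothetical weekly counts of fox sightings around an area of interest.
baseline = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4]   # "normal" weeks
latest_weeks = [4, 9, 11]                          # recent observations

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

for week, count in enumerate(latest_weeks, start=1):
    z = (count - mean) / stdev      # distance from the learned "normal"
    if z > 3:                       # illustrative alerting threshold
        print(f"week {week}: {count} sightings (z={z:.1f}) - worth a closer look")
    else:
        print(f"week {week}: {count} sightings (z={z:.1f}) - within normal variation")
```

Real systems would use far richer models than a z-score, of course, but the principle is the same: learn the background, then flag whatever deviates from it.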

Undoubtedly, crafting attacks involving nature is quite demanding, and brain- and labour-intensive. Setting them up requires a lot of investment and effort, affordable only to the richest of this world. Still, it’s all about ‘Il fine giustifica i mezzi’ – the end justifies the means – in the end, isn’t it?

Picture credit: Guardian

Land of Ears

Has it ever occurred to you that, for a while now, at any given moment we are being constantly listened to by dozens and hundreds of electronic ears? I am not primarily talking about our very own personal buddies Alexa and Siri, but rather about the diversity of less identifiable smart devices and apps belonging to our friends, co-workers, fellow commuters, and occasional passers-by. Talking about a trip to Spain over lunch with your friend, only to find numerous ads for Andalusian seaside villas in your news feed the very next day, has become the new norm and doesn’t surprise us any more. But talking about the same trip with your phones off and still seeing those villas pop up in your search results – is that something we can expect in the near future?

Voice and speech recognition systems are only getting better and cleverer, drawing on cutting-edge AI and machine learning technologies. The apps that listen to us, and the massive data centres behind them, are no longer primitive pattern-matching monkeys. They are learning to understand more languages and more intonations; they are learning to identify the mood and the reasoning of the speaker; they are learning to decode Yorkshire dialect and Cockney slang. And given enough storage capacity, they can easily hold voice patterns for every person they are aware of – and use that database to identify unknown speakers on any recordings they get their hands on.
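To picture how such a voice-pattern database could be used, here is a rough sketch (my own illustration, not a description of any real product) of matching an unknown voice against stored speaker embeddings by cosine similarity; the embeddings, names, and threshold are entirely made up, and a real system would obtain the vectors from a trained voice model.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voice embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrolled speakers: name -> embedding produced by some voice model.
enrolled = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def identify(unknown: np.ndarray, threshold: float = 0.85) -> str:
    """Return the best-matching enrolled speaker, or 'unknown' if nothing is close enough."""
    name, score = max(((n, cosine(unknown, e)) for n, e in enrolled.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else "unknown"

print(identify(np.array([0.88, 0.15, 0.35])))   # most likely "alice"
```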

What this ultimately means for you is that a random okaygoogle on your fellow tube traveller’s phone can overhear you chatting with your girlfriend, identify both of you by the patterns of your voices, recognise what you were talking about, and use that information to spam you.

This indicates a major paradigm shift. For many years, we used to be in control of most information flows between us, people, and computer systems. We encrypted sensitive information. We restricted access to critical computer systems. We assigned labels to data to separate sensitive material from unclassified, and controlled the transition from one classification level to another. We could control every milestone in the life of the data we possessed, from its genesis, through its useful lifetime, to its disposal. We could apply verifiable security measures at each such milestone and be sure that our data remained adequately protected at all times.

Today, we are not in that kind of control anymore. Our information is slipping out of our hands, being captured all the time without us even knowing about it. You can assign the highest sensitivity level to your strategic roadmaps and encrypt them in transit and at rest, but you will never know whether your coffee maker made a note of this morning’s private telephone call with your CFO about them. You can turn Siri off and even throw away your phone, but you won’t have the slightest idea of which smart TVs, fitness trackers, or Furby Booms are quietly recording your verbal interactions, or when. The walls are, quite naturally, growing ears.

And this means that we should be more careful about what we say out loud, not only in private but in public places as well – especially in public places. Anything you say may be used against you. Old-school methods, like going out into the fields to discuss a confidential matter tête-à-tête, may well get a new lease of life. Sign languages will gain huge popularity in the near future, too.

Not sure about you, but I’m definitely signing up for the course.

(Don’t) Delete Facebook

Everyone’s so agitated about Cambridge Analytica and #deletefacebook, as if they hadn’t been warned about this sort of thing for a decade or more. The easiest way to conceal information that makes you vulnerable (whatever that is for each of us) is as plain as – surprise! – not giving it away.

It is quite amusing that Facebook, and Mark Zuckerberg personally, ended up the biggest victims of the scandal. Not Cambridge Analytica, not Alex the intriguer, not the ‘I-did-nothing-wrong’ Alex. No, it’s Facebook.

Sorry guys, but Facebook’s role in this story is as pure as a drop of water. Facebook, openly and honestly, offers you a stage and a loudspeaker. It doesn’t force you to use them to reveal your secrets. It doesn’t force you to use them at all. It’s your choice whether to take the stage, and what exactly to shout into the loudspeaker – and what not to.

This is a good point to recall the next time an unknown app asks for your permission to access your contacts, mailbox, or news feed. You probably don’t share everything you write on your page with your mum, so why should you share it with some s̶u̶s̶p̶i̶c̶i̶o̶u̶s̶ ̶l̶a̶d̶s̶ ̶i̶n̶ ̶g̶r̶a̶y̶ ̶h̶o̶o̶d̶i̶e̶s̶ respectable company from Cambridge?

Picture credit: https://www.freeimages.com/photo/the-missing-delete-button-1455215

(A Belated) Christmas Story

Last Christmas, I had a rather bizarre targeting experience.

Just a few days before Christmas, during my final gift hunt, I visited a large department store to buy a present for a member of my family. I am not very good at shopping, yet I was lucky enough to find the ideal option – one I immediately knew they would love – take it to the cash desk, and leave the store quickly.

Later that day, I started seeing new ads in my social network feeds. The strangest thing about them was that they were promoting the exact same brand I had chosen for my family member a few hours earlier.

It’s no secret that we are all being tracked 24/7 by data aggregators, with every single step we make being monitored, recorded, and used later to sell us goods ‘carefully picked’ for us. Every one of us has come across that annoying situation a thousand times: our browsing all of a sudden becomes flooded with ads for the goods and brands we searched for a couple of days earlier (and some of us may even have ended up with a spoiled surprise after sharing a device with a girlfriend). And with voice assistants coming into play, it is quite enough just to mention a brand aloud anywhere near your phone to be caught.

But in this case everything was utterly different.

– It was quite unusual for me to buy a product of this particular kind and brand. To be exact, I had never bought anything similar before.

– It was a purely offline purchase, and a nearly random choice. No preliminary research. No shopping around for different brands. Saw it, liked it, bought it.

– The purchase was made at a large department store, which made tracking the chosen brand by location next to impossible. The location service on my phone was off anyway.

– Needless to say, my voice assistant was, and always is, off.

After spending quite a bit of time trying to figure out the source of the leak, I can say for sure that the only link between me and the purchase was the debit card I used to pay for the gift. And this raises two uncomfortable questions: plainly, ‘who?’ and ‘how?’

The transaction involves the shop, which sells me the item, and the bank, which charges my card. These two business operations are not, per se, connected to each other – the shop has no access to my card details, and the bank has no access to the contents of the till receipt. Yet the data aggregator needs both pieces of information to know about the purchase!

Obviously, the shop leaked my basket. But how did the aggregator manage to match it against my identity?

There are only two plausible mechanisms for doing that without violating payment card industry regulations, and, frankly, I don’t know which of them would be worse to admit to using.

The first is that the bank communicates all my transactions to the aggregator. This is easy to do technically, and by signing up with your bank you have effectively allowed them to treat your account data as they wish (re-read your current account terms of use if you don’t believe me). It is then easy for the aggregator to establish a correspondence between the basket and the card by the amount, the time of purchase, and the merchant. One implication of this version is that your income, financial position, and spending habits are known to a much bigger crowd, and in much greater detail, than you supposed. And considering the recent leaks from Equifax, that crowd grows to a truly enormous size.
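For illustration only – and with entirely invented records – this is roughly all the matching the first version requires: a simple join of the leaked basket with the bank’s transaction feed on merchant, amount, and time.

```python
from datetime import datetime, timedelta

# Leaked by the shop: what was bought, where and when, and for how much.
basket = {"merchant": "BigStore", "total": 49.99,
          "time": datetime(2017, 12, 22, 14, 3), "items": ["BrandX scarf"]}

# Supplied by (or obtained from) the bank: who paid whom, how much, and when.
transactions = [
    {"cardholder": "J. Smith", "merchant": "BigStore", "amount": 49.99,
     "time": datetime(2017, 12, 22, 14, 4)},
    {"cardholder": "A. Jones", "merchant": "BigStore", "amount": 12.50,
     "time": datetime(2017, 12, 22, 14, 5)},
]

def match(basket, transactions, window=timedelta(minutes=5)):
    """Find the transaction that pairs a named cardholder with the leaked basket."""
    for t in transactions:
        if (t["merchant"] == basket["merchant"]
                and abs(t["amount"] - basket["total"]) < 0.01
                and abs(t["time"] - basket["time"]) <= window):
            return t["cardholder"]
    return None

print(match(basket, transactions))   # -> "J. Smith"
```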

The second mechanism clears the bank and assumes that the aggregator uses the few pieces of intelligence it holds about us to establish the match. This is much, much harder, but not impossible. With each purchase you make, the shop gets hold of a tiny piece of your card details (as a general rule, the last four digits of the card number – you will often find them printed on your shop receipts). The shop can send this little piece to the aggregator together with the contents of your basket. The aggregator would then add this piece of evidence to the huge neural network it uses to store all sorts of information about us – our addresses, recent purchases, spending habits, and shopping routes, accumulated over the years. The network would then use everything it knows about the people whose card numbers end in the same four digits as those provided by the shop to assign a probability to the assumption that the purchase in question was made by each particular person on that list. The person with the highest probability, or perhaps a few of them, would then be selected as a target for the next campaign from that brand.
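Again purely for illustration (the profiles, weights, and scoring are invented, and the ‘neural network’ is reduced here to a few hand-written features), the second version boils down to scoring every known person whose card ends in the same four digits and picking the most plausible candidate:

```python
# Leaked by the shop: basket details plus the last four digits from the receipt.
purchase = {"last4": "4321", "city": "London", "category": "fashion"}

# Accumulated aggregator knowledge about known individuals (entirely made up).
profiles = [
    {"name": "J. Smith", "last4": "4321", "home_city": "London", "likes": {"fashion", "books"}},
    {"name": "K. Brown", "last4": "4321", "home_city": "Leeds",  "likes": {"gardening"}},
    {"name": "A. Jones", "last4": "9999", "home_city": "London", "likes": {"fashion"}},
]

def score(person, purchase):
    """Crude likelihood that this person made this purchase."""
    if person["last4"] != purchase["last4"]:
        return 0.0                                   # wrong card ending: ruled out
    s = 0.5                                          # base score for matching digits
    if person["home_city"] == purchase["city"]:
        s += 0.3                                     # plausibly nearby
    if purchase["category"] in person["likes"]:
        s += 0.2                                     # fits known interests
    return s

best = max(profiles, key=lambda p: score(p, purchase))
print(best["name"], score(best, purchase))           # -> J. Smith 1.0, the ad target
```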

In either case, it’s not good. The main issue is that, one way or another, we don’t know the depth of the aggregators’ knowledge about us. And when you don’t know the rules of the game, you start suspecting the worst.

And who knows whether the worst you are suspecting – that is, the worst as you know or assume it – is the worst worst there is.

N.B. If I start seeing ads from that brand again in the near future, I will know for sure it has nothing to do with the bank.

Picture credit: pxhere.com

Nowhere to hide. How technology is taking control of our private lives.

Last week Gizmodo published an entertaining story about a part-time sex worker, Leila, who found herself in distress after spotting a few of her ‘secret life’ encounters among the friend suggestions on her ‘public life’ Facebook account, despite doing her very best to keep the two identities apart. While Facebook is traditionally reluctant to reveal how it selects friend candidates (just as it is about nearly all its social algorithms), it is quite clear that the location service activated on her ‘public life’ phone played no small part. Reports from many Facebook users suggest that random members of the public whom they had met a few times on the train to work often ended up on their suggested-friends list. This research supports the point, too.

This can hardly be called something we hadn’t come across before; everyone knows that services like Facebook collect our location information to ‘provide us with a better user experience’, as every one of us most certainly acknowledged after looking through the social network’s usage policy (just kiddin’, mate – of course no one in their right mind ever did that). However, Leila’s experience helped draw attention to the hidden implications of handing this powerful right to Facebook, and gave us yet another reason to rethink our whole approach to offline privacy. Indeed, every single connected person out there leaves an enormous trail of evidence. Twenty years ago, each of us was a creeping, pinhead-sized point on the map. Today, we look more like endlessly unwinding balls of yarn, with the escaping thread being our ever-expanding online footprint. Needless to say, it is very easy to get to the ball by following the thread, and it is very easy for someone to pull in the whole thread once they have caught the ball.

Clearly, the only way to stay incognito is to get rid of the thread. This can be harder than you might think at first, and switching the location service off might not be enough. The Nectar card you scanned at the petrol station in the morning, the Uber that takes you home after a night out, the YouTube video you watched over a Costa WiFi – all give away your location throughout the day without any assistance from the location service. No one can tell where this data travels next. Besides being used to send you straightforward marketing materials or to tailor the price you pay for a service, it may easily be aggregated by third-party businesses (just have another quick look through that usage policy) and then used for any imaginable purpose.

To get rid of your data trail, it is not enough to disable the standard ‘senses’ of your phone (location, WiFi connectivity, camera, and mic) to stop direct information collection. You should also walk through the list of your apps and critically assess each of them for its ability to distinguish your identity from everyone else’s. Typically this means that you have some form of account with it. You never know what information about you, and when, an app may be collecting and accumulating in its knowledge base. Enough pieces of seemingly harmless or even anonymised detail can be put together to establish the identity of a specific person with sufficient probability – just as famously happened with the taxi trips of NYC celebrities.
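The NYC taxi case is worth a tiny, invented illustration: a single publicly known sighting of a person (a time and a place) can be enough to pull their trips out of an ‘anonymised’ data set. The records below are made up, but the linkage logic is the point:

```python
from datetime import datetime, timedelta

# "Anonymised" trip records: no names, just pickup time, pickup place, and fare.
trips = [
    {"pickup_time": datetime(2014, 7, 1, 22, 15), "pickup_place": "Club XYZ", "fare": 18.5},
    {"pickup_time": datetime(2014, 7, 1, 22, 40), "pickup_place": "Club XYZ", "fare": 7.0},
    {"pickup_time": datetime(2014, 7, 2, 9, 5),   "pickup_place": "Midtown",  "fare": 12.0},
]

# One piece of outside knowledge: a photo of a known person getting into a cab
# outside Club XYZ at roughly 22:10 on 1 July.
sighting = {"time": datetime(2014, 7, 1, 22, 10), "place": "Club XYZ"}

def link(sighting, trips, window=timedelta(minutes=10)):
    """Return trips that plausibly belong to the sighted person."""
    return [t for t in trips
            if t["pickup_place"] == sighting["place"]
            and abs(t["pickup_time"] - sighting["time"]) <= window]

for t in link(sighting, trips):
    print("Candidate trip:", t)   # a harmless-looking record is now attributed to a name
```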

But that’s not all either, and the final aspect is a real toughie. The issue is that certain types of apps collect enough information about us to make assumptions about our identities based on surprisingly indirect facts. This is predominantly a capability of highly diversified collections of apps from the same vendor, offering a number of services of different kinds.

Suppose you plan to take your cat to a vet for the first time. You open Google search and look for vets in your area. Once you are satisfied with your choice, you ring them up, arrange an appointment, and add a timed entry to your Google Calendar. If Google is really lucky that day, you also use its Maps service to find the best route to get there.

After a few days you’re off to the vet. Now, if you use one of Google’s services anonymously, or from your second phone, over the vet’s WiFi network while waiting to be called in, Google can make an assumption, based on the knowledge it already holds about the ‘known you’ (a selection of places you are likely to be at that time of day – or even the exact place, if you used Maps), that this anonymous person is likely to be you. It may not be certain about it at first (meaning it would assign a lesser weight to this assumption and probably ignore it this time – while still keeping a note of it somewhere), but after one or two coincidences of this kind it will have evidence of sufficient weight to associate the anonymous surfer with your known identity. Neural networks are particularly good at tracing and aggregating large arrays of data to identify higher-level relationships between seemingly unrelated facts.
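That growing weight of coincidences needs nothing more exotic than a running score per candidate identity, bumped every time an anonymous session turns up where a known user was expected to be. A rough sketch of the idea, with invented expectations, weights, and threshold:

```python
from collections import defaultdict

# Where known users were expected to be, according to their calendars / Maps routes.
expected = {
    ("you",       "HappyPaws Vets", "Tue 10:00"): 0.6,   # calendar entry + Maps route
    ("neighbour", "HappyPaws Vets", "Tue 10:00"): 0.1,   # merely lives nearby
}

# Anonymous sessions observed from the vet's WiFi on two consecutive visits.
observations = [("HappyPaws Vets", "Tue 10:00"), ("HappyPaws Vets", "Tue 10:00")]

scores = defaultdict(float)
for place, slot in observations:
    for (person, exp_place, exp_slot), weight in expected.items():
        if (exp_place, exp_slot) == (place, slot):
            scores[person] += weight          # each coincidence adds its weight

LINK_THRESHOLD = 1.0                          # illustrative confidence bar
for person, total in scores.items():
    verdict = "linked to the anonymous visitor" if total >= LINK_THRESHOLD else "kept on file"
    print(f"{person}: score {total:.1f} - {verdict}")
```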

This means that protecting your privacy is not an occasional or one-off activity, not something you can enable when you need protection and disable when you don’t. If you have reasons to split yourself between two or more personalities – and sex workers are not the only, or even the largest, social group here; most politicians and showbiz celebrities have very similar issues – then keeping your privacy should become a strategy, with clearly identified goals and conditions and a well-defined process that fulfils and supports it.

And there is definitely more of this to come. Wearables and IoT devices, which are only taking their first steps into the ‘big Internet crowd’, will add heavily to this world of glass. The rise of data mining and neural networks will very soon make it simple to conduct high-quality automated research based on indistinct and incomplete information. So now is a good moment to stop reading, go outside, look around, and breathe in the air of freedom without the risk of being noticed by anyone – or anything. The chances are very high that your kids will only dream of the times when privacy was so easily achievable.

(Picture credits: many thanks for the playing kittens to Stuart Rankin)

Queen Elizabeth running Windows XP: how big is the issue?

‘Britain’s Largest Warship Uses Windows XP And It’s Totally Fine’, says Michael Fallon, UK Defence Secretary. So is it really – is it OK to run a nearly twenty-year-old operating system on a strategic warship?

Unfortunately, what we know so far is far too little to come up with any justified answers. The statement as it is being put in the media (‘the ship runs on Windows XP’) is utterly vague, unprofessional, and misleading. A warship like Queen Elizabeth has hundreds of different subsystems responsible for tasks of greater or lesser importance. The first thing to establish, therefore, is how deeply Windows XP is involved in the general routine of operating the warship.

In other words, are those XP machines responsible for crew entertainment? Storing/accessing the logbook? Managing aircraft flight schedules? Tuning up engines? Transmitting cryptograms to the on-shore facilities?

Are they connected to the warship’s local network? To the Internet? If they are, do they have the latest IPS software installed? Any firewalls? Any certified firewalls?

What kind of software runs on those machines? Who can access them? What tasks are they able to perform?

Only after answering these and similar questions would it be possible to establish whether those XP machines present any risk to the operation of the warship, and the extent of that risk. Anything else would be no different from speculating that your neighbour is an extremist just because you once saw them with a big butcher’s knife through their kitchen window.

And, by the way, it’s not only about Windows XP’s vulnerability to WannaCry or any other form of malware. Beyond that, many of the genuine security technologies used in Windows XP are simply outdated. A telling example: the most recent version of TLS, the main communications security protocol, that Windows XP supports natively is 1.0 – and it was officially retired a year ago. This means that any ‘protected’ communications the warship transmits from its XP machines would not actually be protected, and could easily be eavesdropped on by third parties.
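To see what that means in practice, here is a minimal sketch (my own example, with made-up certificate paths and port) of a modern server that insists on TLS 1.2 or newer – exactly the kind of endpoint a client limited to Windows XP’s native TLS 1.0 stack would fail to talk to:

```python
import socket
import ssl

# Require TLS 1.2+; a client that can only offer TLS 1.0 fails the handshake.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain("server.crt", "server.key")   # hypothetical certificate files

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()               # TLS 1.0 clients are rejected here
        print("Negotiated", conn.version(), "with", addr)
```

Conversely, an XP-era endpoint that only speaks TLS 1.0 forces everyone talking to it down to a protocol version that modern parties now refuse for good reason.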

And yet, none of that would matter if those Windows XP machines are only used by the crew to kick ass in Call of Duty in their free time, of course.

Antivirus 2017: Security with a hint of surveillance

Modern antivirus tools use controversial techniques to monitor the user’s HTTPS traffic, which may affect the security of the whole system and put the user’s privacy at risk. What powers should be deemed acceptable for tools protecting us from malware, and where should the red line be drawn?

Read my new article “Antivirus 2017: Security with a hint of surveillance” in the latest issue of (IN)SECURE Magazine.