On Document Certification

Just like many other countries, the UK has a concept of document certification: a legal mechanism that allows residents to prove their identity by supplying certified photocopies of their identification documents (usually a driving licence and a recent utility bill) instead of the originals. This comes in particularly handy when someone needs to prove their identity to a distant entity without travelling there physically or sending the originals by post. In most scenarios, certified documents carry the same legal weight as the corresponding originals.

The certification service is usually provided by public entities or persons ‘of good standing’, such as solicitors, local councils, notaries, and the Post Office. An authorised official compares the photocopy to the original document, makes sure they look exactly the same, and certifies the authenticity of the copy with their signature and/or seal.

(pictured: a copy of a council tax bill certified by the Post Office)

From the point of view of cryptography, document certification is a typical trusted third party scheme. By certifying your documents, the official acts as an independent and verifiable third party with no conflict of interest with either the originator or the recipient of the documents.

Sadly, the scheme, as used in the UK and other European countries, is not entirely flawless. There are situations in which a malicious individual or company can fraudulently use it to steal the document owner’s identity and use it for their own purposes. Imagine the following scenario:

1) Your new tax advisor asks you to prove your identity by sending her certified copies of your ID and proof of address (this is a genuine legal requirement imposed by Anti-Money Laundering (AML) regulations).

2) She then remotely registers a limited company in Jersey. When asked for her ID by the company registrar (who follows the same AML rules), she sends them the certified copies she’d received from you. The registrar has no reason not to trust certified documents, so they accept them.

3) Congratulations, now you are a rightful and fully accountable (with a stress on ‘fully accountable’) owner of that company in Jersey. Profit? Not for you, I’m afraid.

Cryptographers call this type of exploitation a ‘man in the middle’ attack. Although far more common in electronic communication protocols, it can equally well be used to hack offline protocols like the one above. What you need for this attack to work, apart from the actors – a sender, a receiver, and, of course, that dark man in the middle who will be playing pass the parcel with both of you – is one important ingredient: the absence of proof of origin.

The company registrar in Jersey has no means of verifying that the documents they receive come from the real you. All that certification really proves is that the photocopies are authentic. It doesn’t prove that you made the copies willingly, knowingly, or with any intent to set up a company. You, in turn, have no means of ensuring that the copies you sent to the tax advisor won’t travel somewhere else. That’s how ‘proof of authenticity of a photocopy’ is mistakenly substituted for ‘proof of ID’.

So how do you avoid getting your identity stolen by a disreputable service provider?

Try to avoid photocopy certification in the first place, even if the option is on the table. If there is a viable way to prove your identity in person, do it. A 30-mile journey is nothing compared to dealing with the consequences of identity theft.

If you still have to do it remotely, perform due diligence on whoever is requesting the certification. Are they a reputable company? How long have they been around? Do they comply with the Data Protection Act?

If possible, write a context-specific note on the certified copy describing what the copy is intended for. For example,

Attn: Ms J Roberts, City Solicitors LLP, Re: Capital Gains Tax Affairs, in response to your letter Ref 30284-1 of 30/03/2018, 02/04/2018.

This will prevent the document from being re-used for a different purpose, as writing of this kind will almost certainly attract the attention of an unintended recipient.

I believe the rules of document certification should change along exactly these lines, to actually provide ‘proof of ID’ for the requester and ‘proof of use’ for the submitter, rather than a dubious ‘proof of authenticity of a photocopy’. What we need is to extend the certifying writing with two pieces of information – ‘labels’ – one from the requester, and another from the submitter.

The certifying official would then write both labels on the photocopy being certified, for example,

I certify these documents following a request from High Street Solicitors LLP Ref #311-235-1 (requester’s label), to be used in a civil case #213-1 ONLY (your label)

– thus unequivocally binding the certification to that specific context.

Having received a certified document created in this manner, the requester would know that it was created in response to their specific request. The document holder, in turn, could be sure that the photocopy can only be used for the purpose declared in their part of the certifying writing.
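
For the cryptographically inclined, here is a minimal sketch of what the digital analogue of such context-bound certification could look like. All labels and keys are invented for illustration, and an HMAC under the official’s secret key stands in for what would in practice be a public-key signature:

    import hashlib
    import hmac

    # Toy model of context-bound certification: the copy AND both context
    # labels are bound into one signed statement, so the certification
    # cannot be reused in any other context.
    def certify(copy: bytes, requester_label: str, submitter_label: str,
                official_key: bytes) -> str:
        statement = b"|".join([hashlib.sha256(copy).digest(),
                               requester_label.encode(),
                               submitter_label.encode()])
        return hmac.new(official_key, statement, hashlib.sha256).hexdigest()

    def verify(copy, requester_label, submitter_label, official_key, tag) -> bool:
        return hmac.compare_digest(
            certify(copy, requester_label, submitter_label, official_key), tag)

    key = b"certifying-official-secret"   # hypothetical key of the official
    tag = certify(b"<scan of driving licence>",
                  "High Street Solicitors LLP Ref #311-235-1",
                  "to be used in a civil case #213-1 ONLY", key)
    # The right context verifies; any other context fails:
    assert verify(b"<scan of driving licence>",
                  "High Street Solicitors LLP Ref #311-235-1",
                  "to be used in a civil case #213-1 ONLY", key, tag)
    assert not verify(b"<scan of driving licence>",
                      "Jersey company registrar", "company formation", key, tag)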

(Don’t) Delete Facebook

Everyone’s so agitated about Cambridge Analytica and #deletefacebook, as if they hadn’t been warned about this stuff for over a decade. The easiest way to conceal information that makes you vulnerable (whatever that is for each of us) is as plain as – surprise! – not giving it away.

It is quite amusing that Facebook, and Mark Zuckerberg personally, came out as the biggest victims of the scandal. Not Cambridge Analytica, not Alex the intriguer, not the ‘I-did-nothing-wrong’ Alex. No, it’s Facebook.

Sorry guys, but Facebook’s role in this story is as pure as a drop of water. Facebook, openly and honestly, offers you a stage and a loudspeaker. It doesn’t force you to use them to reveal your secrets. It doesn’t force you to use them at all. It’s your choice whether to take the stage, and what exactly to shout into the loudspeaker – and what not to.

This is a good point to recall the next time an unknown app asks for permission to access your contacts, mailbox, or news feed. You probably don’t share everything you write on your page with your mum, so why share it with some s̶u̶s̶p̶i̶c̶i̶o̶u̶s̶ ̶l̶a̶d̶s̶ ̶i̶n̶ ̶g̶r̶a̶y̶ ̶h̶o̶o̶d̶i̶e̶s̶ respectable company from Cambridge?

Picture credit: https://www.freeimages.com/photo/the-missing-delete-button-1455215

(A Belated) Christmas Story

Last Christmas, I had a quite bizarre ad-targeting experience.

Just a few days before Christmas, during my final gift hunt, I visited a large department store to buy a present for a member of my family. I am not very good at shopping, yet I was lucky enough to find the ideal option – one I immediately knew they would love – take it to the cash desk, and leave the store quickly.

Later that day, I started noticing new ads in my social network feeds. The strangest thing about them was that they were promoting the exact same brand I had chosen for my family member just a few hours earlier.

It’s no secret that we are all being tracked 24/7 by data aggregators, with every single step we take being monitored, recorded, and used later to sell us goods ‘carefully picked’ for us. Every one of us has come across that annoying situation a thousand times, when our browsing experience suddenly becomes flooded with ads for goods and brands we searched for a couple of days before (and some of us may even have ended up with a spoiled surprise after sharing a device with a girlfriend). And with voice assistants coming into play, it is enough just to mention a brand aloud near your phone to be caught.

But in this case everything was utterly different.

– It was quite unusual for me to buy a product of this particular kind and brand. To be exact, I had never bought anything similar before.

– It was a purely offline purchase, and a nearly random choice. No preliminary research. No shopping around for different brands. Saw it, liked it, bought it.

– The purchase was made at a large department store, which made tracking the chosen brand by location next to impossible. The location service on my phone was off anyway.

– Needless to say, my voice assistant was, and always is, off.

After spending quite a bit of time trying to figure out the source of the leak, I can say for sure that the only link between me and the purchase was the debit card I used to pay for the gift. And this raises two uncomfortable questions: plainly, ‘who?’ and ‘how?’

The transaction involves the shop, which sells me the item, and the bank, which charges my card. These two business operations, per se, are not connected to each other – the shop has no access to my card details, and the bank has no access to the contents of the till receipt. Yet, the data aggregator needs both pieces of information to know about the purchase!

Obviously, the shop leaked my basket. But how did the aggregator manage to match it against my identity?

There are only two possible mechanisms for doing that without violating payment card industry rules, and, frankly, I don’t know which of the two is worse to admit.

The first is that the bank communicates all my transactions to the aggregator. This is easy to do technically, and by signing up with your bank you have effectively allowed them to treat your account data as they see fit (re-read your current account terms of use if you don’t believe me). It is easy for the aggregator to establish a correspondence between the basket and the card by the amount, the time of purchase, and the merchant. An implication of this version is that your income, financial position, and spending habits are known to a much bigger crowd, and at a much greater level of detail, than you supposed. And considering the recent leaks from Equifax, that crowd grows to a truly enormous size.
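
To show just how easy that correspondence is to establish, here is a toy sketch of the join; the records and field names are all invented for illustration:

    from datetime import datetime, timedelta

    # Hypothetical records: till receipts leaked by the shop, and card
    # transactions shared by the bank.
    receipts = [{"merchant": "BigStore", "amount": 49.99,
                 "time": datetime(2017, 12, 22, 14, 3),
                 "items": ["GiftBrand fragrance"]}]
    transactions = [{"merchant": "BigStore", "amount": 49.99,
                     "time": datetime(2017, 12, 22, 14, 4),
                     "cardholder": "J. Doe"}]

    def match(receipts, transactions, window=timedelta(minutes=5)):
        # Merchant + amount + a narrow time window is usually enough to
        # link a basket to a named cardholder with near certainty.
        for r in receipts:
            for t in transactions:
                if (r["merchant"] == t["merchant"]
                        and r["amount"] == t["amount"]
                        and abs(r["time"] - t["time"]) <= window):
                    yield t["cardholder"], r["items"]

    print(list(match(receipts, transactions)))
    # [('J. Doe', ['GiftBrand fragrance'])]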

The second mechanism clears the bank and assumes that the aggregator uses the pieces of intelligence they already hold about us to establish the match. This is much, much harder, but not impossible. With each purchase you make, the shop gets hold of a tiny piece of your card details (as a general rule, the last four digits of the card number – you will often find them printed on your shop receipts). They can send this little piece to the aggregator together with the contents of your basket. The aggregator would then add this piece of evidence to the huge pool of data they accumulate about us over the years – our addresses, recent purchases, spending habits, and shopping routes. A neural network trained over that pool would then consider every person whose card number ends with the same four digits as those provided by the shop, and assign to each a probability that the purchase in question was made by that particular person. The person with the highest probability, or maybe a few of them, would then be selected as targets for the next campaign from that brand.
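
A deliberately oversimplified sketch of that inference – a few hand-weighted rules with invented profiles rather than a real model – just to show the shape of the reasoning:

    # Score every known profile whose card ends in the leaked four digits
    # against the context of the purchase. All data and weights invented;
    # real aggregators would use far richer models.
    profiles = [
        {"name": "J. Doe", "card_last4": "1234",
         "home_city": "Reading", "usual_brands": {"GiftBrand"}},
        {"name": "A. Smith", "card_last4": "1234",
         "home_city": "Leeds", "usual_brands": {"OtherBrand"}},
    ]

    def score(profile, purchase):
        s = 0.0
        if profile["card_last4"] == purchase["card_last4"]:
            s += 1.0   # necessary, but far from sufficient on its own
        if profile["home_city"] == purchase["city"]:
            s += 2.0   # geography narrows the candidate list sharply
        if purchase["brand"] in profile["usual_brands"]:
            s += 1.0   # brand affinity, where any exists
        return s

    purchase = {"card_last4": "1234", "city": "Reading", "brand": "GiftBrand"}
    best = max(profiles, key=lambda p: score(p, purchase))
    print(best["name"])  # 'J. Doe' gets targeted with the next campaign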

In either case, it’s not good. The main issue is that, one way or another, we don’t know the depth of the aggregators’ knowledge about us. And when you don’t know the rules of the game, you start suspecting the worst.

And who knows whether the worst you are suspecting – that is, the worst as you know or assume it – is the worst worst there is.

N. B. If I start seeing ads from that brand again in the near future, I will know for sure that it has nothing to do with the bank.

Picture credit: pxhere.com

Undeniably smarter: a few more words on smart contracts

I believe a couple of statements from my last post need some clarification, the most important being that it is most certainly too early for lawyers to start looking for new qualifications – if they ever need to at all.

Self-enforcing contracts can’t replace human lawyers, and won’t do so in the foreseeable future. There will always be a place for a well-drafted written contract, just as there will always be a place for professionals who know how to compose one.

There is no contradiction here. Self-enforcing contracts have a very specific application area. They are perfectly suited to defining and enforcing relationships where the parties are subject to well-defined obligations that can be formalised and verified in a mathematical or logical way. Particular examples are cryptocurrencies, stock exchanges, automated tellers and vending machines, and various kinds of automated non-repudiation and proof-of-identity mechanisms. And yes, you will still need a lawyer for anything harder than that. Both traditional and self-enforcing contracts will therefore find their own place under the sun.

Even more, the evolution of self-enforcing contracts will give rise to a new legal specialty: the self-enforcing contract professional. This role will combine the typical skills of a lawyer with those of a mathematician or cryptographer, in order to produce robust and provably secure self-enforcing contracts.

These changes, in fact, are no different from the changes we are observing in nearly every other area being transformed by computer-led innovation – leave the creative stuff to us humans, and let the computer do the routine.

Picture credit: openclipart

Undeniably smart: a word on self-executing contracts

One of the core and most fantastic features of blockchain infrastructures is the notion of smart contracts, also called self-executing contracts. This is what makes cryptocurrencies secure. Put simply, smart contracts do not need enforcement; in other words, they enforce themselves, exclusively due to the way they are drafted.

Every time you use a cash machine, you are entering into a contract with your bank. Your obligations are to provide a card and a PIN. The bank’s obligations are to check the PIN and give you your money.

This contract has a number of enforcement flaws. You can use a stolen card. The ATM might have been fitted with a skimmer. Or it can simply eat your card, leaving you with no card and no cash. The bank’s software could be buggy and take more money from your account than it gave to you. In other words, there are a number of situations where you or your bank may fail to fulfil your part of the contract, with the violation having to be escalated to authorities outside the contract (e.g. a bank clerk or the police).

Smart contracts are not threatened by flaws like these. They just work, by design. In Bitcoin, there is no way to forge a transaction, or to fool someone about the contents of your wallet, because the environment provides a secure way to verify every single transaction via a so-called distributed ledger, which works in a mathematically provable way.
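
To see why tampering with a ledger of this kind is immediately detectable, here is a toy hash-chained ledger – nothing like the real Bitcoin protocol, which adds digital signatures, proof of work, and replication across thousands of nodes, but it shows the core self-enforcing idea:

    import hashlib

    def block_hash(prev_hash: str, payload: str) -> str:
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

    def build_chain(payloads):
        # Each block commits to the hash of the previous one.
        chain, prev = [], "0" * 64
        for p in payloads:
            h = block_hash(prev, p)
            chain.append({"prev": prev, "payload": p, "hash": h})
            prev = h
        return chain

    def verify_chain(chain) -> bool:
        prev = "0" * 64
        for block in chain:
            if block["prev"] != prev or block_hash(prev, block["payload"]) != block["hash"]:
                return False
            prev = block["hash"]
        return True

    chain = build_chain(["alice pays bob 1", "bob pays carol 1"])
    assert verify_chain(chain)
    chain[0]["payload"] = "alice pays mallory 1000"  # attempted forgery
    assert not verify_chain(chain)                   # breaks every later hash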

While commonly associated with blockchain technologies, smart contracts actually come in a variety of forms and shapes. Most Internet security protocols rely on them in one way or another. For example, the Diffie-Hellman key exchange algorithm is a typical smart contract, even though the term itself was introduced much later than the algorithm. You can’t obtain the shared key without following the contract. If you violate the protocol, you’ll end up with a wrong key, so the need to get the correct key forces you to implement and perform the protocol correctly.
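
Here is the textbook exchange sketched in a few lines (with a deliberately tiny prime; real deployments use 2048-bit groups or elliptic curves). Note how the shared key simply doesn’t come into existence unless both parties follow the protocol:

    import secrets

    p = 0xFFFFFFFB  # the largest prime below 2**32 -- toy-sized on purpose
    g = 5

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A = pow(g, a, p)                   # Alice publishes A
    B = pow(g, b, p)                   # Bob publishes B

    # The self-enforcing part: only by following the protocol do both
    # sides arrive at the same key; deviate anywhere and the keys differ.
    alice_key = pow(B, a, p)
    bob_key = pow(A, b, p)
    assert alice_key == bob_key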

Ironically (or logically), Internet protocols suck badly at the functions not covered by the self-enforcing logic. This is particularly annoying for protocols that are supposed to provide communication security. I don’t think I’ll be mistaken if I say that weakly implemented certificate validation in SSL/TLS is the weakest point of a huge number of the protocol’s deployments, with the potential to bring in much bigger trouble than BEAST, POODLE, and Heartbleed combined. Many, many times I have watched developers neglect this important component of the protocol, either by bypassing certificate validation entirely or by using simplified routines that provide no adequate protection. Many times I pointed that out to them, and very few bothered to do anything to fix it.

This flaw carries over into TLS 1.3*, with the certificate validation component keeping its role as a standalone external module whose only purpose is to tell ‘yes’ or ‘no’ to the main protocol implementation when asked. Needless to say, it is very tempting to implement this module as a hard-coded ‘yes’ to avoid bothering with validation altogether and have the project up and running right here, right now, especially if you are under time and budget pressure.
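
For illustration, this is what the temptation looks like in Python’s standard ssl module – the same anti-pattern exists, under different names, in every TLS stack:

    import socket
    import ssl

    # The hard-coded 'yes': any certificate from anyone is accepted.
    insecure = ssl.create_default_context()
    insecure.check_hostname = False        # don't match the certificate to the host
    insecure.verify_mode = ssl.CERT_NONE   # don't validate the chain at all

    # What it should be: the default context validates both the chain and
    # the hostname, and the handshake fails hard if either check fails.
    secure = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with secure.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])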

Designing the certificate validation routine into the protocol in the form of a self-enforcing contract would do a great job of increasing its security. There would be no way for TLS implementers to cheat by skipping the validation procedure or implementing it in a wrong or simplified way: the validation module would either work correctly, or cause the whole secure negotiation to fail.

To be fair, this is easier said than done. Designing a robust self-executing contract requires deep knowledge of maths and formal logic, as well as cryptography. A decent knowledge of the contract’s application area is also essential. Ultimately, a proper self-executing contract is a high-grade cryptographic algorithm, and as such is subject to all the usual strength requirements. A good smart contract should be capable of having its correctness and cryptographic strength proven formally, which is quite a challenge for its inventors.

Still, despite the challenges, self-enforcing contracts are set to play a major part in our society. They do a great job of removing the whole surface for wrongdoing, fraud, and human error, and of bringing in simplicity and convenience – and this is something that a lot of our typical everyday scenarios need.

Just to make sure that the next-generation ATM never eats your card.

(*) Strictly speaking, this is not a flaw of TLS as a protocol (the standard thoughtfully transfers all responsibility for handling certificate validation properly to the implementers), but this fact doesn’t make TLS environments any more secure.

Picture credit: Pixabay

Skill vs Technology: a zero-sum game?

Last week I came across two peculiar stories about the role played by technology in the evolution of the civil aviation industry. The stories were barely related to each other at first glance – had I come across them at different points in time, I would probably never have spotted the connection between them – but luckily I was still thinking about the first story when I bumped into the second one, and the immediate realisation of the scale of the apparent trend made quite an impression on me.

The first story was about the role of technology in the crash of Air France transatlantic flight 447 back in 2009. The primary conclusion of the investigation, which the article elaborates on, is that the pilots had become so used to flying with the assistance of the autopilot that they were completely lost when they faced the need to fly the aircraft manually. They had no understanding of the situation whatsoever, as they lacked the hands-on feel for flying the aircraft at cruise altitudes – something normally handled by the autopilot. In addition, the autopilot, designed with intelligence and pilot-friendliness in mind, didn’t warn the pilots that the aircraft was approaching a complete stall, having interpreted the too-sharply plummeting speed readings as a probable false alarm.

Confused and lost, the pilots applied several corrective actions to get the aircraft back on course. Due to their lack of situational awareness, those actions proved fatal. The A330 lost its airspeed and crashed into the ocean, killing all 228 people on board. Ironically, the investigation showed that had the pilots not intervened, AF447 would have continued at its cruise altitude as it should have, even with the autopilot switched off.

The second story I read was on a far more positive note, describing the prospective transition of the London City Airport air traffic control tower from the airfield itself to a small place called Swanwick, Hampshire, some 80 miles away. Specifically, twelve HD screens and a fat communication channel are to replace the existing control tower, and are claimed to provide far better insight into the landings and take-offs performed at the airport, along with a number of augmented reality perks. The experience of LCY is then expected to be picked up by other airports around the country, effectively turning air traffic control tower operations into an outsourced business.

What impressed me most about these two articles is that, despite being barely related per se, they are essentially telling the same story: the story of skills typically attributed to humans being taken over by technology. It’s just that the first article tells us the end of the story, while the second is at its very beginning.

Just as advances like the glass cockpit and way-too-smart autopilots led to pilots losing their grip on manual flying, switching to an augmented HD view of the runway will inevitably lead to air traffic control operators losing their basic skills, like tuning binoculars or assessing meteorological conditions from a dozen nearly subconscious cues. The trained sharpness of their eyes, now supported by HD zoom, will most certainly diminish. Sooner or later, the operators will be unable to manage the runway efficiently without the assistance of the technology.

And this is the challenge we are going to face in the near future. The more human activities typically referred to as ‘acquired skills’ are taken over by technology and automation, the less capable of those skills we ourselves are going to be. If a muscle isn’t constantly trained, it withers. If a musician stops playing regularly, she eventually loses her ability to improvise. If a cook stops dedicating as much time to cooking, his food loses its character, despite being cooked from the same quality ingredients and in the same proportions.

And that’s not necessarily bad. As technology inevitably makes its way into our lives, taking over those of our skills that it can perform better than we do, there is no reason not to embrace it – but we should embrace it thoughtfully, realising the consequences of losing our grip on those skills. Remember that we have already lost a great deal of our skills to the past. Your great-grandfather was very likely particularly good at fox hunting, your grandad probably performed much better than you at fly fishing, and certainly a much wider proportion of the population could ride a horse two centuries ago than can today. Those skills were taken away from us by technology and general obsolescence, but do we really need them today?

What we do need, though, is a clear understanding of the consequences of sharing with technology the activities we are used to doing ourselves, and to be prepared to observe a steep decline in the quality of our own hands-on skills as technology gradually takes them over. Understanding that problem alone, and taking our imperfect human nature as it is, will most certainly help us manage the risks around technological advances more efficiently.

(pictured: a prototype of the digital control room at NATS in Swanwick, credit: NATS)

Detective story. Almost a classic.

When we are away, our house is looked after by security cameras. Whenever a camera detects motion in its view, it captures a set of photos and sends them to a dedicated mailbox. This setup adds to our peace of mind about the safety of the house while we are away, and comes with a nice bonus of random shots of the cat wandering around the house.

Our last trip added a bit of action to the scheme. On the morning of the second day, I woke up to find ~200 camera e-mails in my inbox (the cat’s portraits typically account for 5-8). “Gotcha!”, I rubbed my hands. But I was too quick. All the 200+ photos, apart from the 2-3 that actually captured the cat, were quite boring and very similar to each other: an empty room and some blurred spots in the centre. And no sign of burglars.

And that was only the beginning. Hour after hour, camera e-mails continued to come in, one a minute. Finally, I gave up and went back to business as usual. This decision proved tactically correct, as every morning from then on I woke up to yet another 200-300 new camera e-mails. Every morning I opened 2-3 of them at random, observed the empty room and the spots, and proceeded with my business. At the time, I didn’t pay attention to the fact that those messages only came in during night time in the camera’s time zone – a fact of great significance.

I only managed to get back to this avalanche of alerts well after I returned home. My findings turned out to be quite amusing.

In one of the rooms monitored by a camera, a flying insect had found itself shelter. When the lights went low in the evening, the camera switched from daytime to infrared mode, turning on a dim reddish backlight. Apparently, the bug was attracted to this backlight and began to flutter around the camera. The camera detected the bug’s motion and, in full accordance with its setup, activated the shutter and dispatched the pictures where instructed. During daylight hours the camera turned back into a simple piece of furniture, the insect lost interest in it, and the flow of e-mails stopped for a while – only to start the cycle over at dusk.

But that’s not the end of the story. To send out the photos, the cameras use a dedicated e-mail address on my hosting account. To prevent this e-mail account from being abused by spammers, the number of messages that can be sent through it is capped at 300 per day. The bug was apparently in darn good shape, as it managed to consume the whole message allowance well before noon – after which the mail server stopped accepting further messages from the cameras until the start of the next day. This meant that had the hypothetical burglars planned their dark affairs for the afternoon, they could have avoided the scrutiny of the cameras and made off without being noticed – all due to some tiny bug in the system (*).
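
The unfortunate interaction, sketched in a few lines. The cap and the counts are the ones from the story; the class and its interface are invented for illustration:

    from datetime import date

    DAILY_CAP = 300  # anti-spam limit on the outgoing mailbox

    class AlertMailbox:
        def __init__(self):
            self.today = date.today()
            self.sent_today = 0

        def send(self, photo) -> bool:
            if date.today() != self.today:            # new day: reset the counter
                self.today, self.sent_today = date.today(), 0
            if self.sent_today >= DAILY_CAP:
                return False                          # alert silently dropped
            self.sent_today += 1
            return True

    mailbox = AlertMailbox()
    # The bug's night shift exhausts the allowance before noon...
    sent = sum(mailbox.send(f"bug_{i}.jpg") for i in range(400))
    print(sent)                          # 300
    # ...so the one alert that matters never arrives:
    print(mailbox.send("burglar.jpg"))   # False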

The moral of this fable is,

(1) no matter how good at risk assessment you are, there will always be an unaccounted-for bug whose fluttering will turn all your mitigations into a joke;

(2) sometimes the measures you expect to protect you (I’m speaking about my outgoing e-mail limits) may turn against you;

(3) (the most important of all!) leave much less food for your cats than you normally do when you go away, so they have an incentive to hunt for any nonsense fluttering around your cameras!

(*) They actually couldn’t – you don’t think that some levitating invertebrate would just knock my whole security system down, do you?

Nowhere to hide. How technology is getting control over our private lives.

Last week Gizmodo published an entertaining story about a part-time sex worker, Leila, who found herself in distress after spotting several of her ‘secret life’ encounters among the friend suggestions on her ‘public life’ Facebook account, despite doing her very best to keep the two identities apart. While Facebook is traditionally reluctant to reveal its friend-candidate selection criteria (just as it is about nearly all its social algorithms), it is quite clear that the location service activated on her ‘public life’ phone played a significant part. Reports from many Facebook users suggest that random members of the public whom they had met a few times on the train to work often ended up on their suggested friends list. This research supports the point too.

This can hardly be called something we haven’t come across before; everyone knows that services like Facebook collect our location information to ‘provide us with a better user experience’, as every one of us most certainly acknowledged after looking through the social network’s usage policy (just kiddin’ mate, of course no one of sound mind ever did that). However, Leila’s experience helped draw attention to the hidden implications of granting this powerful right to Facebook, and gave us yet another reason to rethink our whole approach to offline privacy. Indeed, every single connected person out there leaves an enormous trail of evidence. Twenty years ago, each of us was a creeping pinhead-sized point on the map. Today, we look more like endlessly unwinding balls of yarn, the escaping thread being our ever-expanding online footprint. Needless to say, it is very easy to get to the ball by following the thread, and it is very easy for someone to pull the whole thread in once they have caught the ball.

Clearly, the only way to stay incognito is to get rid of the thread. This can be harder than you might think at first, and switching the location service off might not be enough. The Nectar card you scanned at the petrol station in the morning, the Uber that takes you home after a night out, the YouTube video you watched over a Costa WiFi – all give away your locations throughout the day without any assistance from the location service. No one can tell where this data travels next. Besides being used to send you straightforward marketing materials or to tailor the price you pay for a service, it may easily be aggregated by third-party businesses (just have another quick look through that usage policy) and then used for any imaginable purpose.

To get rid of your data trail, you not only need to disable the standard ‘senses’ of your phone (location, WiFi connectivity, camera, and mic) to stop direct information collection. You should also walk through the list of your apps and critically assess each and every one of them for its ability to distinguish your identity from others’. Typically this means you have some form of account with them. You never know what information about you an app may be collecting, and when, and accumulating in its knowledge base. Too many pieces of seemingly harmless or even anonymised details can be put together to establish the identities of specific people with sufficient probability – just as famously happened with the taxi travels of NYC celebrities.

But that’s not all either, and the final aspect is a real toughie. The issue is that certain types of apps collect enough information about us to make assumptions about our identities based on surprisingly indirect facts. This is predominantly a capability of highly diversified collections of apps from the same vendor, offering a number of services of different kinds.

Suppose you plan to take your cat to a vet for the first time. You open Google search and look for vets in your area. Once you are satisfied with your choice, you ring them up, arrange an appointment, and add a timed entry to your Google calendar. If Google is really lucky that day, you also use their Maps service to find the best route there.

A few days later, you’re off to the vet. Now, if you use one of Google’s services anonymously, or from your second phone over the vet’s WiFi network while waiting to be called in, Google can assume – based on the knowledge they already hold about the ‘known you’ (a selection of places you are likely to be at at this time and day, or even the exact place if you had used Maps) – that this anonymous person is likely to be you. They may not be certain about it at first (meaning they would assign a low weight to this assumption and probably ignore it this time, while still keeping a note of it somewhere), but after one or two more coincidences of this kind they will have evidence of sufficient weight to associate the anonymous surfer with your known identity. Neural networks are particularly good at tracing and aggregating large arrays of data to identify higher-level relationships between seemingly unrelated facts.
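
Here is a crude sketch of such evidence accumulation. The event weights and the threshold are invented, and a real system would be far more sophisticated, but the multiplicative ‘too many coincidences’ logic is the essence of it:

    # Every co-occurrence of the anonymous device with a place/time
    # predicted for the known identity nudges the association score up;
    # past a threshold, the two profiles are linked.
    LINK_THRESHOLD = 0.9

    def update(score: float, event_weight: float) -> float:
        # Treat events as independent evidence: the probability that ALL
        # of them are coincidences shrinks multiplicatively.
        return 1 - (1 - score) * (1 - event_weight)

    events = [0.5,  # anonymous device on the vet's WiFi at appointment time
              0.5,  # same device seen near another predicted location
              0.7]  # an exact match against a route requested in Maps
    score = 0.0
    for w in events:
        score = update(score, w)
        print(round(score, 3), score > LINK_THRESHOLD)
    # 0.5 False / 0.75 False / 0.925 True -- linked after three events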

This means that protecting your privacy is not an occasional or one-off activity, not something you can enable when you need protection and disable when you don’t. If you have reasons to split yourself between two or more personalities – and sex workers are not the only or even the largest social group here; most politicians and showbiz celebrities face very similar issues – the task of keeping your privacy must become a strategy, with clearly identified goals and conditions and a well-defined process that fulfils and supports it.

And there’s definitely more to come. Wearables and IoT gadgets, which are only taking their first steps into the ‘big Internet crowd’, will add heavily to this world of glass. The rise of data mining and neural networks will very soon make it easy to conduct high-quality automated research based on indistinct and incomplete information. So it’s a good moment to stop reading, go outside, look around, and breathe in the air of freedom without the risk of being noticed by anyone – or anything. The chances are very high that your kids will only dream of the times when privacy was so easily achievable.

(Picture credits: many thanks for the playing kittens to Stuart Rankin)

When the theft is inevitable

The hack of the Equifax data centre, followed by Yahoo’s revelation that all 3bn of its user accounts had been exposed (in contrast to the 1bn reported before), once again drew attention to the exposure of the private and personal data retained about us by global information aggregators. Due to the enormous amounts of information they hold about you, me, and millions of others, they are quite a catch for cyber criminals. As the number of attacks similar to the one that targeted Equifax, and their level of sophistication, will undoubtedly increase in the near future, so will the chance of your personal data ending up in the hands of criminals.

While there is little we can do about Equifax and their security competencies, we certainly can do a lot more about the platforms and services within our control. I am not talking about social networks here; surprisingly, the fact that we understand the risks they pose to our privacy helps us perform some form of self-moderation when sharing our private details through them.

Institutions such as banks, insurance companies, online retailers, payment processors, and major cross-industry service providers like BT, the NHS, or the DVLA – especially those under the obligation of KYC or AML compliance – hold enormous amounts of information about their customers, often without the customers realising it. The scope and value of this information extends far beyond payment card details. A hacker who gains access to a customer database held by any of those companies would almost certainly obtain an unconditional capability to impersonate any customer at any security checkpoint that does not require their physical presence (such as a telephone banking facility or a login form on a web site). For example, they could order a new credit card for themselves through your online banking account, or buy goods on Amazon in your name – and you would never see any of them.

This means we may soon face an even steeper rise in the number of identity thefts and related fraud offences, and the Equifax precedent shows that we should take reasonable steps to protect ourselves, despite all the security assurances given to us by the information custodians. While in most cases we can’t influence online aggregators as to what details they keep and what security methods they employ, we can choose to strengthen the security checkpoints instead: by tightening identity checks, limiting the levels of access they grant us, and monitoring them for any suspicious activity.

Employing two-factor authentication is one of the best ways of tightening identity checks. If an online service offers it, use it. Even if an attacker manages to use your stolen identity to change your password through the legitimate password recovery procedure, they will be unable to sign in without access to your second factor.
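
Most authenticator apps implement TOTP (RFC 6238), which is compact enough to sketch in full. The secret below is a well-known documentation value, not a real one:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    # Minimal TOTP: the secret is shared once (the QR code you scan);
    # after that, possession of the secret is the second factor.
    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # the same code your phone app would show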

Limiting access levels is primarily about setting artificial limits on the actions that you – or an impostor – can perform with your account. These include maximum amounts of money that can be spent in a day or month, hours of the day during which the account may be accessed, permitted locations, and so on. Many online services support such limitations, and it’s wise to use them. This is mainly a corrective facility that helps minimise your losses should your account get hacked.

Monitoring is about setting up e-mail or text notifications to inform you about any usual and unusual activity around your account. Having a notification system in place is often the fastest way to discover that your account has been hacked. Manually checking your account data for consistency from time to time helps a lot too.

Finally, it is always a good idea to follow the principle of least disclosure. If the service doesn’t ask you for some details, or allows you not to answer – don’t give the details away just because you can. It invariably turns out that the less a service knows about you, the better for you. Likewise, if you are offered a choice between providing less safe and more safe details, choose wisely. For example, setting up a recurring payment to be collected by direct debit is safer than having it charged monthly to a credit card.

To summarise the above,

1. Most online services suck at security; expect your details to be stolen one day.

2. Minimise the impact of a prospective theft by securing your sign-ins, limiting legitimate access, and setting up access monitoring.

3. Don’t give your personal information away unless required/forced to do so.