Behavioural biometrics offer an additional layer of security to identify customers, but come with a host of privacy and ethical concerns that must be addressed. Experts debate the pros and cons.
Pros of behavioural biometrics
In the wake of the Sony hack and the Cambridge Analytica scandal, concerns around online data have come to dominate public discourse. Financial institutions, meanwhile, have been introducing new ways of handling their customers’ data, and of verifying who they are, at a rapid pace.
In a bid to authenticate customers more efficiently, banks are turning to new metrics. They are moving from knowledge-based entries such as passwords and security questions to biometrics like our faces and fingerprints. But so-called behavioural biometrics are increasingly used to analyse how tightly we grasp our phone, how swiftly we swipe and how evenly we walk.
So, if you’re not typing as fast as you normally would, the system might fail to authenticate you. One advantage of these metrics is that they’re supposedly almost impossible to steal or replicate.
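To make the typing-cadence idea concrete, here is a minimal, illustrative sketch of how a keystroke-dynamics check might compare a session's typing rhythm against an enrolled profile. All names, profile values and the threshold are hypothetical, not taken from any real product.

```python
from statistics import mean

def keystroke_score(intervals_ms, profile_mean, profile_std):
    """Compare a session's inter-key timing to an enrolled profile.

    Returns a deviation score: 0 means the cadence matches the profile
    exactly; larger values mean the typing rhythm differs more.
    """
    session_mean = mean(intervals_ms)
    return abs(session_mean - profile_mean) / profile_std

# Hypothetical enrolled profile: this user averages 180 ms between keys.
PROFILE_MEAN, PROFILE_STD, THRESHOLD = 180.0, 25.0, 2.0

normal = [175, 190, 182, 170, 188]   # typing at the usual pace
slow   = [320, 340, 310, 355, 330]   # far slower than usual

print(keystroke_score(normal, PROFILE_MEAN, PROFILE_STD) < THRESHOLD)  # True: authenticated
print(keystroke_score(slow, PROFILE_MEAN, PROFILE_STD) < THRESHOLD)    # False: flagged
```

Real systems combine many more signals (key-pair timings, pressure, swipe speed) and use statistical or machine-learning models rather than a single mean, but the principle is the same: authenticate by rhythm, not by secret.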
“Behavioural biometrics can be collected unobtrusively and multiple modalities can be collected at the same time, providing for better authentication and making it more difficult to spoof them,” says Dr Roman V. Yampolskiy, associate professor of computer science and engineering at the University of Louisville.
In addition to efficiency, behavioural biometrics are a way for financial institutions to respond to ever-increasing regulatory requirements for multi-factor authentication. Instead of requiring customers to enter multiple passwords and codes, which most people find fatiguing, biometrics offer a way of passively authenticating users without them having to make an effort.
Once financial companies have behavioural biometric data, its application could go beyond authentication.
“There has been some data that suggests such behavioural biometrics can be used not just to identify the individual, but to provide some indication of aspects of that person’s personality profile,” says Greg Davies, head of behavioural finance at Oxford Risk. “If so, these techniques could also be used to supplement existing client profiling processes, helping to establish more personalised and targeted communications and client engagement.”
But behavioural biometrics aren’t collected only with your conscious input; for most people, they’re gathered without their knowledge, running behind the scenes. While behavioural biometrics hold huge potential for improving the security and usability of authentication processes, let’s not overlook the major drawbacks of this method.
Customers will have given their consent for data collection, but terms and conditions can be opaque and confusing when data is at stake.
Cons of behavioural biometrics
Let’s consider the other side of the story. Financial institutions and technology companies don’t typically disclose the extent to which they monitor their clients’ behaviour, instead forging ahead with new metrics and analytics, so privacy and transparency can suffer. That tension between innovation and openness is the crux of any data collection method.
With hacking and fraud on the rise, financial institutions will have to work hard to protect their customers’ data in the process of authenticating their identity. As Yampolskiy at the University of Louisville notes: “Behaviour is strongly related to cognitive function and so collecting behavioural observations in many cases means privacy violations, which may uncover aspects undesirable for public disclosure, such as illnesses.”
Another issue with biometric data collection is that it can be biased. If most of the systems have been trained to recognise and measure the activities of white men, for example, they won’t be as good at analysing the biometrics of women and ethnic minorities. This could result in false positives or declined logins.
Davies at Oxford Risk agrees there are a number of reasons to be cautious about behavioural biometrics. He says there would need to be “a lot of empirical validation to ensure these techniques are establishing something valid and stable, and reliable about the individual”. This also means ruling out the possibility that such methodologies produce unreliable results when measurements are taken at particular times, for example when the person is stressed or exercising.
“If biometrics are not just telling me who you are, but also how best to communicate and sell to you, then this has large consequences for the use of some data,” Davies adds.
Customers expect more accuracy from brands’ online communication, but too much targeting can be creepy. So what can be done to address these downsides and ensure customer data is protected?
Yampolskiy points out that algorithmic techniques for obfuscating collected data, such as hashing, which secures data while it is stored, may help alleviate some of the privacy concerns.
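As a rough illustration of the hashing idea, the sketch below stores only a salted SHA-256 digest of a biometric template rather than the raw data. The function name and the example template bytes are invented for illustration; real deployments would also have to handle the fact that biometric readings are fuzzy, typically via techniques such as fuzzy extractors rather than plain hashing.

```python
import hashlib
import hmac
import os

def obfuscate_template(biometric_bytes: bytes, salt: bytes) -> str:
    """Return a salted SHA-256 digest so only the hash, never the raw
    biometric template, needs to be kept in storage."""
    return hashlib.sha256(salt + biometric_bytes).hexdigest()

salt = os.urandom(16)                       # per-user random salt
stored = obfuscate_template(b"example-gait-template", salt)

# At login, re-derive the hash from fresh data and compare in constant
# time, so timing differences don't leak information about the match.
candidate = obfuscate_template(b"example-gait-template", salt)
print(hmac.compare_digest(stored, candidate))  # True for a matching template
```

The design point is that a breach of the stored hashes reveals neither the behavioural data itself nor anything directly reusable to impersonate the customer.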
“I’d say the use of such profiling needs to be very careful and transparent,” says Davies, adding it’s likely behavioural biometrics will be used beyond the realms of authentication. He says there is less danger in using such profiling to improve client engagement and communication, but a fairly high risk if it was used to guide people with long-term investment advice.
“Being very clear about where each measure is used is vital, as is ensuring all use is made transparent to users and regulators,” Davies concludes.