How do we strike a balance between collecting comprehensive data about citizens to maximise the benefits of smart cities and protecting privacy rights?
The collection and analysis of data in smart cities holds great promise. According to the UK government, it can “raise productivity, create jobs, improve safety, provide environmental benefits, and make public services more efficient and accessible”.
It can also, the government says, do that cheaply. The £24m Future City Glasgow project is said to have had an initial return on investment of £144m.
One aspect of smart cities, though, is more worrisome: the threat to privacy and the potential for surveillance and control.
In the dystopian 1982 film Blade Runner, it’s hard to tell who’s a replicant and who’s human, because the machines have such excellent artificial intelligence that they’re “more human than human”. In the real smart cities of the future, making a distinction between man and machine should be trivial. In principle, all ‘things’ will be networked as part of the internet of things (IoT) and can be constantly monitored and tracked. But for smart cities to work, they will also need data on the human population.
“Smart cities present an interesting opportunity to make society more efficient and sustainable,” says Andy Yen, founder and CEO of Proton, which was set up in 2014 by scientists who met at CERN and want to build an internet where privacy is the default. “But as with any digital innovation, we have to be very careful when considering the long-term implications on privacy. The volume of data that could potentially be collected on individuals is enormous and brings with it significant risks.”
Opportunities for surveillance capitalists
There are two main issues when it comes to network privacy. The first is the security of the underlying infrastructure. “The networks that we use every day weren’t designed for security,” says Vito Rallo, an associate managing director in the cyber risk practice at Kroll. “They were designed to carry data from A to B. They’ve been patched, upgraded and encrypted, but networks will always be hackable as long as they follow the paradigm of not having security by design.”
The second is that hacking may be the least of our worries. Yen argues that the business models of big tech firms – what he calls surveillance capitalism – are inimical to personal privacy. “We have already seen … social media and other platforms used to monetize and profit from people’s most private information,” says Yen. “Meta, Google and Apple have each been accused of exploiting user privacy for business gain multiple times in the last year alone. Smart cities open up additional opportunities for surveillance capitalists to gather more data on people than ever before.”
But what could firms or, indeed, governments monitor in a smart city? You might not care very much whether the system knows that you travelled from Holborn to Liverpool Street, or that your voice payment for coffee suggests you’re a little tired. But there is the potential for things to get very personal, very quickly. There is already technology that can assess your heartbeat without a physical examination – so-called remote physiological monitoring. That needn’t be sinister: it has obvious value in, say, telemedicine. But it could also be used to track individuals in ways that many would find intrusive.
The fundamental question, then, is whether there has to be a trade-off between privacy and using technology and data in a smart city. “While such a trade-off is not necessary, it has unfortunately been treated as one by smart city projects,” says Udbhav Tiwari, senior manager, global public policy, at Mozilla. “We think it’s possible to optimise service delivery while also balancing privacy and security concerns… The adoption of best practice can go a long way in ensuring that risks are adequately accounted for and mitigated at every stage of a smart city project.”
Lack of standards in IoT
But getting to the right standards could be a challenge. “Having overarching standards for the internet of things has been a problem for years,” says Rallo. “And that lack of standards in IoT is a problem for smart cities. The reason why the many proofs of concept and pilots haven’t gone mainstream is that we’ve never been able to reach the level where we could trust them blindly.”
Rallo argues that the chain of trust – that is, trust in the entire technological system, rather than in single devices or networks – is essential to using the IoT. “We already live with the idea of the chain of trust even if we don’t realise it,” he says – pointing out that we’re all quite confident about using mobile banking apps.
One aspect of the chain of trust is believing that the system won’t spy on you. “At any time, the smart city should be able to protect data subjects’ rights, for example, the right to be forgotten or the right of access,” says Rallo.
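To make those rights concrete, here is a minimal sketch of what honouring them might look like in code. All names here are hypothetical and not drawn from any real smart-city platform; it simply illustrates the two obligations Rallo mentions: access and erasure.

```python
# Hypothetical in-memory store for records a smart-city system holds
# about individuals, supporting two GDPR-style data-subject rights.

class SensorRecordStore:
    def __init__(self):
        # Maps a data-subject ID to the records held about them.
        self._records: dict[str, list[dict]] = {}

    def ingest(self, subject_id: str, record: dict) -> None:
        self._records.setdefault(subject_id, []).append(record)

    def right_of_access(self, subject_id: str) -> list[dict]:
        # The subject may ask to see everything held about them.
        return list(self._records.get(subject_id, []))

    def right_to_be_forgotten(self, subject_id: str) -> int:
        # Erase all records for the subject; returns how many were removed.
        return len(self._records.pop(subject_id, []))


store = SensorRecordStore()
store.ingest("user-42", {"sensor": "gate-3", "time": "08:14"})
store.ingest("user-42", {"sensor": "kiosk-7", "time": "08:20"})
print(len(store.right_of_access("user-42")))   # 2
print(store.right_to_be_forgotten("user-42"))  # 2
print(store.right_of_access("user-42"))        # []
```

In a real deployment the hard part is not this logic but ensuring erasure propagates to backups, analytics copies and third-party processors.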
The snag, of course, is that the more personal data a user provides, the better the service that a smart city can offer. Rallo says that it is technically possible to draw a sharp line between what counts as personal, private data and what does not. That applies, he says, even when it comes to training artificial intelligence systems that need a lot of data to build reliable models, such as those in autonomous driving. “In principle, all the data used by the AI system could be sanitised to ensure that no personal information is included. The question is whether it has been sanitised,” says Rallo.
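A sanitisation step of the kind Rallo describes might look like the sketch below: strip direct identifiers from each record and coarsen quasi-identifiers such as location before the data reaches a training pipeline. The field names are invented for illustration, and real anonymisation requires far more care (rounding coordinates alone does not defeat re-identification).

```python
# Hypothetical sanitisation pass run before records are used for
# AI training: drop direct identifiers, coarsen quasi-identifiers.

DIRECT_IDENTIFIERS = {"name", "user_id", "device_id", "payment_token"}

def sanitise(record: dict) -> dict:
    # Remove fields that directly identify a person or their device.
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Coarsen a GPS fix to roughly 1 km by rounding to 2 decimal places.
    if "lat" in clean and "lon" in clean:
        clean["lat"] = round(clean["lat"], 2)
        clean["lon"] = round(clean["lon"], 2)
    return clean

raw = {"user_id": "u-99", "lat": 55.86412, "lon": -4.25176, "speed_kmh": 31}
print(sanitise(raw))  # {'lat': 55.86, 'lon': -4.25, 'speed_kmh': 31}
```

As Rallo notes, the open question is not whether such a step is possible but whether operators actually apply and audit it.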
Tiwari also says the promise of smart cities can be realised without a dystopian edge. But it won’t be easy, or quick. “Given the rapid digitisation of society, the underlying tension between privacy and convenience in such projects is bound to persist into the near future,” he says. He thinks regulators, civil society and industry will have to work together “to ensure that human dignity and the fundamental right to privacy are given their due, both via technological advances but also better governance”.