
Public trust in digital platforms is waning. Meanwhile, generative AI systems are hoovering up every byte of data on the public internet – and sometimes the not-so-public internet, such as Meta’s alleged use of millions of pirated books to train its AI.
Carissa Véliz is an author and associate professor of philosophy and ethics at the University of Oxford. She is also a staunch advocate for privacy. Her book, Privacy is Power, outlines the ways in which governments and big tech track our digital footprints. At the Oxford Literary Festival this April, she joined a panel to discuss her contributions to AI Morality, a new collection of essays largely about skills and the fragility of digital platforms.
Here, Véliz speaks on how privacy is being eroded in the GenAI era – and what companies should do to address it.
GenAI appears to be the most visible part of this vast data collection system we’ve been building as a society for the past couple of decades. Opting out of some AI services is difficult, but possible. However, opting out of the systems that train these models seems more challenging. Would you agree?
It seems to me that it's impossible to opt out, and that is a huge problem, because it means these systems do not respect privacy laws, whether in the UK or Europe – and yet we're still allowing them to function.
What I’m worried about is that, instead of forcing tech companies to follow the law, we will change the law to adapt to the technology, because the inconsistency threatens our rule of law. That is the wrong way to resolve this tension, because laws are there to be followed – not to follow the technology.
There are two counts on which we are not opting out. The first is that these systems are being trained on our data, and they are using all the data available on the internet: social media, forums, anything that's online. Worse, we are supposed to have the right to ask companies to delete our data, but these companies don't even know which data they use.
Second, to erase our data, these businesses would have to delete their models and retrain them without it – and they're not going to do that. It's not going to happen.
So it’s not possible to opt out, really. Does there need to be a society-wide appraisal of GenAI in order to put the brakes on it?
More than put the brakes on it. I don’t like that kind of talk, because it creates defensiveness. We need better tech. Tech can be designed better to support democracy, privacy and our rights. It’s not putting the brakes on it, it’s making it better.
When you’re using an AI chatbot like ChatGPT, it’s making a lot of inferences about you – from the way you use language to where you might live and what age you might be.
So it’s really hard to understand how much data you might be losing. It’s not only what you type on your keyboard, it’s also what can be inferred from that and you might not realise how much can be inferred.
Where do you see this going? My sense is that although regulations like the GDPR are broadly a good thing, they do seem somewhat slow-moving and inadequate to address the problems at large in the GenAI era.
It's a huge problem, but I don't think it's fair to blame it on regulation. Part of why the regulation is far from perfect is the lobbying of the companies in the first place. The GDPR would have been much stronger if companies hadn't pushed back. Part of why we have this crazy system of saying no to cookies all the time is pressure from companies.
It would be much simpler if companies didn't collect your data by default. Then you wouldn't have to say no to cookies every time you visit a website – you'd just have to say yes once. That's much more rational.
That’s reminiscent of the AI copyright discussion, which is moving towards an opt-out model.
It's crazy. It's the result of pressure from companies, and it's not going to work because it puts the burden on individuals, and that's not fair. But regulation is not at fault; the problem is that regulation is not going far enough.
Two years ago, regulation seemed to be strengthening. For instance, Europe was considering the ePrivacy Regulation to fix some of the faults of the GDPR. Now we're going backwards. US pressure and geopolitical tensions mean it's becoming harder and harder.
Is there going to be a role for businesses to play here? Most businesses are not happy their data is being used to train GenAI platforms either – so do you think there might be more pushback from that angle as well?
Not only a pushback – the New York Times is suing OpenAI – but my hope is that some companies will rise to the challenge. One example is Proton [an encrypted email and productivity suite]. Full disclosure: I'm on the board of the Proton Foundation, but only because I actually believe in them – I've been tracking them since they started.
Once a company raises its standards, everyone else follows, including the law. Sometimes it's companies that can be the good citizens who improve standards for everyone. But when it comes to pushback, I really hope the newspapers stand their ground, because they're the most at risk. If they capitulate to tech companies, I'm not sure how it's going to end.
In Careless People [a new book about Facebook and its founder, Mark Zuckerberg], Zuckerberg is quoted as saying newspapers are going to disappear. He says there are two options: 'I can buy them or I can create my own.' And he doesn't seem to realise, or care, how catastrophic that would be for democracy.
Digital sovereignty is increasingly being discussed across Europe since the Trump administration came to power. I wondered if privacy might also move higher up the agenda?
I expect so. We are already seeing it with the concern from the US about a certain Chinese social media company. I think the concern is twofold: there are the privacy [issues related to the app], but there is also the country's ability to control algorithms and therefore sway public opinion.
When you start seeing risk in different ways, it becomes more obvious why privacy is important. That’s exactly why we should have privacy all the time, because you never know what’s going to be a risk. Often, when you realise the danger, it’s too late. That’s precisely the point of privacy: to prevent abuses of power.
One thing you mentioned in your talk was that your students are increasingly turning away from digital platforms, which I thought was very interesting. Do you think we’re going to reach a kind of inflection point where people really pause before they turn to ChatGPT or similar platforms?
We might reach that inflection point, yes. This is a huge battle because, although we are tired of being exploited by big tech platforms and we are more wary of them, our lives are also more unmanageable. We're so busy and overwhelmed that people succumb to convenience, because it's so exhausting to keep your head above water. It is going to be a constant struggle.
That’s why we need companies to come up with convenient ways of preserving privacy.
That goes back to your earlier point – that the onus is put on people to protect their own privacy. But that does breed fatigue if you're constantly having to think about it.
That's why the case of Signal is brilliant. Before Signal [a secure messaging app], it was really hard to encrypt a message. You had to spend a lot of time and be tech-savvy.
Signal made it very easy, and then other companies followed suit. If you just use Signal, you don't have to think about it – it doesn't take any more effort than using WhatsApp. So we need more of that.
