
In an era where AI is accelerating both opportunity and risk, tech leaders are finding themselves juggling multiple priorities. Martyn Booth, chief information security officer (CISO) at Dunnhumby, understands this pressure. Leading a 60-strong security team responsible for protecting insights drawn from nearly a billion customers, Booth is helping to shape a security culture that supports innovation without compromising trust.
In this conversation, he shares his approach to AI governance, risk management and the evolving role of the CISO in the age of AI.
How have you seen the role of the CISO evolve over the last few years?
I consulted for nearly a decade, and I’ve been a security leader for a bit longer than that now, mostly in data businesses.
The role has changed quite a lot over that time. It's much more business- and risk-focused now, whereas before it was more control- and compliance-focused. It used to be about ensuring we were hitting our regulatory requirements and keeping our customers happy, and now it's much more about enabling business innovation and keeping it safe.
We've been asked to be more collaborative and more of an advisor to the business, rather than just someone who ensures we have the right controls against the relevant pieces of legislation. That's probably shifting again now as we move into AI-powered environments, where responsibility for AI from a security and governance perspective is split.
Is the security industry ready to keep pace with AI threats?
Threats are building faster than we can test new solutions to mitigate them. I think that's probably true of most organisations. In any mid-to-large enterprise, the governance structures mean you have to jump through a lot of hoops to change something – as you should – to make sure it's doing what it's meant to do and is good value for money. But that can leave you behind when AI-powered threats are growing so quickly.
There's debate about whether fighting AI with AI is the right model. That's where you deploy AI models that can flex and mould to the threat type, so they offset some of those inbound threats. The issue is that we don't yet understand enough about what they're going to do, and we're not sure whether those controls might inadvertently open security gaps.
So it’s difficult at the moment. We’re trying to work through guidelines to protect the business, and we’re blocking some activity where the risk is too high. We’re also trying to come up with ways we can monitor what’s going on while allowing people to use tools that make them more productive. Transparency and clear decision-making are probably the most important things. Why are we using this tool? What do we get out of it? What risks might there be, and is there any way we can mitigate them? Is the risk level acceptable to business leadership?
In the future, we’ll probably have a larger toolbox to use with AI capabilities. It’ll be helpful to see what kind of framework models the industry comes up with to deploy those safely.
You need to make people aware of what they should and shouldn’t be doing, and make them feel part of the decision
Is shadow AI a challenge for security teams?
Shadow IT has been an issue for a long time, but it's not so much a technology issue as a cultural one. I've worked in organisations where shadow IT has been handled really well. Everyone understands that it's their data they're risking and that anything that goes wrong impacts everybody. Other companies are not quite so good at tackling the issue. You need to make people aware of what they should and shouldn't be doing, and make them feel part of the decision. When you tell them they can't use something, they understand why, rather than going off and using something similar with the same risks.
On the communication side, it’s also about showing them what they can use. I think that comes down to those guardrails or guidelines. If you then get caught doing something you shouldn’t, there aren’t many excuses. It comes down to getting the culture right and doing the right training and awareness.
How do you think AI will have a positive impact on security?
I'm hoping to take some of that repetitive work away from my team, particularly in operations. They've got a massive queue of work and are doing very repetitive tasks before they get to the more interesting stuff.
We’re looking to use AI to do that early work – maybe the first 10% – to help improve morale, reduce burnout risk and reduce staff turnover. There will be some real benefits when it works, but we haven’t quite seen them yet.
What advice would you give to fellow CISOs who are trying to keep their organisations resilient?
There's a lot you can do proactively. Even if you're not in a heavily regulated sector, it's important to understand the impact of any regulations around AI management. Doing threat modelling and risk assessments – so you understand where you're likely to be hit and what level of risk you're carrying – is crucial.
It's also important to be open and upfront with your exec board about your roadmap. So: this is the threat model, this is the risk assessment, these are the regulations, this is our roadmap to get on top of it, this is the support we'll need, and this is when we'll need it. Then your board knows you've understood the risk and are proposing a solution. They might challenge it, they might tell you to do it cheaper – but at least you're in a dialogue. And even if things go badly, you've given your advice. You've set out what your organisation needs to be safe.