
With AI breakthroughs reshaping industries, leaders must rethink their approach. Elsevier, a global leader in scientific publishing and data analytics, is driving innovation at the intersection of research and technology. The company’s CTO, Jill Luber, shares her perspective on what today’s tech leadership demands.
How have tech leaders needed to adapt their skills?
Today, the skill requirements go beyond technical expertise. Data fluency is a game-changer. It enables us to make smarter, more informed decisions and identify problems early, which in turn makes us far more responsive to customer and business needs.
Equally important are strong collaborative skills and customer-centric thinking. I always encourage my teams to consider the ‘why’ behind the work. How does this technology serve the user? How does it create value? Does the customer see and agree with that value? That mindset shift, from building tech for tech’s sake to building with purpose, is truly transformative.
How has your leadership style evolved?
Early in my career, I was a hands-on developer. I loved diving into the technical details. But as I grew into leadership roles, I realised that my impact could be greater if I focused on empowering others, setting a clear vision and building strong, inclusive teams.
I also took time away from leadership at one point in my career to focus on my family, and that experience gave me a deeper appreciation for balance, empathy and resilience. Now I prioritise a people-first approach in my leadership style, to foster an environment where individuals can develop and assume leadership roles themselves.
Has AI changed your role as CTO, and if so, how?
AI has significantly reshaped my role, and indeed any CTO’s role.
The shift isn’t just about integrating a new technology; it’s about rethinking how we lead, deliver value and prepare for what’s next. AI has pushed the boundaries of what’s possible, which means the expectations of what technology can deliver, and how quickly, have changed dramatically.
Today, I’m spending more time than ever aligning AI capabilities with strategic business goals, ensuring we build solutions that are not only innovative but also useful, ethical and customer-centric. This requires deep collaboration across the business and a relentless focus on understanding how this technology can solve real-world problems for our users, as well as acknowledging its shortcomings.
I’ve had to become more comfortable with ambiguity and more deliberate about creating space for experimentation and failure. And while I still value technical depth, I’ve found the greatest impact comes from empowering others, setting a clear ethical vision, and building diverse teams who can challenge assumptions and bring new perspectives.
In many ways, AI hasn’t just changed what I do, it’s transformed how I think about leadership, innovation and the responsibility that comes with this shift.
What are the biggest changes on the horizon in your industry?
Our industry has changed beyond recognition over the last three years. What’s immediately front of mind is the emergence of agentic AI. These aren’t just tools that support decision-making; they’re capable of independently completing complex tasks. In our industry, that could mean accelerating scientific discovery, streamlining peer review or helping researchers uncover new insights at scale.
What’s equally transformative is the pace of this evolution. We’re seeing breakthrough capabilities emerge in a matter of weeks, not years. That level of velocity requires organisations to become far more adaptive. Not only in how we build technology, but in how we listen to and act on the changing needs of our customers. Whether it’s researchers looking for faster access to insights, clinicians needing evidence-based answers at the point of care, or institutions managing large-scale data integrations, responsiveness is critical.
The challenge, of course, is to move at speed without compromising our core commitments to privacy, transparency and equity. Agentic AI raises new questions around autonomy, accountability and ethics, and it’s our responsibility to address those head-on. The future of our industry will belong to those who can innovate rapidly while building trust, transparency and value at every step.
How can organisations get the balance right between tech innovation and responsibility, especially when it comes to AI?
AI offers enormous potential, but there are understandably growing concerns around bias and data privacy, especially within sectors such as ours, which rely on sensitive and diverse datasets and whose solutions support decisions that impact society, lives and careers.
Getting the balance right between innovation and responsibility in AI isn’t optional. It’s essential. AI systems are only as fair and ethical as the data and design choices behind them. We’ve seen real-world consequences when bias goes unchecked: facial recognition tools that misidentify people of colour or hiring algorithms that disadvantage women. These aren’t just technical errors; they’re cultural failures that reflect and reinforce systemic inequities.
Responsible innovation means embedding ethics into development from the start – interrogating data bias, ensuring diverse teams, and maintaining transparency about model training and use.
At Elsevier, we approach AI with a deep understanding that privacy and trust are inseparable from innovation. Our privacy principles frame data stewardship not as a compliance checkbox, but as a core element of our relationship with users. Trust is a competitive advantage, and losing it through misuse of data can be far more damaging than falling behind in a technology race.
How can tech teams prepare future-ready capabilities when the ‘future’ keeps changing as technology evolves?
The most important capability any technology team can build in today’s world is a culture of continual learning and curiosity. The technologies we’re working with are evolving at such a pace that what was cutting-edge six months ago can feel outdated today. The only way to keep up is to foster teams that are not only skilled, but also eager to explore, experiment and grow.
That means creating space for experimentation and even failure. Not everything will work the first time, and that’s okay. Future-ready teams are resilient teams: they test, learn, adapt and try again. This mindset allows us to respond quickly when new technologies emerge or when customer needs shift unexpectedly.
In my view, being future-ready isn’t about predicting the next big thing, it’s about preparing your teams to thrive in uncertainty and building resilience.
What would be your advice to aspiring leaders?
Don’t be afraid of a non-linear path. My own journey wasn’t a straight line, and that’s okay. What matters is staying curious, being adaptable and always looking for ways to learn. And finally, focus on impact. It’s not just about delivering code or hitting deadlines, it’s about making a difference and creating value.
What’s the best piece of leadership advice you’ve received?
The best piece of advice I received was from Erik Engstrom, CEO of RELX, who told me when I took on the role of CTO: “Don’t change to fit the chair, change the chair to fit you.”
