
The AI landscape has undergone a fundamental shift. While organisations have often struggled to extract maximum value from AI, many are now taking a more proactive approach and recognising the ROI potential of agents.
Until recently, most AI use was reactive and cloud-based, with systems waiting for users to prompt them. But a 2025 EY report shows that this is changing: 64% of UK companies now let employees independently create or deploy AI agents.
As machines get smarter, agentic models have emerged that require less human interaction to proactively execute complex tasks. Yet while agentic AI is a departure from traditional, narrowly scoped AI systems, making the most of the opportunity is about augmenting workers rather than replacing them. This approach is backed by guidance from the UK government, which advocates a human-centred approach to scaling and de-risking AI tools.
Ensuring practicality, security and usefulness
Agents have been designed to understand multi-step goals; plan and sequence actions; and interact with multiple resources to achieve objectives autonomously. For example, an AI agent that can learn your preferences, financial constraints and priorities can use that information to independently negotiate a purchase. This scenario is already playing out and reshaping how business leaders think about enterprise and consumer AI.
For AI to be truly practical, secure and useful, the workflows underlying agents must be informed by real-time intelligence. This type of insight requires an underpinning of hybrid AI architecture – an ecosystem that strategically distributes workloads across devices, the edge and the cloud – all managed by teams of knowledge workers.
Why hybrid AI is a must-have
Agentic AI thrives on context, which often involves sensitive personal or organisational data, meaning that cloud-only processing introduces legitimate privacy risks. Hybrid AI, by contrast, keeps data processing and decision-making on trusted local devices or within secure environments. The AI works where the data resides, reducing exposure and aligning with data sovereignty regulations.
Another important requirement is personalisation, which is closely tied to data privacy. In the earlier example of the purchasing agent, user preferences and constraints are critical. They also frequently involve personally identifiable information (PII), which must be kept private. Storing and utilising this context locally safeguards user privacy.
Agentic AI success also requires immediate decision-making, which leaves no time for data to travel across networks. Negotiating deals, responding to real-time sensor data and managing dynamic workflows all require immediacy. Lag or, worse, disruption can have significant ramifications. Hybrid AI enables low-latency, on-device computation that keeps experiences smooth and real-time.
Hybrid AI also eliminates the need for constant cloud processing, which is resource-intensive and costly. Instead, it supports workload orchestration, using local compute for routine tasks and reserving the cloud for heavier data pulls or computations.
Finally, it allows partial task execution, enabling agents to remain functional even in offline or low-connectivity scenarios until cloud access resumes. The combination of localised intelligence and the scaling power of the cloud is what makes agentic AI experiences possible.
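As a rough illustration of the orchestration and fallback behaviour described above, the following Python sketch routes routine or latency-sensitive tasks to an on-device model, sends heavier work to the cloud when it is reachable, and defers it locally when it is not. The function names (run_on_device, run_in_cloud, cloud_reachable) are hypothetical placeholders under these assumptions, not any particular vendor's API.

```python
import queue
import socket

# Hypothetical stand-ins for a local (on-device) model and a cloud endpoint;
# neither name refers to a real library.
def run_on_device(task: dict) -> str:
    return f"on-device result for {task['name']}"

def run_in_cloud(task: dict) -> str:
    return f"cloud result for {task['name']}"

def cloud_reachable(host: str = "example.com", port: int = 443, timeout: float = 1.0) -> bool:
    """Cheap connectivity probe; a real deployment would use a richer health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

deferred = queue.Queue()  # tasks waiting for connectivity to return

def orchestrate(task: dict) -> str:
    # Routine or latency-sensitive work stays on the device.
    if task.get("routine") or task.get("latency_sensitive"):
        return run_on_device(task)
    # Heavier work goes to the cloud when it is reachable...
    if cloud_reachable():
        return run_in_cloud(task)
    # ...otherwise run a degraded local pass now and defer the full job.
    deferred.put(task)
    return run_on_device({**task, "name": task["name"] + " (partial)"})

print(orchestrate({"name": "summarise meeting notes", "routine": True}))
print(orchestrate({"name": "retrain demand forecast", "routine": False}))
```

The split mirrors the article's framing: the on-device path handles routine and time-critical work, while the cloud takes the heavy lifting when it is available and picks up deferred jobs once connectivity returns.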
Addressing implementation challenges
Even before the emergence of agentic AI, organisations frequently struggled to derive clear ROI from their AI investments. While agents are not an immediate panacea, they do offer a compelling path forward when they are applied to holistic workflows as opposed to fragmented tasks. Agents managing end-to-end operations deliver much more visible and impactful returns.
Yet, meaningful ROI is only possible if a few key adoption barriers are addressed:
Firstly, predictability and ethics are of paramount importance for AI agents, driving significant growth in the adoption of governance platforms and techniques such as constitutional AI. These measures help ensure alignment with human values and provide oversight.
Reducing complexity and increasing reliability are also key to successful deployment, as managing multi-step tasks with agents is complicated. However, with advancements in model training and the emergence of best practices, performance is becoming more consistent. Emerging agent development frameworks also enable teams to build predictable, robust agentic systems that are easier to deploy.
Secure integration with tools and APIs is another critical consideration, as agents need access to various data sources and applications. The industry is building protocols and standards for secure interactions, and confidential computing technologies are further protecting sensitive data during runtime.
Not only must tools be secure, but they must also be reliable, as agentic AI depends on real-time interaction with external software. Enhanced function-calling capabilities in foundation models and interoperability frameworks are simplifying this integration. For example, the Model Context Protocol (MCP) supports secure, multi-step workflows, making agents more capable and predictable, and therefore more effective.
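To make the function-calling idea concrete, here is a minimal Python sketch of a tool definition in the JSON-schema style used by most foundation-model function-calling APIs, together with a guard that validates a model-issued call before anything runs. The tool name, fields and dispatch logic are illustrative assumptions; they are not drawn from the MCP specification or any specific platform.

```python
# Hypothetical tool definition; the name and fields are illustrative only.
check_inventory_tool = {
    "name": "check_inventory",
    "description": "Return the current stock level for a SKU at a warehouse.",
    "parameters": {
        "type": "object",
        "properties": {
            "sku": {"type": "string", "description": "Product identifier"},
            "warehouse_id": {"type": "string", "description": "Warehouse code"},
        },
        "required": ["sku", "warehouse_id"],
    },
}

REGISTERED_TOOLS = {check_inventory_tool["name"]: check_inventory_tool}

def dispatch_tool_call(call: dict) -> dict:
    """Validate a model-issued tool call before anything is executed.

    Agents should only run calls that name a registered tool and supply the
    declared required arguments; this is a simple guard against the model
    invoking something it shouldn't.
    """
    tool = REGISTERED_TOOLS.get(call.get("name"))
    if tool is None:
        return {"error": f"unknown tool: {call.get('name')!r}"}
    missing = [
        p for p in tool["parameters"]["required"]
        if p not in call.get("arguments", {})
    ]
    if missing:
        return {"error": f"missing required arguments: {missing}"}
    # In a real system the validated call would now be executed against the
    # backing API or data source, ideally with sensitive data protected by
    # confidential computing at runtime.
    return {"status": "accepted", "tool": call["name"], "arguments": call["arguments"]}

print(dispatch_tool_call({"name": "check_inventory",
                          "arguments": {"sku": "A-100", "warehouse_id": "LDN-1"}}))
```

Registering tools explicitly and checking required arguments before execution is one simple way to keep agent-to-tool interactions predictable, which is the property the integration standards above are working towards.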
Making it real
Agentic AI shines where goals are dynamic, distributed and resource-intensive: work that can scale beyond the capacity of human teams but still needs their intelligence to be most effective.
Autonomous agents can manage supply chains, helping avoid logistical disruptions by analysing real-time inventory and shipment data. They can operate on edge devices, coordinating with central planning systems in the cloud and proactively updating routing strategies while keeping data current and secure.
Agents can also be embedded on industrial workstations to monitor sensor data, trigger maintenance protocols or coordinate spare parts ordering, all of which improve operational resilience and reduce costly downtime.
AI PCs equipped with on-device agents can manage individual workflows, summarise meetings, draft content and interact with enterprise systems without compromising personal identity or putting private data at risk.
In each of these use cases, the critical throughline is the oversight of a knowledge worker, ensuring the data feeding into the agent is accurate and clean.
Building a more autonomous future
Businesses that implement agents today and invest in training their workforce to manage them are setting themselves up to stay ahead of their competitors. Agentic AI is foundational to the future, with advances like AI twins on the horizon, but it in turn rests on hybrid AI. This is a major step forward in delivering truly autonomous, useful and safe AI systems that can operate in real-world conditions.
Not sure which model is right for your workloads? Lenovo’s AI Advisory Services can help you evaluate use cases, assess infrastructure and build a roadmap tailored to your AI goals. Find out more.