
Enterprises are entering a new phase of AI. The focus is shifting from the chatbots and prompts of the past two years to agents that can act independently and collaborate with one another.
This next stage could be the most challenging for businesses yet. AI agents require access to sensitive, high-value data and have the power to take action without supervision. Without a change in data management and security, this shift could open up a world of operational risks.
The rise of the supervisor agent
Some businesses are already beginning to experiment with systems where a ‘supervisor agent’ assigns tasks to multiple connected agents. For example, one agent might gather market data, while another models it. A third then compiles the results into a final report. What previously involved an employee moving from task to task with a chatbot can now be finished without human help.
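The delegation pattern described above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the three worker functions and the figures they return are hypothetical stand-ins for LLM-backed agents.

```python
# Hypothetical sketch of a supervisor agent chaining three
# specialised worker agents. Each function stands in for a
# real LLM-backed agent.

def gather_market_data() -> dict:
    # Worker 1: collect raw market figures (stubbed here).
    return {"q1_revenue": 1.2, "q2_revenue": 1.5}

def model_data(data: dict) -> dict:
    # Worker 2: derive a simple projection from the raw figures.
    growth = data["q2_revenue"] / data["q1_revenue"] - 1
    return {**data, "growth_rate": round(growth, 2)}

def compile_report(analysis: dict) -> str:
    # Worker 3: turn the analysis into a final report.
    return f"Revenue grew {analysis['growth_rate']:.0%} quarter on quarter."

def supervisor() -> str:
    # The supervisor chains the workers end to end; no human
    # touches the intermediate results.
    data = gather_market_data()
    analysis = model_data(data)
    return compile_report(analysis)

print(supervisor())  # → Revenue grew 25% quarter on quarter.
```

Note that if `gather_market_data` returned a faulty figure, every downstream step would treat it as authoritative, which is exactly the compounding-error risk discussed later in this piece.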
While not yet common, agentic AI is set to reach enterprises of all sizes by the end of 2026. Close to 75% of businesses plan to deploy AI agents in that timeframe, according to Deloitte’s latest State of AI in the Enterprise report.
It is easy to see the appeal for CIOs, who are under pressure to increase productivity without adding headcount. Specialised agents could automate large parts of analytical and administrative work, freeing employees to focus on strategy and customers. But with limited human interaction in the process, there is a risk of errors going unnoticed until the final document is delivered.
Inside the black box
Traditional IT frameworks assume that systems behave predictably and that managers oversee decision-making. Agent-to-agent collaboration upends this structure: the AI layer can make hundreds of decisions in seconds.
The reasoning behind these decisions is not always easy for managers to understand. This is partly due to a lack of technical knowledge, but it also stems from the nature of the agents themselves. Individual agents in a loop may not understand the full scope of an assignment.
“When you give AI agents the power to make decisions without a human in the loop, you also give them the power to affect people, processes and reputations in real time,” says Roger Connors, co-founder of Partners In Leadership, a consulting firm.
The constant movement of data between internal databases, external APIs and other sources can create a “black box” of decision-making. If one agent sends faulty data, the error can compound. Agents further down the chain will treat that data as authoritative.
Data ownership and quality
While many agents exist to automate processes, fewer have been built to question data quality or ownership. If security systems are not set up correctly, sensitive information may move through internal and external systems. This creates issues for intellectual property and compliance.
This presents a core governance problem. It becomes difficult to see which agent provided the faulty data and how to fix the error. As with human error today, businesses are unlikely to spot a mistake immediately. It may be days or weeks before a problem is identified, by which point the damage could be severe.
Redesigning the risk roadmap
Existing governance frameworks are not designed for this level of autonomy. Most must be reformed. Traditional logging software tends to monitor individual systems in isolation. In the age of agentic AI, monitoring needs to cover the entire workflow.
It may be necessary to create entirely new frameworks. This will involve tracking how tasks move between agents, recording data sources and establishing clear guardrails. Without these, agents could corrupt data through poor collection or by breaking privacy laws.
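One way to picture the tracking described above is a simple handoff ledger that records which agent passed data to which, and where that data originated. The class and field names below are hypothetical, intended only to show the shape of such a record.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HandoffRecord:
    # One entry per task handoff between agents.
    source_agent: str
    target_agent: str
    data_origin: str  # where the payload originally came from
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class WorkflowLedger:
    # Records every handoff so a faulty result can be traced
    # back to the agent and data source that produced it.
    def __init__(self) -> None:
        self.records: list[HandoffRecord] = []

    def log_handoff(self, source: str, target: str, data_origin: str) -> None:
        self.records.append(HandoffRecord(source, target, data_origin))

    def trace(self, agent: str) -> list[HandoffRecord]:
        # Everything a given agent received, for audit purposes.
        return [r for r in self.records if r.target_agent == agent]

ledger = WorkflowLedger()
ledger.log_handoff("market-data-agent", "modelling-agent", "external-api:prices")
ledger.log_handoff("modelling-agent", "report-agent", "internal:model-output")
print([r.data_origin for r in ledger.trace("report-agent")])
```

A ledger like this does not prevent bad data entering the chain, but it makes the "days or weeks" hunt for the source of an error a query rather than an investigation.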
McKinsey recommends that organisations adopt a structured governance roadmap rather than treating agents like traditional AI tools. Risk frameworks must be updated, monitoring capabilities built and security testing completed before these systems are scaled.
This preparation is vital. The hype around agentic AI may lead organisations to deploy the technology too quickly. A similar trend occurred with chatbots and copilots in 2022 and 2023, many of which failed to deliver value. Early-stage experiments and proof-of-concept projects are often misapplied because they are driven by buzz rather than business need, according to Anushree Verma, a director at Gartner.
The human checkpoint
Some enterprises are already experimenting with practical safeguards. These include orchestration layers that coordinate interactions and maintain detailed audit trails. Others are using strict identity controls, giving AI agents access only to specific datasets.
Humans may not be completely removed from the loop. Some organisations use human checkpoints to ensure that high-risk outcomes are checked and verified by an employee.
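A human checkpoint of the kind described above can be reduced to a simple gate: score each proposed action for risk, and hold anything above a threshold for employee approval. The scoring rules and threshold here are hypothetical placeholders, chosen only to make the pattern concrete.

```python
# Hypothetical human-checkpoint gate: agent actions above a
# risk threshold are held for employee review instead of
# being executed automatically.

def risk_score(action: dict) -> float:
    # Stub scoring: flag actions that move money or touch
    # external systems. A real deployment would score against
    # its own risk taxonomy.
    score = 0.0
    if action.get("moves_money"):
        score += 0.6
    if action.get("external"):
        score += 0.3
    return score

def gate(action: dict, threshold: float = 0.5) -> str:
    if risk_score(action) >= threshold:
        return "held_for_review"  # a human must approve
    return "auto_approved"

print(gate({"moves_money": True, "external": True}))  # → held_for_review
print(gate({"external": True}))                       # → auto_approved
```

The same gate is where strict identity controls plug in: an agent whose credentials do not cover a dataset simply never receives the action to score.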
The key is to treat an AI agent like a new hire. It has real capacity to reduce workloads, but it still needs oversight. Keeping track of its work and keeping it away from critical databases are smart, simple safeguards.
As the technology matures, some of these stabilisers may be removed. But organisations must always be wary of letting AI agents become fully independent.

