
Could so-called AI agents really work alongside us to the degree some suggest? Nvidia CEO Jensen Huang believes his company might, one day, have 50,000 employees working in tandem with 100 million AI agents.
However far off that vision may be, Nvidia will be in good company if agentic AI lives up to the hype. Picture every Fortune 500 company deploying fleets of its own AI agents – the total could run into the billions. But even with agentic ‘chiefs’ autonomously managing many of those agents, it’s people who will still need to take responsibility for them, ensuring they adhere to the standards their ‘employers’ expect.
Perhaps one way forward is to subject AI agents to the same kind of performance reviews that humans must undergo. AI agents could be measured against the core competencies of effectiveness, security and compliance to identify where they deliver value and where there are areas for improvement.
AI agents: managing a hundred million workers
As millions of AI agents enter the workforce, organisations need to develop frameworks that enable their teams to manage them at scale. Regardless of whether they have hundreds or millions of agents, human teams must remain in control. These human managers, therefore, need to be equipped to manage every stage of the AI agent lifecycle, from design and deployment to monitoring, retraining and, ultimately, retirement.
The human workforce must also remain accountable for the actions of these AI agents, which means being able to monitor and manage the sources of data they draw from.
Enterprises should therefore simplify how agents access a centrally managed, governed and secured source of data that has been approved for use. Such action could enable teams to deploy agents faster, while ensuring they comply with privacy and security standards.
With this in mind, a mature Application Programming Interface (API) strategy is essential to the management of an agentic AI workforce.
In the same way the human workforce is given access to the tools they need to do their jobs – through a user account – AI agents need controlled access to systems and data to function effectively.
APIs provide the tools that organisations need to support this, by granting AI agents controlled access to the data they need to make decisions – and the systems they need to complete the tasks they’re assigned.
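In practice, that controlled access looks much like the scoped permissions a human user account carries. The sketch below illustrates the idea; the names (`AgentCredential`, the scope strings) are illustrative assumptions, not any specific vendor's API.

```python
# Minimal sketch of scoped API access for an AI agent, analogous to a
# user account. All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """An access credential issued to an AI agent."""
    agent_id: str
    scopes: set = field(default_factory=set)  # e.g. {"invoices:read"}

def authorise(credential: AgentCredential, required_scope: str) -> bool:
    """Gate every API call on the scopes the agent was granted."""
    return required_scope in credential.scopes

agent = AgentCredential(agent_id="invoice-agent-01",
                        scopes={"invoices:read", "invoices:write"})

print(authorise(agent, "invoices:read"))  # within granted scope
print(authorise(agent, "orders:read"))    # denied: outside granted scope
```

Checking a scope on every call, rather than trusting the agent after a one-off login, keeps each agent confined to the data and systems it genuinely needs.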
Watch out for zombies: keeping AI agents in check
As APIs proliferate to support the agentic workforce, one of the biggest risks organisations face is the rise of shadow or ‘zombie’ APIs.
These are undocumented, unmanaged integrations – often created as a temporary route into a siloed data store, or built by teams working outside the visibility of IT who are unaware of the risk.
Unmonitored APIs are rapidly becoming one of the most common access vectors for attacks against applications. They can also lead to inconsistent data flows, unauthorised access to sensitive information and spiralling technical debt. In the same way that organisations have processes to block unauthorised human workers from their environments, it’s essential they can find shadow or zombie APIs – and show them the door to prevent them from being used by AI agents.
The most effective way to banish zombie APIs is by creating a central hub for all APIs and AI agents.
A central hub helps to build a single source of truth, which organisations can use to review and manage the performance and behaviour of every AI agent in their environment. They can instantly identify every API that agents interact with and determine which data those agents are accessing, making it far easier to uncover and block any unauthorised behaviour.
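At its simplest, spotting a shadow API is a set comparison: the endpoints agents are observed calling, minus the endpoints in the central registry. The endpoint paths below are made up for illustration.

```python
# Illustrative sketch: flagging shadow ('zombie') APIs by comparing
# observed agent traffic against a central API registry. All endpoint
# names are hypothetical.

# Endpoints documented and approved in the central hub.
registered_apis = {"/v1/customers", "/v1/orders", "/v1/invoices"}

# Endpoints actually called by AI agents, per gateway traffic logs.
observed_calls = {"/v1/customers", "/v1/orders", "/internal/legacy-export"}

# Anything observed but not registered is a shadow API to investigate.
shadow_apis = observed_calls - registered_apis
for endpoint in sorted(shadow_apis):
    print(f"Unregistered API in use: {endpoint}")
```

Real discovery tooling works from gateway logs and network traces rather than a hand-written set, but the principle – registry versus observed reality – is the same.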
Such a centralised API and AI management hub should also incorporate robust governance and security controls. Access policies can be enforced to ensure data pipelines are secured and AI agents adhere to the same privacy guidelines their human counterparts follow. Low-code integration tools are also essential, enabling teams to use standardised templates and natural language prompts to create agents with security, compliance and efficiency built-in.
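One way to picture such a policy is a table mapping agent roles to the data classifications they may read, checked before any pipeline hands data over. The classification labels and roles below are assumptions for the sake of the sketch.

```python
# Illustrative sketch: AI agents inherit the same data-privacy rules as
# their human counterparts. Classifications and roles are hypothetical.

# How each dataset is classified in the governance catalogue.
DATA_CLASSIFICATION = {
    "customer_emails": "pii",
    "product_catalogue": "public",
}

# Which classifications each agent role is permitted to read.
ACCESS_POLICY = {
    "marketing-agent": {"public"},
    "support-agent": {"public", "pii"},
}

def may_read(agent_role: str, dataset: str) -> bool:
    """Enforce the access policy before releasing data to an agent."""
    allowed = ACCESS_POLICY.get(agent_role, set())
    return DATA_CLASSIFICATION.get(dataset) in allowed

print(may_read("marketing-agent", "customer_emails"))  # pii not permitted
print(may_read("support-agent", "customer_emails"))    # pii permitted
```

Because unknown roles default to an empty permission set, an agent created outside the hub gets no data at all until it is registered – a deny-by-default posture.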
By implementing these controls while the agentic workforce is in its infancy, IT leaders can get on the front foot and ensure AI impacts their organisation for the better.
Markus Müller is the global field CTO at Boomi.
