
The race to commercialise artificial intelligence is entering a new phase, rapidly shifting from research to revenue. What began in R&D labs is now being tested on balance sheets, as companies look for measurable returns on their investments.
OpenAI is positioning itself at the centre of a market some estimate could be worth $17tn. Its latest move, the launch of the enterprise platform Frontier, shows the firm wants to move beyond chatbots and into the core of business operations – managing workflows, coordinating software agents and embedding AI into everyday processes.
Others are moving quickly to support this shift. Consulting firms, systems integrators and software providers are lining up to help companies turn technical capability into operational systems that can deliver value at scale. But one important question remains: how does enterprise AI work in practice?
There is still little consensus on how to govern AI systems operating inside critical infrastructure, how deeply they should be integrated into existing workflows, or whether they can scale safely across large organisations. These are not marginal concerns. They go to the heart of whether AI can deliver sustained value rather than isolated gains.
A small group of companies suggests that it can.
The minority seeing results
While most large organisations are investing in AI, research from EDB suggests only a minority – around 13% – appear to be deploying it effectively at scale.
For those that are, the advantage comes not from better models, but from how those models are deployed and the systems that support them.
These companies treat AI and data as part of the same system. Rather than exporting data into external platforms and attempting to layer governance on afterwards, they run AI alongside governed data within their own environments. In turn, this shapes how reliably AI systems perform, how easily they can be controlled, and how quickly they can be adapted as requirements change.
The results are significant. Organisations in this group report up to five times the return on their AI investments, alongside greater confidence in their long-term competitiveness compared to their peers. Just as important, they are able to move from experimentation to production more quickly, avoiding the stagnation that affects many AI initiatives.
Their focus is consistent: systems must be secure, adaptable and compliant at the same time.
“The future of enterprise value creation hinges on sovereignty over data, AI, and increasingly autonomous agents,” says Quais Taraki, chief technology officer at EDB. “What matters most is not the model, but the infrastructure and systems around it – and how well this reflects the realities of enterprise environments.”
Where systems begin to strain
As AI systems expand, the limits of existing data infrastructure become more apparent.
OpenAI itself has pointed to these challenges in describing how it scaled PostgreSQL to support rapid growth. The exercise underscored both the strengths of the technology and the pressure placed on conventional database architectures.
In traditional single-primary systems, every write operation – from user activity to billing – passes through a central node. As demand increases, this can introduce bottlenecks and raise operational risk, often requiring complex workarounds to maintain performance.
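The single-primary constraint is easiest to see in how connection routing behaves. The sketch below is illustrative only – node names and the routing rule are hypothetical, not any specific product's behaviour – but it captures why write traffic concentrates on one node while reads can be spread across replicas:

```python
import itertools

PRIMARY = "pg-primary"
REPLICAS = ["pg-replica-1", "pg-replica-2"]
_read_nodes = itertools.cycle(REPLICAS)

def route(statement: str) -> str:
    # Writes must go to the single primary; only reads can be
    # load-balanced across replicas. Under heavy write traffic,
    # adding replicas therefore does not relieve the primary.
    is_write = statement.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE"))
    return PRIMARY if is_write else next(_read_nodes)

print(route("INSERT INTO billing_events VALUES (1)"))  # always the primary
print(route("SELECT count(*) FROM billing_events"))    # a replica
```

However many replicas are added, every `INSERT` in this model lands on the same node – which is why write-heavy AI workloads expose the bottleneck that read scaling alone cannot fix.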
In smaller systems, these constraints can be managed. At enterprise scale, they become harder to contain. Workloads are more variable, uptime requirements are stricter, and the cost of failure is higher.
AI workloads place new demands on data infrastructure, requiring systems designed for distribution and resilience from the outset rather than adapted after the fact.
At scale, even small delays can have outsized effects. Latency, failover times and maintenance constraints are not simply technical concerns; they can disrupt operations and undermine confidence in AI systems.
“The data layer is fast becoming the defining factor in agentic success,” adds Taraki. “A sovereign, open source foundation provides the control, compliance, and flexibility organizations need to keep data, AI, and agents aligned with enterprise governance and long-term strategy, unlocking combined value that is greater than the sum of its parts.”
A shift in the data layer
That is why PostgreSQL is gaining ground within enterprise systems.
Now used by a majority of developers, PostgreSQL has attracted sustained investment from major cloud providers and technology firms. For many organisations, it is becoming a default foundation for new applications and data platforms.
Its appeal lies in its flexibility and portability. As companies rethink how their data is managed, the ability to operate across different environments – cloud, on-premise and edge – is increasingly important.
The most effective AI architectures reflect this shift. They are typically hybrid, designed to keep data under consistent governance while allowing systems to run across multiple environments. This reflects a broader reality: enterprise data is rarely in one place, and cannot easily be moved without introducing risk.
In practical terms, this often means bringing AI to where the data resides, rather than moving data into external systems.
PostgreSQL can support this approach by allowing organisations to run AI workloads alongside governed data. At enterprise scale, however, doing so reliably depends on how those systems are deployed and managed. This is particularly true in areas such as availability, security and regulatory compliance.
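In practice, "AI next to the data" often means running vector similarity search inside the database itself – for example via PostgreSQL's pgvector extension, whose `<->` operator orders rows by distance to a query embedding. As a plain-Python illustration (table and column names hypothetical), the ranking that query performs is an ordinary nearest-neighbour search:

```python
import math

def l2(a, b):
    # Euclidean distance – the metric behind pgvector's "<->" operator.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(rows, query, k=2):
    # Mirrors: SELECT id FROM docs ORDER BY embedding <-> %s LIMIT k;
    # rows are (id, embedding) pairs that never leave the data layer.
    return sorted(rows, key=lambda r: l2(r[1], query))[:k]

docs = [
    (1, [0.0, 0.0]),
    (2, [1.0, 1.0]),
    (3, [0.9, 0.1]),
]
print([doc_id for doc_id, _ in nearest(docs, [1.0, 0.0])])  # → [3, 1]
```

The point of the pattern is that only the query vector and the ranked IDs cross the system boundary; the embeddings themselves stay inside the governed database.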
Consequently, many organisations are moving beyond standard open-source deployments towards enterprise-grade PostgreSQL platforms built to meet these demands. These platforms provide the operational guarantees required to support AI in production, rather than in isolated use cases.
For companies operating under tighter regulatory and geopolitical constraints, that level of control is becoming a strategic requirement rather than a technical preference.
“AI and data sovereignty does not simply mean keeping data ‘on prem’ or under national control,” Taraki notes. “It means enterprises take full ownership of their data infrastructure, governance stack, and security posture. It’s a shift from renting capability to architecting it deliberately, end to end.”
Sovereignty as a competitive factor
This points to a broader divide emerging in enterprise AI. Some organisations are choosing to rely on external platforms, effectively renting access to AI capabilities within proprietary ecosystems. While this can accelerate early experimentation, it often comes with trade-offs around control, cost and data governance.
Others are building systems that allow AI to operate within their own controlled environments. This approach is more complex, but offers greater flexibility in how systems are designed and how data is managed.
The latter requires more deliberate investment in infrastructure – particularly in data platforms capable of supporting AI workloads at scale, across multiple environments, and under strict governance requirements.
Vendors such as EDB are positioning PostgreSQL as the backbone of these systems, offering enterprise-grade, sovereign-by-design platforms designed to run AI alongside governed data in production environments.
As AI becomes more embedded in business operations, the distinction may prove significant. Improvements in model capability will continue, and new tools will emerge. But the extent to which organisations can capture value from them is likely to depend on how those tools are integrated into their existing systems.
In that sense, the question is not simply which models to use, but how to build the systems that support them – and where control should sit.
The future of enterprise AI may depend less on access, and more on ownership.
Find out more about how EDB can power your organisation’s AI-driven transformation