Beyond the Black Box: the new ‘explainability’ rule for enterprise AI
As the June 2026 deadline for high-risk systems under the EU AI Act nears, businesses must dismantle the ‘black box’ or face exclusion from the European market.
The push to embed artificial intelligence into enterprise systems as fast as possible is running into a new constraint in the European market: explainability. With the EU AI Act coming into force this summer, businesses will need to ensure that any automated decision made by an AI can be explained to customers and regulators. This may seem an odd mandate, but it builds on the ‘right to an explanation’ first established under GDPR, a principle whose scope the new act expands to cover AI.
This creates a new set of problems for developers and businesses alike, who must now show how models have been interrogated and tracked, and how data use has been justified, inside legacy systems that were never designed for this level of transparency. Many AI systems are black boxes when it comes to providing a chain of reasoning, especially the large models provided by vendors such as OpenAI and Anthropic. These make up a large share of AI deployments, as most businesses lack the resources to build their own models, and these vendors tend to market theirs as one-size-fits-all, capable of serving any type of company or use case.
Embedded into enterprise workflows, especially in high-risk sectors such as finance or employment, the black box becomes a legal liability. If an automated system denies a loan or filters out a job candidate, the organisation must be able to explain why.
Legacy systems as a compliance barrier
This problem becomes even more acute on legacy systems, which are typically fragmented, built on decade-old architectures, and have limited observability and data storage standards. These systems were never intended to be observed in real time, never mind to meet the EU AI Act’s explainability requirements.
One emerging approach organisations are adopting is building an explainability layer that sits in between the legacy system and the AI agent. This is a cleaner way of meeting the demands of the AI Act, as teams aren’t forced to retrofit transparency into older systems which may be impossible to do. The layer can also enrich inputs and outputs from AI systems, adding crucial metadata and context to better inform organisations. On the input side, this could include tagging each data point with source, timestamp, and transformation history. On the output, responses could include explanations that link to specific features.
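The input side of such a layer can be sketched as a thin provenance wrapper around each data point. The following is a minimal illustration, not a reference implementation: the `TracedInput` type, its field names, and the example source system are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TracedInput:
    """A data point wrapped with provenance metadata before it reaches the model."""
    value: object
    source: str                 # originating system, e.g. a legacy database table
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    transformations: list = field(default_factory=list)  # ordered change history

    def transform(self, name: str, new_value: object) -> "TracedInput":
        """Apply a transformation and record it, returning a new traced record."""
        return TracedInput(
            value=new_value,
            source=self.source,
            timestamp=self.timestamp,
            transformations=self.transformations + [name],
        )

# Example: a raw salary field from a legacy system, normalised before scoring.
raw = TracedInput(value="48,000", source="legacy_hr_db.salaries")
clean = raw.transform("strip_thousands_separator", 48000)
print(clean.value, clean.source, clean.transformations)
```

Because each transformation returns a new record rather than mutating the old one, the full history survives alongside the value the model actually sees, which is exactly the audit trail a regulator would ask for.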
Central to building this layer is a consistent identification system for inputs and outputs, which should enable an organisation to trace the full lifecycle of a decision, even if the data is transformed via AI. Without consistent identifiers and clear mapping, organisations will struggle to fully provide meaningful explanations to customers and regulators.
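The identification scheme above might take the shape of an append-only ledger keyed by one stable decision ID, so every input, output, and explanation for a given decision can be pulled back together on demand. This is a hedged sketch under that assumption; the `DecisionLedger` class and its event fields are illustrative.

```python
import uuid

class DecisionLedger:
    """Append-only log linking inputs and outputs via a shared decision ID."""

    def __init__(self):
        self.events = []

    def new_decision(self) -> str:
        """Mint one stable identifier that follows a decision through its lifecycle."""
        return uuid.uuid4().hex

    def record(self, decision_id: str, stage: str, detail: dict) -> None:
        """Append an event; nothing is ever updated or deleted."""
        self.events.append({"decision_id": decision_id, "stage": stage, **detail})

    def trace(self, decision_id: str) -> list:
        """Reconstruct the full lifecycle of a single decision for an auditor."""
        return [e for e in self.events if e["decision_id"] == decision_id]

ledger = DecisionLedger()
d = ledger.new_decision()
ledger.record(d, "input", {"field": "income", "value": 48000, "source": "legacy_hr_db"})
ledger.record(d, "output", {"prediction": "declined", "top_feature": "income"})
print([e["stage"] for e in ledger.trace(d)])
```

The key design choice is that the ID is minted once and threaded through every stage, even where the data is transformed by the AI, so a trace is a single lookup rather than a cross-system reconciliation exercise.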
“If AI is misused, there is the potential of irreparable reputational and financial damage.”
Richard Farmar, CFO at Gallium Ventures
The financial risk of the unknown
From a technical implementation perspective, frameworks such as SHAP and LIME are becoming standard components of the stack. Both are model-agnostic approaches to identifying how individual input features contribute to a specific prediction. SHAP, based on cooperative game theory, attributes importance across features, while LIME fits a simple local surrogate model around each prediction to explain it. These tools are part of a broader compliance architecture that needs to be operational before the EU AI Act comes into force for high-risk systems in June 2026.
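To make the game-theoretic idea behind SHAP concrete, here is a from-scratch sketch of exact Shapley attribution on a toy model; it is not the `shap` library’s API, and the additive credit-scoring function is purely illustrative. The exact computation enumerates every feature subset, which is only tractable for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley attribution: average each feature's marginal contribution
    over every subset of the other features, weighted by subset size."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
        phi[f] = total
    return phi

# Toy credit model: the score is additive in whichever features are present.
contrib = {"income": 0.4, "debt": -0.25, "history": 0.15}
score = lambda present: sum(contrib[f] for f in present)

phi = shapley_values(list(contrib), score)
print(phi)
# For an additive model, each feature's Shapley value equals its own contribution.
```

For a real deployment one would reach for the `shap` library, which approximates these values efficiently for non-trivial models, but the output has the same shape: a per-feature attribution that can be attached to each decision record.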
The cost of implementing this ‘Act-Tech’ layer is non-trivial, requiring engineering time, skill development, and compute and storage spend. There is growing demand in the European market for engineers who can bridge the gap between artificial intelligence and regulatory compliance. Richard Farmar, CFO at PR firm Gallium Ventures, warns that 2026 will bring a pressing need for management-level oversight of AI-driven decisions, noting that companies will need to closely monitor service quality as they rely more on automated tools.
The penalties for failure are severe. Under the new act, non-compliance with rules on prohibited AI practices can lead to fines of up to €35m or 7% of a company’s total global turnover. For the C-suite, this moves AI explainability from a technical hurdle to a risk management priority.
Bridging the talent gap
The transition to explainable AI requires more than software; it requires a cultural shift in how teams handle data. Elizabeth Wallace, chief people and transformation officer at emagine, says her team is moving beyond leading by instinct to becoming more data proficient. She is committed to upskilling in AI literacy to interpret AI outputs and ensure data remains actionable and compliant.
This sentiment is shared across the C-suite. Pearson CHRO Ali Bebo is modernising performance systems to reflect how AI transforms necessary skills. She argues that few organisations have objective ways to connect these capabilities to talent decisions. For these leaders, technology amplifies what people achieve rather than replacing them.
As conversational and agentic AI become mainstream, the ability to interpret complex datasets and guide users in plain language will be vital. Manish Jethwa, CTO at Ordnance Survey, believes adopting AI is a cultural shift as much as a technical one. He says tools must be responsibly embedded into workflows with a focus on risk management and intellectual property protection.
“The ability to interact confidently with technology, and to use it to deliver meaningful outcomes, will define the next generation of leadership.”
Rupy Malizia, COTO at HSBC Innovation Banking
The mandate for transparency
Moving beyond the black box is less about solving a single technical problem and more about reshaping systems to meet the demands of the AI Act. Organisations need to understand that explainability is not an optional feature for customers and regulators but a condition of compliance. For teams in the European market, AI systems that cannot fully explain themselves cannot be greenlit.
As Richard Farmar notes, analysts suggest 2026 could see a market correction if substantial AI expenditures fail to translate into revenue quickly enough. A sharp reassessment of valuations could tighten investment across the economy, impacting everyone from SMEs to the FTSE 100. In this environment, the winners will be those who can prove their AI is not just fast, but fair and followable.
Lessons for the boardroom
Audit for high-risk use: Identify if your AI applications fall under high-risk categories in employment, finance, or critical infrastructure before June 2026.
Invest in ‘Act-Tech’: Budget for an explainability layer to sit between legacy systems and AI agents to avoid costly retrofitting.
Upskill for literacy: Move beyond chatbot use; ensure teams understand AI governance and how to interpret automated outputs.
Prepare for penalties: Recognise that AI non-compliance is now a top-tier financial risk, with fines reaching 7% of global turnover.
Lead with ‘why’: When deploying AI, start with the purpose and the ‘why’ behind automated decisions to maintain trust with customers and staff.