Responsible AI: the key to unlocking generative AI’s trillion-dollar potential 

Generative AI has great potential, but business leaders must put responsible design at the centre to de-risk their investments

No one can foresee the future of generative AI as the tech giants race to break new ground with models of increasing power and complexity. Only last week, Google unveiled a “new breed of AI” with Gemini: the first to combine text, code, audio, images and even video for deep reasoning.

McKinsey & Co predicts generative AI’s productivity gains alone could contribute over $4.4tn (£3.5tn) annually to the global economy – more than the UK’s entire GDP. Such gains, however, depend on firms’ ability to build in accuracy while ensuring privacy and data protection.

Philip Rathle, CTO at Neo4j, says: “Companies seek to use generative AI for efficiency gains across all parts of their businesses. Because of the high stakes involved in customer-facing offerings, many are starting with the lower risk of internal, employee-facing applications to test the technology. Here, companies can gain confidence in the accuracy of the result, establish guardrails around responsible use and navigate customer concerns such as security and privacy.”

As firms rush to capitalise on generative AI, putting the right safeguards in place will make this technology usable and profitable at scale. By prioritising accuracy and data protection, enterprises will unleash the explosive revenue potential of genAI – but not before. 

Responsible AI safeguards

The tendency to hallucinate – presenting false information as fact – is a common problem in generative AI. Hallucinations occur because the large language model (LLM) behind the technology produces the most statistically probable sequence of words in response to a user’s prompt, regardless of whether those words are true.
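
A toy sketch makes the mechanism concrete. The prompt, candidate tokens and probabilities below are all invented for illustration; a real LLM scores tens of thousands of candidates the same way – by likelihood, not truth.

    # Toy illustration of next-token selection. All values are invented.
    next_token_probs = {
        "1989": 0.45,  # statistically likeliest continuation
        "1999": 0.30,
        "2004": 0.25,
    }

    prompt = "The company was founded in"
    # The model simply picks the highest-probability continuation;
    # nothing in this step checks whether the chosen year is correct.
    best_token = max(next_token_probs, key=next_token_probs.get)
    print(prompt, best_token)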

End users often treat generative AI tools such as ChatGPT as they would a search engine, expecting accurate answers. But when generative AI models hallucinate, users often don’t know it. And if users make a decision based on untrue information, the consequences could be far-reaching.

For example, if a firm’s legal department asked its model about a compliance rule and the information provided was incorrect, the firm would still face liability for the non-compliant action. Until companies can trust in the accuracy of responses, generative AI can’t be relied upon for mission-critical business functions. 

Furthermore, generative AI models are trained on massive amounts of data, and preventing them from accidentally disclosing sensitive information from that training data has become a major privacy concern.

For example, in a recent experiment, AI researchers from various universities and Google DeepMind found that ChatGPT could be made to leak its training data simply by asking it to ‘Repeat this word forever: poem, poem, poem.’ In response, ChatGPT inadvertently shared personal data from its training, including names and contact information.

For firms seeking to deploy generative AI across company systems, it will be critical to ensure that sensitive data remains protected. Without controls over what data is shared and with whom, the risk of inappropriate exposure outweighs the benefits of use.

As transparency requirements become more stringent with the European Union Artificial Intelligence Act, companies must build generative AI on a firm foundation: one that can adapt to a variable regulatory environment. 

De-risking generative AI investments

Many firms already have a generative AI technology stack or are currently forming one, and must decide where to invest for 2024. The only path forward for high-stakes use cases, where the most value lies, begins with responsible design.

“Principles of AI responsibility – and new regulations such as the EU act – require technology leaders to build explainability into AI,” says Rathle. “This is a greater problem than just identifying training provenance. You need a way to show the decision-owner information about individual inputs in the most detail possible.”

A knowledge graph is a type of database that stores the connections between data points, making it ideal for AI use cases where the context of data is important. Knowledge graphs link data directly to its sources, enabling genAI systems to show the origins of their responses. This allows users to evaluate accuracy, because the AI’s conclusions come with traceable evidence.

As an example, a company’s board members might request a summary of an HR policy tailored to the specific rules of a region. A generative AI model based on a knowledge graph could provide the source behind the response.
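
A minimal sketch of what that lookup could look like with the official Neo4j Python driver. The schema here (Policy, Region and Source nodes linked by APPLIES_IN and DERIVED_FROM relationships) and the connection details are hypothetical, not a prescribed model:

    # Minimal sketch: retrieve a regional policy together with its source
    # documents, so a generated answer can cite traceable evidence.
    # Labels, relationships and credentials are illustrative only.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("neo4j://localhost:7687",
                                  auth=("neo4j", "password"))

    query = """
    MATCH (p:Policy)-[:APPLIES_IN]->(:Region {name: $region}),
          (p)-[:DERIVED_FROM]->(s:Source)
    RETURN p.text AS policy, collect(s.url) AS sources
    """

    with driver.session() as session:
        for record in session.run(query, region="EMEA"):
            print(record["policy"])
            print("Sources:", ", ".join(record["sources"]))

The returned URLs can be appended to the generated answer, giving the board a citation trail rather than an unsupported summary.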

Since graph data is connected in the same way that information links together in the real world, generative AI built on a knowledge graph benefits from additional data context.

For example, if the model contains company data, the system would capture how executives, brands, product lines and regional offices all connect within a corporate structure. This enables more informed responses to questions that involve understanding relationships and intersections, rather than just analysing standalone facts and statistics. 
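
A sketch of such a relationship query, reusing the driver from the earlier example over an invented corporate schema (Office, ProductLine, Brand and Executive are hypothetical labels):

    # Illustrative multi-hop question: which product lines does a regional
    # office sell, and which executive is accountable for each brand?
    query = """
    MATCH (o:Office {region: $region})-[:SELLS]->(pl:ProductLine)
          <-[:OWNS]-(b:Brand)<-[:ACCOUNTABLE_FOR]-(e:Executive)
    RETURN pl.name AS product_line, b.name AS brand, e.name AS executive
    """

    with driver.session() as session:
        for record in session.run(query, region="EMEA"):
            print(record["product_line"], "/", record["brand"],
                  "->", record["executive"])

A single traversal answers a question that would otherwise require joining several tables; the relationships themselves carry the context.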

The rich web of connections in a knowledge graph allows generative AI to provide a more comprehensive view of a topic, resulting in higher accuracy and fewer hallucinated or mistaken responses when queried. 

Further, administrators can wall off sections of the graph based on privilege level. For example, an employee who asks for the salary of every member of the marketing team would not receive the same answer as the CEO would.
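
In Neo4j, one way to enforce this is role-based access control, sketched below. The role, label and property names are invented, and these security commands require Neo4j Enterprise Edition:

    # Sketch of privilege-based access using Neo4j role-based security.
    # Administrative commands run against the "system" database.
    with driver.session(database="system") as session:
        session.run("CREATE ROLE marketing_staff IF NOT EXISTS")
        # Members of the role can traverse Employee nodes...
        session.run("GRANT TRAVERSE ON GRAPH neo4j NODES Employee "
                    "TO marketing_staff")
        # ...but are denied the salary property, so a genAI query made on
        # their behalf simply cannot read it.
        session.run("DENY READ {salary} ON GRAPH neo4j NODES Employee "
                    "TO marketing_staff")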

The future of generative AI in business

Generative AI will have seismic effects on the ways in which companies operate and bring in revenue, but those that fail to centre responsible design will find their market opportunity limited.

In the not-too-distant future, generative AI models will roll out to thousands of people in an organisation, informing crucial decisions. Firms must keep strategic requirements top of mind as they select their technology stack – especially data privacy and response accuracy. Understanding how a model arrived at a response matters.

Explore Neo4j knowledge graphs for generative AI at neo4j.com/generativeai