Three-minute explainer on… shadow AI

Businesses must be aware of how staff are using generative AI in the workplace, because well-intentioned quests to boost productivity or efficiency can create new cybersecurity risks.


AI is growing in strategic importance. With businesses keen to automate administrative processes, access and organise huge amounts of information, or even boost creativity, the technology is being used across a range of roles for a wide variety of purposes.

But while AI can undoubtedly drive efficiencies and boost productivity, there are also risks attached to its excessive and unmonitored use. Companies must be mindful, then, of when and where staff are using it.

What is shadow AI?

Shadow AI refers to employees within an organisation using generative AI tools, such as ChatGPT or Google Bard, to help with their work without the knowledge of senior management or the IT team.

Given the ubiquity of these tools – most require nothing more than a web browser – it can be hard to keep track of who is accessing them. The ‘shadow’ element of this trend isn’t necessarily intentional. While in some cases staff may aim to hide their use of AI from their boss, it is also possible that they simply don’t think to mention it because it has become such an everyday convenience.

Staff may use AI to skim through a presentation comprising hundreds of PowerPoint slides to find key information, or to synthesise the most important points from a meeting transcript. They could also ask AI to help them draft an article or email.

Should companies be worried about shadow AI?

While attempts to streamline such tasks are understandable and, in the main, well-intentioned, there is a risk of staff sharing sensitive information about their employer or their colleagues online, which represents a new and evolving cybersecurity concern. If employees give ChatGPT prompts that include personal data or company details, for example, there is a chance those messages could be intercepted by hackers already monitoring the organisation.

There are also issues with the content that the AI itself generates. When it comes to drafting copy, in particular, it is important to remember that while the speed and eloquence of ChatGPT or Google Bard’s responses are impressive, neither is capable of original thought – yet. Everything the AI generates is based on something that already exists, which means that if a member of staff decides to publish any of that content, there is a risk of plagiarism.

Similarly, although generative AI tools have no opinions of their own, their output reflects pre-existing, human-derived data and rules. Asking a generative AI tool to filter job candidates, for instance, could lead to skewed decision-making that reproduces common biases.

So, what can businesses do to mitigate the risk of shadow AI? A culture of surveillance is likely a step too far: staff still need to feel trusted and respected. But education, training and a clear code of conduct for the use of AI at work are wise investments of a firm’s time and energy.