
Enterprise adoption of generative AI (GenAI) is accelerating faster than any previous technology, with organisations using it for tasks ranging from drafting content to writing code.
GenAI is increasingly embedded in mission-critical business functions. Yet as adoption accelerates, so too do risks that remain poorly understood or inadequately addressed. Research from the British Standards Institution shows that only 24% of business leaders say they have an AI governance programme in place.
Security, bias mitigation and human oversight are no longer optional considerations. They are prerequisites for deploying AI at scale in a way that is sustainable and secure.
The expanding attack surface
The most widely discussed GenAI vulnerabilities involve prompt injection, where attackers manipulate inputs to bypass safeguards, extract sensitive data or generate unintended outputs. But this is only the starting point. With open-ended, natural-language interfaces, GenAI introduces a fundamentally different attack surface from traditional software.
Guidance from the Department for Science, Innovation and Technology warns that GenAI could be used to accelerate and scale cyber attacks, including more effective phishing campaigns and the replication of malware. These risks increase as models are integrated more deeply into enterprise systems.
Security in this context cannot be treated as a “set and forget” exercise. Organisations such as Lenovo are adapting secure-by-design frameworks that evolve across products and services, with GenAI now a central consideration. This requires safeguards throughout the entire lifecycle, from initial data ingestion to deployment and continuous monitoring.
Data classification also demands renewed attention. Existing high-level approaches are often insufficient. Without granular categorisation and accurate labelling, access controls can quickly erode, particularly as large models often require broader data access to function effectively.
The challenge becomes more acute in agent-to-agent systems, where autonomous AI agents interact and exchange information. These environments introduce additional risk: every interaction creates a potential attack vector, increasing the likelihood of data leakage, privilege escalation or adversarial manipulation. Because errors can cascade across interconnected systems at machine speed, conventional monitoring may struggle to keep pace unless human oversight is maintained from design through deployment, supported by regular system reviews.
Bias, trust and governance
While data breaches attract immediate attention, the longer-term risks of biased or unreliable outputs can be even more damaging. Bias undermines trust, misleads stakeholders and erodes brand credibility. In regulated sectors such as healthcare and financial services, it can also expose organisations to significant compliance penalties.
Responsible and ethical AI must therefore be embedded across the entire lifecycle. Governance cannot be bolted on after deployment; it must shape every decision from the outset.
Effective governance rests on three core principles.
Trusted data sources: Models should be trained and prompted only with verified, high-quality inputs. The familiar maxim “rubbish in, rubbish out” underscores the importance of accurate data categorisation and labelling. Strong data hygiene reduces hallucinations and limits the risk of sensitive information being exposed.
Framework-level guardrails: Governance controls must be established at the start of any AI programme and applied consistently throughout, with validation at multiple stages including data ingestion, model behaviour and outputs. Without these guardrails, organisations risk breaching regulatory and ethical standards.
Ongoing testing: As models learn and evolve, outputs can drift over time. Continuous assessment before and after deployment is essential to detect bias and degradation in performance, both of which can damage organisational reputation and trust.
Together, these principles support a governance-first mindset aligned with the practices already familiar to security-led organisations. AI systems must be transparent, explainable and secure for both users and enterprises. Human oversight remains critical, particularly in high-impact or regulated environments, where trained reviewers must validate outputs before they are put into operation.
Closing the maturity gap
Although many organisations recognise the risks associated with GenAI, few have the maturity, training or tooling required to manage them effectively. Too often, security checks stop at launch. In practice, GenAI demands continuous vigilance across its entire lifecycle, similar to a zero-trust approach that verifies access at every interaction.
Operationalising this level of governance requires several practical steps.
Security awareness must extend beyond technical teams, ensuring leaders across the organisation understand prompt hygiene, data sensitivity and AI-related risk. Models should be tested continuously, much like software is patched, with evaluations covering every stage of deployment. DevSecOps principles must be embedded directly into development pipelines, reinforcing a security-first culture in day-to-day engineering work.
Access controls also require regular review, with least-privilege principles enforced so that systems and individuals only have access to what they genuinely need. Data labelling, while resource-intensive, can be partially automated using AI, but human validation remains essential to provide context and accuracy. Finally, organisations should rehearse incident response through simulations and tabletop exercises, recognising that AI-enabled incidents can spread far more rapidly than traditional breaches.
Trust as the foundation
Generative AI offers transformative potential, but many organisations are not yet equipped to manage the security and governance demands that come with it. Those that succeed will be the ones that embed a security-first culture across the enterprise, supported by transparent supply chains and robust lifecycle governance.
In this next phase of AI maturity, adoption alone is not enough. Organisations must secure, govern and validate systems at every stage. Innovation may drive initial uptake, but trust is what sustains it. The promise of GenAI will only be realised when security, governance and human oversight are built into every layer of deployment.
By embedding trust from data sourcing through to ongoing monitoring, organisations can innovate responsibly while protecting their people, customers and reputation.
Lenovo’s AI Services support organisations in operationalising trust through secure-by-design frameworks, bias mitigation and continuous oversight.
Enterprise adoption of generative AI (GenAI) is accelerating faster than any previous technology, with organisations using it for tasks ranging from drafting content to writing code.
GenAI is increasingly embedded in mission-critical business functions. Yet as adoption accelerates, so too do risks that remain poorly understood or inadequately addressed. Research from the British Standards Institution shows that only 24% of business leaders say they have an AI governance programme in place.
Security, bias mitigation and human oversight are no longer optional considerations. They are prerequisites for deploying AI at scale in a way that is sustainable and secure.
