The executive’s guide to responsible gen AI

Contents

Why leaders need to understand responsible generative AI

Leaders may be in a rush to innovate with generative AI but they must pause for thought on safety and ethical issues first

While artificial intelligence has been creeping out of science fiction and into the mainstream for years, the technology has dominated headlines in 2023. This year has seen generative AI change the reality of what technology can accomplish in society and business – and the results are exciting and concerning in equal measure.

As the debate over whether humans and machines can have a harmonious relationship continues, business leaders must get to grips with the evolving ethical and regulatory landscape around generative AI. Responsible innovation is becoming a vital pillar of the corporate world, and a careful balance needs to be struck between exercising caution and embracing experimentation.

In November 2023, the UK hosted the first AI Safety Summit at Bletchley Park. The assembly resulted in 28 countries signing the Bletchley Declaration, an international agreement to cooperate on the development of safe AI. Across the globe, countries and regions are also developing individual strategies and laws around responsible use of AI. Autumn 2023 also saw President Biden issue an executive order on the safe, secure and trustworthy development and use of artificial intelligence, noting that the technology "holds extraordinary potential for both promise and peril". In December, EU lawmakers reached a deal over the proposed AI Act, making it the world's first comprehensive AI law. Meanwhile, legal and copyright issues have been contentious across industries as everyone tries to make sense of exactly who owns the outputs of generative AI.

Governments and regulators are playing catch up with a technology already unleashed on the masses, so leaders can expect to see more rules and guidelines over the next few years. The smartest leaders will ensure they already have strong policies and security in place before they roll out generative AI companywide to avoid getting caught out as regulations evolve. This means understanding the limitations and transparency of public platforms, carefully considering the implications of using data to build any of their own models and staying agile and informed about developments in the AI regulatory space. The following chapters outline the core areas leader must consider when developing a responsible AI strategy.

Commercial Feature

Generative AI regulatory considerations

What do leaders need to know about legal issues such as data security and intellectual property?

Laws around AI use are evolving all the time, and the ins and outs of regulating generative AI specifically are still being ironed out. For now, the best approach is a proactive and cautious one. Leaders need to understand what is at stake, stay on top of developments in the area and ensure their policies are as robust as possible before rolling out the technology across the business.

Ensuring data privacy

Generative AI brings with it a host of potential data privacy issues. Current generative AI models don’t have a ‘forgetting’ feature for personal data, and models are trained on large amounts of data, which may include personal information. Even though regulations are still being ironed out, businesses may be responsible for any violations resulting from the use of generative AI.

Leaders can start by using their existing privacy strategy as the building block for an AI privacy strategy. This means defining what types of consent or permission are needed for data and ensuring staff are up to date with data privacy training. Leaders should lay out the company policies around generative AI from the get-go and decide how employees can and can’t use current tools. They should also consider whether they can put guardrails in place to prevent misuse and whether they need a plan for what to do if data policies are violated.

For those organisations considering proprietary ‘off-shelf’ services, there needs to be careful consideration of how data will be collected and used. For example, the type of data that will be collected, whether company data will be used for training models or shared with third parties, whether there is a data lineage that enables data to be deleted if needed and if user interaction history is stored and secured.

Addressing security

Generative AI can be used to access or generate harmful content. Leaders need to be wary of risks such as ‘prompt injection’. This is where cybercriminals insert a malicious instruction or prompt within the input text to manipulate the normal behaviour of large language models (LLMs). This can result in security issues and breaches such as generating malicious code and revealing confidential information.

Other generative AI security threats include using it to discover vulnerabilities and generate ways to exploit them, creating automated fraud or scam attacks, personalised social engineering attacks and easy access to content for planning attacks or violence.

Organisations should exercise the same level of caution using public generative AI tools as they would any other external tool and work with a trusted provider to build any models of their own.

Intellectual property protection

The law around generative AI copyright is unsettled. There have been multiple lawsuits around the data used to train generative AI models, alleging misuse or infringement of IP-protected data. 

Generative AI models and datasets, like other software, are subject to licenses that try to tell users how they can or cannot be used. Restrictions can include prohibiting outputs from being used to train other LLMs or limiting distribution.

Content generated by AI currently isn’t protected by copyright in the US. In other countries, the issues of authorship and copyright are not yet clear. Leaders should involve the legal department when using generative AI tools or developing their own models to determine what needs protecting and how the organisation can go about it.

Litigation and other regulatory risks

Leaders need to remember that existing laws still apply to new and emerging technologies. For example, automated decision-making processes that cause bias or discrimination may subject the developer or deployer to regulatory actions or litigation. False claims about a model or algorithm’s functionality or results may trigger legal action.

Countries worldwide are designing and implementing AI governance legislation. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases and voluntary guidelines and standards. Organisations must brace for worldwide regulation on AI and ensure that any of their own models are transparent and auditable.

Commercial Feature

Core ethical issues and how to approach them

From bias to handling human and AI interactions, what generative AI ethical issues do leaders need to understand?

Ethical issues come up a lot in conversations around generative AI. There are concerns about how content is generated, how reliable it is, and how it impacts individuals and society at large. So how can organisations work towards outputs that are fair and unbiased, and ensure that AI innovation prioritises people?

Fairness and bias in data

One misconception is that the larger the data set is, the better it is. This is not true size doesn’t guarantee quality. In fact, bigger models with bigger datasets are more likely to be contaminated with inaccurate or problematic data.

Examples of biases that can skew data include those related to social perceptions, stereotypes and cultural influences. Outdated data also won’t capture social changes and so may include historical inaccuracies. One of the challenges of generative AI models is that there is often a lack of transparency of input data. This means it can be difficult to trace back to original input data and the ability to fact check can be limited.

Reliability and accuracy of AI systems

Generative AI tools are known to produce ‘hallucinations’. This is when the model generates inaccurate or nonsensical responses due to limitations in understanding but presents it as fact. This becomes an issue when people use generative AI tools uncritically and believe outputs without checking whether they are factually correct. Leaders need to ensure staff are adequately trained on the limitations of generative AI so that they don’t create content or develop products and solutions based on false information.

Hallucinations can be particularly dangerous when models generate unsafe or biased recommendations or models disclose private or sensitive information in answer to an unrelated prompt.

Balancing AI and human interactions 

One of the biggest current debates is how AI will impact society. The argument for AI is that it enables personalised experiences, can help automate repetitive tasks, and can increase efficiency and productivity. Access to generative AI tools increases productivity by 14% on average, according to a 2023 survey by Mercer.

There’s also an accessibility argument. Generative AI can help make technology more inclusive and accessible by generating alternative formats, providing real-time translations and assisting individuals with disabilities.

However, perhaps the biggest fear is that AI will lead to job losses or displacement of workers. There is also concern that increased trust and reliance on AI systems may lead to unnoticed mistakes and the loss of important skills.

Following ethical principles for generative AI use

Leaders should strive to work to strong ethical principles for AI use. The first principle to follow is human-centricity. Is AI being used to help people, represent diverse groups and prioritise human experiences? Will users be informed when they interact with AI? 

Another core principle is striving for fair and unbiased AI. This means mitigating biases where possible, distributing resources fairly and promoting equitable outcomes. Finally, safety and security are vital. Any organisation using AI should put in restrictions and safety measures to avoid harm to users, be cautious about using AI in high-risk scenarios and be vigilant against threats.

Ethical considerations are governed by AI and other legal statutes. Leaders need to ensure they are up to date on the latest regulations and advice around AI use, such as worker protection, consumer protection and anti-discrimination.

Commercial Feature

Preparing for generative AI safety and auditing

What executives need to know about safety testing and auditing for generative AI tools

Regulatory safety testing requirements for generative AI are coming, and organisations will need to prepare their AI enterprise strategies accordingly. Leaders should understand the transparency of the models they use within the organisation and use generative AI-ready testing methods to fulfil compliance requirements.

Foundation model trust and transparency

Stanford University issued a report earlier this year called the Foundation Model Transparency Index, measuring the transparency of AI foundation models from major providers. Models are assessed on 100 indicators that Stanford researchers say comprehensively characterise transparency for foundation model developers.

The highest-scoring model currently has 54 out of 100 and the mean score is just 37 of 100. This shows that leaders need to exercise caution around current generative AI platforms, as no major foundation model provider is close to providing adequate transparency. However, 82 out of 100 indicators are satisfied by at least one provider, meaning the potential is there to improve models if competitors learn from each other.

Hugging Face, a data science platform and community, recently released the LLM Safety Leaderboard. The leaderboard is designed to provide a consensus on LLM safety and help researchers and practitioners gain a better understanding of the capabilities, limitations and potential risks of different LLMs.

Generative AI auditing and governance

Auditing generative AI models is crucial for mitigating the risks and challenges discussed in the chapters above. When developing models, these audits need to happen at three different points. The first is governance audits focused on technology providers, particularly major companies that supply the models. The models themselves also need to be audited before their public release. And audits at the application level are also crucial to assess risk based on use.

Leaders need to have a plan in place for any models they are developing specifically for the organisation, as well as an understanding of how safe public platforms are. 

Governance for data and AI is complex and working with a trusted provider to facilitate data governance and auditing can help organisations build and manage generative AI safely. Having a unified governance strategy to discover, classify and secure AI models in a centralised platform is critical for regulatory adherence.

How to get started

Seven ideas for how executives can get started with responsible AI:

  1. Establish clear objectives: Define specific objectives and outcomes that align with the organisation's values and ethical principles. This clarity helps guide AI initiatives towards responsible and ethical outcomes.
  2. Invest in talent and expertise: Build a diverse team with expertise in AI, ethics and governance. Having the right talent ensures that ethical considerations are integrated into the development and deployment of AI systems.
  3. Implement ethical frameworks: Develop and implement ethical frameworks and guidelines for AI development and deployment. These frameworks should address fairness, transparency, accountability and privacy concerns.
  4. Promote transparency: Foster transparency in AI systems by documenting and disclosing how AI models are trained, evaluated and deployed. Transparency builds trust with stakeholders and enables them to understand AI-driven decisions.
  5. Unify security across the AI lifecycle: Ensure that AI systems have defined security and access policies across all AI assets. This may include having strong security controls in place, including encryption, network controls, data governance and auditing.
  6. Continuously monitor and evaluate: Implement mechanisms for end-to-end monitoring and evaluation of data and AI systems to detect and mitigate any unintended consequences or biases. Regular audits and reviews help maintain accountability and trustworthiness.
  7. Collaborate with stakeholders: GenAI is an enterprise collaboration between the C-suite, technical leaders, and governance and compliance stakeholders. Engage with stakeholders, including employees, customers, regulators and advocacy groups, to understand their concerns and perspectives on AI ethics. Collaborative efforts foster a shared understanding of responsible AI principles and facilitate the development of ethical AI solutions.
EXPAND CLOSE

Contributors: Moe Steller, Laura Bithell