How activist investors are pressing big tech for transparency on AI

As companies increasingly incorporate AI into their operations, concerned shareholders are pressing them to become more open about how they’re using the technology and what safeguards they have in place.


Little more than a year after ChatGPT made its seismic impact on business and wider society, questions about the safety of AI have become a pressing issue for investors in the 2024 proxy season. 

Several AI-related shareholder proposals have been prompted by growing concern about the risks that rapid advances in this field pose to core institutions and the fundamentals of democracy and human rights.

The nearly 20 proposals submitted since late last year have mainly been aimed at companies ushering in the age of AI, including Alphabet (Google), Amazon, Apple, Microsoft and Meta Platforms. The signatories are seeking greater transparency regarding how the technology is being applied, as well as the disclosure of ethical guidelines governing its use. 

While these proposals have tended to come from the investment community’s more socially focused members, their concern about the ramifications of AI usage reflects shareholder sentiment more broadly, according to finance and governance experts. 

Courteney Keatinge is senior director of ESG research at Glass Lewis, a proxy advisory firm. She summarises the situation as “just a matter of investors getting a better understanding of how companies are using AI and companies being better able to communicate how they’re using it”. 

That’s easier said than done, of course. Companies don’t seem keen to meet investors’ demands by expanding on the voluntary disclosures that some have already made. But, given the growing societal pressure on big tech for greater openness in this respect, more formal reporting on AI-based activity is likely.

Piling on the pressure for AI disclosures

Investors aren’t alone in calling for better governance and more transparency. A range of authorities are seeking to create standards covering the use and development of AI.

The most sweeping of these so far is the EU’s Artificial Intelligence Act, approved in March by the European Parliament. The legislation aims to safeguard fundamental rights and ensure the safety of AI systems. It will apply to any AI-based tool marketed in the EU, regardless of its creator’s location.


In February, the UK government published its long-awaited plan for regulating AI. Built on core principles including transparency and accountability, the plan stops short of proposing new legislation.

Last October, President Biden issued an executive order tasking federal agencies with the creation of guidelines for the use of AI. Scores of related bills are pending in the US Congress.

“I’m quite startled by how rapidly this is moving,” says Heidi Welsh, executive director of the not-for-profit Sustainable Investments Institute in Washington DC, which tracks ESG-related proposals. “Usually with corporate responsibility issues, things kick around for a couple of years and then a policy slowly emerges.”

Big labour takes on big tech

Yet that’s still probably not fast enough for some, including the AFL-CIO. The US trade union federation has adopted shareholder activism as a way to check the proliferation of AI. It has submitted half a dozen proposals seeking disclosures and ethical guidelines from the likes of Netflix, Walt Disney and Warner Bros Discovery. 

The role of AI in film and TV production emerged as a contentious issue in last year’s labour dispute between creative unions such as the Writers Guild of America and Hollywood’s big studios. While the final settlements included protections for workers, the stir caused by the recent release of Sora, OpenAI’s text-to-video tool, suggests that industrial strife concerning AI’s role in the creative process may well recur. 

Another focus of the recent AI proposals is the technology’s potential to amplify misinformation and disinformation, threatening democracies around the world at a time when several major elections are imminent. With this in mind, activist investment firm Arjuna Capital has called on a number of big tech firms to issue annual reports on the risks arising from their facilitation of misinformation and disinformation, and on how they would address the problem.

Deflecting investor AI demands 

In its formal response to Arjuna Capital’s proposal, Microsoft indicated that it already had adequate policies and practices in place to manage such risks. Among other disclosures, it mentioned a new annual report on its AI governance practices – based on a commitment made at a White House meeting of major developers in July 2023 – set to be published by the end of this quarter. 


Microsoft, which invested $10bn (£8bn) in OpenAI last year, also downplayed its disbandment of the company’s ethics and society team, noting that it still had nearly 350 people working to ensure the responsible development of AI.

Its reply broadly reflects those of other firms that have received AI-related proposals. The general message is that they already have adequate safeguards in place to ensure AI safety and are complying with recent government initiatives in this area. 

Two AI proposals have come up for a vote at annual shareholder meetings so far. The AFL-CIO’s call for ethics disclosures at Apple drew support from 37.5% of investors. At Microsoft, meanwhile, 21.2% backed Arjuna Capital’s AI misinformation proposal. 

Even though neither proposal gained the majority approval required for passage, Welsh says she is encouraged by the results – especially the Apple vote – given that the debate is such a new one.

Getting boards on board with AI

The issue is coming on to the radars of larger, more traditional asset management firms too. An EY survey of governance specialists at such institutional investors, published in February, found that responsible AI had surfaced as an “engagement” priority (in talks with companies) this year, with 19% of respondents citing it.

Research published last year by ISS-Corporate, part of proxy adviser Institutional Shareholder Services, revealed that, as of September 2023, only about 15% of S&P 500 companies were providing any information in proxy statements about their boards’ oversight of AI.

Aiming to improve on that percentage, two shareholder proposals were submitted this year to Alphabet and Amazon respectively. One, from socially responsible investor Trillium Asset Management, urged Alphabet to formally empower its board’s audit and compliance committee to oversee the company’s AI activities and fulfilment of its AI principles. The other, filed by the AFL-CIO, called on Amazon’s board to create a new committee to address the perceived risks its AI-based systems posed to human rights.

While companies don’t yet have clear guidelines or disclosure requirements for AI in their financial reporting, that situation will change as the technology becomes ever more material to their businesses. So says Séverine Neervoort, global policy director at the not-for-profit International Corporate Governance Network.

“We can expect to see increased regulatory scrutiny and, most likely over time, disclosure standards and requirements,” she predicts. 

The recent disclosure rules on cyber risks issued by the US Securities and Exchange Commission (SEC) suggest a possible future for AI reporting, according to Keatinge, who foresees “a natural extension” of the regulator’s approach to cybersecurity matters.

Nonetheless, she acknowledges that a new set of SEC rules for AI-related disclosures is probably still some way off, given the painstaking nature of the regulatory process.