'AI blowback' angst grips ESG investors who bet big on tech


ESG fund managers who turned to big tech as a low-carbon, high-return bet are growing increasingly anxious over the sector's experimentation with artificial intelligence.

Exposure to AI now represents a "short-term risk to investors," said Marcel Stotzel, a London-based portfolio manager at Fidelity International. 

Stotzel said he's "worried we'll get an AI blowback," which he describes as a situation in which something unexpected triggers a meaningful market decline. "It takes just one incident for something to go wrong and the material impact could be significant," he said. 

Among the examples Stotzel says warrant concern are fighter jets with self-learning AI systems. Fidelity is now among the fund managers talking to the companies developing such technologies to discuss safety features such as a "kill switch" that can be activated if the world one day wakes up to "AI systems going rogue in a dramatic way," he said.

The ESG investing industry may be more exposed to such risks than most, after taking to tech in a big way. Funds registered as having an outright environmental, social and governance objective hold more tech assets than any other sector, according to Bloomberg Intelligence. And the world's biggest ESG exchange-traded fund is dominated by tech, led by Apple, Microsoft, Amazon and Nvidia.

Those companies are now at the forefront of developing AI. Tensions over the direction the industry should take — and the speed at which it should move — recently erupted into full public view. This month, OpenAI, the company that rocked the world a year ago with its launch of ChatGPT, fired and then rapidly rehired its chief executive, Sam Altman, setting off a frenzy of speculation. 


Internal disagreements had ostensibly flared up over how ambitious OpenAI should be, in light of the potential societal risks. Altman's reinstatement puts the company on track to pursue his growth plans, including faster commercialization of AI.

Apple has said it plans to tread cautiously in the field of AI, with CEO Tim Cook saying in May that there are "a number of issues that need to be sorted" with the technology. And companies, including Microsoft, Amazon, Alphabet and Meta Platforms, have agreed to enact voluntary safeguards to minimize abuse of and bias within AI.

Stotzel said he's less worried about the risks stemming from small-scale AI startups than about those lurking in the world's tech giants. "The biggest companies could do the most damage," he said.

Other investors share those concerns. The New York City Employees' Retirement System, one of the biggest U.S. public pension plans, said it's "actively monitoring" how portfolio companies use AI, according to a spokeswoman for the $248 billion plan. Generation Investment Management, the firm co-founded by former Vice President Al Gore, told clients that it's stepping up research into generative AI and speaking daily with the companies it's invested in about the risks — as well as the opportunities — the technology represents.

And Norway's $1.4 trillion sovereign wealth fund has told boards and companies to get serious about the "severe and uncharted" risks posed by AI.


When OpenAI's ChatGPT was launched last November, it quickly became the fastest-growing internet application in history, reaching 13 million daily users by January, according to estimates provided by analysts at UBS Group. Against that backdrop, tech giants developing or backing similar technology have seen their share prices soar this year.

But the absence of regulations or any meaningful historical data on how AI assets might perform over time is cause for concern, according to Crystal Geng, an ESG analyst at BNP Paribas Asset Management in Hong Kong.

"We don't have tools or methodology to quantify the risk," she said. One way in which BNP tries to estimate the potential social fallout of AI is to ask portfolio companies how many job cuts may occur because of the emergence of technologies like ChatGPT. "I haven't seen one company that can give me a useful number," Geng said. 

Jonas Kron, chief advocacy officer at Boston-based Trillium Asset Management, which helped push Apple and Meta's Facebook to include privacy in their board charters, has been pressing tech companies to do a better job of explaining their AI work. Earlier this year, Trillium filed a shareholder resolution with Google parent Alphabet asking it to provide more details about its AI algorithms.

Kron said AI represents a governance risk for investors and noted that even insiders, including OpenAI's Altman, have urged lawmakers to impose regulations. 


The worry is that, left unfettered, AI can reinforce discrimination in areas such as health care. And aside from AI's potential to amplify racial and gender biases, there are concerns about its propensity to enable the misuse of personal data. 

Meanwhile, the number of AI incidents and controversies has increased by a factor of 26 since 2012, according to a database that tracks misuse of the technology.

Investors in Microsoft, Apple and Alphabet's Google have filed resolutions demanding greater transparency over AI algorithms. The AFL-CIO Equity Index Fund, which oversees $12 billion in union pensions, has asked companies including Netflix and Walt Disney to report on whether they have adopted guidelines to protect workers, customers and the public from AI harms. 

Points of concern include discrimination or bias against employees, disinformation during political elections and mass layoffs resulting from automation, said Carin Zelenko, director of capital strategies at AFL-CIO in Washington. She added that worries about AI by actors and writers in Hollywood played a role in their high-profile strikes this year.

"It just heightened the awareness of just how significant this issue is in probably every business," she said.

What Bloomberg Intelligence says:

"The EU may regulate AI that's tied to anything from a social media platform's recommendation systems to employment management tools, like resume-sorting software, and credit and exam scoring, deeming them 'high risk' applications. Such systems would need a conformity assessment and to be registered before their placement on the EU market."

Bloomberg News