Can AI do the work of research analysts?

The technology that powers ChatGPT can sift through and even synthesize massive amounts of data, though it must overcome doubts over reliability, transparency and regulatory risk before it can be harnessed to conduct useful research.

Financial institutions are already using large language models (LLMs), the kind of artificial neural networks that power OpenAI's ChatGPT, to help employees and customers find information quickly and generate code. They are considering the use of advanced AI in other areas, including back office automation and customer service.

Some observers think LLMs could transform how financial institutions conduct research and analysis — identifying trends, synthesizing more expansive and reliable data sets and drawing insights from new research.

"Looking for information, analyzing data, interpreting data, looking for trends — all those things are completely possible with generative AI," said Darrell West, senior fellow in governance studies at the Brookings Institution's Center for Technology Innovation. "That will free human analysts to focus on more creative aspects."

Generative AI, or algorithms that can be used to create new content, could help a company cut 10%-15% of overall research and development costs, according to a study published in June by the consulting firm McKinsey & Company. The same study found that generative AI could bring banks an additional $200 billion to $340 billion in annual revenue.

Uncertainty over how generative AI will be regulated, along with its questionable reliability and opaque inner workings, makes financial institutions unlikely to benefit from that research and optimization in the near term, however, said Dan Latimore, chief research officer at Celent.

"This is where the possibilities and the creativity of … AI bumps up against the need for financial services to have it exactly right," Latimore said. "I don't yet see a compelling use case until this has been proved out a bit more robustly."

Yet, LLMs have already proven themselves capable of competing with Wall Street analysts when analyzing the cryptic language the Federal Reserve uses to communicate its monetary decisions, a March study from the Federal Reserve Bank of Richmond found. 

That study noted, however, that the LLM used — GPT-3 — might fail to capture nuances that a human evaluator would pick up on.

That fallibility, along with generative AI's tenuous math skills, makes the technology most effective as a launching point rather than an end product, Latimore said.

"It can be an input in this case to a researcher who could take it, with all its caveats, and perhaps use it to drive their thinking further," he said. 

The same rings true for another generative AI use case, said Alenka Grealish, senior analyst at Celent: artificial data generation and collection.

Generative adversarial networks (GANs) — a budding technology in which two models are trained against each other until one learns to synthesize realistic data — could push that innovation forward, Grealish said.

GANs could eventually help financial institutions synthesize and share more research on the likes of customer behavior, transactions and product needs without being hampered by requirements to keep customer information private, Grealish added.

"The other benefit is [when a financial institution] may not have enough data on a certain segment," Grealish said. "You can imagine there are certain customer groups that may not have as big of a profile in credit. They could essentially augment their existing data through these gaps."

AI models trained on large pools of text data could also generate realistic, humanlike survey responses, upgrading the speed and scale of data collection by reducing the need for human participants and crowd workers, according to an article published this month by a team of researchers from the University of Waterloo, the University of Toronto, Yale University and the University of Pennsylvania.
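
As a rough illustration of how such simulated respondents might be generated, the snippet below prompts a chat model to answer a survey question in the voice of a described persona. The persona, question and model name are hypothetical; this is a simplification, not the method from the Waterloo-led study.

```python
# Sketch: prompt an LLM to answer a survey question as a given persona.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulated_response(persona: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": f"Answer survey questions as this respondent: {persona}"},
            {"role": "user", "content": question},
        ],
        temperature=1.0,  # variability across simulated respondents
    )
    return resp.choices[0].message.content

answer = simulated_response(
    persona="34-year-old renter in a mid-sized U.S. city, new to investing",
    question="How comfortable are you managing savings through a mobile app?",
)
```

Responses produced this way can only reflect the populations represented in the model's training data — the limitation Grossmann flags below.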

Whether automated data collection becomes a reality hinges on LLMs being trained on data from understudied communities, said Igor Grossmann, the study's lead author and an associate professor at the University of Waterloo.

"You have to go out and gather new data for the training sets that would potentially be representative of these communities — which is much more difficult to do," Grossmann said.

If that information could be more robustly sourced, LLMs could provide insights into more diverse communities, even helping to reduce historical biases in data sets, he added.

That advantage would be doubtful, though, Grossmann cautioned, if financial institutions rely on one-size-fits-all AI architecture across different communities.

"Imagine you have somebody coming from the African American community in rural Nebraska," he said. "That's a very, very different type of scenario than a Latino community in upstate New York."

If financial institutions use the same AI tech across the board, they could also risk producing the kind of myopic thinking that brought about the 2008 financial crisis, Grossmann added.

The range of risks that AI-based research raises underscores the need for financial institutions to keep employing human analysts as fact-checkers and to make a more robust effort to hire AI ethicists, who could check for biases in data collection, West said.

"Companies should be more proactive than reactive," West said. "They need to anticipate what the risks are, and figure out how to deal with them — before millions of people sue them."

Luckily for financial institutions, they can benefit from the pilot work that technology companies are already doing to identify AI use cases in the financial world, Grealish said.

Even great ideas cannot overcome regulatory barriers, though.

"They can break through walls, but they can't break through regulatory walls," Grealish said. 

Lawmaking specific to AI is thin in the U.S., even as more robust regulatory efforts appear to be in the works, while the EU has moved more aggressively.

LLMs are particularly exposed to regulatory probing because they tend to be "black boxes," incapable of revealing how they produce their outputs, Grealish said. That makes it difficult for financial institutions to monitor and explain AI-generated mishaps.

"They're built, but you have to tame them and have transparency," Grealish said.

In an industry where a minor accounting blunder could deny a family a mortgage loan or roil stock markets, experts agree that research is a less compelling use case than areas like customer- and client-facing chatbots, where enterprise AI systems can be trained on tested inputs.

Still, financial institutions may soon be forced to hold off if Congress passes a law on AI or bank regulators create a rule banning its use in certain areas.

The field is evolving and revolutionizing industries at an ever-accelerating pace, and AI will likely outperform human analysts, augment how data is collected and provide a strong ROI, West said. 

"Research can be instantaneous and made available online to our customers," West said. "The machines are going to do it much faster than a human can do it."

That makes it imperative for financial institutions to watch for all potential use cases, like those in research and analysis, Latimore said. 

"The field is changing so fast, faster than anything I've ever seen," Latimore said. "[Financial institutions] have to keep an eye on this and make sure that they can use what's been developed as appropriate, in really practical use cases, when the time comes."
