Here's how banks are using and experimenting with generative AI

Banks are partnering with OpenAI to fine-tune ChatGPT for their own uses, including knowledge management solutions that employees can use to search vast repositories of documents and data. Others are turning to companies that offer customer service chatbots trained on banks' own data.
Adobe Stock

Large language models could change how banks interact with customers and their own knowledge bases, and how they protect themselves and their customers from fraud and financial crimes, but few have released products that actually deploy the nascent technology.

That has left smaller banks that are in the learning and experimentation stages to take cues from technology leaders on where large language models — the kind of technology that powers OpenAI's ChatGPT — will become most useful in banking.

Large language models are one example of generative AI, a type of artificial intelligence that can generate content to mimic text, images, videos or other content on which it has trained. According to Michael Haney, head of product strategy at Galileo Financial Technologies, ChatGPT put this technology on many banks' radars very suddenly.

"There are very few banks who've put this into the production environment," Haney said of large language models. "Most banks may have not even been aware of generative AI until ChatGPT made headlines."

Two examples of banks using large language models in an experimental capacity, or otherwise keeping their use strictly internal, are Goldman Sachs, which is using generative AI to help developers write code, and JPMorgan Chase, which is using it to analyze emails for signs of fraud.

Additionally, JPMorgan Chase filed a trademark application in May for a product called IndexGPT that could select investments for wealth management clients. The product is apparently part of a larger effort by the bank to lean into technology investments, specifically in artificial intelligence. Unlike the others, the trademark filing specifies that customers (not just bank employees) would interact with the model.

As banks grow more interested in adopting AI for various use cases, they need to be careful about their strategy for doing so, according to Jen Fuller, U.S. financial services lead at PA Consulting.

"One of the big risks about AI for organizations at the moment is it turning into a Frankenstein's monster of pet projects," Fuller said. "Everybody's doing their own little thing with AI, but to really get the organizational value at a strategic level, you need to build a framework where AI is part and parcel of the way that your organization does business."

One way that banks are making AI part and parcel of their business is by organizing their knowledge bases: training language models on internal documentation so that employees can ask a model questions that could otherwise be answered only by manually searching that documentation.


Organize institutional knowledge

SouthState Bank's director of capital markets said last month that the bank has been training OpenAI's ChatGPT on bank documents and data (not customer data) to allow employees to query the system to summarize and assimilate the bank's internal records.

Similarly, in March, OpenAI and Morgan Stanley announced a partnership that was helping Morgan Stanley wealth management employees locate information within the investment bank's large repository of content. A spokeswoman for Morgan Stanley said Friday that 900 advisors now query the system.
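The retrieval step behind these "ask questions of your documents" systems can be sketched in miniature. The snippet below is purely illustrative (the documents and scoring are invented for this example): it ranks documents by word overlap with the question, whereas real deployments like those described above use embeddings and pass the retrieved text to a language model so answers stay grounded in the bank's own records.

```python
# Toy sketch of document retrieval for an internal knowledge-base assistant.
# Illustrative only: production systems use embedding-based search plus an
# LLM, not raw word overlap.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

def best_snippet(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question.
    In a real system this retrieved text would be handed to a language
    model as context for generating a grounded answer."""
    q = tokenize(question)
    return max(documents, key=lambda doc: len(q & tokenize(doc)))

docs = [
    "Wire transfer cutoff time is 4 pm Eastern on business days.",
    "Overdraft fees are waived for accounts with direct deposit.",
]
print(best_snippet("What is the cutoff time for a wire transfer?", docs))
# → Wire transfer cutoff time is 4 pm Eastern on business days.
```

Grounding answers in retrieved documents, rather than asking the model to answer from memory, is also one way these systems reduce the hallucination risk discussed below.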

Internal uses of large language models to organize institutional knowledge have the advantage of filtering model output through bank employees rather than giving it directly to the customer, as one of the well-known problems with large language models is that they can hallucinate — state something as fact that sounds plausible but is actually false.

This is one of the main motivations for Sydney-based bank Westpac partnering with AI company Kasisto to train a language model solely on conversations and data in the banking industry, but keeping the model for internal rather than customer-facing use. Kasisto started a similar partnership with TD Bank in 2018.

Bloomberg has also taken a stab at organizing financial knowledge by training a large language model of its own on Bloomberg sources and public text corpora such as Wikipedia. In March, Bloomberg released a paper on its model, which has 50 billion parameters. While small compared to the reported 1 trillion parameters in OpenAI's GPT-4 model and 1.2 trillion in one of Google's models, BloombergGPT does outperform top open source language models on certain benchmarks, such as understanding dates in text and making logical deductions.

Provide customer service

Few banks have deployed chatbots that they publicly claim are powered by large language models, but companies like Kasisto and Monarch offer services to banks and consumers, respectively, that they say deliver chatbots powered by large language models.

As for chatbots overall, some of the leading customer service chatbots include Capital One's Eno, Bank of America's Erica, HDFC's Eva, and Santander's Sandi. However, these banks do not advertise these services as being powered by generative AI.

"I haven't seen anyone market their chatbot as a large language model," though banks will often market them as AI- or machine learning-powered, said Doug Wilbert, managing director in the risk and compliance division at Protiviti.

Rather than working like a language model, some chatbots work more like interactive voice response. Also known as IVR, this technology enables the automated interactions customers have when they call a company's support line. Rather than telling the caller to select from a menu of options by pressing a number during the call, IVR enables the caller to give short descriptions of what they need and redirects their call accordingly.

As banks started to release chatbots, some viewed them as replacements for IVR, according to Galileo's Haney. Rather than run the user input through a large language model to sift through the nuances of what the customer said, these chatbot systems tend to look out for keywords, which can lead to shortcomings.

"The problem is you can't anticipate every random question that the customer is going to have," Haney said of these IVR replacements.

For example, such systems struggle to interpret longer user inputs that provide context for their inquiry ("I deposited my paycheck before going shopping, but my card declined. Why did that happen?"). These systems can also struggle with inquiries that include multiple requests in one ("I want to see my checking balance and put half of it into savings").
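The keyword-matching behavior Haney describes can be illustrated with a short sketch. The router below is hypothetical (the keywords and intent names are invented for this example), but it shows the failure mode: a first-match keyword lookup handles the simple case, drops half of a multi-part request, and falls through entirely on a contextual question that uses none of the expected words.

```python
# Illustrative sketch of a naive keyword-based chatbot router, the
# IVR-replacement style described above. Keywords and intents are invented.

KEYWORD_INTENTS = {
    "balance": "show_balance",
    "transfer": "start_transfer",
    "declined": "card_declined_help",
}

def route(message: str) -> str:
    """Return the first intent whose keyword appears in the message."""
    text = message.lower()
    for keyword, intent in KEYWORD_INTENTS.items():
        if keyword in text:
            return intent  # stops at the first hit; any second request is lost
    return "fallback_to_human"

# A multi-part request triggers only one intent; the transfer half is dropped.
print(route("I want to see my checking balance and put half of it into savings"))
# → show_balance

# A contextual question with no expected keyword falls through entirely.
print(route("Why was I charged a fee last month?"))
# → fallback_to_human
```

A large language model, by contrast, can parse both requests out of the first message and use the surrounding context in the second, which is the gap these vendors are promising to close.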

These are the exact kinds of shortcomings the Consumer Financial Protection Bureau warned that chatbots in consumer finance can have. Specifically, the bureau said chatbots "may be useful for resolving basic inquiries, but their effectiveness wanes as problems become more complex."
