Regulators turn up the volume on calls for AI guardrails as the technology spreads across wealth management

For each "wow moment" the rise of AI has produced among the general populace in 2023, warnings about the potential harm the technology could inflict if wielded irresponsibility have followed close behind.

As a result, the list of overseers pushing wealth managers to spill the details about their looming AI plans in the name of consumer protection is growing.

The state of Massachusetts is the latest entity calling for such clarity amid the craze. Last week, Secretary of the Commonwealth William F. Galvin directed his securities division to investigate how firms are using the technology in their interactions with investors.

The division sent letters of inquiry to a number of organizations "known to be using or developing the use of AI for business purposes in the securities industry." Correspondence was sent to J.P. Morgan, Morgan Stanley, Tradier Brokerage, U.S. Tiger Securities, E*Trade, Savvy Advisors and Hearsay Systems. 

Financial Planning's requests for comment from those firms were either declined or remained pending as of press time.

Of particular interest to Galvin is whether the supervisory procedures firms have in place for artificial intelligence ensure the technology will not put the firm's interests ahead of clients'. For firms that have already implemented AI, the division will be evaluating the disclosure processes in place.

Galvin says state securities regulators have an important role to play when it comes to AI and its impact on investors. Harm could come to clients, he said in a statement last week, if these powerful tools are "deployed without the guardrails necessary to ensure proper disclosure and consideration of conflicts."

The division is also asking select firms about any marketing materials provided to investors that may have been created using AI. Firms included in the sweep have until Aug. 16 to respond to the inquiries.

A spokesperson for Galvin's office said Monday that the division is not releasing information about the specific content of the letters beyond what was in a statement issued last week. The spokesperson said each letter varied based on the recipient and considered their previous public statements on AI.

But the 27th Massachusetts Secretary of the Commonwealth made his intentions clear in an interview with Financial Planning. He said his office has a long history of consumer protection and dealing with entities that sell investments.

That history contains a lot of interaction relating to sales practices, the conduct of brokers, the suitability of products being sold and more, he said. 

"The difference here is you have a whole new technology being introduced. We're not anti-technology. But we want to make sure that in the process of this new technology being introduced, there's the same protections that we put in place in the past," Galvin said. "The only way we can determine that is to ask, and that's precisely what we're doing. We're trying to get information from the companies as to exactly how they're going to use this new technology, what biases that technology might have, what flaws it might have, what are the data points that the technology is going to rely upon and what risks are there to potential consumers." 

He adds that these inquiries are not a condemnation of the technology itself but an acknowledgment of its potential to change the game.

Galvin also points to the disparity in how the companies are applying the technology. Among the firms contacted in the sweep, use cases include development platforms, content creation and advisor dashboards.

The secretary says the first step toward knowing how to protect investors is learning as much as possible about those uses.

"There's no prejudgment here other than a desire to know that something new is coming and to know how it applies," Galvin said.

The sweep comes as other government and business entities dispatch their own fact-finding missions in an effort to create a consistent framework within which AI can flourish.

The Federal Trade Commission has opened an investigation into ChatGPT developer OpenAI to examine whether the chatbot poses risks to consumers' reputations and data. On July 21, President Joe Biden said that his administration would take new executive actions in the coming weeks to set a framework for "responsible innovation" with the technology.

Earlier that month, Securities and Exchange Commission Chair Gary Gensler reiterated that SEC staff was weighing whether new rules were needed to properly regulate the tools he has called "the most transformative technology of our time."

On July 26, the SEC approved a plan to root out what Gensler has said are conflicts of interest that can arise when financial firms adopt artificial intelligence. The proposal is the latest move from Washington regulators concerned about AI-driven technologies' power to influence everything from credit decisions to financial stability. 

Under the SEC plan, companies would need to assess whether their use of predictive data analytics or AI poses conflicts of interest, and then eliminate those conflicts. They would also have to beef up written policies to make sure they stay in compliance with the rule. 

"These rules would help protect investors from conflicts of interest and require that regardless of the technology used, firms meet their obligations to put clients first," Gensler said at the time of the plan's approval. "This is more than just disclosure. It's about whether there's built into these predictive data analytics something that's optimizing in our interest, or something that's optimizing to benefit financial firms."

Around the same time, a group of tech companies including Google and OpenAI was finalizing plans to create an industry body to ensure that AI models are safe.

The effort, also backed by Microsoft and AI startup Anthropic, aims to consolidate the expertise of member companies and create benchmarks for the industry, according to a statement released July 26. 

The group, known as the Frontier Model Forum, said it welcomed participation from other organizations working on large-scale machine-learning platforms. At the urging of the White House, companies involved in the Frontier Model Forum have agreed to put safeguards in place before Congress potentially passes binding regulations.

"This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety," Anna Makanju, vice president of global affairs at OpenAI, said in a statement.

However, there is concern that these efforts lag behind the pace of AI development spurred by competition and enthusiasm. In Europe, where the European Union's landmark draft law, passed in June, made the bloc one of the first jurisdictions in the world to take wide-reaching action to regulate artificial intelligence, leaders have recognized the need for voluntary commitments from companies before binding law is in place.

One White House official estimated it could be at least two years before European regulations begin to affect AI firms, Bloomberg reports.
