How advisors can avoid legal pitfalls of AI use

Chad D. Cummings thinks advisors should be aware of "a burgeoning landmine" he's seeing in his litigation practice — a sharp increase in lawsuits and malpractice risk arising from financial advisors' use of AI tools.

Cummings, who is CEO of estate planning and tax firm Cummings & Cummings Law in Bonita Springs, Florida, has seen "a surge of interest" in litigation by clients of financial advisors seeking to recover damages.

"This is not unique to financial advisers, as the misuse of AI tools among lawyers and other credentialed professionals has also been widely reported as of late," he said.

This observation comes as Financial Planning's new AI Readiness Survey found a wide split among respondents: 20% of advisors said they were already using AI to create legal document summaries and seeing results, 13% said they were doing the same but not seeing benefits, 31% said they were not using AI for legal documents but were exploring it, 26% said they were not even exploring it, and 10% didn't know.

As the results indicate, there is a wide range of opinion among experts about the suitability of these emerging tools in advisor workflows. Some have embraced AI wholeheartedly, while others are taking a more cautious approach as regulation catches up with innovation.

'Our clients aren't paying us to ask AI for answers to their questions'

Firmly in the "absolutely not" category is Daniel Smyth, managing director of the East Coast Division at HCR Wealth Advisors in Los Angeles.

"Although AI is an exciting tool for financial advisors, it needs to be used carefully," he said. "Despite the vast improvements in AI tools, they still are far from perfect and by no means are they a licensed legal professional."

Instead, Smyth said his firm will spend the extra time to manually read and analyze legal documents to ensure the accuracy of the advice it provides clients.

"Our clients aren't paying us to ask AI for answers to their questions," he said. "Our clients are intelligent enough to use an AI prompt themselves. They are paying us for our own personal expertise, knowledge and thought leadership. … AI is an exciting tool that we are hopeful will improve our operations in the future; however, at the moment, our use of AI is minimal."

Similarly, Lee Trett, director and mortgage advisor at Money Helpdesk, a mortgage, pensions, investments and estate planning platform, said his firm doesn't feel ready or confident enough to hand over the completion of legal documents to AI.

"There is also a trust element here as I believe some of our clients might be put off if we told them we use machines to handle a large portion of our legal document completion," he said. "While AI has a key role to play in helping financial advisers save time on their legal due diligence, human oversight is still needed for trust and accuracy purposes."

Unsupervised AI inaccuracies can mean legal exposure

When professionals use AI to condense legal or financial documents, there is a built-in danger that errors, omissions or biases will slip through unnoticed.

Allowing AI to produce biased or hallucinatory outputs can expose advisers to discrimination claims and suitability violations, said Cummings. An advisor who uses AI tools to generate profile-based investment or credit recommendations without testing for bias may face liability. 

Additionally, relying on AI for core advisory decisions without human oversight can trigger SEC enforcement for breach of fiduciary duty.

The SEC's 2025 Examination Priorities explicitly identify AI integration into advisory operations — such as portfolio management, trading, marketing and compliance — as a heightened focus area, with emphasis on examining compliance policies, procedures and disclosures tied to AI use.

An advisor relying on AI-generated recommendations without robust human review creates a roadmap for enforcement interventions, said Cummings, potentially violating duties of care and loyalty under the Investment Advisers Act.

"Machine-generated logic will not shield the advisor from legal responsibility," said Cummings.

Failing to disclose reliance on AI in investment advice can constitute a material omission under federal securities laws. The SEC has intensified enforcement against so-called "AI washing," or overstating or misrepresenting AI capabilities.

"Public companies have faced an increase in securities class actions tied to misleading AI claims, and the SEC's emerging technologies unit is targeting deceptive disclosures," said Cummings. "The same logic applies to advisors. Failing to disclose that key recommendations derived from AI models may constitute a material omission."

Reliance on AI without human supervision can be viewed as a breach of fiduciary duty, especially where clients depend on advisors to exercise prudent judgment, said Kelsey Szamet, a labor attorney and partner at Kingsley Szamet Employment Lawyers in Encino, California.

Even as the technology serves to increase efficiency, it should never be a replacement for professional scrutiny, she said.

"Just as there is a duty on the part of employers to ensure fairness and accuracy when applying AI in staffing or performance evaluations, financial advisors must ensure human accountability in the use of AI tools on client business," she said.

On the other hand, Jay Zigmont, CEO and founder of Childfree Wealth in Mount Juliet, Tennessee, said the argument around AI errors, hallucinations and giving bad advice is faulty.

"We don't really know if AI is better or worse than human judgment at this time," he said. "Many firms lack a comprehensive quality assurance program that audits planners' advice, recommendations and interactions with clients. It is more of a 'no harm, no foul' system. If there are no complaints or compliance issues, mistakes can go unseen for the life of the client and the planner."

With AI, the current best practice is to monitor systems for errors, hallucinations and other failures, said Zigmont. Because that monitoring surfaces more errors, it can paint an inaccurately harsh picture of AI's reliability, he said.

"If humans were under the same microscope, we would likely find just as many errors, if not more," he said. "Yes, there are potential legal pitfalls, but in an environment of constant improvement and oversight, the risks can be limited."

At his firm, Zigmont said, a debriefing is conducted after every client meeting: the planner provides feedback on the meeting, gets feedback from the rest of the team, and the data is analyzed for potential issues.

"We are currently creating an AI model of debriefing of client meetings, and are following that with an AI model for life, financial, tax and estate planning for child-free people," he said. "In this way, we can compare humans to AI and provide guidance and support for both."

Privacy concerns

Financial advisors are also subject to the Gramm-Leach-Bliley Act (GLBA), a federal privacy law that applies to companies offering financial services.

Under the GLBA, advisors must share their privacy policies with consumers in writing, give them the right to opt out of certain kinds of information sharing, and ensure the security and confidentiality of customer records and information, said Yelena Ambartsumian, founding attorney at privacy and AI governance law firm Ambart Law in New York City.

"Financial advisers may not realize it, but when they are sharing their customers' information with a generative AI tool, they are in fact sharing that information with a third-party," she said.

In a similar vein, attorneys, who have obligations to safeguard client information, use legal AI tools that run locally, which helps ensure that privileged information is not inadvertently shared or disclosed, said Ambartsumian.

If an advisor, however, is not using a specialized tool with confidentiality and security controls, they could inadvertently be opening themselves up to liability on the privacy front, unless the information they provide is stripped of any client-identifying information, she said.

"It definitely does not seem worth it," she said.

Transparency is essential

AI-generated financial advice — like AI-generated legal advice — may fall outside errors and omissions (E&O) insurance policy coverage and expose professionals to direct personal liability.

As scrutiny of AI in financial services intensifies, courts and insurers are demanding evidence of documented human oversight and compliance protocols, said Cummings.

"Without written review procedures, disclosure logs and auditable human sign-off, advisors may face denial of E&O claims," he said. "This can result in uninsured exposure for professional negligence, breach of fiduciary duty and violations of state consumer protection statutes, including states' unfair and deceptive trade practices statutes."

Cummings said advisors should immediately implement written supervisory procedures requiring licensed professionals to review all AI-assisted outputs before client delivery. AI use must be specifically disclosed in Form ADV Part 2, client agreements and marketing materials, he said.

"Firms must test regularly for model bias and hallucinations and retain records of both AI outputs and human review. Failure to act invites career-ending regulatory, civil, and personal liability."

Where an advisor deploys AI-derived insights or legal document summaries, that reliance must be indicated to the client, said Szamet.

"AI is a useful tool, but no panacea for evasion of responsibility," she said. "Advisors who incorporate it wisely, with open oversight and transparency, will be much better safeguarded than others who substitute automation for judgment."
