Regulators including the Securities and Exchange Commission (SEC) are taking a harder look at firms' use of artificial intelligence tools in 2026, and firms that delay key steps like implementing AI policies or monitoring employees' AI use risk being caught unprepared.
That was the message from panelists during the session "AI Regulation Is Coming: What Advisors Need to Prepare for Now" on Monday at the Future Proof Citywide conference in Miami Beach, Florida. Moderator Andrew Foerch, deputy editor of Citywire, pointed to the SEC's recent examination priorities, which reference firms' use of artificial intelligence.
Panelist Alec Crawford, founder and CEO of AI risk management platform Verapath, said that language signaled to him that the SEC is taking a broad view of AI regulation in the industry.
Monitoring employee use of AI
For example, Crawford said if a contractor enters client data into a public AI model, the registered investment advisory firm that hired the contractor is responsible.
To mitigate this risk, Crawford said firms should require employees and contractors to access AI through a sanctioned portal that then tracks all activity.
"If you're not keeping track of what people are doing, you're going to have a problem," he said.
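Crawford's "sanctioned portal" is, at its core, a single audited choke point that every AI request passes through. A minimal Python sketch of that idea follows; the `call_model` function is a hypothetical stand-in for a firm's approved LLM endpoint, and the SSN-style redaction rule is an illustrative example, not a complete client-data filter:

```python
import re
import time

# In practice this would be an append-only audit store, not an in-memory list.
AUDIT_LOG = []

# Hypothetical client-data pattern for illustration (US SSN-like strings).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def call_model(prompt: str) -> str:
    """Stub standing in for the firm's sanctioned LLM endpoint."""
    return f"model response to: {prompt[:40]}"


def sanctioned_ai_call(user_id: str, prompt: str) -> str:
    """Route every AI request through one logged, redacting choke point."""
    redacted = SSN_PATTERN.sub("[REDACTED]", prompt)
    AUDIT_LOG.append({
        "ts": time.time(),
        "user": user_id,
        "prompt": redacted,  # only the redacted form is ever stored
        "redactions": len(SSN_PATTERN.findall(prompt)),
    })
    return call_model(redacted)


reply = sanctioned_ai_call("contractor-17",
                           "Summarize the account for SSN 123-45-6789")
print(AUDIT_LOG[-1]["prompt"])  # logged prompt has the SSN masked
```

The design choice here is that logging and redaction happen before the model ever sees the prompt, so the audit trail Crawford describes exists even when a contractor pastes in client data.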
Panelist Thomas Stewart, founder and CEO of compliance software firm Hadrius, said that remaining compliant with AI boils down to oversight and clarity, even when firms are using only general-purpose large language models like ChatGPT.
"You have to be accurate and transparent in your reporting of how you're using AI in the firm, with your clients, with the market in general, so that you don't get caught misrepresenting how you're actually using it," he said.
Another risk area that firms should be wary about is what Stewart called "BYOAI" — "bring your own AI" — where employees introduce their own AI tools into the workplace.
"It's going to be absolutely essential that [chief compliance officers] and firm principals get control of that and provide guardrails for how their employees interact with AI," he said.
Creating AI policies within firms
To stay on regulators' good side, firms should begin sketching out their AI policies now if they haven't already, if only to show that the process is under way, Foerch said.
"It's a proof of concept that you care about using this in a compliant way, and you're thinking about it proactively, as opposed to reacting to an enforcement action," he said.
Getting a policy in place is important, Crawford said, because during an AI-focused examination, the first thing the SEC is likely to request from an advisory firm is, "Show us your AI policy."
"The second thing they're going to do is they're going to say, 'Show us how you implemented your AI policy,'" Crawford said. "And if you're not doing what you said you're going to do, then you've got a real problem."