Monitoring employee AI use? Have an AI policy? The SEC will ask

The Future Proof Citywide conference took place from Sunday, March 8, 2026 to Wednesday, March 11, 2026 in Miami Beach, Florida.
Rob Burgess/Financial Planning

Regulators including the Securities and Exchange Commission (SEC) are taking a harder look at firms' use of artificial intelligence tools in 2026, and firms that delay key steps like implementing AI policies or monitoring employees' AI use until they're under scrutiny could face problems.


That was the message from panelists during the session "AI Regulation Is Coming: What Advisors Need to Prepare for Now" on Monday at the Future Proof Citywide conference in Miami Beach, Florida. Moderator Andrew Foerch, deputy editor of Citywire, pointed to the SEC's 2026 Examination Priorities, which highlight scrutiny of how firms represent their AI capabilities; whether firms have policies and procedures to monitor their use of AI technologies, including fraud prevention and detection, back-office operations, anti-money laundering (AML) and trading functions; and how they are integrating regulatory technology.

Panelist Alec Crawford, founder and CEO of AI risk management platform Verapath, said that language signaled to him that the SEC is taking a broad view of AI regulation in the industry.

READ MORE: This is the biggest cybersecurity threat for wealth firms

Monitoring employee use of AI

That broad view extends to third parties, Crawford said: if a contractor enters client data into a public AI model, for example, the registered investment advisory firm that hired the contractor is responsible.

To mitigate this risk, Crawford said firms should require employees and contractors to access AI through a sanctioned portal that then tracks all activity.

"If you're not keeping track of what people are doing, you're going to have a problem," he said.

Panelist Thomas Stewart, founder and CEO of compliance software firm Hadrius, said that staying compliant while using AI boils down to oversight and clarity, even when firms are using only general-purpose large language models like ChatGPT or Claude.

"You have to be accurate and transparent in your reporting of how you're using AI in the firm, with your clients, with the market in general, so that you don't get caught misrepresenting how you're actually using it," he said.

Another risk area firms should be wary of is what Stewart called "BYOAI" — "bring your own AI" — where employees introduce their own AI tools into the workplace.

"It's going to be absolutely essential that [chief compliance officers] and firm principals get control of that and provide guardrails for how their employees interact with AI," he said.

READ MORE: Using AI to write that client email? Think twice.

Creating AI policies within firms

To stay on regulators' good side, firms that haven't already begun sketching out their AI policies should start now, if only to show examiners that the process is under way, Foerch said.

"It's a proof of concept that you care about using this in a compliant way, and you're thinking about it proactively, as opposed to reacting to an enforcement action," he said.

Getting a policy in place is important, Crawford said, because during an AI-focused examination, the first thing the SEC is likely to request from an advisory firm is, "Show us your AI policy."

"The second thing they're going to do is they're going to say, 'Show us how you implemented your AI policy,'" Crawford said. "And if you're not doing what you said you're going to do, then you've got a real problem."

