Wealth Think

AI can transcribe client convos — but it fails to understand them

Artificial intelligence is increasingly finding its way into the workflows of financial planners, but more unevenly than industry headlines may suggest.

James Woodfall, founder, Raise Your EI

In the emotional intelligence workshops I've conducted with advisors over the past year, only around 20% say they use AI on a day-to-day basis. Most remain cautious, unsure of where it fits into their practice, or are waiting for out-of-the-box solutions that don't require time-consuming customization.

Among those who do use it, adoption is narrow. Most tell me they use AI to generate meeting transcripts and create meeting summaries. Some go further, pasting transcripts into tools like ChatGPT to get feedback on their performance or to identify themes and potential client insights.

Here's where the risk starts. Although AI outputs look good — even impressive — on the surface, advisors universally acknowledge that meeting transcripts and summaries can contain errors. 

Part of the problem is that advisors treat transcripts as objectively accurate records. They aren't. Accents, poor microphone quality, background noise, mumbling and interruptions routinely distort meaning. Words are missed, misheard or mistranscribed.

False AI flags

Even when technically correct, AI transcripts record an extremely narrow slice of human interaction. They capture words but little else. 

In reality, clients communicate through posture, facial expressions, hand movements, hesitation and silence. AI usually edits out false starts, stutters and filler words like "um" and "ah." These signals can reflect confusion, emotional processing or distraction, and they provide the context needed for understanding.

An advisor shared an example with me that perfectly illustrates the problem. During a meeting, a couple joked about the husband "ending up in a home" later in life. It was clearly lighthearted banter between spouses, but the AI summary interpreted this literally and flagged the husband as vulnerable with potential care needs. Nothing in the transcript indicated distress, but the AI tool, lacking the context to interpret vocal tone or facial expressions, filled in the gaps. 

I tested this myself by comparing an AI analysis of a client meeting with my own, based on watching the video. At one point, the AI flagged an advisor's comment as "emotionally intelligent." The video told a different story: the client's facial reaction to the comment clearly showed discomfort.

Proceed with caution when relying on AI

Advisors, especially those under time pressure, are inclined to accept such AI outputs at face value. In EI workshops, I train financial planners to interpret more than words. AI, by contrast, is forced to infer emotional meaning from language alone. That gap leads to confident-sounding but flawed conclusions, which the tool will then reinforce unless challenged with new evidence.

The answer isn't to abandon AI, but to be clear about what it can and can't do. AI does a good job of telling you what was said in a meeting. What it cannot reliably do is tell you what those words meant in context or why something mattered to a client in a particular moment.  

A safer approach is to treat AI-driven conclusions as provisional. Instead of asking a tool to interpret a meeting, use it to highlight where transcripts are unclear or where information is incomplete. Instruct it to say so when it lacks context. Used this way, AI acts as a prompt for human judgment rather than a substitute for it.
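For example, rather than asking a tool to summarize a client's emotional state, an advisor might prompt it along these lines: "Review this transcript and list any passages where the audio appears garbled, a statement is ambiguous or information seems incomplete. Do not infer emotions or intent, and flag anything you lack the context to assess." The exact wording will vary by tool, but the principle is the same: ask the AI to surface uncertainty rather than resolve it.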

AI is undoubtedly a time saver for advisors. But to avoid headaches down the road, my advice is to reclaim some of the time saved to review outputs.
