Is Your Data AI-Ready?


AI's potential in wealth management is only as strong as the data that fuels it. Many firms are eager to implement AI tools for planning, prospecting, and operational efficiencies—but without clean, structured, accessible data, these initiatives can stall before they start.  

In this session, we'll explore how to assess whether your firm's data is AI-ready, identify the steps necessary to prepare your data infrastructure, and outline best practices for ongoing data management to support AI-driven growth. We'll discuss common challenges wealth management firms face, including data silos across CRMs, custodians, and planning systems, and how to resolve them to enable AI tools to deliver meaningful insights and automation.

Whether you're just beginning to explore AI or seeking to maximize your existing AI solutions, this session will help you build a strong data foundation to fuel innovation and scalable growth in your firm.

Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Tim Welsh (00:08):
Welcome to our 2:10 panel. We're starting one minute early, so that's a new world record. We're going to talk all about the data that underlies a lot of these AI themes we've been hearing about today and throughout the rest of the conference. My name is Tim Welsh, president of Nexus Strategy, a consulting firm I started about 20 years ago specifically to focus on this segment of the financial services industry. So it's a niche within a niche within a niche. And when you do that, you get to know a lot about a little. Joining me on the panel today is an all-star group. I'll have each of them introduce themselves, and then I've got a million questions on the iPad, but we'd love yours as well at the end. Jeremi, please.

Jeremi Karnell (00:48):
Yes, Jeremi Karnell, head of data solutions at Envestnet.

Stefan Ludlow (00:52):
Stefan Ludlow, Chief Technology Officer of Cerity Partners.

Oleg Tishkevich (00:57):
Oleg Tishkevich, CEO of Invent, an AI data cloud company.

Geoff Moore (01:02):
Geoff Moore, Chief Information Officer for the Valmark Financial Group.

Tim Welsh (01:06):
So you can see we've got a wide range of different enterprises and companies and practitioners up here who are actually using this stuff and deciding how it's going to play out. So really, if you think about data, it's always that same old statement, right? Garbage in, garbage out. There are a lot of data issues I think we're going to discuss here to get going. We did a little pre-call and I've got a bunch of questions for each one of them, but I'd love you guys to just dive in each time. Stefan—we like the name, like Steph Curry—you said the biggest risk firms face isn't that AI will hallucinate, it's that their data will. What does AI-ready data really mean, and how should a wealth firm assess its current state?

Stefan Ludlow (01:50):
It's unfortunately somewhat boring in that we've all been to these conferences. We've all walked away from them with a hundred action items, and so many of them often come back to our data. If you're trying to use AI, for example, we saw some MCP servers in use. We saw a bunch of presentations where you're going from a client record down to financial account records, summarizing other data about them. If data in your underlying platforms isn't connected, you can't do those linkages. You can't use AI to actually create the insights you're looking for because it's just going to be like, "Oh, John Smith, I know nothing about him." The best loops that we've seen as we've developed AI prototypes and built some AI products internally are the ones where there's a tight loop between the data and the question. If I'm asking for a summary of what's happened with a client over the last year, I need to pull together some demographic data about the client, about the relationship with Cerity Partners, about recent communication such as text messages, emails and what have you between us and that client over a period of time, plus some portfolio information. I essentially have to be able to construct that information, provide it to an AI, and then it can give me wonderful insights about that.

(03:11):
But if nothing's connected, if the data in your CRM has no connectivity to the data in your portfolio reporting system, if the data in your eMoney or financial planning platform has no connectivity to that system, you're just hobbling yourself. So it's unfortunately the old boring story of you have to have good data hygiene and good client data setup between CRMs and other platforms. At that point, you can do a lot of unlocking with AI. Not that you can't get fantastic use cases out of it without doing that, but the best use cases are when you can construct a really nice set of data to give to an AI. So it doesn't hallucinate. It's like talking to your own data. That's really the point.
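
To make Stefan's point concrete, here is a minimal sketch of that kind of context assembly—pulling linked CRM, communications, and portfolio data into a single grounded prompt before asking the model for a client summary. The data sources, helper objects, and method names are hypothetical placeholders, not any particular vendor's API.

```python
# Hypothetical sketch: assemble connected client data into one prompt before
# asking an LLM for a year-in-review summary. The crm / portfolio_system /
# comms_store / llm objects stand in for whatever integrations a firm has.
from dataclasses import dataclass

@dataclass
class ClientContext:
    demographics: dict         # e.g. from the CRM record
    communications: list[str]  # recent emails, texts, meeting notes
    portfolio: dict            # balances, allocation, recent activity

def build_prompt(ctx: ClientContext) -> str:
    """Turn the linked records into a single grounded prompt."""
    comms = "\n".join(f"- {c}" for c in ctx.communications[-20:])
    return (
        "Summarize the last year for this client using ONLY the data below.\n"
        f"Demographics: {ctx.demographics}\n"
        f"Portfolio: {ctx.portfolio}\n"
        f"Recent communications:\n{comms}\n"
        "If something is not in the data, say it is unknown."
    )

def summarize_client(client_id: str, crm, portfolio_system, comms_store, llm) -> str:
    # Every system must share a consistent client identifier for this to work --
    # the "base connectivity" point made on the panel.
    ctx = ClientContext(
        demographics=crm.get_client(client_id),
        communications=comms_store.recent(client_id, days=365),
        portfolio=portfolio_system.snapshot(client_id),
    )
    return llm.complete(build_prompt(ctx))
```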

Tim Welsh (03:52):
So where do most firms fall short? Is it structured data, unstructured data, accessibility, or data governance? Where's the problem?

Stefan Ludlow (04:00):
I find it's often just in the base connectivity. For example, if your CRM doesn't have a reference to financial account data at all—like a list of financial accounts even—so it can hop to another system, you're already hobbled there. If you don't have a consistent definition of a client between systems—for example, I see all the time folks who are using Addepar, Orion, Tamarac, what have you—if the client in Orion doesn't match the client in Salesforce with the same financial accounts, you're crippling yourself because you're going to get inconsistent answers back. Just that foundational operational process for onboarding a client: are we creating that data across our different systems consistently so that then we can get a lot of the productivity boost of using AI on top of that?
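
One way to act on this is a simple reconciliation pass that checks whether the same client identifier resolves, with the same accounts, in every system. A rough sketch, assuming hypothetical system interfaces rather than any real CRM or portfolio API:

```python
# Rough sketch: flag clients whose records don't line up across systems.
# The `systems` objects and their find_client method are hypothetical
# stand-ins for a firm's actual CRM / portfolio / planning integrations.
def audit_client_linkage(client_ids, systems):
    """Return clients that are missing, or have mismatched accounts, somewhere."""
    issues = []
    for cid in client_ids:
        records = {name: sys.find_client(cid) for name, sys in systems.items()}
        missing = [name for name, rec in records.items() if rec is None]
        # Compare the set of financial account numbers each system knows about.
        account_sets = {
            name: set(rec["accounts"]) for name, rec in records.items() if rec
        }
        mismatched = len({frozenset(a) for a in account_sets.values()}) > 1
        if missing or mismatched:
            issues.append({"client": cid, "missing_in": missing,
                           "accounts_by_system": account_sets})
    return issues
```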

Tim Welsh (04:54):
Great. Geoff, along those lines, you recently rolled out a knowledge base for your advisors. Any lessons learned there, getting your data ready?

Geoff Moore (05:03):
Yeah. I'll share three data lessons I learned. We have a portal for our advisors where they can now ask questions of the AI, but the hardest part of the project wasn't necessarily the AI, it was all the data cleanup. In our portal, we had about 5,000 different documents. If AI is going to look at all of those, it might get the wrong answer, and we wanted to make sure we were giving a good answer to our advisors. So we went through, and I looked at basically: is this document on a page in the portal, and has anyone even looked at this document in the last year? That was my first criteria. We went through and we eliminated 80% of the documents. Just that first pass, gone. Then the next thing we noticed is there were some documents that needed to be on a page, but we didn't necessarily want the AI to look at that document.

(05:52):
A good example is webinars, where we list all of the archives of the old webinars so people can see them; they might have answers that were correct three years ago but not the correct answer today. So we created a distinction: this is data that we want to keep, and this is data that we want the AI to answer from. The last thing we did was make sure that when people were asking questions, it was getting logged so that we could have feedback. We just had our conference last week, we rolled out our AI search, we had 200 people put questions in, and then we used AI to analyze those questions. It came back and said, "Well, you need to write three new articles because what you had didn't cover it." So basically: cut out your old stuff, figure out what the AI needs versus what you think it needs, and lastly, have some sort of feedback loop to keep improving your documentation and your training set.
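
A short, illustrative sketch of the three lessons Geoff describes—prune stale documents, keep some content visible to people but out of the AI's index, and log every question for a feedback loop. The document fields and file path are hypothetical:

```python
# Illustrative sketch: prune stale documents, exclude some from the AI index,
# and log every advisor question for the feedback loop. Field names such as
# "on_portal_page", "last_viewed", and "ai_answerable" are hypothetical.
from datetime import datetime, timedelta, timezone
import json

STALE_AFTER = timedelta(days=365)

def select_for_ai_index(documents):
    """Keep only documents that are linked from a page, recently viewed,
    and explicitly allowed for AI answers (archived webinars stay visible
    to people but are left out of the index)."""
    now = datetime.now(timezone.utc)
    return [
        d for d in documents
        if d["on_portal_page"]
        and d["last_viewed"] >= now - STALE_AFTER
        and d.get("ai_answerable", True)
    ]

def log_question(question: str, answer: str, path="ai_questions.log"):
    """Append every question so gaps in the knowledge base show up later."""
    with open(path, "a") as f:
        f.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "answer": answer,
        }) + "\n")
```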

Tim Welsh (06:45):
Wonderful. Great use cases there. Oleg, this morning you mentioned that people don't even know what a household is across systems or what a client is. Why is standardizing that definition so important for AI? And what's the process for doing that? What's your best advice having worked with all these companies?

Oleg Tishkevich (07:07):
So, great point about connecting AI closer to the data. There's also the H-word in our industry. It's the worst one: Household. Think about how you think about households. This is probably the biggest thing for AI to figure out, because it can't; I don't think too many humans can. We work with firms that have seven different household-type definitions. We're talking household types. Then the business rules within those households are going to be very different from firm to firm. How do you group people, accounts, businesses, trusts, your retirement business, your private wealth business? What's your allocation across the household from a portfolio perspective? Do you believe in Morningstar, or what is the overall, overarching answer?

(08:14):
When you're asking AI about your portfolio, which household should I be considering? Which asset allocation and asset classes should it be looking at for that particular answer? How am I going to bring it all together to actually provide a real answer to my client? I'm getting deep into the data, but these are the real questions and real problems. I saw a few people nodding their heads here; I'm sure you've experienced these types of challenges because just taking household as one little example, you've got your household in Orion and Black Diamond and Addepar and Tamarac, you've got your households in your CRMs—Salesforce, Redtail, Wealthbox—and then you've got your planning software with all kinds of fun planning households there. Imagine a conglomerate of all of these different groupings. You've got to bring it onto a single common denominator in order to really make sense out of it yourself first.

(09:19):
We have a lot of different firms trying to figure out and force everybody to one standard. That's one way. "Okay, we're going to use one type of household." But then you run into specific business use cases where that doesn't really work. Sorry to get deep into the data.
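
A toy illustration of the "common denominator" problem Oleg raises: the same accounts get grouped into different households by different platforms, so a firm has to pick an authoritative grouping per use case. All names and groupings below are made up for illustration:

```python
# Toy illustration: the same accounts grouped into different "households" by
# different platforms, resolved to one canonical grouping per use case.
# Every name and rule here is a hypothetical example, not vendor behavior.
source_households = {
    "portfolio_system": {"HH-1": ["ACCT-01", "ACCT-02", "ACCT-03"]},
    "crm":              {"Smith Family": ["ACCT-01", "ACCT-02"],
                         "Smith Trust":  ["ACCT-03"]},
    "planning_tool":    {"Plan-Smith":   ["ACCT-01", "ACCT-03"]},
}

# The firm decides which grouping is authoritative for each kind of question.
use_case_authority = {
    "performance_reporting": "portfolio_system",
    "relationship_view":     "crm",
    "goal_planning":         "planning_tool",
}

def households_for(use_case: str) -> dict[str, list[str]]:
    """Return the household grouping an AI agent should use for this question."""
    return source_households[use_case_authority[use_case]]

print(households_for("relationship_view"))
```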

Geoff Moore (09:39):
That's a good example. The household example is really good because we haven't talked about it a lot at the conference yet, but this whole idea of context and knowledge management and having a subject matter expert explain what those seven different household types are and put it in the system. When somebody says, "Give me the households," AI says, "Well, which one?" And then it knows and understands what those differences are.

Oleg Tishkevich (10:01):
Exactly. For different use cases, you would have a different household type that you want to query. You need to train your AI agents to understand the intent of what the client is asking, and in what context, so they come back with the answer and the data that's relevant in that particular use case. Great point.

Stefan Ludlow (10:22):
And the core challenge is that you come to a conference like this and it's like AI is going to solve all the world's problems. Then you're just like, if you as a human being can't navigate through a client graph—meaning the household, the underlying legal entities, the children who are getting a fee breakpoint because they're part of that family relationship—if you yourself can't do that within your own technology, you've unfortunately still got some homework to do. Because of the lack of consistent definitions within our practices—what is a household, what is a legal entity, how are we doing this—if you're not doing that consistently, it's very difficult to get consistent and good answers as you build systems to scale your business.

Tim Welsh (11:11):
Jeremi, along those lines, should I manage my data in-house or should I give it to you at Envestnet to do it? What are the risks of giving away my "data gold," as people call it?

Jeremi Karnell (11:25):
The way I look at it is if it's your data gold, if it's the source of your insights and of your governance, you don't outsource that to anyone. You keep that. If it's infrastructure related, then that's a whole different story. For example, Envestnet is in a really unique position. We've been doing institutional wealth data—aggregation, enrichment, making it trade-ready for the next day—for 20 years. We have such a strong muscle at creating good data out of the gates. It's given us this unique position when we start thinking about how we leverage that data with client and advisor knowledge graphs, how we apply deep neural networks and machine learning, and how we use that to help fine-tune large language models. We're already in a really good place and we'll never outsource that. I highly recommend you don't either if you know the value of your data—if that's your gold, if that's your oxygen, keep that close to your vest.

(12:31):
On the infrastructure side, very different. I have no issue whatsoever taking that data, getting it replicated daily into Snowflake, being able to process our models with that data on top of Snowflake and sharing that data out to our enterprise partners that need that on either a semi-real-time or daily basis.

Tim Welsh (12:57):
Fantastic. Love that point of view and that perspective. Stefan, I believe you've described AI as a non-deterministic engine. What does that mean? How can firms take advantage of understanding that?

Stefan Ludlow (13:13):
All that means is you're not guaranteed to get the same output every single time. And that's the wonder. That's why AI is so good at being creative when you're talking to it and brainstorming—you can ask it the same question over and over again and you'll actually get something different each time. That's really wonderful when you're trying to do something creative, like create a wonderful marketing email. It's really bad when you're trying to open a new financial account or run a trade. As we think about AI and deterministic versus non-deterministic and what AI is really fantastic at, it's just something to keep in mind. If we're building agents that we're unleashing into the world that have opportunities to send emails, read our data, and make updates in underlying systems, there's a tremendous amount of risk in just doing that, from prompt injection attacks to simple inconsistency in business processes.

(14:14):
Now you look at our own staff and we're just like, "Oh, well, humans have variability too. Why isn't that okay within AI agents?" There's this expectation when we build technology that it's consistent each time—that when I ask an AI to do a thing, it's going to do it the same way every time. Unfortunately, that's just not the case. What that does is change the calculus as we're building AI: what are we building? Are we building AI tools that just have a grid of buttons they can press while they figure out which one to use, or are we creating smaller, segmented agents or smaller, segmented bits of functionality with more rails around them? For example, "I'm building a client meeting prep agent that has specifically the tools to do that client meeting prep and is reading from the transcript of our last meeting," versus, "Here's everything you could do." It's all about narrowing and giving those agents limited context for execution. Give it just the data it needs to do the job, make that job as small as possible, and your success ratio compared to just, "Hey, go off into the world and figure it out," is very, very different.
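
A minimal sketch of the narrow-agent idea Stefan describes: a meeting-prep agent that gets only two read-only tools and a small job, rather than a grid of every button in the firm. The tool functions and the generic `llm.run_agent` call are hypothetical placeholders, not a specific framework's API:

```python
# Minimal sketch of a narrowly scoped agent: it can read the last meeting
# transcript and the client's recent activity, and nothing else. The tool
# functions and the llm.run_agent call are hypothetical placeholders.
def get_last_meeting_transcript(client_id: str) -> str:
    # Read-only stand-in; in practice this would pull the stored transcript.
    return f"(transcript for client {client_id})"

def get_recent_activity(client_id: str) -> str:
    # Read-only stand-in; portfolio changes and messages since the last meeting.
    return f"(recent activity for client {client_id})"

MEETING_PREP_TOOLS = [get_last_meeting_transcript, get_recent_activity]

def prep_for_meeting(client_id: str, llm) -> str:
    # Limited context, limited tools, no write access: the agent can summarize
    # and suggest talking points, but it cannot send emails or update systems.
    return llm.run_agent(
        instructions="Prepare a one-page meeting brief for this client.",
        tools=MEETING_PREP_TOOLS,
        inputs={"client_id": client_id},
    )
```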

Tim Welsh (15:25):
Anyone else have a point of view on that?

Oleg Tishkevich (15:27):
I wanted to add to your point, Stefan, that it's also very important to have a single source of truth for these types of situations. You could have a lot of different information in different places. With AI, especially with a deterministic type of approach where you know exactly where the data's coming from, you want it to come from the right source every time. Very simple real example: you have your Orion data and your Orion setup, and then you have your Salesforce with information in both places. You ask AI, "What is my current balance?" Salesforce has an integration directly to the custodian, Schwab; Orion processes the data and sometimes updates it during the day. You are always going to have a disconnect between the balances whenever you ask that question during any given day. Think about that. It's a nuanced situation, but it means you really need to understand what answer you want to serve to which client, or even to internal staff. I love the question: how many clients do I have? Does anybody really know? Because it's different depending on which system you look at and how well you track that information in that specific system.
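
One lightweight way to address the single-source-of-truth problem Oleg describes is an explicit source-of-record map, so a given question is always answered from one designated system. The systems and fields below are hypothetical examples:

```python
# Simple sketch of a "source of record" map: each data element is answered
# from one designated system, so "what is my current balance?" doesn't get
# two different answers on the same day. Systems and fields are hypothetical.
SOURCE_OF_RECORD = {
    "intraday_balance":   "custodian",         # e.g. direct custodial feed
    "reconciled_balance": "portfolio_system",  # e.g. end-of-day processed data
    "client_count":       "crm",
    "household_grouping": "portfolio_system",
}

def answer_source(field: str) -> str:
    """Which system an AI agent (or a human) should query for this field."""
    try:
        return SOURCE_OF_RECORD[field]
    except KeyError:
        raise ValueError(f"No source of record defined for '{field}'")

print(answer_source("client_count"))
```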

Tim Welsh (16:59):
How do you reconcile that, Geoff?

Geoff Moore (17:02):
How do you reconcile the difference?

Tim Welsh (17:03):
Yeah, or you make sure that you have a mechanism to understand how many clients you have.

Geoff Moore (17:08):
You have to have good data authority. I think it comes back to that context question of writing good context for the systems and the experts. John was talking a little bit about how everybody's worried about all these jobs going away, but there's all of this work that needs to be done to explain to the systems and create the context and the knowledge bases so they know where to look, how to look, and under what circumstances.

Jeremi Karnell (17:28):
I wonder how much you really do need to reconcile it. I think this is a hang-up in our industry. One of the cornerstone offerings we have at Envestnet is our insights engine—25 million insights processed a day, all the next-best-action, business-rule-generated, and predictive insights. We generate those daily. You could get an insight that says you have a client with a high stock concentration in a particular equity. That could change over time; maybe they bought more later that day. There could be some variance in that percentage of concentration risk, but does that change the insight? There's still a concentration risk, and it shouldn't paralyze advisors just because the number isn't 100% precise. At the end of the day, the next best action is correct. You've got to help the client address it, even if the figure isn't perfect because of the data sync.
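
A stripped-down example of the kind of business-rule insight Jeremi describes: a concentration-risk flag that fires above a threshold, so small intraday drift in the data doesn't change the next best action. The 20% threshold and data structures are hypothetical:

```python
# Stripped-down example of a business-rule insight: concentration risk.
# The 20% threshold and the position structure are hypothetical; the point is
# that small intraday data drift doesn't change the recommended action.
def concentration_insights(positions: dict[str, float], threshold: float = 0.20):
    """positions maps holding -> market value; returns next-best-action insights."""
    total = sum(positions.values())
    insights = []
    for holding, value in positions.items():
        weight = value / total if total else 0.0
        if weight >= threshold:
            insights.append({
                "type": "concentration_risk",
                "holding": holding,
                "weight": round(weight, 3),
                "next_best_action": f"Discuss reducing {holding} exposure",
            })
    return insights

# Whether the position is 22% or 24% of the portfolio today, the insight and
# the recommended conversation are the same.
print(concentration_insights({"Single stock": 240_000, "Bond fund": 760_000}))
```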

Oleg Tishkevich (18:31):
I think it comes down to what purpose the data is used for. My former background in financial planning—for a short amount of time I was at Envestnet as the CTO in charge of financial planning—if you're talking about the financial planning domain and you're talking about a client about their goals and what they're trying to achieve, you don't need daily trade-ready data to be able to make that recommendation to a client. In that use case, 100% agree. But if you are actually looking at the portfolio and you're setting up an account and you're doing something right now—did the account get funded, did I get this ACH or ACAT that came through—that is so essential to be able to do intraday. Depending on the type of use case, I think the answer could be different.

Stefan Ludlow (19:21):
The nice thing is many of us in this room are okay with building those productivity tools internally with a human in the loop, which means the barrier to entry has never been lower for building some of these workflows. Even if you have some issues with your data, some issues with householding, some issues with how frequently you're pulling the data, you can get really far with just some base-level integrations into a unified source of truth, asking questions on that, or building some automated workflows. I find a lot of people are just scared to even start with some of that automation. Just start with something small, something easy—summarizing recent activity or summarizing changes in the market based on portfolios. These are not particularly difficult things to do in Power Automate or other tools within your tenant. Just get started. Now, there's a world of work beyond that, where you'll probably call one of us at some point, but the barrier's never been lower.

Tim Welsh (20:27):
My advice is always to hire really expensive consultants.

Oleg Tishkevich (20:32):
One fun fact since we're on that subject. I just watched that YouTube video—I think it was EY or PNC—showing how they determined that one of those consultant reports that cost $430,000 to build was completely built by AI. They got in trouble for that. Deloitte, right? That was Deloitte.

Tim Welsh (21:00):
Along these lines, we heard Amy talk about the origin of AI—it's only been three or four years—but the fact that there are early adopters and then people who are laggards and afraid, is there a compliance issue here? Geoff, you start, and then Oleg, I know you have an issue about who owns the data.

Geoff Moore (21:19):
I'll just say this. To your first question of: are people scared, do they not know what to do? Absolutely. I saw that within our own organization. I saw real hesitancy, just apprehension, because people didn't want to get fired. They didn't want to put customer data in the wrong hands. I felt like I needed to do a good job of creating a safe space for people to do that. So we got a ChatGPT enterprise agreement and it just took off. I saw innovation happening. We have this insurance marketing group, and the manager for the insurance marketing group started an AI hackathon. It was nothing more than creating a custom GPT, but once he felt safe to do that, then we started seeing a lot more activity. When people feel safe and they know what the guardrails are—in fact, a study just came out that said the number two item for advancing AI use in an organization was having a governance policy, just so people knew what the guardrails were: "What am I safe to do so that I can actually start to try some things?"

Tim Welsh (22:18):
How about that compliance stuff, Oleg? We've talked about being very vocal about data ownership and the risk of sending proprietary information to somebody else's LLM who could expose it or steal it.

Oleg Tishkevich (22:32):
There are definitely major providers that give you an SLA that says, "Okay, we're not going to train our models on your data." You can do that. We see a range of firms that say, "Yeah, that's good enough for us if it's Microsoft or AWS or Claude and I've got an enterprise agreement; they're saying by contract that your data stays yours and they're not going to train on it." It's good enough. And then there's this other camp, the "Big Brother" camp: "Yeah, we know you say that by contract, but we know these big companies." It comes down to, if something goes wrong, how much it's going to cost me to sue versus what it costs to get the new technology implemented. Firms are looking at, "Well, can I go back to a private cloud use case?" Now more than ever, I think we see—it's funny how we all went to SaaS and went to cloud—we've all gone through that transformation, and we're seeing a number of banks and large financial institutions now going back and saying, "Well, can I have all this AI infrastructure within my control? I know this data center, I know they have top-notch, military-grade security, I know my data is backed up, I know exactly where it is, I know nobody's touching it."

(24:38):
Depending on where you land on that map, if you're somewhere in between or on one side versus the other, there are opportunities right now. We're working with firms that want to set up this private cloud. It's much more cost-effective now than you would think, because if you follow the stocks of all of those great cloud companies that sell compute, the stock price keeps going up because you're paying for compute costs that nobody can realistically calculate. With a private cloud, you know exactly what you're spending on this rack of servers and it's pretty much a flat amount. If you need another one, you know exactly how much it's going to cost. If firms are looking to grow and scale, they're really betting on AI, and they want to own their data and set up an environment where all of these cloud systems could run in a private cloud or on-prem in a secure environment, that is now very much possible. Some of our clients are making that a priority so they're covered from a security perspective, for geopolitical reasons, and for a bunch of other things happening right now in the world.

Jeremi Karnell (25:23):
And Oleg, I'm sorry to put you on the spot. They think that's safer than the existing infrastructures that exist with Azure or with Snowflake?

Oleg Tishkevich (25:32):
It's about control. It's not our firm doing the hosting. You guys are all familiar with Rackspace or any of those providers that are nationally known for hosting services—heck, plenty of the big providers are hosting with them as well. It's literally cutting out the middleman and going directly to the infrastructure provider. We're not hosting any of that stuff; we're just helping firms set up with those major providers that essentially supply the hardware and lower-level management services, and then you put your own software on top of it versus leveraging something that's already wrapped up nicely in the cloud with a compute cost.

Geoff Moore (26:09):
I would say I feel like we're a little bit in the middle. There are a lot of new AI startups, and in terms of risk, some of these folks may not have the same background in cybersecurity or SOC 2 compliance. I've met some of these startups that are like, "Oh, well, the hosting provider has SOC 2." They don't have SOC 2 themselves. Depending on the type of data you're using, you may fall into one camp or the other—I feel like sometimes I fall into both camps. Sometimes I'm like, "Oh, for this data set, we're going to keep it private and we'll work with this new AI startup, but we're going to do it so that they can't see it." But then for the bigger ones, maybe we'll use them. I think options are good.

Oleg Tishkevich (26:53):
Yeah. And to do that, if you do want to do it in the cloud, what I would recommend is that you leverage obfuscation. Basically, there's capability, like on the Invent platform, where you can say, "I don't want to send any PII to any AI provider." You can create a pipeline for whatever that data is, structured or unstructured. Whether it's a table from your client list with all your client names, social security numbers, and email addresses, they're all going to be stripped out and replaced with smart tokens that you understand. That's what the AI agent gets. So you know for sure that, whether or not they train on your data, none of your client information is at risk of earning you a nice letter from the SEC with a big fine.
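
A simplified, generic sketch of that obfuscation idea: strip identifying values, replace them with opaque tokens the firm keeps locally, and send only the tokenized text to the model. This is not the Invent platform's actual pipeline, and real PII detection needs far more than two regular expressions (names, for instance, typically require entity recognition):

```python
# Simplified, generic sketch of PII obfuscation before calling an LLM:
# replace identifying values with opaque tokens, keep the mapping locally,
# and re-substitute on the way back. Not any vendor's actual pipeline.
import re
import uuid

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def tokenize(text: str):
    """Return (tokenized_text, mapping) with detected PII replaced by tokens."""
    mapping = {}
    def repl(match):
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token
    for pattern in PII_PATTERNS.values():
        text = pattern.sub(repl, text)
    return text, mapping

def detokenize(text: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

safe_text, key = tokenize("Client SSN 123-45-6789, jane@example.com, asked about RMDs.")
# safe_text goes to the AI provider; `key` never leaves the firm.
```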

Jeremi Karnell (27:41):
Because what retrieval-augmented generation is meant to solve for...

Oleg Tishkevich (27:47):
Well, not really. RAG is...

Tim Welsh (27:50):
We can't have any debates up here. It's all good. Well, along these lines, we can get into the weeds here, but we'd love your questions too. I know Joel, you probably have a few for us in a minute, but next question up is: let's get a tactical plan. Everybody, I want to hear your point of view here. If you're advising a mid-size RIA or broker-dealer that wants to get prepared for AI—they're still in the decision phase—what's your best advice? What should they do over the next 90 days? Jeremi, and then we'll go down the line.

Jeremi Karnell (28:22):
Do nothing with AI in the next 90 days. Get your data house in order, first and foremost. Maybe your first 30 to 60 days is that. Make sure you've got good data and good data governance first and foremost. That's where everything fails, because data is the oxygen of AI. Then think of your use cases and the way you're going to govern those use cases. Everything will work itself out. But if we're talking about a 90-day time span, don't do anything with AI until those steps are taken first.

Stefan Ludlow (28:59):
I'd say two pieces of advice. First, I wouldn't follow "do nothing with AI" so far that you don't play with some of the cool toys out there, but I would spend some time auditing your client onboarding process. How are you creating the client between each of the platforms that you're working with, and how are your colleagues doing that? If they're doing it inconsistently—if one team's doing the CRM configuration, one team's doing the portfolio reporting, another team's doing eMoney—it's going to be inconsistent and you're just creating pain for yourself in the future. A first step, if you've not done this, is just do that flowchart. Is it consistent? Are people entering the same data, and is that data flowing cleanly? Second is make sure that you have an actual paid account with one of the LLM providers. If you're just going out there and riffing with the free version of ChatGPT, you're going to stick something private in there and OpenAI is going to train on that data. It's going to be out there in the universe. We've literally seen people accidentally create public links—not in our firm, this was a Morgan Stanley issue, thank God—where they created a public link to a ChatGPT chat of a client's financial plan, and that was scraped on the web and came up in a Google result. Absolutely insane. If nothing else, please sign up for a paid plan with data protections, preferably an enterprise plan. If you've done nothing else, experiment, enjoy using an LLM—Claude comes with wonderful MCP connections to a bunch of existing platforms—and get your data house in order.

Oleg Tishkevich (30:35):
I definitely agree with both of you guys. Jeremi, that's the one you and I really see eye to eye on, because I can't stress this enough: data is really important. Getting that done as a preliminary first step is so essential. I love your comment about the paid version. You'd be surprised how many people—especially when you're dealing with somebody who's not familiar with this technology—are like, "Why would I pay for this? I just cut and paste into here." There are settings even on the free version where it won't train on your data, so you can configure it, but please, I'd rather you just buy one.

Jeremi Karnell (31:23):
Did you see the results of the Schwab survey that was released two weeks ago around AI adoption? They surveyed 900 of their advisors, and the headline was, "Yeah, 57% are adopting AI." Then you look at the use case: 70% ChatGPT—probably not the paid version—30% have training, 20% have a policy. You're like, "Oh, it's not 57% at all that's doing it the right way." It's probably more like 25%. Exactly. Geoff, what's your point of view?

Geoff Moore (31:50):
I'll go the opposite. I would say start experimenting because you may think your data's really clean or you may think your data is AI-ready, but you don't really know until you start testing and trying to do some of these things. If you've spent a bunch of time perfecting your data for AI but you're not testing it with AI, how do you know you're really setting your data up for AI? I would say start it, see what you learn, adjust it, and learn from there.

Tim Welsh (32:15):
Okay. Audience participation. Anybody want to try and stomp our panel? We're ready. There we go. We have a mic coming, so we will sing a song while we wait for the mic to come.

Audience Member (Michelle Feinstein) (32:30):
Hi, Michelle Feinstein, Salesforce. A lot of the conversation we're having lately with some firms that are a little bit more advanced is they don't want to go all-in with one AI solution. They want to start dabbling with different AI providers, and that gets into multi-agent orchestration. Do any of you have a point of view on that? Also, we are hearing this term MCP. Is this the next amazing solution to make that easy?

Tim Welsh (32:55):
Who wants it? Oleg, I'm starting with you.

Oleg Tishkevich (32:57):
I can take it. Absolutely. Where this is going with all of the AI—I think this was the theme in the previous couple of sessions—it's really becoming more commoditized. Advice to any AI startups here in the room who think you're going to build the next AI desktop that everybody's going to use: I'm going to break it to you, it's not going to happen. People are not going to switch to a brand-new AI thing. Maybe I'm wrong—if five years from now that's the case, that'd be pretty weird. But people would like to see things on their terms. Advisors working within practices have certain tools and certain capabilities. You want to be able to create agents that are much more integrated and connected to whatever environment the AI is deployed to. That's the vision and mission of Invent: we're trying to help this community of different solutions and systems—maybe some competing with each other—but we need to have a common ground. We need a way to bring all these things to all of your desktops without you spending $430,000 on an AI report from a consultant. Those are good ideas.

Stefan Ludlow (34:18):
I would say one of the deepest challenges we had personally as we were first embarking was that every platform is coming out with its own agentic framework. Box.com: "Here's our agent." Salesforce: "Here's our agent." eMoney: "Not yet." But other financial planning tools: "Yes." And they don't talk to each other. Figuring out a strategy where you're picking a platform to run agentic workflows or just LLMs—one where there's either interop or data from a number of different platforms—is key. Salesforce is frankly a wonderful platform if you're bringing in the financial account data, the financial planning data, and what have you. If you just run an LLM, either through Salesforce's Agentforce platform or by hooking ChatGPT up via Flow, you've got access to the underlying data and you can get pretty far. Having a perspective with multi-platform data accessibility is critical.

Tim Welsh (35:19):
Thank you, Michelle. Joel, stomp our panel, please.

Audience Member (Joel) (35:23):
I'm not going to stomp them. I want their opinion on something. This morning, Michael Kitces was talking about AI being just a giant word processor and the only applications are—what did he say?—content and email and note-taking. What do you guys think of that?

Jeremi Karnell (35:42):
I disagree with it. I think this industry's obsession with AI note-taking, with research and synthesis, is an indication of a lack of maturation with AI. I really do. It's episodic, it's self-reported; it's the difference between an advisor calling their client and being like, "Hey, this is what we talked about," versus having a reasoning fabric that's taking all of the real-time data coming off of your brokerage accounts, your managed accounts, the DTCC and Dazzle insurance and annuity data, and being like, "This is what we're seeing and this is what we have to do about it." That's the difference. I just think that—and that Schwab survey said the same thing—everything the advisors are focused on right now is note-taking, research, et cetera. As far as I'm concerned, your investors are using AI for those same things. That's parity. If you're showing up to that dance at parity with your investors, I don't think that's going to be a long-term relationship. I think with AI, especially with the data we have access to, you need to show up asymmetrically. That means tapping into those feeds that allow you to give a daily next best action and really drive the portfolio forward, versus being focused on this. I'm not taking anything away from note-takers—I think that unstructured data is a really important part of a bigger stack—we just as an industry seem so hyper-focused on this one solution set. Again, I think it's a lack of maturation.

Geoff Moore (37:24):
Well, I think some of the MCP stuff is starting to get more mature too. The concept is out there, people get it—AI is going to talk to different systems—but it's still hard to leverage everywhere. Not a lot of systems are fully supporting it yet. As you start to see more and more of that, we'll see better use cases to talk about and share with each other.

Tim Welsh (37:42):
I think, Joel, to your point, Michael's a wonderful guy that knows so much, but he doesn't know everything. Next question over here.

Audience Member 1 (37:50):
Getting your data ready as you described it—how does that look for an RIA that might be 500 million versus two billion or 10 billion plus? Second, how important is it that the software providers we use have open APIs to allow all of this to flow? Because some of them will connect with anyone—those are the younger ones—and some of the bigger ones won't. You have a bunch of firms you work with, right?

Stefan Ludlow (38:29):
I would say that, number one, it's a whole lot easier to get your data AI-ready now than it will be once you've grown and you're starting in six or 12 months. There's never a better time than now. If you are a solo advisor or a small or modest practice with 500 million or a billion, my God, it's so much easier to get your data house in order.

Geoff Moore (38:56):
And you can use AI to help you. We had a data cleanup project a year ago. A young team had tried to do it manually; then they used one of our AI tools and were able to do it in a couple of days, just because the AI can now help you.

Stefan Ludlow (39:09):
Doing that for a book of 500 or 1,000 clients is so much easier than doing it for 50,000 clients. Secondly, on the API front, I think it's absolutely critical. As we're doing vendor evaluations, it's actually become a core part of our due diligence to make sure that, as we're making an enterprise purchasing decision, it comes with an open API. We've been burned a couple of times with a provider where we signed a big enterprise contract and we've gone down the road and been like, "All right, we're ready to hook it up to all of our systems." "Oh, that'd be an additional..." It always feels like an additional 50,000. I think that's critical. Just like looking at the SOC 2 Type II, look at the open API specs; make sure it's in the licensing.

Tim Welsh (39:51):
Okay, we have one minute left. Mark's got a question. Mark, go ahead.

Audience Member (Mark) (39:57):
Mark with Napsak, also AI for Advisors podcast. For the advisors here who are different sizes, at what point is a data lake appropriate for an RIA? What is the minimum size where that's a good investment versus just plugging straight into MCPs across different providers?

Oleg Tishkevich (40:15):
A data lake, historically—and I was just on another panel where some of the experts on AI said, "Well, a data lake takes two years to build and it's hundreds of thousands of dollars." That is so 2024. Twelve months have gone by and technology has evolved exponentially. On Invent, you can set up a data lake pretty much within two days. The tricky part comes when we start cleaning your data; that takes about two months of going into all the different source systems and making sure the data is actually your single source of truth. That does take time. But standing up a data lake can be done within two days now. From a size perspective, our minimum for a full data lake deployment is like 2,200 bucks per month. It's very accessible compared to the hundreds of thousands of dollars it used to cost firms to stand one up. Technology is improving and moving really fast, and it's accessible even for firms that are 500 million or one billion, two billion, et cetera. But it is an essential catalyst for growth. If you really want to grow, if you want to put your house in order and grow efficiently, that's what you need to start with.

Tim Welsh (41:37):
The clock is blinking at me. I apologize, we're out of time. So I really want to say thank you very much to our awesome panel. Thank you.