Fireside Chat: Agentic AI Simulation for Client Growth (With Compliance Built In)

AI-powered simulation sandboxes are transforming how wealth managers and RIAs engage clients. These agentic AI tools let advisors safely preview likely client reactions to marketing campaigns, portfolio recommendations, and even sensitive fee conversations—before risking trust or introducing compliance exposure.

From a business strategy standpoint, simulation turns compliance from a hurdle into a competitive advantage. Firms can:

- Accelerate client acquisition by testing outreach that resonates with specific investor segments.
- Enhance ongoing engagement through hyper-personalized nudges and education that feel timely and relevant.
- Embed compliance and risk controls directly into the design process, reducing regulatory friction without sacrificing speed or innovation.

The results are tangible: higher conversion rates, stronger retention, and scalable personalization that traditionally required costly human touchpoints. For wealth managers and RIAs navigating tighter margins, rising expectations, and evolving regulations, agentic AI simulation sandboxes offer a powerful combination of growth acceleration and built-in compliance confidence.

Transcription:

Brian Wallheimer (00:13):

Welcome back if you were here for our first session. If not, if you're just starting out today, you've come in at a great time. We are going to be talking about Agentic AI simulation for client growth with Sameer Munshi. He's head of behavioral science and simulation at EY. Good morning. Well, good morning, good afternoon, depending on where you are. How are you today, Sameer?

Sameer Munshi (00:34):

Hey, I'm wonderful, Brian. Great to be here with you today.

Brian Wallheimer (00:37):

Awesome. Great. So we're going to jump into this. You were talking about Agentic AI simulation. Talk to me a little bit about the technology here. When we talked earlier in the week, we talked about the sandbox technology and the ability to simulate quite a few things. Give me the broad strokes real quick.

Sameer Munshi (01:00):

Yeah, this is the exact right question. So think of simulation sandboxes as crystal balls. In each sandbox or crystal ball, you can recreate any population in the world that you're interested in. So that could be something like high-net-worth investors in the US or in a specific metropolitan region. It could be CFOs of energy companies in Europe. Each of these populations is recreated using breakthrough technology called synthetic data through Agentic AI. What it's doing is actually recreating the parameters of a human. So you can think of it as decoding human behavior based on how you describe it. That allows you to, within each sandbox—so we'll go back to the high-net-worth investors in the United States—you can now interact with these people the way you would in a qualitative conversation or a quantitative survey, and you can test what resonates in terms of messaging. How can I engage? What if I raise prices? The use cases are really limitless. So I'd say in summary, simulation changes the historical approach of, "Let's go try something in the real world and then ask, did it work?" Now you can actually say, "Let's test it in the sandbox. Will it work in the real world?"

Brian Wallheimer (02:37):

So how do you know it works, right? I mean, that's the thing, right? Because I think one of the knocks that people have on AI these days is we talk about hallucinations, we talk about getting these—I had a colleague the other day ask me a question, and I went through and I started poking around trying to find an answer for her. Google's AI told me something that I found out minutes later was absolutely wrong. So if we're going to go out and do this, and we're going to make decisions based on these simulations, how do you know that it works?

Sameer Munshi (03:10):

Exactly. I think if that isn't the first question, "How do we know it works? How do you prove that it works?" if you don't have at least some healthy skepticism there, I've got a bridge to sell you. That was exactly my reaction, Brian. This tech—and by way of background, I've been in the industry 18 years, I went through business school, I became a behavioral scientist—this technology has been talked about for a long time, but it always felt like maybe one day. In the last six to 12 months, there've been a few breakthroughs on the Agentic AI side that are allowing many providers to actually bring this technology to market. So how do you know it works? I think until you see it with your own eyes and you're able to validate it against data you actually have, it's just a tough sell.

(04:05):

That's exactly what we did at EY. So we work with numerous Agentic AI providers. What we did is we had them run a simulation of a survey that we had just run globally for wealth management investors. This survey is something like 50 questions across 3,500 people in 30 different countries. Here we're trying to understand what is investor sentiment, preferences, how are they thinking about service offerings in different countries? Are they thinking about switching advisors, consolidating assets, sort of the full spectrum. It's a huge effort here at EY. We gave some of these providers the question lists, so 50 questions and the audience. So we described, "Hey, we want at least 3,500 people, and here's the asset tiers, the distribution we want between men and women, and all the sort of demographics you would do in a normal survey." Within 24 hours—and they didn't have the responses, they weren't published yet—

(05:08):

within 24 hours, they returned a beautiful 400-page report that could have just been published instead. But more importantly, we looked at the underlying raw data itself. Across the 50 questions globally, the correlation—so how similar was it to the real human responses?—was 85%, which in statistical terms is about as close to matching as you can be. What's most interesting, though, Brian, is where it wasn't correlated, so the 15% of the questions where, "Hey, this is coming out very different." The example that stuck out to us: we asked the same question in both the human population and in the sandbox. We asked humans, "How likely are you to retain your parents' advisor when you inherit assets?" 82% said they'd likely retain.

Brian Wallheimer (06:10):

That's not the number I would've expected.

Sameer Munshi (06:14):

Exactly. In the simulation where we've created these crystal balls, they're replicas of humans. By the way, these replicas, they can't lie, they don't get tired. If they're worth a billion dollars, you can still reach them and talk to them. Because they don't have the biases of humans that we know about from behavioral finance and psychology, they answer more honestly. Across the 10,000 agents in this crystal ball, only 42% said they'd retain, versus 82% of the humans. As we all know, the data that's really published on this is roughly 30 to 40%, give or take, in the US.

Brian Wallheimer (06:59):

So what you're saying is the AI version of this, the agent in this, actually got much closer to what the actual outcome would be than what people said they would actually do.

Sameer Munshi (07:14):

That's exactly right. On top of that, let me tell you just quickly about the process to run a survey like that. It's about 18 months. You're going to spend about $500,000 of hard costs, probably another 500,000 of marketing. You've got a lot of people working on this that we're not even accounting for in costs. We distributed that survey to the human respondents at the end of November in 2024,

(07:41):

and we published a beautiful report, a whole marketing blitz. We published in May, so six months later. What happened between November and May? Trump took office. We had tariffs, rates, wars. By the time that's published six months later, the data is almost not valuable anymore. So we think about the power of this technology and understanding what investors or consumers actually want. Even if it's not a perfect correlation to what exists today from a research perspective, the fact that you can get it in days instead of months is going to completely change how we think about research.

Brian Wallheimer (08:23):

Sure, sure. I would imagine that as you go, right? You're probably going to continue to feed these models data, and it's going to learn from the past. You mentioned that 85% correlation being statistically quite good. Do you see it getting better as you enter years, even decades, worth of data to continue to feed this? What do you think is coming from that?

Sameer Munshi (08:55):

Absolutely. If we're here today in 2025, and we're already getting correlations this good, it seems inevitable that as the technology advances—and it's really hard for us humans to think in exponential terms, but that's effectively what's happened and how we've gotten here—there's every reason to believe that it's only going to get closer, which, Brian, by the way, is a little bit scary to think about. People could be simulating us at any point.

Brian Wallheimer (09:24):

Sure, sure. So who's using this, or how are they using it right now? Who has these sandbox environments, and what are sort of the early use cases?

Sameer Munshi (09:38):

Yes. So, as any good industry consultant, we've been talking to everyone both inside of wealth management, financial services, and outside of it. The initial uptake across different types of entities is actually shocking: traditional corporations, RIAs, governments, nonprofit institutions, the Department of Defense, because they all have their own crystal ball they want to create. Just because I know this is new, and it took me and many of us many conversations to sort of come around to this, I want to talk a little bit about some non-FS use cases just quickly here, Brian, if that's okay. So if you're running for office at any level, wouldn't you want a crystal ball that can recreate your district or your voting population so that you can test your campaign messaging, value prop, your position on policy? You can test that versus another candidate's and you can see who you can sway.

(10:50):

That application is already happening. You don't hear a lot of people talking about it, but if you Google a little bit, you can find some interesting stuff. If you are a luxury jewelry brand that everyone's heard of, you want to understand how to sell your highest-priced products to ultra-high-net-worth women who have at least $30 million. It's extremely difficult to go find those people, as we know, Brian. Even if you can find them and somehow persuade them to have a conversation with you, are you really going to ask somebody like that if they need permission from a family member or spouse when they make a purchase of this size? All of a sudden, that company can instantly get data that helps inform their marketing. And hey, if this decision typically requires a partner's consent, let's market to that partner. Tariff responses: if you're any country in the world, and you have to perhaps unexpectedly think about how to respond, you can actually create a crystal ball of a population of one, which is the President of the United States, because there is an immense amount of data that exists about him, and you can simulate your negotiated response to see how the president might respond. These are all real-world applications that are happening today. Again, we don't see too many people talking about it; my hypothesis is that it's because it's so powerful.

Brian Wallheimer (12:41):

Sure. It reminds me in the last session, someone brought up Star Trek, and it reminds me of the holodeck on Star Trek, being able to go in and—I mean, in those, they were for vacations and things, right? But here, I mean, you could see going in and doing some sort of training and having a really robust discussion or getting back valuable feedback about these things. So how specific can they get? Because if we're talking about people who are interested in having a wealth manager who have a million dollars or more, that can be a pretty large number. But do you have advisors, or is there an opportunity for an advisor to tailor this far more specifically?

Sameer Munshi (13:29):

Certainly. I think the way that I've been thinking about it, as we're all trying to rationalize how to use this and sort of make sense of it: if we start at the macro level, population levels, high net worth, you can feed data and, sorry, I shouldn't say data. You can describe the population, and at the aggregate portfolio or population level, it will be robust. Now, as you go from a population of 10,000 down to a hundred or 200, unless those hundred or 200 people are posting everything they do on Twitter and public social media, or there's a ton of articles out there about them, they're public voices, it's not going to be precise in a way that's usable, at least with the technology today. However, you can get down to that level. You can even get down to sort of a board of directors level, a CEO of a company level.

(14:38):

We talked about the president level, as long as you're providing the Agentic model with data on the behavior of those individuals in the group. So for financial advisors, a sandbox one day will look like a crystal ball, a sandbox of 200, 300 of their clients in their book. The model is fed non-PII data and engagement history. It's layered in with sort of temporal awareness of what's going on in the world. It's compliant because the firm has set this up and put the right rules in place, what's acceptable or not acceptable to test. Now, as an advisor, you're able to mimic everything from what's the right medium, timing, message to reach out. How do I have this conversation about external assets with a current client? How do I think about the next-gen conversation? Anything that you're going to do in the real world, you can now either—we can think about it as practicing if you're sort of newer to the industry, or you're optimizing so that you're spending your time and getting the best outcome out of that time invested.

Brian Wallheimer (16:05):

Sure. Let's talk about use cases, right? I mean, because when it comes down to it, I think the best way to understand this is to understand how, if there's a specific goal in mind, how you go about this and what the opportunities are. So when we talked earlier this week, you mentioned a few, one of them pricing. Talk to me a little bit about that. How might pricing come into play for advisors who are using this sort of simulation technology?

Sameer Munshi (16:35):

This is probably my favorite example of how it's actually being used today. This is happening at the firm level, just to be clear, not quite at the advisor sandbox level yet. But if we think about pricing and everything we've heard in this industry about fee compression, well, we need to take a step back, right? First, I would say, outside of the whole Agentic AI conversation, that from a behavioral psychology lens, abstract pricing doesn't actually mean anything to a human being. We think in sort of relative terms, and we think about the perception of value. So yes, generally as an industry, we see fee compression, but that doesn't mean that we have to accelerate that trend, because if you can articulate the value of a fee increase, you have now justified for that client why they're paying, and they're not going to look at the raw number. They're certainly not going to be thinking about the basis points and the calculation the way the industry is set up today. So how are firms using this? "Okay, well, what if we were to raise our fees, let's call it 10 basis points, Brian,"

(17:58):

in the sandbox, we can simulate a few things. One, what happens if you just increase the fees with your standard disclosures? Don't have a personalized conversation. Who reacts? Is it 10% of your book? Is it 50%? Do you retain them, or do they leave? You can ask these questions in your sandbox, and you can watch what happens. If only five or 10% of your book leaves, but you're earning 10 basis points in perpetuity, you're going to be on net more profitable. I'm not saying that's the right thing to do, but you're able to test that. And that's true for advisors. That's true in every industry because this is how price optimization works. You can also test the messaging: "Okay, well, it looks like 50% of my book noticed, right? Okay. What's the best way to have this conversation to talk about the value of this? Should I emphasize with this cohort

(19:00):

the tax planning aspects, the estate planning? Is it just general financial planning? What do I need to talk to this person about so that they perceive value at that higher price point?" That is something that's not only extremely profitable for any organization, it's also how we're all going to have to think about pricing, because outside of the sandbox scenario, we're in an AI world where the AI is just smarter than us. Price optimization is going to happen whether you test it or not, whether it's you or it's Delta Airlines now doing sort of personalized pricing for the exact same flight you and I take. This is happening.
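To make the break-even arithmetic behind that fee test concrete, here is a minimal sketch. The book size, fee levels, and attrition rate are all hypothetical, and it assumes departing clients take a proportional share of assets with them:

```python
def net_revenue_change(aum, fee_bps, fee_increase_bps, attrition_rate):
    """Change in annual fee revenue after a fee increase, assuming the
    clients who leave take a proportional share of AUM with them."""
    before = aum * fee_bps / 10_000
    after = aum * (1 - attrition_rate) * (fee_bps + fee_increase_bps) / 10_000
    return after - before

# Hypothetical $500M book at 75 bps; raise fees 10 bps; 5% of assets leave.
delta = net_revenue_change(500_000_000, 75, 10, 0.05)
print(f"Change in annual revenue: ${delta:,.0f}")  # net positive despite attrition
```

The same function also shows where the trade flips: with these numbers, the increase stays profitable until roughly 12% of assets walk out the door, which is the kind of threshold the sandbox test is meant to locate.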

Brian Wallheimer (19:48):

Yeah. That's fascinating. The idea that we know that there's so much data out there already, and the second I search for something online, I'm getting Amazon ads, and I'm getting emails, and my Facebook feed is full of ads for these things. So we know that it feels like this is sort of a next step in a way of collecting that data and allowing people to simulate what kind of choices I'm making. To your point on Delta, if I keep looking at the fare, and I don't pull the trigger, at some point, do they lower the price a little bit for me? So are there opportunities there to—it looks like there are opportunities there in wealth management to say, "I might be able to attract more clients at this price point," or it seems like you could even get into sort of the niche opportunities and trying to figure out exactly where your client base is based on your needs as a firm.

Sameer Munshi (20:46):

Certainly, I think that is all—if we continue down this path, that's all going to happen. I will sort of restate that where the technology is today, it's really, really powerful at the aggregate level. So we're not seeing people use simulation today to price at a personal or individual level.

Brian Wallheimer (21:13):

Let's talk about another example. What other early use cases beyond this might we see crop up in wealth management?

Sameer Munshi (21:23):

The other very prevalent use case is marketing. Brian, we spoke a little bit; you and I have both had some experience in investing in ads. The whole thing is effectively an experiment. You are coming up with creative and copy ideas. You're thinking about which platform, LinkedIn or Meta or even TikTok these days, and you're spending a lot of money to learn which ads succeed. Is it based on the timing? Was it the platform? Look, I think digital marketing optimization is sort of the best we've had to date. It's still very expensive if you compare it to what you can now do in a simulation.

(22:10):

In the simulation—and this is stuff we're actively working on—we can take existing ads, we can run them in the simulation to the same targeted audience. You even describe in this sandbox, is it LinkedIn, is it Twitter? What time? And then you ask or create a funnel and say, "Okay, how many people saw it? How many clicked? What was the click-through rate? And did they schedule a meeting? And most importantly, did they fund an account?" You can track all of that data in the simulation, and then you can create hundreds, if not more, variations of different creative, different timing, and different platforms. So it's really endless combinations to see who wins in the simulation. Can we get a higher click-through rate all the way to funded accounts? You could do this, I don't want to say for free, because there's going to be some underlying cost of that infrastructure, but effectively you're doing all of that before you pay for a placed ad, which can reduce acquisition costs 50 to 70%. That's significant, significant dollars. The reason that we're seeing these two use cases prevail as the dominant early use cases is because it's not just a, "Trust me, bro, the AI works." It's, "Okay, let's now place these ads in the real world versus your traditional ad as a control, and let's measure, did it outperform?" You can prove that it works. You can do that with pricing as well.
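A toy version of that saw/clicked/meeting/funded funnel might look like the sketch below. The per-stage rates are plain assumptions standing in for agent behavior; a real sandbox would derive them from the simulated population rather than from fixed probabilities:

```python
import random

def simulate_funnel(n_agents, click_rate, meeting_rate, funded_rate, seed=0):
    """Push n_agents through a saw -> clicked -> meeting -> funded funnel
    using fixed per-stage probabilities (a stand-in for agent behavior)."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    clicked = sum(rng.random() < click_rate for _ in range(n_agents))
    meetings = sum(rng.random() < meeting_rate for _ in range(clicked))
    funded = sum(rng.random() < funded_rate for _ in range(meetings))
    return {"saw": n_agents, "clicked": clicked,
            "meetings": meetings, "funded": funded}

# Compare two ad variants against the same simulated audience of 10,000.
variant_a = simulate_funnel(10_000, click_rate=0.020, meeting_rate=0.10, funded_rate=0.30)
variant_b = simulate_funnel(10_000, click_rate=0.035, meeting_rate=0.08, funded_rate=0.30)
print("A:", variant_a)
print("B:", variant_b)
```

The point of the exercise is the comparison, not the absolute numbers: you run hundreds of creative/timing/platform variants through the same audience and promote only the winners to paid placement.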

Brian Wallheimer (23:52):

Okay. We have a question out here, and I want to remind the audience that if you have questions, go ahead and put them in. We'll see if we can get to them. But this one kind of goes right along with the discussion where we're at, and it says, "If I have a crystal ball, I have advantages. If thousands have crystal balls, aren't the advantages completely gone?"

Sameer Munshi (24:13):

Yeah, that's a great question. That's something that's been keeping me up at night a little bit. There's two ways to look at this, right? One is, "Hey, this is an exciting new technology, and we can be a first mover, and this will give us an advantage." The other way to look at it is, "This is an exciting technology. There are already first movers, and I am already at a disadvantage." I don't have a wonderful answer for what's the advantage if everyone else has it besides you're left disadvantaged, if that makes sense.

Brian Wallheimer (24:54):

Sure, sure. I mean, you've got to keep up. Keeping up is huge, right? But I mean, that said, as we go, what people are using today is not what people will be using in six months, a year, or five. I mean, we have to think about just how quickly this technology grows and scales. Correct?

Sameer Munshi (25:15):

Certainly. And it's really hard to think about where it's all going because it's all moving so fast. Even myself, I wonder, will this be a transitional period in the larger scheme of things where one day it's not the advisor directly interacting with the client, it's actually an agent of the advisor interacting with the agent of a client? I don't know, Brian, have you heard any of these stories about dating?

Brian Wallheimer (25:48):

Are you talking about where the people go to the bar and talk about their friends?

Sameer Munshi (25:56):

Is that what dating used to be?

Brian Wallheimer (25:58):

Well, no, but I've heard these stories about how people will go to these dating things, and they'll have a presentation on, "You should meet my friend Joe," and they'll tell you about Joe. And I'm like, that's crazy, but people love it. In a way, that's an agent acting on behalf of someone, right?

Sameer Munshi (26:20):

Yes, a human agent. I like that a lot. That's sort of the referral technique. That's excellent. So there's this other concept in more of the Agentic world where instead of—and I am 40 years old, so I was past the whole dating app thing before I got married—but what I'm hearing is if you're on a dating app, you'll have an agent. So if two people want to have a conversation on a dating app, it'll actually be their agent sort of screening each other at first so that the human is only spending their time with sort of screened humans later on.

Brian Wallheimer (27:05):

Oh, interesting.

Sameer Munshi (27:07):

A little far out.

Brian Wallheimer (27:09):

Interesting. Yeah, I have been on dating apps, and maybe I just didn't get enough hits to need an agent.

Sameer Munshi (27:18):

I don't think the tech is—I don't think it's out yet, but that's sort of the rumor is, "Hey, instead of all of these back and forth, let's have the agent handle it. It can figure out who might be a good match." So then you take that concept and you put it into the advice world. Is the agent actually figuring out things for you? Maybe you're not going to need the simulation and crystal ball, but I'm talking 10-plus years right now.

Brian Wallheimer (27:47):

Sure. So let's talk about some other use cases. You mentioned some things that I think for the most part, advisors are going to be comfortable with, right? With AI, we're still in this sort of world where there's, "I'm comfortable with it taking notes, I'm comfortable with it creating summaries, maybe even sending out some automated emails to some clients." You've got to be careful there with making sure it doesn't sound too AI. People don't necessarily want to know that they're not getting a little bit of a personal touch. But what's down the road? What are some areas where maybe advisors aren't a thousand percent comfortable with it right now, like portfolio rebalancing or investment decisions that you think this could play a role in?

Sameer Munshi (28:37):

The way that I see this playing out—and it's because some of these conversations have already started—is really creating that advisor-level sandbox. That can be for tenured, experienced advisors as well as newer trainees and everywhere in between. We've been talking about sort of next-gen wealth transfer since I've been in the industry. It's happening, and everyone wants to build relationships with the next gen. Well, how? What's the best way to do that? I don't think anyone's sort of cracked that. Well, now if you have a sandbox that's tailored to your exact client base, you're able to simulate or test what is the best way for me to ask for this relationship? What am I asking for specifically? Is it an introduction? Should I offer them some kind of event? Am I asking them to open or fund an account? Am I asking for some kind of commitment?

(29:39):

All of those things that are nearly impossible to test in the real world without potentially upsetting your client base you can now optimize for. So take that same thread of client conversations or—I don't want to necessarily say script—what's the best way to have conversations with the prospects that are in my sandbox that we have data on, and how will they react, and how do I open the conversation in a way that's going to build trust throughout? We think about retention and sort of attrition risk. We have a lot of great ways today in the industry, I think, of identifying who might be an attrition risk. What's the best way to prevent them from getting there to begin with? If they're there, how do I retain as many of these clients as I can? On the newer advisors, the way we see it is training programs will use this type of simulated sandbox so that you can get all your reps in, just like a flight simulator—I'm not a pilot, but I know a little bit about it. Now you can go through all of those scenarios and practice, for all intents and purposes, and you should come out with more skilled advisors at the end of it.

Brian Wallheimer (31:06):

Sure, sure. That's so fascinating because we've seen numbers—it's been a couple of years since I saw it, but I can't imagine it's terribly different yet—that about three quarters of advisors don't make it past the first year, and we have shortages, right? We're talking about how there aren't enough advisors, especially as we have the upcoming wealth transfers and all of these sorts of pressures on the industry. So I could see this, correct me if I'm wrong, as a way to not only find the right people for the industry and find the people who have an opportunity to accelerate and grow in the industry, and then also train them in ways that keep them in the fold and push them toward doing well, right? I mean, is that

Sameer Munshi (32:02):

along the lines? That's a great point. Yeah, that's spot on. Actually, I hadn't quite thought of it even to that extent, but effectively recruiting the next gen of advisors, who are the folks that are most likely to succeed? You can, again, not quite today, but certainly as things unfold the way that we've been discussing, you can identify what are the characteristics of great advisors today, and what does the prospect pool of new advisors look like, and what value prop will resonate most with them? So that's a great use case.

Brian Wallheimer (32:43):

Someone has what they're calling a basic question here: "How do you get a sandbox?" So how do you get involved in something like this? Is this something that's actually available yet and ubiquitous? Who will have this down the road?

Sameer Munshi (33:00):

Yeah, so at the sort of macro enterprise level where you're trying to understand populations, or even as X, Y, Z wirehouse firm or large RIA, "I want to simulate my own client base at the firm level. I want to understand what services do they care about and what tiers and what pricing," that is available today. It's not tailored; it's not going to be at an individual level, but it's at that population level. It could be a smaller population, but that's available today. Many firms are using this to help inform their strategy. "Should we build out a hybrid wealth offering?" I know that's at the home office level, but those types of questions are being answered, like where to invest, how to think about communicating with the advisor base. Available today.

(33:58):

The customized sandbox at the advisor level. That's something we're seeing. I think by next year, we'll have pockets of that for sure. We're already halfway through, more than halfway through, this year.

Brian Wallheimer (34:10):

Sure, sure. Other use cases that we should talk about? Where else might we use something along these lines? A lot of this is focused on client interactions, right? We've talked a lot about that. But are there opportunities to use this on a level to understand potential investment opportunities, or is that on your radar? Are there opportunities to simulate out global risks and things of that nature?

Sameer Munshi (34:43):

Yeah, definitely. The portfolio one, the investment decision, I think is an interesting one because it's simulating populations. It would probably be tough to trust

(35:04):

because there are so many factors that would involve the pricing of any security or asset. I'm sure there are ways to do it, right? You could start tracking sentiment at different population levels around certain industries and effectively predict what's going to happen policy-wise globally and make some smarter bets. So I don't know if this is the ideal use case for Agentic simulation in the sandbox. However, that said, I think there are a lot of other interesting use cases as we think about the RIA world and the sort of rollup world where we're constantly acquiring and then consolidating. How do you think about the diligence? We can now create a sandbox of a potential acquisition target, of their advisors, of their end clients, and let's simulate, "Okay, if we use X, Y, Z value proposition, different offers," all of your typical pre-deal process, you can simulate. I'm not here to proclaim this replaces human diligence, but it certainly provides an additional data point, and almost immediately it helps to inform those types of decisions. What else on the,

Brian Wallheimer (36:35):

I mean, that's the thing. If you've come up with others, that's definitely fine. But that's the thing: right now, I think everyone is still in this, "I want to use AI to guide me, but I still need to make sure that I'm the human behind it, making choices and things." And clients—one of our reporters, Rob Burgess, wrote a story the other day that said, "If AI is writing client emails, be careful." Because once they figure it out (there was a dollar figure; they actually had clients put a number on it), they said, "You're less valuable. I think you're worth less because you're not actually sending me an email. You're having a robot send me an email." At a certain point, it feels like going to the grocery store and having to check out your groceries yourself. Some people are happy with that because you can get through quicker if you have three things and don't want to stand in line. But at a certain point, some people are like, "No, I want a human to do these things for me or with me." So there's that. Are there areas you think are going to be fully automated someday by using this technology to make these decisions? Or are we always going to be at a point where this is guidance but not necessarily decision making?

Sameer Munshi (37:56):

Yeah, this is such a fascinating question to me. Even pre-Agentic, when we were just talking about AI a couple of years ago, the question was: what's the role of the advisor? I've worked with advisors for over 10 years. I firmly, firmly believe that AI only advances the talents that humans have. It will never fully replace them. But I do think it completely shifts the day-to-day away from the monotonous tasks, the inputting, the chair-swivel, the compliance, all of these frustrating tasks you have to do, and frees up time for the thing advisors are so talented at, which is developing, maintaining, and expanding human relationships. As it relates to simulation, I think there's always going to be that human check, because at the end of the day, it's a relationship, and when you're getting down to the individual level, it's very possible you're going to know more as an advisor than the simulation does. At the population or macro level, though, no human really knows what the best marketing ad to send is, so that's where this is going to be powerful.

(39:18):

I think as we're moving toward everything being AI, certainly clients will notice. I noticed this just on LinkedIn: when you see the em dash, the long dash without the space, you can tell it's a ChatGPT post, and you just start to notice it, especially once you use AI yourself. It does, in my opinion, reduce the perception of quality, whether that's fair or not. Clients will certainly notice if they see an email and can tell it's AI-generated. So now we have to go back to the conversation around perception of value. If the fees are the same, what is it now that you're doing? What am I getting out of this same fee structure? It's an area where we're really going to have to think as an industry: okay, how does this evolve? Do we try to strengthen each individual relationship? Are we adding more of a, I hate to say, coaching or therapeutic element to this role, something that's a true human skill? Because otherwise you're right. Clients are going to say, "Hey, I'm paying the same price. I'm getting the same sort of service and experience from you as the human advisor, and now you've got AI doing everything from emails to managing my portfolio. What gives?"

Brian Wallheimer (40:43):

Yeah. Someone here has a question. They say, "In anticipation of using a sandbox like this to run simulations, what should we be doing behind the scenes to have data required to get solid results? Or are you buying data from places like Facebook and Google outside of your own firm?"

Sameer Munshi (41:02):

Yeah, that's a great question. So internally, I think because of AI, this is happening organically: every enterprise is cleaning its data so that it's usable in all of these various AI models. So I don't know if there's anything in particular that needs to happen that hasn't already started, but it is certainly a long road; many firms are in the earlier stages of that process, EY included. After that, it's really all of the external data that's available. The providers of this technology are already harnessing it, so it's everything from social media to census data to available purchase data, like credit card transaction histories and insurance logs. Anything you can buy is already feeding these models. Then, again, it's temporally aware, so it understands what's happening: trends, news, the macro level. When you simulate, it's not based on the last time the model was updated, which, if you ask ChatGPT right now, "When's your last update date?" at least the last time I checked, was sometime in 2023.

Brian Wallheimer (42:22):

Sure. Sure. We've got just a couple minutes left. I don't really see anything else in terms of questions there, but I had one last one myself, and it's what are the challenges to get into this for advisors, and what are the concerns that you're hearing? What keeps you up at night here? I think you already mentioned one of those things, but what else is sort of on your mind for making this live up to its potential?

Sameer Munshi (42:45):

Yeah. First, I'll say at the macro level, it's been reassuring that so many enterprises, in all different parts of the world and all different types of organizations, are already using this, given that it's about a six-month-old technology when we really think about it. So it's very promising that people are already seeing the value at an aggregate, population level. Certainly, as we get down to these more tailored, book-level sandboxes, a lot has to go right for this to work at scale. One, you need to be able to seed those agents, call it 200 in a book, and really you'd get the most value if it's not just the one primary householder. It's the spouse. Can you get their children? Can you get the data? Can you do it in a way that's compliant from a firm perspective? And there's still the natural human feeling that you and I both share, which is: if I'm a client, do I really want my wealth manager recreating a twin of me and simulating me, even if it's in my best interest? So there's some disclosure that has to happen there.

(44:14):

I think the other thing is what happens when you shift from macro to micro. At the macro level, yes, there's initial skepticism, but you can prove it out enterprise by enterprise, and maybe there you're talking about thousands of enterprises. When you get down to the advisor level, on custom books, you're talking about hundreds of thousands, millions of advisors. So it's the same process as any new technology, but we know adoption of a new technology at that scale can be difficult.

Brian Wallheimer (44:47):

Well, I think we're up against time, Sameer, but this has been fascinating, with so much to think about and so many things that I'm sure we're going to be seeing in practice and in use sooner rather than later. So I want to thank you so much for your time, for your expertise, and for being with us today.

Sameer Munshi (45:06):

Thank you, Brian. Really enjoyed the conversation. Look forward to keeping it going.

Brian Wallheimer (45:10):

Awesome. Thank you so much. We're going to take about a 10-minute break, and after that, we'll be back with our final panel of the day: "Building an AI Tech Stack That Works for You." Thank you so much for being here.