Leaders with Oleg Tishkevich


Invent CEO Oleg Tishkevich sits down with Financial Planning reporter Rob Burgess for this Leaders session recorded at ADVISE AI 2025 in Las Vegas.

Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Rob Burgess (00:10):
Hi, it's Rob Burgess here, a reporter at Financial Planning. We're here at Advise AI talking with Oleg Tishkevich, CEO of Invent. Oleg, welcome.

Oleg Tishkevich (00:19):
Thank you. Thank you, Rob.

Rob Burgess (00:20):
Yeah, we've been hearing a lot about AI obviously here at the conference, and Oleg has been on stage several times talking about a variety of topics. But for people who don't know who you are, can you go ahead and talk a little bit about yourself and what you do there at Invent?

Oleg Tishkevich (00:35):
Absolutely. So my name is Oleg. I'm the CEO of Invent, and our company is really an AI data cloud for wealth management. What that means is we help people solve the data problem. We have worked with a number of RIAs, broker-dealers, custodians and asset managers, and every firm comes to us to help create their data strategy and create a single source of truth for the organization, so they can run better. And everyone says that their data is great. I'll tell you, for the record: not a single time when we got into that process have we seen that the data is great. So it's a fun job. Anyway, that's kind of what we do.

Rob Burgess (01:24):
So why is data so important to people's use of AI and what are these common problems that you're running into specifically?

Oleg Tishkevich (01:34):
So if you think about what's happening in the industry in the last 12 months, because let me tell you, 12 months ago we had what, six AI companies, and now it's close to a hundred. The amount of innovation that happened within the last 12 months is just mind-blowing. It's almost like we have 52 weeks, so every Tuesday there's a new AI company that started in the last 12 months.

(01:59):
So with that level of innovation and capabilities, it's really hard for advisors to understand and unwrap this whole new technology. What I would say is, AI could really elevate your practice and save you time and deliver all these great results, but at the same time, if it's not used properly, it can actually elevate your headaches and problems to a significant degree. And obviously, with any kind of client-facing AI technology, that creates liability issues for the firm. So the approach that we take when we talk about AI implementation is really the best advice: start with your data, creating that foundational layer that connects all the different systems the firm is using. Whether it's a smaller RIA or a big enterprise, they have very similar problems, just at a different scale, all using different tools. Everything is very much siloed. You have data sitting in multiple different applications, and the data flows are not really going from place to place very nicely. You have some integration in the space, which is great, but it's almost like trying to put together a puzzle with pieces that don't always fit.

Rob Burgess (03:24):
How difficult is it to own your own data? I've talked to so many advisors in a series I do called Show Me Your Stack, where we go through their tech stack and I always try to ask them where is this data stored? And a lot of times they're like, I'd like to switch to something else, but it's so hard to get my data out of these various systems where it lives. How difficult of a process is that?

Oleg Tishkevich (03:47):
So again, there are definitely some challenges in data ownership, especially when using multiple different solutions. There are software providers that are really good at providing APIs and providing access to data, whether it's a file-based data export or API-level access, not just to data but also to different workflows and capabilities, which is awesome. That's what we're proposing and what we're big proponents of. But there are also situations where you kind of get stuck. So what I would say is, for anybody looking to change their stack, or when they're evaluating a vendor, definitely ask those questions upfront. If you're looking at a new provider, get an answer in writing about exactly what kind of data you're going to get. Is it available through an API? Is it available as a data feed? And then, how much of the data is actually available? Is it one third or is it a hundred percent? That's also a very important consideration, because you don't want to be in a situation where you don't know where the data is. Even though it's secure with this particular vendor, you want to make sure that you have a way to get it out. But in the ideal scenario, you really have control of that data on your terms.

Rob Burgess (05:03):
Right. Talk a little bit about the various options, because I know there are data lakes, data warehouses and data lakehouses. What is the difference between all of those, what is the infrastructure, and how much time does it take to set those up? That's something you talked about on stage.

Oleg Tishkevich (05:23):
Good question. Great question. Yeah, I just got asked that question on stage a few minutes ago. There's a lot of confusion. So a data warehouse is more of an older approach, where back in the day you'd have multiple different databases, and some people still have them today. In order to create centralized reporting, you would try to create a reporting layer on top of all of these different databases so that you could have executive reporting across different systems. That's what traditionally was called a data warehouse. You create ways to bring all these different solutions together from a data perspective and create consolidated reporting.

(06:07):
Primarily this was relational data, so your SQL databases or Oracle or what have you. When data lakes came about, the concept of a data lake was not just the storage of rows and columns of your data, like an Excel spreadsheet, you know, connected with different dependencies for different tables, but also the ability to store data that's not structured in rows and columns. It's called unstructured data. That unstructured data could be a document, could be text, could be a PDF, and that data is also very important, because sometimes you need to extract the information from a file, a PDF document or an ASCII document of some sort, that's sitting somewhere on your system, on your drive. So the ability to harness that data and then map it and link it to your more traditional relational data on your clients' accounts, households, et cetera, that capability is what a data lake allows you to do.
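The linking step described here, tying an unstructured file in the lake back to relational client records, can be sketched minimally. This is an illustrative sketch only, not Invent's implementation; all field names and paths are hypothetical.

```python
# Hypothetical sketch: relational client records plus unstructured files
# stored in a data lake, joined through extracted metadata tags.

clients = [
    {"client_id": "C1", "household": "Smith", "account": "A-100"},
    {"client_id": "C2", "household": "Jones", "account": "A-200"},
]

# Unstructured artifacts (PDFs, notes) tagged with the client they belong to.
documents = [
    {"path": "lake/raw/smith_ips.pdf", "client_id": "C1", "kind": "IPS"},
    {"path": "lake/raw/jones_tax.pdf", "client_id": "C2", "kind": "tax"},
    {"path": "lake/raw/smith_note.txt", "client_id": "C1", "kind": "note"},
]

def docs_for_household(household: str) -> list[str]:
    """Return paths of all unstructured files linked to a household."""
    ids = {c["client_id"] for c in clients if c["household"] == household}
    return [d["path"] for d in documents if d["client_id"] in ids]

print(docs_for_household("Smith"))
```

The join is only possible because the metadata mapping was created when the file landed in the lake; without that tag, the PDF is just a file on a drive.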

(07:17):
So think of it as like a massive storage with all kinds of data, a bunch of files. That's what the data lake is. Now, that's great. I have the ability to store all this stuff in a data lake with all these disparate files, but now, how do I add reporting, querying and AI capability on top of that? That's what you'd call a data lakehouse, I'd say. A data lakehouse is what creates interfaces on top of that data, and connectors from different systems to pull the data in or send the data back to the system to synchronize it. So that is the work of the data lakehouse.
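The consolidated-reporting layer that a warehouse or lakehouse provides can be illustrated with a toy example, using Python's built-in sqlite3 as a stand-in for a real lakehouse query engine. The table layout, source names and balances are all hypothetical.

```python
# Sketch of a lakehouse-style query layer: records landed from several
# source systems, exposed to SQL for consolidated reporting.
# sqlite3 stands in for a real engine; schema is illustrative only.
import sqlite3

rows = [  # imagine these were loaded from files in the data lake
    ("A-100", "custodian_a", 250_000.0),
    ("A-200", "custodian_b", 400_000.0),
    ("A-300", "custodian_a", 150_000.0),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (account TEXT, source TEXT, aum REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)

# Executive reporting across systems, the job the warehouse used to do.
total = con.execute("SELECT SUM(aum) FROM accounts").fetchone()[0]
by_source = dict(
    con.execute("SELECT source, SUM(aum) FROM accounts GROUP BY source")
)
print(total, by_source)
```

The point of the sketch is that once the feeds land in one queryable place, a cross-system rollup is a single statement rather than a per-vendor export exercise.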

Rob Burgess (07:54):
But how difficult is it to set up? I heard you say that you could do that in a couple of days, but I've heard people say that this has taken far longer than that. What goes into setting up the infrastructure of one of those that you mentioned?

Oleg Tishkevich (08:10):
So I think the challenge there is in the perception of what it is. Setting up a data lake and plugging in the feeds, that's what you can do on the Invent platform within a couple of days. It really all depends on, okay, you want to get data from this custodian, that custodian, from a performance reporting tool, et cetera. Getting all the approvals is usually where the actual time elapses, but that's usually fairly quick. The challenge comes after you've connected everything, because bringing all the data into one place is going to shine a light on issues that you now start seeing in that data, and that could take weeks, sometimes months, to actually clean up, depending on the size of the firm, the number of systems they want plugged in, and the types of workflows they would like to connect to that data as part of this data lakehouse. That is really variable. The perception that it takes a long time to set up a data lake, it was like that maybe a couple of years ago, but now, with systems like Invent, it can be done fairly quickly. Then our firm also helps with essentially cleaning up that data. Once you stand it up, you really need to understand what is your single source of truth,

(09:34):
what is your householding logic, et cetera, et cetera. So that's something that is also part of that essential process.
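The "shine a light on issues" step can be sketched as a simple audit over consolidated records: once the same client's data from several systems sits side by side, rule-based checks surface the gaps and conflicts that have to be cleaned up. The source systems, fields and rules here are hypothetical.

```python
# Illustrative sketch: the same client's record as landed from three
# hypothetical systems, audited for missing fields and disagreements.

records = [
    {"source": "crm",       "email": "a@x.com", "dob": "1970-01-01"},
    {"source": "custodian", "email": "a@x.com", "dob": "1970-01-10"},
    {"source": "planning",  "email": "",        "dob": "1970-01-01"},
]

def audit(recs: list[dict]) -> list[str]:
    """Flag blanks and cross-system conflicts for one client's records."""
    issues = []
    for r in recs:
        for field, value in r.items():
            if value == "":
                issues.append(f"{r['source']}: missing {field}")
    for field in ("email", "dob"):
        values = {r[field] for r in recs if r[field]}
        if len(values) > 1:  # systems disagree on a populated field
            issues.append(f"conflict in {field}: {sorted(values)}")
    return issues

print(audit(records))
```

Reconciling which date of birth is right is exactly the cleanup work that takes weeks or months; the lake only makes the conflict visible.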

Rob Burgess (09:43):
What about security? You've got all this sensitive data that you're collecting and it's being used for all these applications. How can people who are using all this data make sure it's staying safe?

Oleg Tishkevich (09:57):
Great point, great point. So first of all, you have SOC 2 Type 2. That's a pretty industry-standard audit. It takes about nine months out of every year; let's say with Invent, we have auditors come in and check all the procedures and policies and the security of the data storage, all of that good stuff. One thing to point out, though: you'll have providers that say, well, I store data on Amazon, and Amazon is SOC 2. That's not good enough. Just be very clear that the firm you're dealing with, who's handling the data, also has SOC 2 certification themselves. The reason is because it's not just about where the data is stored; it's about the policies of how the data is handled,

(10:46):
and that's part of that SOC 2 Type 2 audit that you have to go and do on an ongoing basis in order to qualify and in order to ensure that the data is safe.

Rob Burgess (10:57):
Right. Did you follow that there was a breach of Salesforce recently? Did you hear about that? It was not Salesforce itself, but it was like dozens of firms. They were all plugged into Salesforce, and they were the weak link; that's where they got in, through the integrations or whatever. So it's kind of like you're saying: the chain is only as strong as the weakest link, basically. Exactly. Yeah. Now, I hosted a virtual summit panel featuring you as one of the panelists, and we had another person on the panel who had a little bit of a different perspective. Talk a little bit about the difference between your perspectives. Correct me if I'm wrong, but it seemed like they were doing an AI overlay on top of everything, and you were saying that's not good enough; we need to know at the base level. Can you talk a little bit about the differences there? Sure.

Oleg Tishkevich (11:55):
So there are a lot of AI providers that would come out, and I'd like to see them prove me wrong, that are basically saying, look, you don't really need data lakes, you don't need data lakehouses. You can just do all of that using AI to consolidate the data. You can pull the data, you can provide the prompts, and basically do all the data reconciliation, aggregation and deduplication from multiple systems. I'm not talking about performance reporting right now; I mean more like pulling data from different systems to answer the client: what is my portfolio? What was the average return for the last two years, or whatever. Those types of questions, if you have multiple systems that may have different answers to different types of questions, are very difficult to consolidate. And AI is basically probabilistic, so it's probable that it's going to get the right answer.

(12:58):
But if you have a multi-source situation, getting to that single source of truth in terms of data, you need to be sure. Especially in our industry, which is highly regulated, you really need to make sure that whatever information is going to the client is a hundred percent correct, and that it's the same every single time. And with AI, often, and I'm sure you've seen this, you ask the same question maybe a different way and you get a different answer, when logically it should be the same, right?

(13:27):
So that's where I guess the argument was: essentially, oh, you don't need a data lake, you don't need to have a single source of truth. But my point was, look, it's garbage in, garbage out.

(13:42):
AI could really create a great way to trace where it got the data. But if the data was bad in the source in the first place, you didn't put the address information in right, or the last name is misspelled, AI cannot fix that for you,

Rob Burgess (13:59):
Right?

Oleg Tishkevich (13:59):
It's really something that you need to make sure that the underlying data in the source systems is correct before you start harnessing this stuff.
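One deterministic alternative to letting a probabilistic model guess which system's value is correct is to declare a field-level system of record and build the golden record by rule. This is a minimal sketch of that idea under assumed source names and precedence; it is not how Invent does it.

```python
# Sketch: deterministic reconciliation to a single source of truth.
# For each field, a declared precedence list says which system wins.
# Source names, fields and values are hypothetical.

PRECEDENCE = {
    "address": ["custodian", "crm"],   # custodian is system of record
    "email":   ["crm", "custodian"],   # crm is system of record
}

def golden_record(records_by_source: dict, precedence: dict) -> dict:
    """Build the golden record by field-level system-of-record rules."""
    golden = {}
    for field, sources in precedence.items():
        for source in sources:  # first source with a value wins
            value = records_by_source.get(source, {}).get(field)
            if value:
                golden[field] = value
                break
    return golden

records = {
    "crm":       {"address": "12 Oak St", "email": "jo@x.com"},
    "custodian": {"address": "12 Oak Street", "email": ""},
}
print(golden_record(records, PRECEDENCE))
```

The same inputs always yield the same golden record, which is the property the regulated, client-facing use case demands.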

Rob Burgess (14:09):
We've all seen the Kitces map, and we've seen how it's expanded greatly over the last few years. What do you think that map is going to look like in the next few years? Because I see two different ways it could go. Michael Kitces today talked about how it's so much easier to develop a new app than it was a few years ago. You can vibe code and whatnot to get what you want, whereas you couldn't before. And on the flip side of it, we have existing players integrating AI into things that are already in the tech stack. So which do you think is going to win out? Is there going to be a bunch more solutions? Is everything going to kind of consolidate? What is your view?

Oleg Tishkevich (14:54):
There's definitely going to be a bunch more solutions, because Michael was right in pointing out that in the last 12 months, there's tremendous, tremendous opportunity that came about just from the technology of software development itself. What used to take a year to build now can be done within months or less.

(15:19):
And so, for those that really know and understand how it's done: you also can't make it sound like, oh, I just go in and tell the system what to build, and it's going to build everything I need. Unfortunately, that's not the case. There's also an art to that science to really create the right type of output from a coding perspective, so that it's scalable, it's reliable, it's secure. It's definitely doable, and pretty much all firms now are leveraging some type of AI in code creation, which is great. It optimizes the time and speed of developing new software. But at the same time, you have legacy code that's been there, and for firms that have been around a long time, it's not always as easy to augment or completely rewrite that body of code. But there are definitely AI capabilities coming into the existing vendors, and RIAs are actually building new apps themselves

(16:22):
or building new agents. I think we're at the time where you're going to see a lot more apps and agents built by the actual firms that use them. So if you think about personalization and scale in terms of investment management, it used to be, okay, pick from six portfolios or create your own, kind of. Now you're able to do this with technology. You can create your own experience. You can create your own agent or capability, and we definitely facilitate that on the Invent platform with the Invent store. If you do build it leveraging some of the Invent tools, we would even make it available to any advisor. And the cool thing about it is it's all connected, it's all integrated,

(17:05):
because the last thing you want is to build something on an island and then, with that map of all those different apps growing exponentially, try to figure out how you integrate with everything. So we avoid reinventing the wheel, pun intended, for every single connection and integration. With the number of connections, integrations and companies expanding pretty substantially, that becomes an impossible task,

Rob Burgess (17:30):
Right?

(17:31):
Yeah, definitely. Another thing that Michael talked about this morning, and I'm doing a panel on tomorrow, is the note takers. We've seen an explosion of use in meeting prep and integration with the CRM and things like that, and it feels like there's been a lot of momentum there. One thing I haven't seen as much of yet is on the front end: portfolio management, investment management, more client-facing things. I think Michael mentioned that was one of the lower-adoption uses. So where do you see that going? Do you see AI moving from the back office to the front office going forward?

Oleg Tishkevich (18:13):
Absolutely.

Rob Burgess (18:13):
Okay.

Oleg Tishkevich (18:14):
So the challenge of moving AI to the front office, though, is the client communication regulation rules. So whenever AI says something back to the client, you'd better check twice before it goes out. I think that's probably one of the reasons why client-facing AI technology is not seeing as much adoption: advisors are worried that it's going to say something that will get the advisor in trouble. But I think there are definitely ways, through data and through AI, to create more deterministic types of outcomes or outputs. What that means is, instead of AI determining what to do from probability, this sounds like the right answer, right, I'm going to give that answer, you tell AI specifically: if the question is this, then pull this number from here, my source of truth.

Rob Burgess (19:19):
I see.

Oleg Tishkevich (19:19):
So that kind of approach is the one that's going to yield the best results, because it's going to provide consistent output in every single case. In this case, AI almost becomes more of a workflow orchestrator, as opposed to a true AI engine, for some of those types of essential client-facing reporting needs.
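The "if the question is this, pull this number from here" pattern can be sketched without any model at all: intent selection routes the question, but the number always comes from the source of truth rather than from generation. Keyword matching stands in for whatever classifier or model picks the intent; all field names and values are hypothetical.

```python
# Sketch of deterministic routing: the model only picks the intent
# (here, crude keyword matching); the answer is always a fixed lookup
# from the source of truth, so the same question gets the same answer.

SOURCE_OF_TRUTH = {"portfolio_value": 1_250_000.00, "ytd_return": 0.071}

ROUTES = {  # hypothetical intent keyword -> source-of-truth field
    "worth":  "portfolio_value",
    "value":  "portfolio_value",
    "return": "ytd_return",
}

def answer(question: str):
    """Route to a fixed lookup; escalate rather than guess."""
    q = question.lower()
    for keyword, field in ROUTES.items():
        if keyword in q:
            return SOURCE_OF_TRUTH[field]
    return None  # unrecognized intent: hand off to a human

print(answer("What is my portfolio value?"))
print(answer("What was my return this year?"))
```

Note the orchestrator never lets the model produce the figure itself; the only nondeterminism left is in intent selection, and unrecognized intents fall through to a human instead of a guess.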

Rob Burgess (19:41):
And you touched on this earlier, but agentic AI is definitely seeing more adoption. What do people need to know about that, and how do you see agentic AI developing?

Oleg Tishkevich (19:52):
That's another revolution. Yeah. Right. So agentic AI; I've seen companies out there, actually, there's a company that was started completely with AI agents. So you have an AI CEO, AI COO, AI developer or whatever. The entire company is actually all AI agents, all agentic AI.

(20:11):
Wow.

(20:12):
So not just AI being used to kind of help with operations, streamlining data entry or gathering of information, but now AI being used to help make decisions, with ideas, figuring out what the next steps should be, how I should work with a client. Those are the types of agent flows that are very important, as well as in quality control. You literally could have an AI agent that's going to double-check somebody else's work, or double-check another agent's work. As you're chaining these agents in multi-agent chains, as they call them, you're now creating a much more sophisticated infrastructure that's really mimicking a hierarchy within the firm, if you will.

Oleg Tishkevich (21:02):
And I think that's only going to grow. But the key issue there is you still need to have a human audit capability, or a double check, before it goes out to the client. Because from a compliance perspective, what the SEC and FINRA are saying is: we're going to basically make sure that, whether you use AI or not, the rules apply the same as if it were a human, right? So you're still on the hook as an advisor, essentially.
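The chain described above, a drafting agent whose output is independently verified before anything reaches the client, can be sketched with plain functions standing in for the LLM calls. Everything here is illustrative: the agents, the message format and the escalation string are assumptions, not a real framework's API.

```python
# Sketch of a two-agent chain with a checker, per the discussion above.
# Plain functions stand in for model calls; names are hypothetical.

def drafting_agent(account_total: float) -> str:
    """First 'agent': drafts the client-facing message."""
    return f"Your accounts total ${account_total:,.2f}."

def checker_agent(draft: str, account_total: float) -> bool:
    """Second 'agent': verifies the draft against the source number."""
    return f"${account_total:,.2f}" in draft

def pipeline(account_total: float) -> str:
    """Chain the agents; nothing unverified reaches the client."""
    draft = drafting_agent(account_total)
    if not checker_agent(draft, account_total):
        return "ESCALATE: human review required"
    return draft

print(pipeline(800_000.0))
```

The escalation branch is where the human audit step Tishkevich insists on would sit: a failed check routes to a person instead of to the client.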

Rob Burgess (21:33):
You can't just be like, the robot did it,

Oleg Tishkevich (21:35):
"The robot did it" doesn't

Rob Burgess (21:36):
It doesn't. It doesn't cut it. Gotcha. Well, those are most of the questions I had. Was there anything else we didn't touch on that you think is important for people to know? What are you working on that you're excited about that you can talk about?

Oleg Tishkevich (21:49):
Absolutely. Absolutely. So I think the number one important thing that I can maybe tell listeners today is: really take a serious look at your data and get your data in order. That's the number one prerequisite for using any AI. There are a lot of really cool AI tools out there, it's all over the map, but if you don't have a good data foundation, A, it's dangerous to be playing with them, and B, you are setting yourself up for disappointment sometimes, because it's not doing what you want it to do. So that's number one. And another thing: if you do play with AI, definitely experiment, but I would recommend getting a paid subscription, because from a security perspective, you certainly don't want to use a free ChatGPT, paste client information in, and have it come back with an answer. That's a serious problem. So if you do experiment, experiment safely. Use a subscription that doesn't train on your clients' data; very importantly, there are a lot of different solutions for that. And just get your data in order before you actually roll it out to production.

Rob Burgess (23:04):
Yeah, don't be a cheapskate. Don't get that free version. Got to upgrade. Alright, thank you so much. Appreciate it. Thank you.

Speakers
  • Rob Burgess
    Reporter
    Financial Planning
    (Host)
  • Oleg Tishkevich
    CEO
    Invent
    (Speaker)