Modernizing market risk management

In today’s environment, financial institutions face challenges that include:

● New and more complex calculations due to regulatory demands (e.g., FRTB) and climate risk
● Explosive data growth and the need to move and analyze this data
● Limited on-premises compute capacity
● Need to improve time to value by accelerating risk model deployment to production

To help solve these challenges, Red Hat has worked with industry experts and market leaders to design a forward-thinking solution that allows financial institutions to better address current challenges and prepare for future needs.

In this presentation, based on real-world use cases, we will show how Red Hat’s market and credit risk calculation architectural framework:

● Scales the risk management platform by leveraging the power of the hybrid cloud to perform new and more complex calculations in less time at a lower TCO
● Accelerates development and deployment of risk models via modern DevSecOps practices
● Responds in real time to market data and alternative data
● Improves risk management by orchestrating the many pre-calculation and post-calculation steps as automated business processes
● Reduces risk exposure by automating the handling of risk management events
● Supports incorporating AI / ML for high value decisioning and actioning
● Improves auditability and transparency with flexible rules-based decisioning

Transcription:

Aric Rosenbaum: (00:09)
Hi everybody. My name is Aric Rosenbaum. I'm here with Marius Bogoevici, and we're gonna talk to you today about modernizing market risk management. I'm the one on the right. I serve as a global chief technologist in Red Hat's FSI practice, our financial services practice. I've been with Red Hat about two years. Prior to that, I spent about five years in the investment management division of a tier-one bank, and prior to that, about 12 years as CTO and co-founder of FinTech startups in FX. And my partner here, Marius...

Marius Bogoevici: (00:44)
My name is Marius Bogoevici. I'm a chief architect in the financial services team, North America, at Red Hat as well. I work mostly with the top 30 banks in North America and Canada, and a lot of it has to do with investment banking. So what we're here to do today is talk about modernizing market risk calculation. This is the result of the work that Aric and I have done with clients and with partners. It reflects some of our work, and it reflects some of the industry trends that we've detected. So what we're gonna do in what follows is give you a quick introduction to Red Hat, talk a little bit about the problem that we're trying to solve, and then formulate a number of recommendations in terms of technologies and processes that you can adopt to improve risk calculation.

Marius Bogoevici: (01:41)
So, very briefly: Red Hat is the leading provider of open source enterprise IT solutions. What does that mean? It means that Red Hat is known for a large variety of technologies. It practically started the notion of open source as a commercial product more than 20 years ago, and has been successful enough that we were acquired by IBM a few years ago. We still retain our independence, and you'll see the number: something like 90% of Fortune 500 companies use Red Hat products. Actually, when it comes to financial services, I think the percentage is right at a hundred percent. So we're present in a lot of places, and we've worked with a lot of customers.

Marius Bogoevici: (02:32)
You can see some of the logos here of the customers that we've been helping. Hopefully you recognize some of them; maybe some of you are on this page, and if not, come talk to us. The point is that this kind of experience working in this industry has given us the ability to be here today and talk about this topic. So I'm gonna hand it over to Aric to start introducing what we are trying to do and how.

Aric Rosenbaum: (03:02)
Thank you, Marius. So just to level set, what we're gonna talk about today is modernizing market risk management. For those familiar or not familiar, there are two typical ways that we track risk in an organization. One is value at risk, VaR: we might say, at a confidence level of 95% with a one-day VaR, we expect our loss not to exceed some amount, say a million dollars, what have you. We also do Monte Carlo simulations, where you take tens or hundreds of thousands of different simulations, put them together, and again, within a confidence interval, estimate what your expected loss may be. Okay. On the right, we talk about agent-based modeling, which is more about simulation, where you take multiple actors and understand: if this one does this and that one does that, what effect does it have on something else? Agent-based modeling is interesting for something like climate risk. So with everything going on, this conversation...
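The Monte Carlo VaR idea described here can be sketched in a few lines. All parameters below (position size, volatility, the normal-returns assumption) are illustrative assumptions for the sketch, not figures from the talk:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical single-asset book: a $1M position whose daily return is
# assumed normal with zero mean and 2% volatility (an illustrative
# assumption, not a recommendation for a production risk model).
position_value = 1_000_000
daily_vol = 0.02
n_scenarios = 100_000

# Simulate one-day P&L across scenarios.
simulated_returns = rng.normal(loc=0.0, scale=daily_vol, size=n_scenarios)
pnl = position_value * simulated_returns

# One-day VaR at 95% confidence: the loss we do not expect to exceed on
# 95% of days, reported as a positive number (5th percentile of P&L).
var_95 = -np.percentile(pnl, 5)
print(f"1-day 95% VaR: ${var_95:,.0f}")
```

With these assumed parameters the figure lands near the analytic value of about 1.645 × 2% × $1M, i.e., roughly $33,000; in practice the scenario generator and portfolio revaluation are far richer, which is exactly the compute-scaling problem the rest of the talk addresses.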

Marius Bogoevici: (04:19)
Of course. And as a reflection of what Aric has been saying, we came up with this target architecture, which in many ways reflects the art of the possible. It presents a holistic view of risk management, not just focusing on one aspect, which is how do I calculate risk and how do I model it, but: how do I easily bring in the data to calculate this risk? How do I solve the problems of integrating this with, as Aric mentioned earlier, the lines of business? How do we put together a complete view of what the end state of this process needs to be? What we also recognize, and this is again coming from the discussions with our partners and from working with our clients, is that this transformation never happens wholesale.

Marius Bogoevici: (05:10)
So it is always a specific aspect of the risk management process that needs to be improved. While we maintain this holistic view of the architecture, and I'm gonna walk you through it very quickly, we also recognize that there are a number of themes or recommendations that we'd like to make in order to get to this end state, and to give a little bit of insight into strategies that you or your organization can adopt to modernize the risk management process. We're gonna break this down in a moment, but looking at it from a holistic standpoint, it's a classic risk calculation process: data flows through the system, gets loaded into compute environments, gets crunched, and the transformed results are stored in target databases, as you see at the top, and, for example, retained for audit.

Marius Bogoevici: (06:17)
So that's all classical; that's how a traditional risk calculation process works. But we're adding layers of business automation and rules-based processing, the ability to understand those results, and completing that with a comprehensive, modular architecture, complete with aspects such as artificial intelligence and observability: the ability not only to have a system that functions, but to understand how well it functions. These are critical pieces here, and they're all part of this architecture. So, to break it down a little bit and understand what this does, we're gonna walk through a number of themes or recommendations that we're making for this modernization process.

Marius Bogoevici: (07:11)
And the first one, I think, and this is one of the most typical transformations that financial institutions go through when they modernize their risk management process, is the adoption of a hybrid cloud architecture. So in a nutshell, what is hybrid cloud? It's essentially having the ability to run workloads, the calculations, the different processes that make up this entire architecture, either on premises in your own data centers or on one of the public clouds. And bursting: you've probably heard about bursting, you've probably seen it presented as a problem to be solved. Bursting, acquiring additional compute capacity by moving workloads to the public cloud, was probably one of the first moves that happened. We can talk about this in more detail if you want; we're not here to discuss the technicalities of the process, but I want to touch a little bit on the business benefits of this move and what the business gains out of it.

Marius Bogoevici: (08:12)
And of course, having more compute capacity, being able to run more jobs in parallel, gives you the ability to produce results faster and actually improve your SLAs. Having more compute capacity also means you can increase the precision of your calculations. You can run multiple scenarios, as Aric has mentioned. You can try different things. You can customize the scenarios that you're running and, for example, run specific scenarios for specific clients. You can also introduce new products, with their associated risk models, faster, because now you have a platform that can take different types of workloads, different types of solutions, and run them consistently. Last but not least, having that consistency between the ways in which the different types of risk calculations run also allows you to minimize the risk of failure.

Marius Bogoevici: (09:14)
You know, with five things that work in five different ways, you can get it wrong much more easily than with one unified way of doing things and managing it like that. So, in a nutshell, the adoption of this hybrid cloud platform allows you to do all these things. In addition to getting more compute, you also have the opportunity to do things in a much smarter way, and this is why we have this intelligent orchestration piece. That's really about automating the way you do business and introducing things like artificial intelligence and insight-driven analytics, to be smarter about the way you process the results and the way you interpret what you're doing. One of the quickest ways to improve this is to introduce business process automation: not only having those risk calculations living somewhere in the back office, but actually having business processes and applications that are visible to business people, to the wealth managers, to everyone that wants to understand how well they're faring from their business standpoint.

Aric Rosenbaum: (10:53)
So, for example, we'll come back to this: at 80%, it's yellow, and I'm going to notify the wealth manager so he or she knows. At 85%, I might push an email out, or a push alert to an application running on the client's phone. At 90%, there's an escalation to the wealth managers. And when it gets to a hundred percent, maybe there's an automatic action. The fact that all of it is documented in code makes internal audit easier; it's a very regimented process, and it gives you consistent behavior across whatever's happening in the market. Okay. So that's one example of how you could automate the workflows.

Speaker 3: (11:53)
For the benefit of the firm, as well as...

Marius Bogoevici: (11:56)
Thanks, Aric. And in addition to these automated workflows, automation comes hand in hand with applying things like artificial intelligence. What does that mean? You can do various things. You can, for example, analyze input data better, derive insights from what's happening in the market, and understand whether those are relevant to the business and to the risk process. You can understand the results: you can look back and sift through the volume of data produced by these risk calculations and understand what is relevant for you and what isn't. You can record and track your past decisions and create next-best-action processes that allow you to react in response to certain changes.

Marius Bogoevici: (13:00)
For example, a change in risk profile. As Aric has been saying, everything that he described can be codified as a rule, but as you move forward, you can actually create better rules by looking at your past decisions and trying to learn from them automatically. And of course, Aric has already mentioned transparency and auditability: because everything is in code, because everything is formalized, because everything is written down somewhere, it's on the one hand reproducible, and on the other hand an auditor can look at it and understand what you've actually been doing. So again, this intelligent automation process gives you efficiency and visibility, and helps you make better decisions.

Marius Bogoevici: (13:49)
And of course, having a stronger data foundation is the other major piece. What we hear very often from these various conversations is: yes, banks and other institutions have a problem of acquiring more compute capacity, because they have to do more calculations and they want to try new models. So compute is one big part, but hand in hand with that comes: how do you manage your data? Moving data is basically 80% of the effort. And as you know, the quality of the data that you're using is critical for the success of these processes. So having capabilities for caching this data and storing it, having data services that can easily bring data from the transactional systems into the risk calculation process, is a critical piece here.

Marius Bogoevici: (14:48)
And what does that give you? The ability to move jobs to different environments, to run them here, to run them on Amazon or on Azure or wherever I want, goes hand in hand with also needing the data that these processes use to be present there. So having this proximity of data to compute gives you additional performance. You are also less exposed to the risk of failure: if for whatever reason your risk calculation process goes down, your organization can always restart it somewhere else with the same data and just make sure that it completes, rather than having to deal with a failure.

Marius Bogoevici: (15:43)
So disaster recovery is actually one of the big advantages here. Of course, having data replicated in different environments is great from a disaster recovery perspective, but it may not always be what your regulator wants you to do. You can't easily take data from one jurisdiction and move it into another; a lot of jurisdictions won't let you do that. So complementing the ability to replicate data with policies that very clearly state where the data is moved, allowing you to make choices about where you put it, how you put it, and what you anonymize, is actually a critical piece here as well. These storage and caching mechanisms come with this capability baked in.

Speaker 4: (16:29)
You can transform the data as well to remove PII, because you may not need someone's Social Security number or last name to calculate the risk.

Marius Bogoevici: (16:36)
Exactly, exactly. And that goes hand in hand with the data integration piece that we're gonna discuss later, but it was spot on: these increased controls. What we want to get to is having this mechanism of making data available with the additional controls that allow you to be regulatory compliant. That gives you the benefit of resilient, performant calculations, and it also gives you increased security and the ability to stay compliant with the regulations that you're subject to.
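One way to picture the combination of PII removal and residency policy just discussed: strip or pseudonymize sensitive fields before data leaves a jurisdiction, and refuse transfers the policy forbids. The field names, the policy table, and the pseudonymization scheme below are all illustrative assumptions:

```python
import hashlib

# Assumed residency policy: which destination regions each source region
# may replicate data to. Purely illustrative, not legal guidance.
ALLOWED_DESTINATIONS = {"EU": {"EU"}, "US": {"US", "EU"}}
PII_FIELDS = {"last_name", "ssn"}


def prepare_for_transfer(record: dict, src: str, dst: str) -> dict:
    """Drop PII and enforce the residency policy before replication."""
    if dst not in ALLOWED_DESTINATIONS.get(src, set()):
        raise PermissionError(f"Policy forbids moving data {src} -> {dst}")
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # Keep a stable pseudonymous key so risk results can still be joined
    # back to the client record in the source jurisdiction.
    cleaned["client_key"] = hashlib.sha256(record["ssn"].encode()).hexdigest()[:16]
    return cleaned


record = {"last_name": "Doe", "ssn": "123-45-6789", "exposure": 2_500_000}
safe = prepare_for_transfer(record, src="US", dst="EU")
print(safe)  # exposure and a join key survive; name and SSN do not
```

The risk calculation only needs the exposure figures, so nothing useful is lost, which is exactly the point made in the exchange above.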

Marius Bogoevici: (17:22)
And the last piece that's tied to that architecture is adopting an event-driven approach. We can think of event driven at a technical level: there are different solutions for that, like streaming, which you've probably heard about. But that's not necessarily what I'm talking about. Yes, the event-driven part from a technical perspective is important, but so is thinking about the entire system as a modular one and designing the different business processes as connected through business events; also facilitating data integration, and, as Aric has described, moving data from transactional systems, doing the right transformations, doing the right data cleanup, removing the personally identifiable information. That's part of it. So having all of that packaged and integrated as a whole through an event platform that marries these two concepts, asynchronous communication and business events, is critically important. Maybe...

Aric Rosenbaum: (18:39)
Let me give an example of a business event. The market closes: that's an event, so let me trigger a risk calculation. Maybe it's something like someone posted something on Twitter, and I'm analyzing sentiment, whether that's positive or negative. Or it's the CFO reporting on his quarterly call, and I'm reading through his speech, or listening to the inflection in his voice; that's giving me sentiment, whether he's positive or negative, and that may drive an analysis through artificial intelligence, through language processing. Those are the types of business events that can drive and trigger analytics, or a risk calculation, or something like that.

Marius Bogoevici: (19:23)
That's a great example. And here's one thing that I would add: not all of these events are singular. Some of these events, for example, can be produced by observing specific movements in the market, specific sequences of events that are happening out there. Adding the ability to trace the evolution over time and decide that "this thing has moved in this particular way, this sequence of tweets has been going on, and this other institution has done this in response; that is a risk-relevant event, and I would like to recalculate the risk for my organization" means you have the ability to do some very sophisticated things. The other important aspect of this is that, besides being able to pull in the outside world, understand it, and make its events relevant to your business, you have the opportunity to use your data as an asset.

Marius Bogoevici: (20:30)
And this is something very important. Instead of thinking of data as just sitting there, where I have to pull it up whenever I need to do something, and this piece of data belongs to this process and that piece of data belongs to that process, you can start thinking of sharing the data that you use and using it in a variety of ways, for different types of scenarios. The same data you use for calculating value at risk, for example, you can use for expected shortfall; you can start running these experiments in different ways because you have this data sharing mechanism. And, as we've been saying earlier, we have a holistic view of the business, a holistic view of risk. Traditionally, all these risk calculation processes run everything overnight and produce those reports.
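The "data as a shared asset" point can be made concrete: one shared set of simulated scenarios feeds several risk measures, rather than each process loading and transforming its own copy. The distribution parameters below are illustrative assumptions:

```python
import numpy as np

# One shared scenario dataset: simulated one-day P&L in dollars, assumed
# normal with a $50k standard deviation purely for illustration.
rng = np.random.default_rng(7)
pnl = rng.normal(0.0, 50_000, size=200_000)

# Two different risk measures computed from the same shared data.
losses = -pnl
var_99 = np.percentile(losses, 99)           # 99% value at risk
es_99 = losses[losses >= var_99].mean()      # 99% expected shortfall:
                                             # average loss beyond VaR

print(f"99% VaR: {var_99:,.0f}   99% ES: {es_99:,.0f}")
```

Expected shortfall is always at least as large as VaR at the same confidence level, since it averages only the tail beyond the VaR cutoff; sharing the scenario set guarantees the two numbers are consistent with each other.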

Marius Bogoevici: (21:28)
But, you know, with this integration, with a more event-driven approach, you can start narrowing it down to the level of a specific customer. So something happens in the market; maybe it's not relevant to the entire business, and I can just recalculate the risk for a specific customer. So those are the overall benefits of introducing an event-driven approach. Now, there is one more theme, which is not necessarily present in the architecture. It has less to do with the tools that you're using and the capabilities that you're acquiring, and more to do with the way you're doing things. As Aric mentioned earlier: DevSecOps. DevSecOps is a very technical term that describes a collection of practices used in enterprise software development.
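The targeted, event-driven recalculation described above can be sketched as a small publish/subscribe loop. Event names, handler names, and the customer IDs are hypothetical, and a real deployment would sit on an event platform such as a Kafka-based streaming layer rather than an in-process dispatcher:

```python
from collections import defaultdict

# In-process stand-in for an event platform: handlers subscribe to
# business event types, publishers emit events with a payload.
handlers = defaultdict(list)


def on(event_type):
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register


def publish(event_type, payload):
    for fn in handlers[event_type]:
        fn(payload)


recalculated = []


@on("negative_sentiment")
def recalc_for_exposed_customers(payload):
    # Recalculate risk only for customers exposed to the affected issuer,
    # instead of rerunning the entire book overnight.
    recalculated.extend(payload["exposed_customers"])


publish("negative_sentiment",
        {"issuer": "ACME", "exposed_customers": ["cust-17", "cust-42"]})
print(recalculated)
```

The design point is the narrowing: the event carries enough context (the issuer, the exposed customers) that the handler can scope the recalculation to a specific customer, which is the benefit claimed in the talk.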

Marius Bogoevici: (22:32)
Now, software development has long had the reputation of producing things that are not necessarily very reliable. Everybody jokes about bugs: "it's not a bug, it's a feature," we know that already. But the point is that the software industry has spent a lot of time correcting these things. It has adopted practices around how code is written, where it is stored, how it is built, how you create artifacts out of it, exactly in order to minimize the impact of those problems. It is actually a very managed, very streamlined process that tries to connect the different parties here: developers, IT people, and users. In a similar fashion, the work of data scientists has a lot to do with the work that, for example, developers are doing; it's a similar process.

Marius Bogoevici: (23:37)
So having practices that give you an end-to-end process of deploying and monitoring the models that you've built is actually a critical piece. What that gives you is essentially the ability to run the cycle that you see in the diagram on the right: getting data out of the transactional systems, moving it, giving it to the people that have to develop the models, then building those models in a way that can be easily integrated with the applications that the developers are building, and then making it easy to take those applications and run them, making it easy for the cloud and HPC engineering people to deploy and run them at scale.

Marius Bogoevici: (24:39)
And then having the ability to take the data that these applications are producing, move it back into the lines of business, and make it relevant to business stakeholders. Having this cycle go on and on without friction actually gives you a lot of speed, a lot of agility, and the ability to react better and serve clients better. So what we're advocating here is really the establishment of this end-to-end process, putting together a unified platform that brings together all the different personas. And this gives you all the benefits that we listed: agility, and the ability to scale in data volume but also in computation capacity. And, repeating some of the things that we talked about earlier in a different form: how do you make sure that this data is handled in a way that respects regulatory requirements around privacy? So having such a process in place, and putting it in the hands of the different personas, is critical. Now I'm gonna hand it over to Aric for...

Speaker 5: (25:57)
Well, again, one more thing, if I may: the model development...

Marius Bogoevici: (25:59)
Sure. Okay, I can detail that a little bit. The critical pieces of this process are, as I mentioned, automation: the ability to not only have a process, but actually have tools that automatically, for example, take code, build models, and make them deployable as services that can be integrated with applications, and that move data from one system to another. Those are critical parts of the cycle that you've seen on the previous page. Also very critical is the ability to access specialized hardware. All these computationally intensive operations require traditional resources such as CPUs, but can also require more specialized resources such as GPUs, for example, for calculation in parallel. So having a way to make sure that the right job goes to the right resource is critical here. And last but not least, the ability to take these models, whether they're risk models or artificial intelligence models, and deploy them side by side in the same way as traditional applications again gives you a lot of flexibility and a lot of consistency in the way the system is built.
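The "right job to the right resource" idea can be sketched as a tiny scheduler that matches a job's declared hardware needs against available worker pools. The pool names and job attributes are illustrative assumptions; in practice this is what a container platform scheduler does with node selectors and resource requests:

```python
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    needs_gpu: bool


# Assumed worker pools and their capabilities.
POOLS = {
    "gpu-pool": {"gpu": True},
    "cpu-pool": {"gpu": False},
}


def route(job: Job) -> str:
    """Return the first pool whose capabilities match the job's needs."""
    for pool, caps in POOLS.items():
        if caps["gpu"] == job.needs_gpu:
            return pool
    raise RuntimeError(f"No pool can run {job.name}")


print(route(Job("monte-carlo-var", needs_gpu=True)))
print(route(Job("eod-report", needs_gpu=False)))
```

Declaring needs on the job, rather than hard-coding where it runs, is what lets the same workload burst to whichever environment currently has the matching capacity.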

Aric Rosenbaum: (27:36)
In conclusion: thanks for your time. What we've done here is, over about a year's time, we've gone out and spoken to a lot of different people across tier-one, tier-two, and tier-three organizations, spoken to consultants, spoken to experts in risk analytics, and talked to people that are involved in calculating climate risk and where that's going from a regulatory point of view. What we came up with is really a real-time, event-driven system: the ability to calculate your risk in a real-time way and integrate that with the line-of-business applications, so it's not just end-of-day VaR calculations. We're happy to take specific questions; we're available today and tomorrow, so please find us right after this, or...

Marius Bogoevici: (28:30)
Thank you for your time.