Trading Tomorrow - Navigating Trends in Capital Markets

Putting AI Governance Into Practice in Financial Services

Numerix Season 5

As AI spreads across trading, risk, compliance, and client interactions, financial institutions are grappling with how to bring governance and oversight in line with the pace of innovation.

In this episode, David Trier, Vice President of Product at ModelOp, joins host Jim Jockle to discuss how firms are extending long-standing model risk frameworks into the world of AI and agentic systems. Drawing on more than two decades in analytics and risk technology, he talks about where the industry really stands on AI governance today, how regulations are influencing practice, and what changes as models become more dynamic, data-driven, and autonomous.

SPEAKER_01:

Welcome to Trading Tomorrow, Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools, and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. To help us unpack what AI governance in practice really means, we're joined by Dave Trier, Vice President of Product at ModelOp, the leading AI governance platform for enterprise. With more than two decades of experience in analytics and risk technology, Dave helps firms and regulators bridge innovation and accountability. At ModelOp, he guides some of the world's largest financial institutions in building frameworks that make AI explainable, compliant, and scalable. So, first and foremost, Dave, thank you so much for joining us today.

SPEAKER_00:

Yeah, pleasure. Thanks for having me.

SPEAKER_01:

So let's jump in. Where does the financial industry really stand on AI governance today? Are we still experimenting, or is oversight becoming part of daily operations?

SPEAKER_00:

Yeah, so just a little bit of background: as you well know, the financial industry has had model risk management for many, many years, right? Many decades. It's just been a part of their daily business. Now, AI governance is a little bit of a different lens on that, if you will. So I would say it's beyond experimenting, but it's not quite part of daily operations yet. It's getting closer; it's somewhere in between those two. Again, they're trying to take some of the principles and foundations that they had in model risk management and extend those to account for and accommodate AI as well.

SPEAKER_01:

So the overarching rule for model risk governance is, you know, the Fed's SR 11-7. Is that the overarching rule for AI governance too, or is AI governance kind of built off of that?

SPEAKER_00:

Yeah, so that's a great point. Those are definitely used as a foundation, because that's what financial institutions are required by law to adhere to. So that has always been the foundation. And like I said, they've looked to accommodate and really extend what they've done with SR 11-7 and their MRM program to account for the nuances of AI overall. Yes, I would love to see a little bit more rigor from the different regulatory bodies, federal and otherwise, to have an SR 11-7 for AI, right? But they've taken the principles there, and financials are obviously very, very smart and risk averse, so they're making sure that they're carrying those forward to AI as well.

SPEAKER_01:

So how do you define the term AI governance, and what makes it different from traditional model risk management?

SPEAKER_00:

Yeah, so obviously there are a ton of different definitions for AI governance, but for me it's really the policies and the procedures for overseeing the safe and responsible use of AI. In short, the way I like to put it in layman's terms: are you using AI to do the right thing for the company, for your customers, for your employees, and for the community? Very simply, right? Just, are you doing the right thing? But as you can imagine, there are a number of different facets to AI governance, and I'm sure we'll talk a bit more about those. Many of them, again, extend from the basic MRM principles around having the right policy and procedure in place, the right inventory, risk tiering, effective challenge (i.e., validation), ongoing reviews, attestations, et cetera.

SPEAKER_01:

Financial institutions do model risk very, very well. You know, SR 11-7 came out in 2011. It's well defined. I think probably all the consulting firms made a lot of money as firms implemented their governance oversight. But AI brings a lot of new challenges: explainability, data drift, even, to some extent, ethics. How are risk teams adapting to these core differences?

SPEAKER_00:

Yeah, and as you can appreciate, the risk teams are just starting by looking at and analyzing the new risks; you pointed out a few of them. What are the new risks that AI introduces? It's not like the traditional regression models out there that you build once and that really don't need to change very often; you might make a few tweaks, but that's it. AI, by contrast, is very centered around data that is constantly changing, the technology is changing quickly, and the output is something that is constantly evolving based on the data itself. So the first thing risk teams are doing is asking: what is different about AI, and therefore what are the new risks that AI is bringing to the table? And it's things, as you just mentioned, around transparency: transparency and explainability of the model itself, but also transparency into, especially with generative AI, what data is being used. Are you using data that is potentially copyrighted? Or, since financials use a lot of vendor-based models and vendor-based AI systems, does the vendor require you to send data out to them? Are they then incorporating that into their model? So there's that kind of transparency into what's happening with your data overall. And then, as you rightly pointed out, there are things such as drift, and in the gen AI world, hallucinations. But security also comes into much more focus, especially as you get into agentic AI, because that's where you're using things like MCP tools and the like, which potentially open you up to external parties as you use and talk to different systems, via agents, that are outside of your four walls.
So again, back to your original question, it comes down to this: they are analyzing what new risks are being introduced, then baking that into and updating their policies and procedures to account for those. And ultimately they protect themselves by identifying the risks, putting the right controls in place, and making sure that all of the different teams and parties are actually following those policies and procedures and have the right controls in place.
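
[Editor's note: the data drift Dave mentions is commonly quantified with a Population Stability Index (PSI) over model score distributions. A minimal sketch follows; the 0.1/0.25 interpretation thresholds in the comment are a common industry rule of thumb, not a regulatory requirement, and the binning scheme is a simplification.]

```python
import math

def psi(expected: list[float], actual: list[float], n_bins: int = 10) -> float:
    """Population Stability Index between a baseline sample of model
    scores and a current window. Rule of thumb: < 0.1 stable,
    0.1-0.25 worth watching, > 0.25 likely drifted."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / n_bins or 1.0          # avoid zero-width bins

    def bucket_fracs(sample: list[float]) -> list[float]:
        counts = [0] * n_bins
        for x in sample:
            i = min(int((x - lo) / width), n_bins - 1)
            counts[max(i, 0)] += 1
        # small floor keeps log() defined for empty buckets
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An ongoing-monitoring control would compute this on a schedule and open a review ticket when the index crosses the watch threshold.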

SPEAKER_01:

Would you say at this point all the risk factors have been defined? Or, as the technology evolves, are new risks emerging? Or is it just the same flavors of past risk?

SPEAKER_00:

I wish it were just the same flavors of past risk, because then it'd be a lot easier, right? But no, as the technology evolves, there are new risks. Think about it: with generative AI, you had never even heard of hallucination from an MRM perspective, right? And with agentic AI, now you're talking about, well, how do you have the right level of control around protecting inline execution against prompt injection, and still more? So unfortunately, the risks are evolving because the technology and the innovation around it have evolved very, very quickly.

SPEAKER_01:

So regulations are moving very, very fast, but arguably very unevenly. For example, you have the EU AI Act as well as state-level US rules. How can global firms stay consistent when the rules aren't?

SPEAKER_00:

Yeah, this comes back to the fundamentals, Jim, as I mentioned. Even back to SR 11-7, which is over a decade old now, right? It had most of the fundamentals in place. Do you have the policy and procedure? Do you have the right accountability structure across the different teams, with multiple lines of defense? Do you have the capabilities such as the inventory, the risk tiering, effective challenge, reviews, et cetera? So I would say, to your question, that these regulations use those fundamentals. They use those principles and then they put some slight nuances around them, right? The EU AI Act specifically went into looking at the developers and the users, and at how to categorize what it calls unacceptable risk and high risk, which again is just kind of the risk tiering part that was part of SR 11-7. So the fundamentals are very similar to what we had before. There are just nuances, so unfortunately each global firm has to keep abreast of what those new regulatory frameworks are. Are there nuances that aren't already accounted for? Okay, let's make sure we have those in place and roll with it. But that's the reason why you work with a company like ModelOp that has those as part of their day-to-day business, and you can say, okay, great, ModelOp's got me covered: you can help make sure that I'm staying on top of those and, more importantly, enforcing those across the organization.

SPEAKER_01:

In the data privacy world, you could argue GDPR is kind of a foundational piece of legislation, where others are copying it or different components of it. While there's no federal equivalent in the United States, California has its data privacy rules, and Japan and Germany have rules on where data can leave jurisdictionally. Are any of these regulations, like the EU AI Act, a kind of foundation that spans across all of these types of regulations, or are they mostly vastly different?

SPEAKER_00:

The EU AI Act does have the foundation, for sure. It has some of the foundational components. And if you look at regulatory frameworks like the Colorado AI Act, as I call it, it has some of the same principles. You look at what the California Attorney General put out, probably a couple of years back now, or at Texas House Bill 2060: they all have similar principles. So yes, to your question, I believe the EU AI Act did lay down some of those principles and foundations. Unfortunately, there are still nuances: very specific things that each regulatory body is looking at. And it's not even just regulators; it's industry bodies too. If you look at the NAIC on the insurance side and the AI bulletin they put out, they're looking for some very specific items. You then jump over to Canada with Guideline E-23 and the AI extensions they have, and the slight tweaks to it. But the foundations, the principles, as I said, are pretty similar across the board.

SPEAKER_01:

So when I look at my own life and my own organization, I think back to 18 months ago, when my team was using one AI-based solution. Now we're using 15, and every department's got hundreds and hundreds of different flavors of AI solutions going on. Once a firm scales from that handful of AI models to potentially hundreds, what breaks first?

SPEAKER_00:

Yeah, so it's one of two things. I'll answer from the financials first, because most financials are, as you can imagine, very, very risk averse. So what breaks is actually the time to market, right? You have each of these different teams that want to use AI, and because of the risk-averse nature, as it should be, they slow that down. They slow it down because there are a lot of manual reviews, manual approvals, ad hoc back and forth, and it just slows down. So the risk piece doesn't break, because they're risk averse; what breaks is the time to market. Now you go to the other side, probably outside the financials, where they don't have that MRM mentality ingrained, if you will. And what breaks there is that something goes wrong. Whether it's IP leakage, or, as you probably saw in the headlines, somebody downloads an MCP tool and that lets attackers get at all of the emails, right? So something goes wrong: IP leakage, a security breach, or a hallucination that causes some sort of negative feedback to customers or some sort of brand-exposure event. So those are kind of the two legs of the stool there, right? For the financials, typically what breaks is the time to market, because they're going to do everything manually; they're okay with slowing things down because they want to make sure they're doing the right thing. Versus, I would say, the non-traditionally regulated industries, where things might just break.

SPEAKER_01:

One could argue that while generative AI has been groundbreaking, game-changing in the ways we work and come to market, et cetera, agentic AI is a whole other thing. And the questions being raised now are really around control and accountability. How are companies drawing the line between autonomy and oversight?

SPEAKER_00:

Yeah, so there's obviously a huge buzz around autonomous agents: they're just going to do everything and act on their own, they can reason, you give them a problem and they just go and solve it, right? What I'm finding, though, especially in financials, and I would say the Global 2000 at large, is that it's nowhere near that. They're not going down the autonomous route yet. They're going down what I call the guided route: saying, okay, here's the specific use case, the business problem I'm trying to solve, and we're going to use an agentic solution, but we're going to guide it. We're going to keep it within the guardrails or the fence posts, if you will. So it doesn't get full autonomy; rather, here is the path. Sometimes they use workflows themselves to say, okay, do this and then this. Yes, the agent can take some decisions as part of that process, but this is the path we're going to follow. So, just where it is today, in the Fortune 500 and Global 2000, it's more of that guided approach as opposed to fully autonomous. Now, as part of that, naturally you get more oversight because it's guided, right? You can have oversight into what's happening. The other way they're dealing with oversight is just a lot of transparency, a lot of making sure that they're capturing the audit trail: both the audit trail around the process (did we go through the right steps, did we get all the approvals, did we get all the reviews and tests, et cetera?) and the audit trail around the actual usage. So as that agentic system is making decisions, log what those are, and make sure that if there are questions later, we can come back from an audit perspective and understand when it made a decision, what that decision was, et cetera. So those are just a couple of areas where I've seen, in practice, a little bit more oversight.
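
[Editor's note: the combination Dave describes, guardrails plus a decision audit trail, can be sketched as follows. The tool names, log shape, and allow-list are hypothetical and not from any particular agent framework.]

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def audited(step_name: str):
    """Decorator that records each step's inputs and output so an
    auditor can later reconstruct what the agent decided and when."""
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "step": step_name,
                "inputs": repr(args),
                "output": repr(result),
            })
            return result
        return inner
    return wrap

# Guardrail: the agent may only invoke tools on an explicit allow-list.
ALLOWED_TOOLS = {"lookup_balance", "draft_reply"}

@audited("choose_tool")
def choose_tool(request: str) -> str:
    tool = "lookup_balance" if "balance" in request else "draft_reply"
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} outside guardrails")
    return tool
```

Each call both enforces the fence posts and leaves an entry in `AUDIT_LOG` that answers "when did it decide, and what did it decide?"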

SPEAKER_01:

Well, there is also a perception, and unfortunately that perception is probably rooted in experience, that more governance means less innovation. So what separates firms that manage to move fast and stay compliant?

SPEAKER_00:

Yeah, it's really simple: it's about having an automated process, right? Those that have the perception that governance slows things down have a manual and disjointed process. What I've found in practice, having done this for over six years at ModelOp, is that governance can actually be an enabler of innovation when done correctly. It can speed the time to market of AI solutions when done correctly. When it slows things down, as I said, it's because the process is not defined: you have 10 different teams that need to be involved, they're pulled in ad hoc, and there's no consistent, streamlined, effective way to do that. So, yeah, it slows things down to a halt. What separates the winners in this space, to your question, are those that took the time and said, like we do with our ModelOp customers: hey, here's the process. Let's get legal, risk, compliance, data, IT security, et cetera, and the business team to the table. Let's agree on the processes that will shepherd an AI system from idea through usage and retirement. Let's get those processes defined, and then let's use something like ModelOp to enforce those policies using automation. That way, you have built-in, inherent trust, because everybody agreed that this is the process that will make sure everything is trustworthy, reliable, accountable, auditable. They agreed to it up front. So if a new AI system comes through the door, it goes through the process, and yep, we're good, because it's the process we all agreed upon. It'll pull us in at the right time, it'll make sure we're doing all the right steps, it'll make sure it's auditable. So the process becomes streamlined first and foremost, and then you use automation: automation can handle many of those steps.
But when you need a human in the loop, it pulls them in at the right place at the right time, gives them the right information, they make their decision, and away you go. So we routinely see, as I said, that when done right, governance is an enabler. We have customers using ModelOp that reduced their time to market from six months down to a couple of weeks because of that automated, streamlined process. So for me, that's what separates the winners in this space from those that are going to be, well, laggards and fall behind.

SPEAKER_01:

Well, the best part is if you don't have a defined process, just open up a copilot and it could do it for you.

SPEAKER_00:

Yeah, there you go.

SPEAKER_01:

So, you know, coming back to regulators, and we all love regulators, but the main thing they want is proof, not promises. What new tools or practices are helping firms actually show compliance?

SPEAKER_00:

Yeah, so selfishly I'll just start with what we do at ModelOp. Our tool is about enforcing the process and enforcing the policy. The first thing regulators want to see is: do you have a process? The second thing: show me evidence that you're following the process. And that's what we do at ModelOp. We take the process that's been defined, as we just talked about, and we enforce it, with the audit trail behind it. That gives the regulators assurance and an understanding that the process is being followed, first and foremost. And if they have any questions about it, there's the audit trail behind it. So that's just an example of following the process and policy with ModelOp. But of course, there are other pieces: there's the security side, there's data access and policies, for which there are existing security and data privacy tools that are also part of the big picture.

SPEAKER_01:

So as AI spreads across trading, risk, compliance, as well as client interactions, what kind of relationship should regulators and innovators really aim for?

SPEAKER_00:

Yeah, it's just all about trust and transparency, right? As you rightly pointed out, the regulators want proof. They're not trying to get in the way. They just want to make sure that the financials are doing the right thing: that they're following the process that's been laid out, following the regulatory guidance, if you will. So I would say it's a relationship built on trust, but verified. Meaning: from time to time, you've got to give me some of the evidence that you're following the process, and if I have specific questions to test that you're following the process, you can answer them. So I think it's one built around, as I said before, the transparency and the trust that these financials are following through.

SPEAKER_01:

So everybody talks about trust, but what distinguishes firms that just comply from those that lead with credibility?

SPEAKER_00:

Yeah, I think it starts with this: every large enterprise, Global 2000, and even small mom-and-pop, wants to say we're an AI-powered business, right? We are differentiated, we're the leading-edge enterprise or financial, if you will, because we're using AI. We're changing the game with AI. But what really happens is that if something ever goes wrong, whether it's something minor that happened with a customer or something major like a security breach, you instantly lose all trust with your customers, the consumers, and really the market at large. So, to your question of what distinguishes firms: it's those that are more forward about saying, here are the measures I'm taking so you can trust that I'm doing the right thing with AI. Here is our policy, whether they call it responsible AI or AI governance. Here is the option for you as a customer and consumer to opt out when appropriate. Just making it very clear and transparent, again, to customers, consumers, and the market what they're doing with AI. It's very clear what they're doing, and I have some options around it: if I don't want to interact with it, they let me do that. That's how you build that level of trust, if you will.

SPEAKER_01:

Well, there are definitely a couple of companies where I'd like to get a human on the phone every once in a while and not have to go through very long automated AI messages to get there. But unfortunately, Dave, we've made it to the last question of the podcast. We call it the trend drop. It's like a desert island question. If you could only track one emerging trend in AI governance over the next few years (or, with AI, over the next few weeks, because everything's changing), what would it be and why?

SPEAKER_00:

Yeah, it would definitely be the autonomous portion of agentic AI, right? That's an area with huge potential opportunity, but also really high potential for catastrophic events. It's the area where, again, it's all the buzz. Everybody talks about, you know, Terminator and all that, right? But at the end of the day, there is a place for it to be used; it just shouldn't be used all the time. So it's the autonomous agentic space that I'm going to be watching very, very closely, have been watching, and will continue to watch.

SPEAKER_01:

Dave, I want to thank you so much for your time and your insights, and for giving us a lot to think about.

SPEAKER_00:

Yeah, thanks so much for having me, Jim.

SPEAKER_01:

Thanks so much for listening to today's episode. And if you're enjoying Trading Tomorrow, Navigating Trends in Capital Markets, be sure to like, subscribe, and share. And we'll see you next time.