Trading Tomorrow - Navigating Trends in Capital Markets

Fighting Against Financial Crime with Jennifer Arnold

Numerix Season 3 Episode 30

In this episode of Trading Tomorrow – Navigating Trends in Capital Markets, host Jim Jockle sits down with Jennifer Arnold, co-founder and CEO of Minerva, to explore the future of compliance and the evolving role of technology in combating financial crime. Jennifer shares her journey from anti-money laundering expert to RegTech innovator and unveils how Minerva uses technology and automation to revolutionize financial crime detection. This episode delves into the intersection of innovation, regulatory challenges, and the human role in technology-driven compliance.

Speaker 1:

Welcome to Trading Tomorrow – Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Today, we're excited to have Jennifer Arnold, co-founder and CEO of Minerva, joining us.

Speaker 1:

Jennifer is a visionary in the regtech space with a passion for leveraging advanced technology to combat financial crime. Her career began in communications before transitioning to anti-money laundering, risk and compliance. As an expert in AML, she has led large-scale transformations at major financial institutions. She is now at the forefront of using AI and automation to drive innovation in financial crime detection and compliance. Minerva, her brainchild, is setting a new standard for efficiency and effectiveness in AML by leveraging deep learning and predictive intelligence to stay ahead of bad actors. In today's episode, we'll discuss how Jennifer is changing the landscape of financial crime prevention and exploring the broader trends in regtech and compliance. Jennifer, welcome to the show.

Speaker 2:

Thank you for having me.

Speaker 1:

Perhaps you can share what sparked the idea behind Minerva and how your background in AML influenced its creation.

Speaker 2:

Oh, my goodness, yeah. The idea for Minerva started percolating back in 2013 or 2014. I was working on an implementation of a product called Oracle Mantas for capital markets teams. This is a very large transaction monitoring and case management tool, and I was spending a lot of time with the investigators, mapping current process into new process, et cetera, and the investigators were reluctant to really engage in the design of this new tool they were going to be using. As I kept digging with them to figure out why they were holding back on me, it was really this notion that we were spending millions and millions of dollars to buy this tool that was going to create alerts and cases more quickly than they'd ever been created before for this particular team, but nothing else in their world had changed. They still consulted five to seven internal systems and vendor solutions. They still executed multiples upon multiples of Google searches. They still spent most of their day copying and pasting data from various sources back into a case document. So, as they looked forward to what was coming for them, it was way more work being produced much more quickly.

Speaker 2:

But nothing that they were using to actually do the investigation and the risk assessment was evolving at the same rate as the tool that was coming in, and so they were very anxious about being overwhelmed. For me, I was like, oh, this is it, right? This is an incredibly painful problem. And then, as I spent more time in the space and moved to my next bank and spent more time with investigators, the problem seemed fairly universal. There was this truckload of manual and fairly menial work that needed to be done in completing an investigation, everything from the risk assessment to the documentation, and ensuring that you've complied with your internal policies and met your regulatory requirements. But most of it didn't have much to do with the risk assessment and risk analysis part of their job, and I just started thinking about ways we might give them more time to be risk professionals, do risk analysis and do higher-order work, rather than all of this busy work that comes with the investigation process.

Speaker 1:

So, you've mentioned in the past that compliance teams have historically had to choose between efficiency and effectiveness in their tools. How does Minerva address this challenge?

Speaker 2:

Yeah, so I think we think about it a couple of different ways, right?

Speaker 2:

So if we can accelerate the investigation process for that analyst, they will get through more work much more quickly, with the emphasis being on risk assessment versus data gathering and copy and paste.

Speaker 2:

So that's already a win in terms of the effectiveness of the program, because their time is being spent on the effectiveness of the analysis that they're doing: are they making the right decision about the customer in that moment, given the information they have in front of them? And then I think when we can move them closer to working in a near real-time paradigm, we have a much better chance of effecting real change. Instead, as many of us do in financial services, we're sitting on several months' worth of backlogs of alerts and cases that won't get adjudicated for another few months. An incident could have happened, but it might not actually make it into an investigator's hands for six months to a year afterward. So being able to help them go faster, being able to take away the busy work and have them focus on risk assessment, tackles both the productivity and the effectiveness piece in our view.

Speaker 1:

The systems are now producing so many more alerts that require follow-up, and all of these other manual processes have not necessarily caught up. How are investigators prioritizing? To what extent are there false positives? What does prioritization and escalation look like?

Speaker 2:

Yeah, that's a great question and there's a lot of threads in there to pull on. If we take the first thread, false positives, of course that is a challenge industry-wide. That applies to name screening, to transaction monitoring, et cetera. For us, when we do risk screening on the entity or individual, the client itself, and take a look at their risk, we are using a very complex matrix of data to better and more accurately identify the client in the first place. So we can avoid some of those false positives, just avoid producing them, by actually getting better at identifying the actual client in play instead of everyone with a name that sounds alike, is spelled alike, et cetera.

Speaker 1:

So, Jennifer, one question I have is this: we're dealing with real-time data, and one could assume a vast amount of data. But within the things that are getting flagged and elevated, and the volume you're speaking about, to what extent are you potentially seeing false positives, and how are investigators prioritizing with such an increased volume?

Speaker 2:

Yeah. So there are quite a few threads in there and I'll just tug on a few of them. On the question around false positives: every transaction monitoring system, every name screening engine, is spitting out false positives, and that has a lot to do with the parameters and thresholds, which are set low enough to gather up statistically material data and not leave out anybody you might want to look at. I think we just started thinking about that problem in a different way, which is: how do we get better at identifying the actual customer, so that when the analyst is looking at their information on the screen, they actually know that's their customer and this is work worth doing, versus a false positive where it's somebody who has the same name but not the same DOB, not the same address, and no other identifiers. So we really think about the accuracy and, frankly, the cogency of the profile that we're providing back to our users, to help them better identify their own clients so they can get through the work more quickly.
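To make that concrete, here is a minimal sketch of the kind of client-identification check Jennifer describes: scoring a screening hit against the known customer on name, date of birth and address, and suppressing hits that clearly refer to a different person. The field names, weights, threshold and sample data are illustrative assumptions, not Minerva's actual matching logic.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_score(customer: dict, hit: dict) -> float:
    """Score how likely a screening hit refers to this customer.

    Combines fuzzy name similarity with checks on date of birth and
    address. Weights and fields are illustrative only.
    """
    score = 0.5 * similarity(customer["name"], hit["name"])
    if customer.get("dob") and customer["dob"] == hit.get("dob"):
        score += 0.3
    if customer.get("address") and similarity(customer["address"], hit.get("address", "")) > 0.8:
        score += 0.2
    return score

customer = {"name": "Jennifer Arnold", "dob": "1980-04-12", "address": "12 King St W, Toronto"}
hit = {"name": "Jenifer Arnold", "dob": "1975-09-30", "address": "Miami, FL"}

# Hits below the threshold can be suppressed as probable false positives.
if match_score(customer, hit) < 0.7:
    print("Likely false positive: similar name, different person")
```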

Speaker 2:

False positives are a challenge, I think, everywhere. In some legacy systems those are based on keyword and name matching, which of course would naturally generate a lot of false positives if they're doing their job. But it's problematic because of the volume of work that gets created. Transaction monitoring is the forever job, right? You observe customer patterns and transactional behavior and you tune your rules, but you have to keep going back to the data and seeing if the parameters are moving, to make sure that you're actually looking at the behavior that is material for whatever product or service you're monitoring. That's a whole lot of work. We really tackle it by trying not to create false positives in the first place, through better identification of the target, and by acting as a co-pilot to that investigator or analyst so they can get their work done. And then you talk about prioritization. Again, it depends on the organization and the complexity of the systems that they're using.

Speaker 2:

In my perfect nirvana world, an alert arrives fully formed in the hands of an analyst. They see the transactional behavior that caused the alert to be triggered, and then they see all the contextual data around the client that tells them who the client is, how long they've been a client, who they are connected to and where their money comes from. Then they are able, either manually or in an automated fashion, to risk-rank those alerts and tackle them that way. That's what I would like to see: those data sets coming together in a really material way.
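As a rough illustration of that idea, the sketch below ranks a queue of alerts by combining the transactional trigger with contextual client attributes so the highest-risk work gets adjudicated first. The fields, weights and sample alerts are assumptions made for the example, not any vendor's scoring model.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount: float            # value of the triggering activity
    client_risk_rating: int  # 1 = low, 3 = high, from KYC
    tenure_years: float      # how long they have been a client
    adverse_media: bool      # any adverse media hit on the client

def priority(alert: Alert) -> float:
    """Higher score = work this alert sooner. Weights are illustrative."""
    score = min(alert.amount / 100_000, 1.0) * 0.4     # size of the activity
    score += (alert.client_risk_rating / 3) * 0.3      # inherent client risk
    score += 0.2 if alert.adverse_media else 0.0       # external signals
    score += 0.1 if alert.tenure_years < 1 else 0.0    # new relationships
    return score

queue = [
    Alert("A-101", 250_000, 3, 0.5, True),
    Alert("A-102", 4_000, 1, 8.0, False),
]

# Adjudicate the highest-risk alerts first instead of first-in, first-out.
for alert in sorted(queue, key=priority, reverse=True):
    print(alert.alert_id, round(priority(alert), 2))
```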

Speaker 2:

Right now it's incredibly difficult. For some organizations, a manager will go in and try to triage some of those, and they'll use transactional data to try to identify whether one transaction is higher risk than another, so it would take priority in the queue. But there isn't a great solution out there. I think our answer is getting the whole picture of the client, transactional behavior, contextual information and their KYC data, in one place, so the analyst can go, wow, does this make sense for what I know about this client, or is this utter nonsense?

Speaker 1:

So, Jennifer, you spoke about your start and about, almost, a resistance. Now you're getting transactional data at new speeds and new velocity, yet the industry is still dealing with multiple systems and manual processes. Fast forward to today, 2024, 2025: where is the industry now?

Speaker 2:

Oh well, that's actually an excellent question. Where is the industry now? I think the industry is moving forward, but I would say that the industry, from AML professionals to financial services providers, exists on a spectrum, right? Some are still fully 100% on legacy providers, and some are really looking ahead and trying to figure out how to future-proof their businesses by employing more advanced technologies, like applied AI, to help them understand their data more quickly so they can make better decisions more quickly about their clients.

Speaker 1:

So, Jennifer, can you share a real-world scenario in which your software has actually combated crime?

Speaker 2:

Yeah, actually, I think we're just coming out with a case study and I think I'm allowed to talk about it.

Speaker 2:

We have a client that we share with our partners at Equifax. They're one of the largest luxury automobile leasing companies out there, and their challenge was all the manual work that was slowing down their ability to sell. Part of that was meeting their compliance requirements, and part was preventing losses from happening in the first place by simply looking at adverse media for some of the leasing applicants coming in to get very high-value vehicles.

Speaker 2:

Another one, which is near and dear to me: we work with an anti-human trafficking organization in Canada and another one here in the States. With the one in Canada, we ran an operation because big events like the Super Bowl, film festivals, et cetera, attract a lot of people from a lot of different places. Unfortunately, they also attract a lot of traffickers. Through the use of our data and some really great volunteers, we were able to help extract two young women, two girls, who had been trafficked over the border during the film festival.

Speaker 1:

Wow, that's amazing. It probably feels good to go to work every day, knowing that you're changing lives.

Speaker 2:

You know what? It's really, really cool. It's really overwhelming sometimes, because we often talk in the abstract about this industry and what a pain it is, and the regulators, and it's a checkbox exercise, and blah, blah, blah. But the purpose of the work is to protect our financial infrastructure, and the purpose of that work is to protect everyone who lives in these countries. So why not do that work well?

Speaker 1:

Thank you for sharing those stories. I'd be remiss in this podcast if I didn't mention AI. Our listeners are probably thinking, wow, he made it 12 minutes without mentioning it. But AI is becoming more prevalent in compliance programs. What are some of the unique ways Minerva is using AI and deep learning?

Speaker 2:

Yeah, so Minerva is an AI-native platform. When we built her, she was built as an AI to solve exactly this problem. Minerva's AI really comes into play in three places, and we call ourselves an applied AI, meaning our customers, their regulators, et cetera, can take a look at Minerva: they can see all the data that goes in, they can see the data transformation activities and they can see the outputs. And we provide data lineage for every single piece of information that we tap into when we're doing a risk assessment, because that's what regulators need in order to be comfortable and, therefore, that's what our customers need in order to be able to use AI inside their four walls.
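As a simple illustration of what per-field data lineage can look like, the sketch below wraps each fact used in a risk assessment with its source and retrieval time so the result can be traced back for a reviewer or regulator. The structure, field names and sample sources are assumptions for the example, not Minerva's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Datum:
    """A single fact used in a risk assessment, with its provenance."""
    name: str
    value: object
    source: str
    retrieved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

assessment = [
    Datum("registered_address", "100 Bay St, Toronto", "corporate registry"),
    Datum("adverse_media_hits", 0, "news screening provider"),
    Datum("sanctions_match", False, "sanctions list screening"),
]

# Every output can be traced back to its inputs and their sources.
for d in assessment:
    print(f"{d.name}={d.value!r} (source: {d.source}, retrieved: {d.retrieved_at})")
```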

Speaker 2:

So we use different types of AI. We obviously use a lot of natural language processing to help us understand things like context and sentiment around risk when we're looking at profile information. And we use what Damien, our CTO, has built: an insane entity resolution engine that really helps Minerva understand which pieces of data belong to which Jennifer Arnold. If you Google Jennifer Arnold, you'll find out there are millions of us, but if I'm only looking for one, how do I do that? That's what Damien has built, and I think that's the big win on the Minerva side: instead of having an analyst go through a fraction of that information trying to figure out which Jennifer Arnold the data belongs to, we can do it in a few seconds for the analyst and get them on the right track.

Speaker 1:

So what would you say are some of the biggest trends you're seeing right now in regtech?

Speaker 2:

Okay, I'm going to sound like an old grump, but I'm just going to say it. I know everyone is very excited about Gen AI, and I'll be honest, when Gen AI blew up last year, we removed AI from our name because people were getting really excited, but possibly not for the right reasons. Gen AI can be really powerful, but I have two concerns. One is that our regulators are not yet comfortable with most Gen AI because it's fairly black-box: they can't see the data going in, they can't see the transformation and they can't relate it to the output. That's a problem from a regulatory perspective, and so it will be a problem for our clients.

Speaker 2:

The other part is there's some really cool stuff going on where Gen AI is being used to help create the narrative part of a regulatory filing called a SAR or an STR. This is really where the analyst has to tell a story: I am filing a SAR today because... and they talk about the activity, they talk about the profile, et cetera, and put together all the information in narrative form for the regulator. And so Gen AI is being deployed into doing this piece of work. Why? Because it can be time-intensive. Where I'm challenged is that it's at this moment, where the investigator sits down to think and to tell that story, that we need their critical thinking skills the most, and so we're bypassing them here instead of accelerating the kind of grunt work that needs to happen up front.

Speaker 2:

I'm not sure it's the right move. It makes me a little nervous. The reason there's a human in the loop is because we need the critical thinking skills there to say, well, actually, that doesn't make any sense and it's not right. So that's how I feel about that, and we see a lot of that going on in the industry. And then there are the data privacy and security concerns.

Speaker 2:

If you're using an open-source model, how are you protecting your customer's data as it gets sent outside of your four walls as an organization, to be processed by a machine that, in theory, hundreds of millions of other people have access to? How do we protect against that? Private LLMs are a great solution, and there are some amazing private LLM organizations out there. But that's how I think about it. I love the idea of AI accelerating the process, identifying real risk, being able to differentiate between low, medium and high, et cetera. There's a lot of value in doing that, and that's the part I'm most excited about.

Speaker 1:

One thing that always fascinates me is the people component of this. There are some people out there who think AI is going to take away everybody's jobs, that automation and robots will build great factories in the US but you won't need humans to work in them. Yet you raise the critical thinking of humans. How has the job of an investigator evolved, and the skill set required to do that job, with the data and information that's coming at them now?

Speaker 2:

Yeah, the best investigators will be those with a data science background. I think that's the magic sauce there. The human in the loop for the investigation process is really important. One, for some optics-type reasons: it makes regulators more comfortable if they know that a human has looked at it. And two, AI, in my view, is an augmentation of human capacity, not a replacement for it. Minerva can process four and a half billion disparate data points into a single profile in under 20 seconds. A human brain can't do that. What a human brain can do is look at the profile that Minerva has assembled and say, yo, that's nonsense, this doesn't make sense, oh, but this is the gem right here, and start their investigation that way. That's meaningful.

Speaker 1:

So how do you balance embracing innovation right now? Obviously there are so many new technologies, the pace of evolution is rapid, the changes in AI seem exponential month over month in terms of what's coming to market, and the speed at which transactional data is being processed keeps increasing. How are you balancing innovation within your own solution while also dealing with complex regulations, organizational movement and the demand to keep pace with that innovation? How do you manage the balance?

Speaker 2:

Yeah, this might be a disappointing and slightly pedantic response, but I kind of go back to standard program management protocols when I think about this. Innovation for the sake of innovation is fun, but not super useful. So, in our space specifically: what is the sandbox we're allowed to play in? What are the guardrails we have to stay within so we don't get ourselves into trouble with the regulator or create any unnecessary risks for our customers? And then, what is the primary use case for that innovation and what is the value add? Is it 10x? Is it 100x?

Speaker 2:

Then let's talk about what that actually means and what that actually looks like. And then let's talk not just about the before-and-after comparison of whatever the innovation is going to be, but about what it does for the future state. What are the other knock-on effects, good and bad, for that organization, or for us, if we proceed down that road? I kind of run it through that little matrix in my brain: what's its purpose, what problem is it solving, and is the problem big enough to be solved this way? Am I taking a bazooka to go after a mosquito? Because then that's just comedy, right? That's hubris, and we'll see some of that.

Speaker 1:

The one thing I do wonder is this: every fall a new iPhone comes out, and there are people who will be in line at the Apple Store on the day of release. Then there are people who are on generation eight and they're perfectly fine, either because the tech is good and it works, I can make calls and I get texts, or because there's a cost barrier preventing upgrades. Where's the client base right now? Are they standing outside the Apple Store, or do they have flip phones and are saying, yeah, I'm good?

Speaker 2:

So interesting. It really depends on the client. If I look at the folks we spend the most time with, let's say mid-market fintechs and neobanks, they are a mix of the people standing outside of the iPhone store and their slightly jealous friend who's looking over their shoulder thinking maybe they should make a move. When we talk to older, more established financial services providers and financial institutions, there's a lot of knowledge and experience there, and they are much slower to move because the cost of the change can be, or is perceived to be, prohibitive. It's not that the new solution would cost more. It's that changing from the system they have is a high-risk move. When you're moving data from one old system to another, it can be quite challenging.

Speaker 1:

Got it. One thing I do have to ask: obviously financial regulations are continuing to evolve. Some would say rapidly, some would say too slowly, there are different camps on all of that, and we'll see what happens with new regime changes in presidential politics. But we know technology is changing. So where do you see the compliance landscape heading over the next five years?

Speaker 2:

Well, you make a really excellent point about the US specifically, because, if Trump is to be believed, the US might be moving in the opposite direction of many of its peer nations [inaudible] skills to build the kinds of regulatory frameworks, et cetera, that they need. My primary focus, or concern, right now is that I hear a lot of organizations saying, well, we're just going to wait and see what the regulator does. And I get that, it's a solid business choice, because why would you go and make an investment in something if you don't know you're actually going to be forced to do it? Then I think about it from the other side: if we agree that money laundering is actually an attack on a nation's sovereignty, which it is, right, if we look at Russia, China, Iran and the cyber activity we see there, then why can't the industry, why can't tier-one banks, credit unions, fintechs, crypto and DeFi, as their own communities, move in lockstep on some very simple, very low-cost improvements that would strengthen the integrity of the financial system overall?

Speaker 2:

They don't need to be told by a regulator to do it. If they agree among themselves that there are some things they could be doing better, for example, if everyone agreed that, yes, they would do adverse media screening at onboarding, then it's not a competitive situation, right? We don't end up in the "it costs us more to onboard than it costs you to onboard" kind of discussion. And no one ever really talks about the expense of offboarding a client, especially after they've been found to be laundering money through your organization.

Speaker 1:

So, sadly, Jennifer, we've come to the last question of the podcast. We call it the trend drop. It's like a desert island question. If you could only watch or track one trend in regtech and AI, what would it be?

Speaker 2:

Oh man, the regulatory thinking around how data is being used to perform AML risk assessment and how it may or may not be in conflict with privacy law in some jurisdictions, and how will we resolve that?

Speaker 1:

Well, as someone who's in marketing, we talk about privacy laws every day.

Speaker 2:

I bet you do. Yeah, because some of that bank data, if you're looking at a bank and you're in marketing, they've got tons of information. They can share a sliver of it with the marketing team, right, and that's appropriate.

Speaker 1:

And even, as you were saying before, in terms of how certain transaction data is getting processed, I'm like, oh my God, this is PII.

Speaker 2:

Yeah, sorry. And often organizations will say, oh, we can't do that, it's a privacy issue; oh, we can't share that information, it's a privacy issue. They really need to go talk to their legal team, because often it isn't a privacy issue, it just feels like one. So you've got to check your facts.

Speaker 1:

Well, that's good advice for any listener, right? Bringing in an angle we've never discussed before. So, Jennifer, I want to thank you so much for your time and your insight. I really enjoyed our conversation.

Speaker 2:

Thank you so much for having me and thank you for letting me ramble on about AML.

Speaker 1:

Thank you so much. Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow – Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll see you next time.