
Trading Tomorrow - Navigating Trends in Capital Markets
Welcome to the fascinating world of 'Trading Tomorrow - Navigating Trends in Capital Markets,' where finance, cutting-edge technology, and foresight intersect. In each episode, we embark on a journey to unravel the latest trends propelling the finance industry into the future. Join us as we dissect how technological advancements and market trends unite, shaping the strategies that businesses, investors, and financial experts rely on.
From the inner workings of AI and ML to the transformative power of blockchain technology, our host, James Jockle of Numerix, will guide you through captivating conversations with visionaries who are not only observing the future but actively shaping it.
Agentic AI Shaping the Future of Capital Markets
Agentic AI is moving from buzzword to experimentation in capital markets, but is it ready for large-scale adoption? In this episode of Trading Tomorrow – Navigating Trends in Capital Markets, host Jim Jockle speaks with Kieran Garvey, Head of AI Research at the University of Cambridge’s Centre for Alternative Finance, and Dr. Prateek Gupta, a postdoctoral researcher at the Max Planck Institute for Human Development in Berlin. Together, they explore what sets agentic AI apart from generative AI, why financial services demand near-perfect reliability, and how accountability and regulation must evolve.
Welcome to Trading Tomorrow – Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Within financial services, agentic AI has rapidly shifted from speculative theory to concrete experimentation. Today, we see early deployments in areas like algorithmic trading, automated compliance and real-time fraud mitigation. Yet while excitement builds, so does skepticism around the overuse of the agentic label as a marketing buzzword. Still, the potential is undeniable. Agentic systems are being woven through front-, middle- and back-office workflows in finance, enabling smarter, faster and more autonomous decisions. To help us unpack what agentic AI really means and where it's heading, we're joined by two leading voices who co-authored a piece on this topic for the World Economic Forum and who deeply understand both the theory and the operational realities of finance.
Jim:First we have Kieran Garvey, head of AI research at the University of Cambridge's Centre for Alternative Finance. Kieran has over 15 years' experience at the intersection of fintech, AI and regulatory innovation, having helped build technical capabilities for more than 300 financial authorities across 120 countries. He's designed machine learning programs for quant researchers, holds degrees from Imperial College and the LSE, and has published widely on agentic AI and finance. Joining him is Dr. Prateek Gupta, a postdoctoral researcher at the Max Planck Institute for Human Development in Berlin. His work dives deep into how AI systems, especially large language models, can help us understand complex systems, from the physical world to social dynamics. He earned his doctorate at the University of Oxford, where he focused on machine learning and combinatorial optimization, and has worked with organizations like DeepMind and Mila on projects spanning climate negotiations, pandemic response and mathematical discovery. Together, they'll help us unpack what agentic AI means for the future of finance, and whether the era of truly autonomous financial systems is already here. First of all, Kieran, Prateek, thank you so much for joining us today.
Kieran Garvey:Great to be here.
Dr. Prateek Gupta:Thanks so much for the invitation.
Jim:Yeah, so just to kick us off: how would you define agentic AI, and how is it fundamentally different from generative AI?
Dr. Prateek Gupta:So, going back to machine learning before the LLM era, the term generative modeling was used for modeling how data was generated. There has been a whole lot of work in that area. It goes back to when people were working with probabilities and started building statistical models, and then the machine learning question became: given this data, how can you model it so that you can generate more of it? That's where the term generative AI comes from. When all these models started appearing, like diffusion models, which take in text and generate images, or LLMs, which take in text and generate more text, these were all generative models. They were trained on lots and lots of data, and their job was to generate similar data.
Dr. Prateek Gupta:And that's all generative AI is: content generation. But if you take this generative AI and give it lots of tools and do a lot of context engineering, it becomes agentic, because then it can act on the environment. For example, if you give it access to some functions, say, call this API and get the weather for some area, it will respond in that format, and then you can call those tools, give it that information, and ask it to do more and more. So agentic AI is basically giving tools to generative AI and letting it work on the objective you give it.
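The tool-calling loop described above can be sketched in a few lines of Python. This is an illustrative mock, not any vendor's API: the `fake_model` stub, the `get_weather` function and the message format are all invented for the example, with the stub standing in for a real LLM call.

```python
# Minimal sketch of the agentic loop: a generative model plus tools.
# Everything here is a mock; real systems put an LLM API behind fake_model.

def get_weather(city: str) -> str:
    """A stand-in 'tool' the agent is allowed to call."""
    return f"18C and cloudy in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Pretend LLM: requests a tool on the first turn, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "London"}}
    return {"answer": "It is " + messages[-1]["content"]}

def run_agent(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        step = fake_model(messages)
        if "answer" in step:                         # the model is done
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # act on the environment
        messages.append({"role": "tool", "content": result})

print(run_agent("What's the weather in London?"))
```

The loop is the whole trick: the model's output is either a final answer or a request to run a tool, and the tool's result is fed back as context for the next turn.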
Jim:So what capabilities can agentic AI bring to financial services that gen AI can't, and where do you see the biggest potential breakthroughs happening?
Dr. Prateek Gupta:So I think at the moment we're still in a really early phase of lots of POCs, lots of testing, lots of experimentation.
Dr. Prateek Gupta:So I think where we're going to see the biggest impact is where you have lots and lots of rule-based processes and you can combine those traditional kind of robotic process automation-driven tasks and combine that with more kind of fluid and more flexible agentic processes. I think where the real value is going to be is where you're able to stack lots of kind of deterministic step-by-step processes that have been clearly defined and chained together, and then how can you use agentic tooling to be able to interact with all of those different processes, and that, I think, is well-placed for agentic AI at the moment to be able to interact with all of those different processes, and that, I think, is well-placed for agentic AI at the moment to be able to deliver value. I think over time, as these systems mature and become more reliable, then they will themselves be able to do those end-to-end processes reliably. But I think at the moment we're going to really see a lot of value where you've already got those step-by-step processes clearly mapped out.
Jim:So, in terms of these pilots in areas like trading, fraud or compliance, how close are we to moving from testing to truly large-scale deployments in the capital markets?
Kieran Garvey:So I think there's definitely a lot of interest, obviously a massive amount of hype, and lots of people experimenting and trying out different approaches.
Kieran Garvey:I think it's still going to take a long time before that turns into fully scaled, deployed applications. There was an article a couple of weeks ago from MIT showing that something like 95% of gen AI pilots were not delivering the value that was expected, and I'd expect agentic AI to probably do even worse than that, given that it's less mature than a lot of the gen AI applications. So you have lots of businesses experimenting with different agentic applications, but it really is still early days. I was talking to a leader within one of the big tech companies, and they could identify only a single top-tier bank that had really scaled up agentic applications. Everyone else is in a similar boat of seeing what these capabilities can do, where they work and where they don't, how you can replace existing processes, and whether they deliver reliably and in a way that really creates value. So I think it's still early days.
Dr. Prateek Gupta:It's probably good to understand the reasons these agentic systems are not reliable. The industry talks a lot about how these systems are maybe 99% reliable, but financial services have a very high bar for reliability, which makes it even harder for financial services to adopt agentic AI in its full flavor. The failure modes of agentic AI are really hallucinations and the agents drifting off their given objective, and because these are generative models, there are no hard guarantees on what they can and cannot output. That has been the major concern for any application that wants 100% reliability, which wasn't really the case with rule-based systems that were hand-coded by software engineers, where you can get guarantees on what they can and cannot do. So those are the failure modes of agentic AI that are probably inhibiting a lot of adoption in the market.
Jim:You suggested agentic AI can make financial services more accessible. Can you walk us through an example of how this technology might leapfrog traditional banking infrastructure?
Kieran Garvey:So we came across a nice case study a few months back from a company operating in North Africa called Sowit, which works with farmers to use different types of technology to help improve efficiency in an agricultural context.
Kieran Garvey:One way an agentic AI system could be applied is people using mobile phones to take pictures of fruit trees to determine and predict the yield, or the future yield, of, say, a mango tree. From that, you can price the loan the farmer might be able to get in order to purchase future agricultural inputs, or price agricultural insurance to hedge against the risk of some kind of weather event that may damage the crop. That process of taking the picture, assessing the yield, feeding that into your financial product, and then having it provided to you through, say, a chat interface, where you can discuss what your options might be and get suggestions or nudges on the actions you could take: this type of thing, I think, could be really interesting in terms of the processes and tooling that agentic AI could eventually be reliably applied to.
Jim:I hope my insurance company doesn't take a picture of me, because I think my life insurance premiums would go through the roof. So, one of the things: when we talk about AI, and especially the introduction of agentic AI and process automation, the conversation that always comes up is whether humans are going to get replaced. Obviously, certain roles are at risk. Which roles in financial services do you see at risk at this point in time? Who should be concerned?
Kieran Garvey:So I guess, yeah, there's been lots of interesting research published lately trying to estimate the impact that gen AI, and ultimately agentic AI, is going to have on different jobs, tasks and roles, and it is very unpredictable.
Kieran Garvey:But I think you can be pretty safe to say that anywhere you've got repetitive processes, for example repetitive communication, so maybe things to do with sales or consistent, repetitive marketing messages, that could be an area under challenge.
Kieran Garvey:And where you've got repetitive processes, filling in forms or scheduling, for example, these types of tasks might be areas that are at risk. So, yeah, where you've got routine information-processing tasks, that is definitely something AI is very well placed to replicate. But then there's a whole world of new roles that are going to be required to manage multiple agent work streams, all of the compliance and regulatory roles that are going to change as a result of the use of AI within different financial processes, and people needing to develop new skill sets to interpret and understand how these models are making decisions, what the implications are, and how that intersects with different regulatory requirements. I think that's definitely going to be a big growth area. Prateek, I don't know if you've got any other thoughts?
Dr. Prateek Gupta:Maybe the way to think about it is this: one category is routine tasks. People who do routine tasks in the industry are at risk of getting replaced. But there are also tasks which require a lot of efficiency and a lot of processing of information, and agentic AI tools are really good at this; they can do it at scale. So jobs which require ingesting and analyzing massive amounts of information, that's where agentic AI will have the most impact, and people already working in those kinds of jobs are at risk of getting replaced.
Dr. Prateek Gupta:One example is customer service. Let's say I call somebody and I'm asking about lots and lots of products. The customer service agent has to pull the information from all their databases and read through it before they can respond. But that's basically what agentic AI does: it has lots and lots of information that it has ingested, either during training or at the time you prompt it through context engineering, and then it answers your question. The time to respond to your question goes way down. Of course, there are still concerns about hallucinations, about it going off course, and about putting guardrails on what it can do. But if those can be solved, I think that's the first place where agentic AI systems can actually take over.
Jim:You know, it brings up the question of accountability. It's an interesting debate now with autonomous cars as to who is responsible if there's an accident. Is it, you know, Tesla, or is it the person behind the wheel? And that's creating new questions around not just accountability, but also legal and insurance implications, et cetera. So, coming back to financial services and capital markets, how do we ensure accountability when a decision goes wrong? And if something does go wrong, who's going to be responsible?
Kieran Garvey:So I think, as you say, that's kind of in the process of playing out.
Kieran Garvey:I suppose there are traditional approaches to look at that: if you're using, say, third-party software and that software causes your institution to have some compliance issues, then I think there'll be some precedent there that would be applicable. But where AI models are newer is the fact that so many of these vendors are trying to keep up with the latest developments and incorporating new AI models into their systems, and sometimes the new capabilities that are introduced are not clear. So if there's a rush to introduce these new capabilities as quickly as possible, and it's not clear how that will change the behavior of the system you're then providing to financial institutions, then from the perspective of regulators and financial authorities this is a big new risk area, one that is going to create a lot of cases in the courts, no doubt.
Jim:Prateek, any thoughts here as well?
Dr. Prateek Gupta:I mean, I have more of a researcher perspective. In research, at least at machine learning and deep learning conferences, if you submit a paper you have to include a broader impact statement at the end of your work. The purpose of that is to make researchers think about the positive and negative consequences of their research, in the short term or the long term. So that's one way we account for the things we produce in research. Of course, the human still remains the last line of defense in this whole process, and whoever deploys the system, and whoever designed it, are probably the most accountable in this whole process. But that's hard, because there is no chain that can actually backtrack to the source.
Jim:Now, you've also said we need a human-above-the-loop approach. Can you describe that a little bit? What should that oversight model look like in practice, and who should be setting the ethical and regulatory boundaries for agentic AI?
Kieran Garvey:So yeah, I think that, as we've said, is playing out at the moment. Lots of different countries are working out their approach to AI in general; AI is a really cross-cutting issue that cuts across all the different sectors of society. But specifically within financial services, all of the financial authorities around the world are working out how they're going to respond to the new capabilities, the new risks, the new things that are coming into financial services. I think that's going to take a long time to play out over the next few years, set against how quickly the technology is evolving. This is a real challenge: innovators have always moved more quickly than financial regulators, but the speed at which the capabilities of different AI tools are emerging is just going to create a bigger gap there.
Kieran Garvey:And then, in terms of a human above the loop and incorporating that into these processes, I think that goes back to what I was saying about the new skills and capabilities of compliance officers, of the people who are responsible for ensuring that they are complying with regulatory processes.
Kieran Garvey:They're going to need to learn how these systems work, what the capabilities are and where the new risks are emerging, similar, I guess, to regulated persons who need to sign off social media posts for different financial products before they go out.
Kieran Garvey:There's going to be a lot of work for businesses just to understand who is going to be responsible, sitting in that process where the key risk points are, and understanding the outputs and explanations coming out of these models, which is very challenging in itself. So I think it's something that is going to keep evolving, and it's going to require a lot of upskilling and changed capabilities for people who have traditionally been responsible for compliance processes. And I guess, coming back to the automation point, as people change roles, those who do have deep technical skills may end up taking on more of a compliance role, because they are able to understand the outputs of these systems. They're very well placed to understand the key points where new risks are emerging and how that then fits into the emerging regulatory compliance processes.
Jim:So, you know, we've talked a little bit about what seems overarchingly negative: compliance, controls, human oversight, et cetera. But what excites both of you most about the potential of agentic AI, not just in finance, but in the way it might change how we work, learn or even relax?
Dr. Prateek Gupta:The whole access to a very good educational resource. I can learn anything, anytime, just by brainstorming with ChatGPT. I don't completely rely on it, but I can actually make good progress on things which would probably be inaccessible to me at my current stage of understanding. With the help of ChatGPT, or LLMs in general, I can ask them to break something down to a very basic level and then move further in that direction. So that's something which is really exciting for me.
Kieran Garvey:I think it might just bring down the barriers to being able to build and create new things in the world, and I think that is really interesting in terms of new products and new solutions, enabling more people to have the tools to turn their ideas into reality. That's interesting in its own right. On one side it's going to create a world of really interesting new things, but also, potentially, a world full of mess and graffiti, of people being able to create anything, which we're already seeing the beginning of. We'll probably end up with both.
Jim:Well, unfortunately, gentlemen, we've come to the final question of the podcast, and we call it the trend drop. It's like a desert island question: if you could only watch or track one trend in agentic AI, what would that be? Prateek, why don't we start with you?
Dr. Prateek Gupta:Yeah, sure. So I think one of those will be agent-to-agent protocols. There has already been some work in this direction, but we need a way to assess whether an agent is doing a good job or a bad job, whether it's a reliable agent or not. So there has to be some sort of reputation. If you imagine the internet, there are lots of humans on it right now, and we want these agents to be able to go around the internet and interact with everyone. But we can't really do that yet, because these agents still have the problem of going off the guardrails. So we need a way to assess whether a particular agent is doing what it's supposed to do or not; we need protocols which can actually assess an agent's credibility.
Jim:So I'm going to ask a quick follow-up there, and I know that's outside the zone of the trend drop, so apologies to the listeners. In one of our conversations we were talking about trading, and what this gentleman was suggesting was that understanding the way the machine is thinking sometimes degrades the output, right, in terms of it trying to explain itself in a human interaction. So at what point do we just have to have blind trust that this agent is working the way it's supposed to, and perhaps let it go?
Dr. Prateek Gupta:So you don't really have to ask the agent; you have to look at its output. You can just let it run 100 times and check how many times it has actually done what it was supposed to do, and then eventually, by the rule of evolution, you only select the fittest. So if you take reliability as the fitness criterion, you only keep the reliable agents in the market. You don't have to ask the agent what it's thinking.
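The selection process described above, run an agent many times and keep only those whose measured success rate clears a bar, can be sketched in a few lines of Python. The `run_agent` stub, the task string and the 0.85 threshold are all invented for the illustration; a real harness would invoke an actual agent and a task-specific checker.

```python
import random

def run_agent(task, seed):
    """Stand-in for invoking an agent on a task; this mock 'succeeds' ~90% of the time."""
    rng = random.Random(seed)          # seeded, so each trial is reproducible
    return rng.random() < 0.90

def reliability(agent, task, trials=100):
    """Empirical success rate: run the agent many times and count correct outcomes."""
    successes = sum(agent(task, seed) for seed in range(trials))
    return successes / trials

# Keep only agents whose measured reliability clears the bar (hypothetical threshold).
THRESHOLD = 0.85
score = reliability(run_agent, task="book a flight", trials=100)
surviving = score >= THRESHOLD
print(f"reliability={score:.2f}, kept={surviving}")
```

The point is that the fitness signal comes from observed behavior over repeated trials, not from asking the agent to explain itself.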
Jim:You have to see what they're actually doing. Got it, okay, cool. So I'm going to come over to you, Kieran, on the trend drop: the one trend that you're keeping an eye on?
Kieran Garvey:So I think around the world you've got lots of different countries introducing approaches to open banking and open finance, which is basically enabling customers to give permission for their financial data to be used in different products. And I think that is kind of the underpinning plumbing for agents: an agent being able to access and use your financial data to do different things, the classic "book me a holiday" type scenario, automatically. In order to enable that type of capability, open finance and open banking need to be rolled out more widely than they currently are. But we're seeing that trend take hold, with almost 100 countries globally now introducing approaches that enable customers to share their data with other financial institutions or agents, and I think that's going to be a really important foundational layer that unlocks a lot of these capabilities in the context of financial services.
Jim:Well, Kieran, Prateek, I want to thank you so much for your time and your insight. It was a pleasure chatting with you today.
Kieran Garvey:Same here. Thank you very much.
Dr. Prateek Gupta:Yeah, thanks so much for having us.
Jim:Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow – Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll see you next time.