Trading Tomorrow - Navigating Trends in Capital Markets

AI Regulation and the Finance Industry

Numerix Season 2 Episode 13

As AI continues to reshape the financial industry, the call for robust, transparent regulation grows louder. How can we answer that call, emphasize AI’s compliance with existing laws, and understand the implications, all while continuing to promote quick innovation?  

In this episode of Trading Tomorrow - Navigating Trends in Capital Markets, Host Jim Jockle of Numerix is joined by Professor Michael Wellman, currently one of the most influential voices on AI regulation and Division Chair of Computer Science and Engineering at the University of Michigan. 

Speaker 1:

Welcome to Trading Tomorrow - Navigating Trends in Capital Markets, the podcast where we deep dive into the technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics, from the transformative power of blockchain in secure transactions to the role of artificial intelligence in predictive analytics. We're here to ensure you stay informed and ahead of the curve. Join us as we engage with industry experts, thought leaders and technology pioneers, offering you a front-row seat to the discussions shaping the future of finance, because this is Trading Tomorrow - Navigating Trends in Capital Markets, where the future of capital markets unfolds.

Speaker 1:

It's now 2024, and AI continues to dominate as the hottest topic in technology trends. It's permeating every industry, including finance. But while this fast-moving and innovative technology is creating a lot of answers to problems, it's also creating a lot of answerless problems, especially around regulation. As AI continues to reshape the financial industry, the call for robust, transparent regulation grows louder. So how can we answer that call, emphasize AI's compliance with existing laws, and understand the implications, all while continuing to promote quick innovation? Joining us today to discuss is Professor Michael Wellman, Professor and Chair of Computer Science and Engineering at the University of Michigan. Professor Wellman has a long history working with AI. He earned his PhD on the subject from MIT in 1988 and has since spent his career researching the field. Known for his work at the intersection of AI and economics, Professor Wellman has served on advisory committees for the US Treasury's Office of Financial Research (OFR) and the Financial Industry Regulatory Authority (FINRA). He was even chosen to testify to the Senate on artificial intelligence in financial services. Michael, thank you so much for joining us today.

Speaker 2:

Very happy to be here.

Speaker 1:

I first want to say you argue there's no way to know where AI will be in 10 or even five years. Can you explain?

Speaker 2:

Yes, you know, AI has surprised us in the past and it's very likely to surprise us in the future. The remarkable success of ChatGPT was not a surprise in the sense that people knew we could build chatbots, but what was surprising was how good it got once it reached a certain threshold of size and got combined with the new technology of reinforcement learning from human feedback. People understand how it works, but did not realize that it would have such a high degree of capability and be able to perform so many different tasks. That surprised even experts in the field. Another thing about AI is that it does not necessarily follow simple trends. We can't just extrapolate from past performance increases. It tends to be subject to sudden bursts of capability, and you can't predict those, because maybe it seems like you've done something really new and the next thing is right around the corner, but then you hit a ceiling, some roadblock, that maybe was unanticipated.

Speaker 1:

What does this mean for the financial industry? How can we leverage and regulate a technology that's so unpredictable?

Speaker 2:

Well, I think for regulation it would be prudent to just expect that it's going to get better and increase in capability, and be prepared for that, basically to mitigate risk. You prepare for what you could potentially foresee, even if it may not come to pass. In terms of adopting the technology, you don't have to predict the future for that. You can just really focus on assessing what the current capability is and how that can provide value right now.

Speaker 1:

So perhaps you can discuss some examples in which AI has benefited the financial markets.

Speaker 2:

So of course, AI has been employed in finance for quite a while now. Machine learning has been used in credit authorization and assessment, and, of course, algorithmic trading over the past 10 or 15 years has really come to dominate the markets. That's not necessarily all AI, but AI has really facilitated its rapid development and deployment. In a lot of ways those developments have been really positive. Making markets electronic and being able to deploy market makers into farther reaches of the space of financial markets has arguably helped efficiency, and certainly automating workflows and processing lots of information can make markets work better.

Speaker 1:

Well, let's talk a little bit about a risk: market manipulation. You've previously and publicly discussed the risk of market manipulation through AI, so perhaps you can give us a little more insight on some of your past statements, and also how regulatory bodies can detect and prevent AI-driven market manipulation.

Speaker 2:

So market manipulation has also been with us for a long time and, just as with any other practice, those who would try to manipulate or be misleading in markets will avail themselves of whatever tools there are, and if AI can help them do that job better, they will do so.

Speaker 2:

I think the risk for AI and market manipulation is that it could really make market manipulation much better and harder to detect. We can also, of course, use AI on the regulatory side for much better surveillance, using machine learning to try to understand the patterns that are associated with manipulative activity. But that kind of creates an arms race between the surveyors, the detectors, and the would-be manipulators. This arms race is really no different from the fake news or misinformation arms race on social media and in other kinds of media. Market manipulation, after all, or at least many forms of it, is really just another kind of misinformation. So how this arms race will play out, who's going to win in the end, is kind of indeterminate. There are reasons to think that in the long run, once AI is being used by the manipulators, they'll be able to evade any new detection scheme, so we can't really rely on AI alone to save the day and prevent market manipulation.
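To make the surveillance side of this arms race concrete, here is a toy sketch (my own illustration, not something from the episode) of one simple detection heuristic: flagging accounts whose order flow resembles spoofing, that is, a very high fraction of orders cancelled before execution. The account names, histories, and the 0.8 threshold are all invented for the example; real surveillance systems combine many such features in machine-learned models.

```python
def cancel_ratio(orders):
    """Fraction of submitted orders that were cancelled rather than filled."""
    cancelled = sum(1 for o in orders if o == "cancel")
    return cancelled / len(orders)

# Hypothetical per-account order histories (toy data).
accounts = {
    "acct_1": ["fill", "fill", "cancel", "fill"],
    "acct_2": ["cancel"] * 18 + ["fill", "fill"],  # spoofing-like pattern
}

# Flag accounts whose cancel ratio exceeds an (invented) threshold.
flagged = [name for name, hist in accounts.items() if cancel_ratio(hist) > 0.8]
print(flagged)
```

A real detector would of course be a statistical model trained on labeled manipulation cases, not a single hand-set threshold; the point is only that manipulative behavior leaves statistical footprints a machine can look for.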

Speaker 1:

And from your perspective, is this just foreign actors that would get into this game, or could market manipulation also happen by accident, if you will?

Speaker 2:

No. Just as today, there are many intentional actors, and they're domestic, that try to get an edge on markets through manipulative activities. The parties don't necessarily need to change; I'm sure there are plenty of them right here. But the other possibilities that you raise are also issues. If a foreign actor, say even a nation state, wants to subvert markets, well, AI could potentially be a great tool for harming markets. And accidental manipulation is also an issue, because when you deploy algorithmic trading, especially with machine learning, you may not really understand what your strategies are doing.

Speaker 1:

Well, and it's an interesting time, because obviously the equity market has been doing algorithmic trading for years, but the electrification of markets, like the bond markets and even derivatives markets, is still in its nascent stages. So it almost seems that AI might be an accelerant to moving away from the voice brokering of the past.

Speaker 2:

Yeah, I mean, once the market goes electronic, it becomes easy to deploy your algorithms in those places right away, and you could take the same ideas that you developed for trading in equities markets and apply them, maybe with some alteration, to other instruments and other venues. What the new brand of AI, large language models, opens up through the language channel is maybe even participating in markets that are not fully electronic, that still have language communication in them, because that's what's really new about the latest wave of AI: the ability to potentially interact through natural language.

Speaker 1:

Well, and one would argue, I guess, that AI has been around forever at this point, probably both of our lifetimes, but it's the accessibility and ease of integration within tools, whether in your browser or on your phone, that has been part of the accelerant with ChatGPT. I mean, would you agree that integration is part of the acceleration, or is there another catalyst? Why AI, why now?

Speaker 2:

Well, I think it's the tremendous increase in capability, and now there's a rush to bring tools to everyone, including general tools that anyone can use on their desktop, as well as specialized tools for certain parties. I think there are forces in both directions. That kind of levels the playing field by making certain capabilities available to everybody, but to the extent that the very best AI will depend on who has access to the largest bodies of training data and specialized information, it could also concentrate capabilities, and there could be tremendous advantages in trading for parties that have access to large bodies of non-public information.

Speaker 1:

Have you thought about that in terms of the size of institutions? Obviously, so many of the largest players, as we know, within the bulge bracket all have AI pilot programs, but obviously there are smaller asset managers too. I'm sure the data companies themselves are going to have protections around how their data is fed into training models. Are you concerned that it could create a disparity, where the big only get bigger and the smaller are at a disadvantage?

Speaker 2:

I think that's something we should be concerned about. In particular, there are certain firms that have access to a lot of our private information, that maybe they collected for various reasons, but if that could become an asset for advantage in financial trading, we should be thinking about whether that can be deployed or not, and whether it should be regulated.

Speaker 1:

So, speaking of regulation, obviously right now there's tons of legislation flying through Congress. Nothing has been passed. You know, typical Congress. But in terms of some of the legislation you're seeing, and regulatory initiatives specific to market manipulation, how confident are you that we can legislate our way into protecting the markets?

Speaker 2:

Yes.

Speaker 2:

Now, of course, there may be many areas where existing regulation already covers fraudulent behavior in an appropriate way, but there's also the potential for new AI loopholes.

Speaker 2:

The current laws and most regulations are built under the assumption that it's human beings making the decisions, and when it's computers making the decisions, it could be that there's something about the way the regulations are written that lets them get around it. One example is in the area of market manipulation, where a lot of the existing rules promulgated by the SEC and through Dodd-Frank in the United States rely on determining the intent of a trader when they put orders into the system. Do they intend to really trade, or are they just there to mislead? Well, there could be some question about how you judge intent when it comes to a computer program. In particular, if your computer program, that is, your trading strategy, was generated through machine learning, that might seem to provide some kind of deniability as to intent. Now, that's a loophole, I think, and actually one of the pieces of legislation that Senator Warner and colleagues just filed, the Financial AI Risk Reduction Act, does attempt to close that particular loophole.

Speaker 1:

So basically you're suggesting that AI can potentially learn to manipulate markets on its own, independently.

Speaker 2:

That's right. We've actually demonstrated that in our own research, and I wouldn't say it's a huge surprise. We basically set up a system where we told a trading agent to maximize its own profits, in an environment where it held derivatives whose value depends on a pricing benchmark. It learned to manipulate the primary market to move the benchmark, such that it made additional profit on the derivatives, even though we didn't tell it to do that specifically. It basically learned to sacrifice profit in the primary market in order to make more on the derivatives. That would be considered a kind of manipulation that was not directly programmed into the system but was rather learned automatically.
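The incentive the agent discovered can be shown with a stylized model (my own toy construction, not the actual research code): a trader long a derivative that pays off on a benchmark can profit overall by buying at a loss in the primary market to push the benchmark up. The linear-impact and quadratic-cost assumptions, and all parameter values, are invented purely for illustration.

```python
def profit(q, notional=50.0, impact=0.5):
    """Total P&L from buying q units in the primary market while holding a
    derivative that pays `notional` per unit of benchmark move.
    Toy assumptions: linear price impact, quadratic trading cost."""
    benchmark_move = impact * q            # buying pushes the benchmark up
    primary_loss = 0.5 * impact * q * q    # cost of walking up the order book
    derivative_gain = notional * benchmark_move
    return derivative_gain - primary_loss

# A naive profit-maximizing search "discovers" manipulation on its own:
# trading q > 0 loses money in the primary market but pays off overall.
best_q = max(range(0, 201), key=profit)
print(best_q, profit(best_q), profit(0))
```

With these parameters the search settles where the marginal derivative gain equals the marginal primary-market cost, which mirrors the episode's point: nothing in the objective says "manipulate", yet sacrificing primary-market profit to move the benchmark is the profit-maximizing behavior the learner finds.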

Speaker 1:

Now, in my firm, we specialize in quantitative finance. We're building all the craziest models and calibrations of market data; you name it, we've seen it. I mean, compared to, let's say, modern quantitative finance, where is the AI now? Is it getting into determinations on second-order Greeks, or is it really, you know, a two-year-old who's really smart?

Speaker 2:

Well, I think the places where it's really smart are where the smart people who already know the technical stuff are developing special applications that incorporate their own knowledge. For the systems that just learn everything from scratch, it will probably be a while before they arrive at the most sophisticated strategies themselves. On the other hand, they have an ability to take into account new kinds of information and features that maybe the human specialists have not figured out.

Speaker 1:

So obviously, when we talk AI, the companion topic is ethics. From your perspective, how can financial institutions make sure their AI systems are ethically designed and free of biases?

Speaker 2:

I think it requires vigilance and continued scrutiny, as well as auditing and surveillance of one's own systems, to make sure that they really are doing what you intended them to do. The hope is that as AI develops, we will also develop new ways to evaluate these systems, test them thoroughly, and try to make sure that they are behaving in accord with not just the rules and the laws, but our values. But it's going to be hard to prove beyond any possible doubt that they always do.

Speaker 1:

In terms of managing bias, what has to be eliminated from the training data to actually do that? Obviously, we live in a GDPR world globally, with different regimes, and there's plenty of personal information out there. Is it about constraining what the AI learns in order to mitigate bias? I'm just curious how that works.

Speaker 2:

I think there may be certain pitfalls you can avoid by cleaning and vetting your data better, but there's always the possibility of accidentally stumbling upon some kind of correlations that you don't realize and exploiting them in improper ways. So here's another area where you will need maybe third-party evaluations and various kinds of bias-identification tests, or other kinds of things, to try to assign a clean bill of health. Again, it's a moving target. The technology changes, the data environment changes, and it'll need to be done continually. But I think we need standards in this area, and probably the development of a lot of third-party expertise that specializes in this: certification of machine learning systems.
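As one hypothetical example of the kind of bias-identification test a third-party auditor might run, here is a minimal demographic-parity check on a credit model's approve/deny decisions. The group data and the 0.1 threshold are invented for illustration; an actual certification regime would apply many such metrics under an agreed standard.

```python
def approval_rate(decisions):
    """Fraction of approvals in a list of decisions (1 = approve, 0 = deny)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in approval rates between two applicant groups."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

# Toy model decisions for applicants from two groups (hypothetical data).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = demographic_parity_gap(group_a, group_b)
flagged = gap > 0.1  # threshold would be set by a standard or auditor
print(f"parity gap = {gap:.3f}, flagged = {flagged}")
```

Demographic parity is only one of several competing fairness criteria (others condition on true outcomes, like equalized odds), which is part of why the speaker's call for standards matters: the metrics themselves require agreement.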

Speaker 1:

So one last question specific to financial institutions. Obviously, I think every industry is exploring the use of AI, and obviously we can't predict where AI is going, but where do you see institutions focusing right now? Is it productivity of employees? Is it advanced solutions like algorithmic trading? Where are organizations going to realize the most immediate return on their AI investment over the next 12 months?

Speaker 2:

So I think we're still in this phase where, whatever you're doing, you're wondering how AI is going to improve or affect what you're doing, and so I think it's across the board. Now, it'll probably turn out that some of these potential uses will be more productive than others, but from what we know now, almost anything is worth exploring.

Speaker 1:

And just a last curiosity question. Obviously you're around AI on a daily basis, but how much do you use AI every day?

Speaker 2:

That's a good question. I try to purposefully avoid using the chatbots to write my papers or anything like that. That could be partly a generational thing, but also partly because I don't want to get sucked into relying on these tools. So I'm maybe a little bit careful about it.

Speaker 1:

And obviously you and I have probably both seen The Terminator in the movie theaters back in the day. Is there anything that keeps you up at night in terms of worrying about AI?

Speaker 2:

Yeah, I mean, it's not the Terminator scenario in particular, that sort of intentional threat. I think it's more about the capability increasing faster than we can understand it, and how to make sure that we really understand what objectives it's following and how to align them with our own.

Speaker 1:

That's a great answer. So we've made it to the final question of the podcast, which we call the Trend Drop. It's like a desert island question: if you could only track one trend in AI over the next year, what would it be?

Speaker 2:

So I think what I'm particularly looking for is the emergence of these kinds of new tools for evaluating and predicting the effects of AI, that is, evaluating new AI tools and understanding how well they work and what kinds of new trouble they could get us into. This development of third-party certifiers and standards: will it really emerge the way we need it to in the next year or so?

Speaker 1:

Well, Professor Michael Wellman, Chair of Computer Science and Engineering at the University of Michigan, I want to thank you. Any podcast where I can drop a Terminator reference makes me a happy man. Thank you so much for your time.

Speaker 2:

I enjoyed it. Thank you very much.

Speaker 1:

In past episodes we've discussed digitalization helping companies take their next steps into the future. One of the biggest points brought up in this process is the importance of having great talent to drive these initiatives. Next week, on Trading Tomorrow - Navigating Trends in Capital Markets, we speak with two hiring experts about how to attract and nurture top-tier Gen Z talent, the newest demographic to hit the talent pool. It's a conversation you can't afford to miss.