AI Today Podcast #007: AI in the FinTech Industry: Interview with Kumar Srivastava

Artificial Intelligence (AI) is making its presence felt in a wide range of industries. In this podcast, we interview Kumar Srivastava of the BNY Mellon Silicon Valley Innovation Center on how AI is impacting fintech and what the future holds for the industry.

Episode Sponsors:

For over 25 years, QS has been helping prospective MBA candidates just like you make informed decisions about choosing the right business school. At our upcoming Dallas event, you can meet face-to-face with admissions directors from top-ranked US and international business schools, including UT Austin, SMU, Rice, IE, Hult, and many more! You will also be able to participate in interactive GMAT sessions by GMAC – creators of the GMAT exam – apply for $7 million in scholarships, attend panels, and network with alumni and your future peers. Learn more about exclusive opportunities on the day by registering now to claim your free ticket at https://goo.gl/iRF9PR!

Show Notes:

___________________________________________

A transcript of the podcast is available below:

[00:00:22] This podcast is sponsored by QS. For over 25 years, QS has been helping prospective MBA candidates make informed decisions about choosing the right business school. At our upcoming Dallas event, you can meet face-to-face with admissions directors from top-ranked U.S. and international business schools – including UT Austin, SMU, Rice, IE, Hult, and many more. Find out more at topmba.com.

[00:00:50] Kathleen: Hello and welcome to the AI today podcast. I’m your host Kathleen Walch.

[00:00:54] Ronald: And I’m your host Ronald Schmelzer. Our guest today is Kumar Srivastava, VP of product strategy at BNY Mellon, based out of the Silicon Valley Innovation Center.

[00:01:04] Kathleen: Hello Kumar.

[00:01:05] Kumar: Hello. Hi Ron, hi Kathleen. Thank you for inviting me.

[00:01:08] Kathleen: We’re excited to have you on our show today. I’d like to get started by having you introduce yourself to our listeners. Tell us a little bit about yourself and what you’re doing in the field of AI.

[00:01:18] Kumar: Sure. So as mentioned, my name is Kumar Srivastava. I’ve been in this space for a very long time, actually. I started my career at Microsoft years ago, and at that point I was working on email spam detection using machine learning, and running that as a service at Hotmail scale, which had millions of users spread across the globe. That was my first experience and foray into running ML at scale – which has become a very, very big deal now, given the resurgence of AI and machine learning, and the need for it to be run at scale with a high quality of service. Since then I’ve been at various companies, large and small, mostly around the areas of big data analysis, machine learning and AI. Most recently I’ve been in the Silicon Valley Innovation Center at the Bank of New York Mellon. We’re looking at all sorts of technologies and capabilities, including machine learning and AI. But really, the goal has been to build customer-facing value through applications, using multiple technologies. And it just turns out that a lot of decisions and actions that customers, users and employees have to make can be helped, enhanced or predicted using machine learning and AI. So it becomes a really big part of what we are trying to build and what kind of innovation we’re trying to bring about in our attempts to create client value. And that’s what it is: it just becomes a really big part of any app that we build. There’s always this predictive component that helps users of that application make better decisions and take better actions.

[00:02:57] Ronald: Great. So, specifically within the fintech industry, we’ve heard that AI is being adopted in some very interesting ways. So talk about some ways in which artificial intelligence, and perhaps some of the related areas, are being adopted by the fintech industry; and different use cases and unique aspects of AI adoption in fintech, as differentiated from some of the other industries.

[00:03:11] Kumar: Sure. The basic purpose of the financial services industry is to help people – entities, organizations or individuals – make better decisions about their finances, and hopefully increase the amount of capital that someone has through investment decisions. So the entire process starts with a goal in mind, with a strategy in mind. The institution or individual can reach out to an advisor, so they might have to select the advisor that makes the most sense for the goal they have in mind. That advisor, together with the customer, comes up with some sort of strategy or goal that they want to achieve through investment decisions. Then those investment decisions are converted into actions. The actions are tracked and monitored, and the feedback loop goes back into, again: what is the current state? Are the goals achieved or not? And if not, what can be done to address that? So the entire industry really works around this. As part of these decisions, you are either trying to move capital around – which shows up as payments and payments technologies – or you are trying to select or make investment decisions and then carry them out, which is the buying and selling of securities.

So fundamentally, the whole industry is about making better decisions. And if you can leverage information that someone else might not have, and use that in your investment decision, you potentially have an edge. Because you are tapping a signal that is otherwise not available to someone else, which means that you have an advantage. And if you use that to make the best decision possible, you potentially will have a higher return than someone else who does not have that information. And that’s how investment managers work. It really comes down to finding these signals that exist, that other people or other institutions might not be looking at; using them to predict what will happen as a result of including that information in the decision; and then making the decision, assuming that you have better information. There’s a lot of potential for AI to collect all this information around us: it’s in the news media, it could be in earnings calls and reports, it could be in what’s published by the SEC. There’s information all around, and the question is: who can best collect, aggregate and analyze that information as fast as possible, to come up with that competitive advantage?

And that’s really where AI can fundamentally change how the financial services industry works. Then you can think about all the other operations – moving money around, moving securities around – and doing so with high quality. All of these sub-problems that exist in the industry can also be made better by ensuring that the required transactions are processed and completed with the highest quality. Which means you can predict failures, when a transaction might not go through. You can handle the regulations and compliance that are required in the financial services industry, like KYC or AML. You can use machine learning techniques to determine whether there are patterns that should increase the suspicion associated with a certain transaction or a certain entity. So there’s an application of categorizing, classifying and predicting things across the board, in that whole investment lifecycle that I just described.

But really, I think the main… What people have been doing – investment managers, investment advisors – for a very long time, is using information to come up with an investment strategy on behalf of a client. And that whole process can be automated. So you could potentially have a world where you don’t need investment managers as an intermediary, or you could have investment managers who are able to tap these signals and actually provide better service to their clients. Regardless of which side… I mean, there are new entrants in the market that are fully automating the process, but that comes at the cost of not having personalized service. On the other hand, there are institutions that are getting better at providing advice through the application of AI.

[00:07:00] Kathleen: Okay, piggybacking off that, I’d like to ask you what some of the challenges are that you’ve seen in adopting AI, specifically in fintech. Whether that’s a cultural challenge or the technology itself. Could you go into that?

[00:07:12] Kumar: Sure. I’d say it’s actually both, and a couple of other things. But the good thing about this whole increase in AI – the resurgence, the availability, and all the hype associated with AI – is that the industry is shaping up to be a very open industry. Meaning all the research institutions in the world, all the big tech companies involved, are extending the reach of the research. They’re putting everything out, they’re publishing everything; they’re putting datasets out, they’re putting models out. They’re providing the code itself. Now, the key things needed to innovate are the technology, the expertise that knows how to use the technology, and then the culture and the environment that enable that adoption and use. In terms of technologies, we have more than enough. There are multiple versions of similar technology provided by different institutions, so that part is great. So what the challenges really come down to is: do you have expertise in-house that not only understands the industry – and the fact that it’s changing at a really rapid pace – and has the aptitude to keep up with that change, but also has the understanding to leverage this stuff by applying it to the business problem at hand? And so the first challenge that comes up is: if you have enough brand recognition, or if you have enough money, you can attract the right talent. And depending on whether you already have enough problems to solve, between those few things you will find the people you need.

You’ll have to compete for the really good talent out there, but you can find it, so that’s a solvable problem. The bigger problem is connecting the technologists with the business domain experts, and transferring the domain problems into the technology realm. So that the data scientists getting involved in a project can actually understand what is being done and what they are being asked to solve, convert the business problem into a technology problem, and then act on it. Beyond that, they need to be able to understand the signals that are relevant in that space, and actually convert those signals into some sort of algorithm that can find the patterns – that predicts something, or that can classify something. So one challenge is connecting the domain knowledge with the technology. The other challenge is really the culture. This is sort of a chicken-and-egg problem. Everyone understands and knows, to varying degrees, that you should be investing in AI. A lot of the C-suite don’t understand what AI is, but they understand that it has the potential to change a lot of things, and that its impact is increasing. The problem is that actually adopting AI means changing the enterprise from the inside out. Every decision that is made could potentially be made with an appropriate AI that has been built for that decision. So the question is: can you identify these decisions, and can you actually convert that into a buildable AI model that can help with that decision-making?

The problem is that you have to start somewhere. And one thing that I’ve seen across many companies is: you try to start small, but in your attempt to start small, you start with a really inconsequential problem. Something that doesn’t really have impact – even if it were solved to perfection, it would not have enough impact for you to claim success or value, the ROI, from the technology investment. So when you confuse starting small with starting with an inconsequential problem, you end up with not enough value to get the second round of investment, or the broader application of that technology across the enterprise. A lot of enterprises are at that stage, where the answer is to start small, but start with a problem that is core to the business. Because without that, not only can you not show results, but you cannot get the investment required to continuously improve over time.

[00:11:00] Kathleen: Yeah, that’s a really interesting point that you brought up.

[00:11:04] Kumar: Yeah. That happens again and again: starting small is confused with starting with something that can happen in a corner of the enterprise, without really changing things or changing the status quo. And that hurts, because if you’re not building that expertise with AI with the long term in mind, if you’re not building it as a core competency, then eventually you will be forced to bring someone in from outside – you’ll have to outsource that competency. Which means you are not really controlling your destiny as an enterprise, or you basically get left behind while somebody else leverages this AI. I think the key point really is this: the only acceptable reason why there should not be AI involved in every decision that’s made in an enterprise, is that it’s going to take time to build the custom models for every decision. Any other reason – ‘we have decided not to focus on this area of the problem because it’s too core to our business, so we don’t want to change it’, or the culture problem, where someone says ‘no no, that’s going to change what my team looks like or what my annual goals look like, that’s why I don’t want to touch this, that’s why I don’t want to use AI there’ – these are real objections, but they can’t hold in the long run.

The reason why AI is going to be so big is not because it’s new or because it’s different. It has always been the case that decisions made with an information advantage are better. Traditionally, enterprises have collected information in different ways – that’s why we have enterprises going out and consulting experts and consultancy companies. That’s why enterprises subscribe to market reports and publications and whatnot. Because it’s all about information: information that can drive better decisions. AI is simply a piece of technology that has the ability to process a lot more information than a set of people could, and to do it in a faster, better way at higher quality – depending, you know, on the model and how you build it. So what’s new is not the idea; it’s really a better way of processing information, and using that in decision-making. And that has always been part of decision-making in enterprises, across humanity. That’s why this is so fundamental: because it is just a better way of making decisions. It has to be a core competency.

[00:13:17] Kathleen: Right. Yeah, and I wanted to just jump in here really quick, because you said: you have to start small, but don’t start with something that’s inconsequential. Ron and I brought this up in one of our podcasts about AI winters. You know, one reason that AI winters happened was that people over-promised and then under-delivered on the technology. I think that when people start with something that’s inconsequential, it under-delivers on their expectations. So enterprises and businesses really need to make sure that they are focusing on the right problems to solve.

[00:13:52] Ronald: Exactly. One of the things you mentioned earlier, when you were talking about some of the problem areas, is something that is somewhat unique to fintech, although other industries have it as well: fraud is such a core part of the business. Because we’re just moving bits around, and as you know, those bits represent actual money and assets and people’s livelihoods and business and personal wealth. So that’s a unique area where you can apply AI to a very specific problem, and look at fraud patterns and suspicious patterns. What have you been seeing done there with AI in particular? To be able to smell fishy transactions, or to spot things that just don’t seem like the kind of thing a normal person would do as part of their normal personal or business transactions?

[00:14:38] Kumar: Right. What I think is interesting is… I spent a lot of the Microsoft part of my career detecting malicious usage of the online properties that Microsoft had – basically people attacking or taking over identities, pretending to be someone else and then trying to use our services. It’s the same area, it’s just a different domain. And the one thing that works really well in the area of fraud – and I think this is really the pattern, and you have different flavors of this strategy – is that every machine learning or AI model will try to classify the transaction, or the entity behind the transaction, into three groups: positively bad, positively good, and then a mixed bag. And the way you can do that is by pulling in all the descriptive information about the entity, from internal sources and external sources. You can also establish patterns of their behavior. So you know, if a client transacts every day at a certain time, and then suddenly you see a transaction coming in at a different time, which doesn’t fit their pattern, that’s somewhat suspicious. It’s not a guarantee of fraud, but it’s suspicious.
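The three-bucket classification described here can be sketched with a toy behavioral signal. The function below is purely illustrative – the thresholds, the single time-of-day feature and the bucket names are my assumptions, not any real production system: it scores how far a transaction's hour deviates from the client's historical pattern and routes it to "good", "suspicious" (human review) or "bad".

```python
from statistics import mean, stdev

def classify_transaction(txn_hour, history_hours, z_good=2.0, z_bad=4.0):
    """Bucket a transaction by how far its time of day deviates from the
    client's usual pattern. A real system would combine many such signals
    (amount, counterparty, geography) -- this is one toy signal."""
    mu = mean(history_hours)
    sigma = stdev(history_hours) or 1.0  # guard against a zero-variance history
    z = abs(txn_hour - mu) / sigma
    if z < z_good:
        return "good"        # fits the client's established pattern
    if z < z_bad:
        return "suspicious"  # mixed bag: route to a human analyst
    return "bad"             # far outside the pattern: block or escalate

history = [9, 10, 9, 10, 11, 9]          # client usually transacts mid-morning
print(classify_transaction(10, history))  # fits the pattern
print(classify_transaction(12, history))  # unusual enough to warrant a look
print(classify_transaction(3, history))   # 3 a.m. is far off-pattern
```

Shrinking the "suspicious" band (narrowing the gap between the two thresholds without misclassifying) is the practical goal: fewer mixed cases means fewer transactions a human has to review.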

So it’s really about – and especially in the short term – the way AI, and fintechs that are focusing on fraud, can succeed is not by saying “we will perfectly separate the good from the bad”, but by reducing the size of the suspicious bucket. Because there’s a human involved in deciding whether something is escalated or not, and what we want to do initially is make them more likely to make a better decision. And the way you make them more likely to make a better decision is, number one, by stack-ranking the potential fraud by severity. Say you have a team of 10 people looking at fraud and 100 flagged transactions: if they have to look at 100 random transactions, you’re not really optimizing what you could get out of those 10 people. What you want to do is stack-rank the 100 by likelihood of fraud, and have your analysts focus on the most important, or the most severe, or the highest-value transactions that have been flagged as likely fraud, and verify those. What you’re really doing is maximizing the throughput and the quality of the work that that team of people can do. Now in the long run, you could say that you don’t need 10 people, and maybe you only need five. But you can only get there when you have the human in the loop re-labeling your data. Say those 100 were flagged as suspicious, and – let’s assume an even split – 50% were verified to be bad and 50% were verified to be good. This is amazing information for a model, because now you can feed that back into the system, and you have better training data to produce a better model. And one thing that’s true for fraud in financial services is that the volume of known fraud cases, as a fraction of the total number of transactions, is actually very, very small. So typical machine learning techniques – supervised learning techniques – don’t really work well.
Most companies are looking at using unsupervised learning – clustering or anomaly detection – because you are limited by a smaller set of training data. And the way to solve that problem in the short term is to have a human in the loop and just make them more productive. Then they will generate the labeled data for you that allows the more advanced, more enterprise-ready supervised learning techniques to be leveraged to build that model. And eventually you can see how things progress from there. But I think it’ll always be a combination: for the most important, most severe cases that are mixed, you still want to have a human in the loop, because you get the benefit of learning from that and feeding it back into the AI.

[00:18:13] Kathleen: Okay. So what are some of the gaps in current AI capabilities that, once addressed, would lead to broader AI adoption in the fintech space?

[00:18:22] Kumar: I think the biggest one in the financial services industry, that a lot of fintechs are struggling with, is this: a lot of enterprises that use, or want to use, fintech providers who are using AI, encounter the inability of a lot of these techniques to explain why a decision was made. Why did the system predict something, or classify something? That’s a really big problem, because of that lack of accountability. Typically, when you have a human involved, the accountability is at the human level. So someone can go to that human employee and say: why did you make that decision? And you will get an answer from that person.

But the problem is: with AI and ML, you don’t get an answer. It doesn’t matter how many different ways you ask. If the model hasn’t changed, you could run that transaction again and it would give you the same answer, but it won’t explain why it came up with that answer. And that’s a big problem for enterprises that are used to having that accountability – for many reasons, including quality of service, compliance and regulations. You need to have a reason for doing certain things. So when you introduce something that cannot provide that, you have a break in that accountability chain, and that is a big gap. There’s a lot of effort going into trying to explain why AI behaves the way it behaves, but that area of work is still fairly new, and there are no good answers there yet.

[00:19:44] Ronald: Yes. That’s interesting. So this is actually one of those areas that may be part of the future direction for AI, especially deep learning, as we try to make increasing use of these systems. People need to understand more about how these decisions are being made. I have a feeling that what you’re saying is true for many applications of deep learning altogether. It’s easy to tell with image recognition, and voice recognition, and things like that, because we can say: oh yeah, I can see that you got that image right, that you recognized that sentence. But then we have complicated situations like fraud, and it’s like: you tagged this as fraud, and I’m not quite sure how you figured that out. And it may be right, it may be wrong, right?

[00:20:22] Kumar: Exactly. With image recognition or voice tagging or translation, the gap still exists – it’s the human mind that’s covering the gap. Because you’re looking at the images and you can see why the system said what it said: ‘it’s because of this area or this area, and I can see how it could have gotten confused by that’. But it’s really the human brain that’s covering up that gap. In other places, where the human mind cannot – where you have tons and tons of data that had to be processed to come up with a model, and then different parts of different layers, different nodes of the neural net, all add up to the output – the human mind can’t cover that gap. And that’s why there’s that lack of accountability. That has to be addressed; otherwise, what will happen is that either these capabilities become second-class citizens to typical business rules – you always keep that layer of forced accountability, so if the AI says something but the business rule says something else, the business rule wins, and you basically reduce the impact that AI can have – or you end up with AI only being used on the inconsequential problems.

[00:21:24] Ronald: This might be a good wrap-up question then. As a last note: what do you believe is the future of AI, in general, and its application in fintech and beyond? And maybe some of the unique opportunities for fintech companies to adopt AI in ways that aren’t being done now?

[00:21:40] Kumar: So I think the future really is… Going back to my earlier point: this is just a better way of making decisions through information. Information has always been key. I keep going back to the James Bond movie GoldenEye, where the villain tries to take over the world – it’s basically based on just having information and controlling information. That really is the reason why I feel every decision needs to be supplemented. Most decisions have always been supplemented with descriptive analytics, and all we’re saying is that now you can extend that, and have predictions and prescriptions and whatnot. But really, I think where we’re going is: we’ll have two kinds of AI, I feel. One will be the AI that helps make decisions, and the second kind will be the AI that helps you correct the mistakes that your actions might have caused – by being able to predict the impact of the AI. And I think that’s an area that’s not well addressed today: what is the impact of how AI changes the real world? Do we understand its impact beyond the immediate surroundings?

For example, take the Microsoft chatbot that went rogue, that people trained within 24 hours; it took just 24 hours for it to start being abusive. That’s an example of real-world impact in a constrained environment. That bot was turned off, and I’m sure the teams behind it went and figured out how to prevent it in the future. But there’s a longer-term impact of that event, because now every time we talk about AI, we bring up that example, or examples like it, where AI went rogue. So there is an impact in the real world of any AI, which goes beyond its immediate environment. So now we’re saying: okay, AI might not be as trustworthy, because in the real world it can be trained by just feeding it bad data or biased data. You can make it change its behavior from what it was intended to be when it was trained in the lab. So there are real-world implications of these decisions being made through AI. And when you have a complex network chain, with multiple AI models making decisions, we just don’t know how these things will interact with each other and how they impact each other – or whether they are impacted by each other, and by connections that are second-, third- and fourth-degree. That’s a whole area we still need to understand. Maybe that’s what Elon Musk means by AI ‘going crazy’ or being ‘dangerous’: could you have an AI that has the ability to understand these patterns and how it interacts with other AI models out there? Could someone change its behavior? Could the AI change its behavior to change the behavior of someone else?

So you basically have this adversarial thing going on between different AIs deployed in different systems. When you apply the enterprise layer on top of it, it becomes quite scary very quickly. Not because it’s going to cause the end of humanity, but because it’s going to cause the end of enterprises, if it’s not controllable. Because you lose trust, your customers move away, mind share goes down – all of these things can happen. And that’s really where I think, in terms of the future, we really have to understand: when we put something out in the real world, we need to build the techniques and the technology to understand its impact. And I don’t think we’re there yet.

[00:24:54] Ronald: Yeah, I think that’s interesting. So it’s not just using AI, but also having controls in place to understand what happens if the decisions are impacting the business in a detrimental way, basically.

[00:25:06] Kumar: I call it the unexpected or unnoticed.

[00:25:09] Kathleen: Yeah, unintended consequences, just like you said. That bot ended up becoming racist and saying pretty bad things, and I don’t think they thought that within 24 hours it was going to do that.

[00:25:23] Kumar: That’s the scary part. When you talk about a child, you say that the brain is like a sponge. That’s exactly what happened. They must have worked on that bot for at least six months, I’m guessing, for it to come out into the real world. And it took 24 hours to change its entire personality.

[00:25:44] Ronald: And the funny thing is: I think that bots need to understand the concept of ‘trolling’. Trolling is a kind of mischievous behavior. Like: ‘oh, you’re trolling me, okay; I’m not going to listen to you anymore’.

[00:25:58] Kumar: That’s where biases come in. Now you have to start putting in some controls on what kind of data goes into training. But then that decision itself can carry the bias of the person making it.

[00:26:11] Kathleen: Yeah, that’s exactly what I was going to say: where we have biased training data. And if you have biased training data then, by default, your product is going to be biased.

[00:26:19] Kumar: Right. And if you put someone in place like… Facebook’s reaction to the election meddling was to hire a thousand people to look at all the news feeds and ads being put into the system. The problem is that the people making the decision have their own biases. So the boundary won’t be black and white; it’s going to be what you, as the individual that’s been hired to do this job, think – maybe after going through some amount of training. But ultimately it’s your decision whether you think something is over the line or not. And that line will be different for different people. Even removing bias will introduce some other bias.

[00:26:53] Ronald: Exactly. That’s interesting – this has actually been a conversation we’ve been having with a colleague. One of the things he said was: ‘Ah, I really wish there was a bot that would create news that was unbiased, because the problem with news is that news is biased.’ And I’m like: well, that’s a great idea, but then how are you going to… What exactly are you going to train the system with? Because the nature of deciding what’s important enough to be reported as news, and how you report it – that’s all highly biased. It’s a challenge.

[00:27:20] Kathleen: All right, Kumar. Well thank you for joining us for today’s podcast, we had a great time.

[00:27:25] Kumar: Same here. This was a great conversation. Thank you.

[00:27:25] Ronald: We really enjoyed conversing with you! It sounds like you have a great future ahead, and I know that you’ve been putting a lot of time in here with AI. We’re definitely going to keep track of all the great things you’re working on.

[00:27:34] Kumar: Thank you.

[00:27:36] Kathleen: Yeah, we’ll follow you. And maybe we’ll reach out again further down the road, to have a follow up conversation.

[00:27:43] Kumar: Absolutely, that would be great. If I come across any progress with any of these problems that worry me, we can definitely have a chat about that.

[00:27:53] Kathleen: Sounds good. All right! Well, thank you listeners and we’ll catch you at the next podcast.

[00:27:57] Ronald: Make sure to check out our show notes, which are going to be coming up. We’ll list some of the things we talked about in this podcast and link to some interesting articles. I know Kumar is a very frequent online contributor to a bunch of different periodicals, so we may include a couple of his writings in our show notes as well, for you all to read. Thank you very much for joining us on the podcast, and catch you at the next one.

[00:28:17] And that’s a wrap for today. To download this episode, find additional episodes and transcripts, subscribe to our newsletter and more, please visit our website at cognilytica.com. Join the discussion in between podcasts on the AI Today Facebook group, and make sure to join the Cognilytica Facebook page for updates on this and future podcasts. Also subscribe to our podcast on iTunes, Google Play and elsewhere to get notified of future episodes.

[00:28:44] Ronald: Want to support this podcast and get your message out to our listeners? Then become a sponsor. We offer significant benefits for AI Today sponsors, including promotion in the podcast and on the landing page, and the opportunity to be a guest on the AI Today show. For more information on sponsorship, visit the Cognilytica website and click on the podcast link.