AI in Finance: Need for explainability and trust

Nitendra Rajput

Vice President, AI Garage

Mastercard

About Nitendra Rajput

Nitendra recently joined Mastercard as Vice President, where he leads the AI Garage unit. He has over 20 years of experience in Artificial Intelligence, Machine Learning, and Mobile Interactions. He has authored over 100 publications at top international ACM and IEEE venues, for which he has been recognized as an ACM Distinguished Scientist, an ACM Distinguished Speaker (2015), and an IEEE Senior Member.

He has delivered several tutorials and conducted workshops on AI and speech at top ACM venues (MobileHCI, IUI, CSCW, CHI). With over 50 granted patents to his name, Nitendra was an IBM Master Inventor and a member of the IBM Academy of Technology. In 2012, he co-authored the book "Speech in Mobile and Pervasive Environments", published by John Wiley & Sons. Before joining Mastercard, he spent 18 years at IBM Research, working on different aspects of machine learning, HCI, software engineering, and mobile sensing. He then spent 2.5 years at InfoEdge as EVP and Head of Analytics, from 2017 to 2019. Nitendra holds a Master's degree from IIT Bombay.


Session

The field of Artificial Intelligence has seen tremendous technological advancement in the past few years. This advancement means that algorithms can now move beyond the lab and work on real-world problems and datasets. Accuracies have increased to the point where such algorithms can be relied upon for key decision-making. We are now at a stage where humans depend on algorithms to solve critical problems: making life-and-death decisions while driving, deciding which drug to give to which patient, or choosing which company to acquire. However, such high-stakes decisions need supporting reasoning before decision-makers can start to believe the output of AI systems.

In this talk, we will elaborate on a sub-area within Artificial Intelligence that deals with the ability to justify, explain, and sell the output of an AI system. We will discuss how certain machine learning algorithms can be designed to be naturally explainable, and how the output of a machine learning algorithm can be made interpretable, and hence trustworthy, to decision-makers.
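As a concrete illustration of what "naturally explainable" can mean (a minimal sketch, not code from the talk): an inherently interpretable model such as a shallow decision tree exposes its reasoning directly as readable rules, and its feature importances give a global view of what drives its decisions. The dataset is synthetic and the feature names below are hypothetical, chosen only to evoke a payments setting.

```python
# Minimal sketch: an inherently interpretable model whose decision
# logic can be printed as rules. Dataset and feature names are
# hypothetical stand-ins for a fraud-detection scenario.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["txn_amount", "merchant_risk", "txn_velocity", "geo_mismatch"]

# Synthetic binary-classification data standing in for transaction records.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

# A shallow tree is "naturally explainable": every prediction follows
# a short, human-readable decision path.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(model, feature_names=feature_names))

# Global interpretability: which features drive the model's decisions.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

A deeper tree or an ensemble would likely be more accurate but much harder to read; that tension between accuracy and interpretability is precisely what explainability work aims to ease.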
