Standard Chartered moves to tackle unjust bias in AI models
Bank taps Truera to help ensure use of data analytics adheres to principles of fairness, ethics and transparency
4 Aug 2020 | The Asset

Standard Chartered has tapped the services of Truera, a tech startup based in Redwood City, California, to help analyze artificial intelligence (AI) models in order to identify and eliminate unjust biases in their use in the decision-making process.

The bank uses AI and data analytics to better support clients and stakeholders, but it says it wants to do so in a responsible and trustworthy way that adheres to the principles of fairness, ethics, transparency and self-accountability.

“New developments in analytical technology and expanding usage of data require us to fundamentally rethink how we demonstrate ongoing adherence to our pillars and tackle the issue of unjust bias,” Sam Kumar, global head of analytics and data management at Standard Chartered, says in a statement.

Machine learning (ML), which makes it quicker and easier to analyze large amounts of data and identify patterns and trends, leads to better performance and risk management when used correctly. However, because ML models rely on complex automated algorithms, they can act like a “black box” whose inputs and operations are not visible to the user. As a result, data scientists often struggle to explain in detail how decisions are made, and validating a model’s effectiveness can take longer.
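One common model-agnostic way to peer inside such a black box is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below illustrates the idea only; the toy scoring rule and the feature names (income, debt ratio, number of accounts) are invented for illustration and are not the bank's or Truera's actual model.

```python
import random

random.seed(0)

# Toy "black box": a scoring rule whose internals we pretend not to see.
# All features and coefficients here are hypothetical.
def black_box_approve(income, debt_ratio, num_accounts):
    score = 0.004 * income - 3.0 * debt_ratio + 0.1 * num_accounts
    return score > 1.0

# Synthetic applicants; labels come from the model itself, so
# baseline accuracy is 1.0 by construction.
applicants = [(random.uniform(200, 1000),   # income
               random.uniform(0.0, 1.0),    # debt ratio
               random.randint(0, 10))       # open accounts
              for _ in range(1000)]
labels = [black_box_approve(*a) for a in applicants]

def accuracy(rows):
    return sum(black_box_approve(*r) == y
               for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(applicants)

# Permutation importance: shuffle one column, see how far accuracy falls.
drops = {}
for i, name in enumerate(["income", "debt_ratio", "num_accounts"]):
    col = [r[i] for r in applicants]
    random.shuffle(col)
    rows = [r[:i] + (v,) + r[i + 1:] for r, v in zip(applicants, col)]
    drops[name] = baseline - accuracy(rows)
    print(f"{name:>12}: accuracy drop = {drops[name]:.3f}")
```

A large drop means the model leans heavily on that feature, which is exactly the kind of visibility a black box denies the user by default.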

Because it wants to ensure that data is used ethically, for example when ML is applied to credit decision-making, the bank has asked Truera to help create a solution that gives greater insight into ML processes so it can identify and mitigate unjust bias.

Truera provides a platform that helps eliminate the black box surrounding widely used AI and ML technologies in order to achieve measurable business results, address unfair bias, and ensure governance and compliance. Its Model Intelligence platform is powered by enterprise-class AI Explainability technology based on six years of research at Carnegie Mellon University.

Truera worked closely with the bank’s retail analytics, risk, digital and technology teams on one of the bank’s credit decision-making algorithms. The resulting solution works across multiple ML platforms and can pinpoint the specific variables that influence risk scoring. It can also detect correlations between seemingly impartial variables and demographic indicators such as race or gender; such variables can act as proxies that introduce unjust bias and lead to unfair decisions, the bank says.
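The proxy problem can be illustrated with a simple correlation screen, a generic first check rather than Truera's method; the data and variable names below (a region code skewed by a protected group label, and an unrelated tenure variable) are entirely synthetic.

```python
import random

random.seed(1)

# Synthetic population: "region_code" looks neutral but is skewed
# by a protected attribute (a binary group label); "tenure" is not.
n = 2000
group = [random.random() < 0.5 for _ in range(n)]               # protected attribute
region = [(3 if g else 7) + random.gauss(0, 1) for g in group]  # hidden proxy
tenure = [random.gauss(5, 2) for _ in range(n)]                 # genuinely unrelated

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

g = [float(v) for v in group]
r_region = pearson(region, g)  # strong correlation: a proxy risk
r_tenure = pearson(tenure, g)  # near zero: passes this check
print(f"region_code vs group: r = {r_region:+.2f}")
print(f"tenure      vs group: r = {r_tenure:+.2f}")
```

A feature that correlates strongly with a protected attribute can reintroduce demographic information into a model even when that attribute is never used directly, which is why such screens matter for fair credit decisions.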

Standard Chartered says it will work with Truera to further develop the software and explore its application across a range of AI use cases. 
