Exclusive: ‘Creating responsible AI operations’ – Scott Zoldi, FICO in “The Fintech Magazine”
As the use of AI becomes ever more pervasive, data scientists and organisations that are 'just doing their best' to make sure they behave ethically and responsibly will find that is no longer good enough, says Scott Zoldi, Chief Analytics Officer at data analytics company FICO.
Artificial intelligence (AI) has become widely used to inform and shape strategies and services across a multitude of industries, from healthcare to retail, and has even played a role in the battle against coronavirus.
But mass adoption and increasing volumes of digitally generated data are creating new challenges for businesses and governments. There is, for example, a growing focus on the reasoning behind AI decision-making algorithms and on creating a responsible framework around them.
Decisions made by AI algorithms can appear callous and sometimes even careless, as the use of AI pushes decision-making further away from the people those decisions affect. It is not uncommon for organisations to cite data and algorithms as justification for unpopular decisions, and that becomes a real cause for concern when even respected AI leaders make mistakes.
Examples can be seen across industries: in 2016, Microsoft's online chatbot turned racist and offensive, which was blamed on AI; in 2018, Amazon's AI recruitment system was found to ignore female applicants; and in 2019, a Tesla crashed in Autopilot mode after mistaking a truck for a street sign.
Alongside the potential for incorrect decision-making, there is also the risk of AI bias. To help prevent these issues, new regulations have been created to protect consumer rights and monitor developments in AI.
The pillars of AI
Organisations across the world must enforce responsible AI standards now. To do so, they need to formally document and enforce their model development and operational standards, and set them in the context of the three pillars of responsible AI, which are explainability, accountability and ethics.
Explainability: Organisations relying on an AI decision-making system must ensure they have an algorithmic construct that captures and communicates the relationships between the decision variables used to arrive at a final business decision. With this information at hand, businesses can explain the model's decision – for example, a transaction flagged as high fraud risk because of a high volume of transactions involving new accounts in Kazakhstan. That explanation can then be used by human analysts to further investigate the implications and accuracy of the decision.
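By way of illustration only (the feature names, toy data and logistic-regression model below are assumptions for this sketch, not FICO's production system), the per-feature contributions of a simple linear model can be ranked into exactly this kind of reason code for an analyst:

```python
# Minimal sketch of turning model internals into human-readable "reason codes".
# Feature names, data and thresholds are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [txn_volume_24h, new_account_flag, foreign_geo_flag]
X = np.array([[2, 0, 0], [3, 0, 1], [40, 1, 1], [55, 1, 1], [1, 0, 0], [60, 1, 0]])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = confirmed fraud

model = LogisticRegression().fit(X, y)
feature_names = ["high 24h transaction volume", "new account", "high-risk geography"]

def explain(transaction):
    """Return the fraud score plus the features pushing it up, largest first."""
    score = model.predict_proba([transaction])[0, 1]
    # For a linear model, each feature's contribution to the log-odds is coef * value.
    contributions = model.coef_[0] * np.asarray(transaction)
    reasons = [name for contrib, name in
               sorted(zip(contributions, feature_names), reverse=True) if contrib > 0]
    return score, reasons

score, reasons = explain([48, 1, 1])
print(f"fraud score {score:.2f}; top reasons: {reasons}")
```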
Accountability: AI models must be properly built, with focus placed on the limitations of machine learning and careful thought applied to the algorithms used. It is essential for the technology to be transparent and compliant. Thoughtfulness in the development of models ensures the decisions make sense; for example, scores adapt appropriately as the risk indicated by input features increases. Beyond explainable AI, there is the concept of humble AI – ensuring that the model is used only on data examples similar to the data on which it was trained. Where that is not the case, the model may not be trustworthy and one should downgrade to an alternative algorithm.
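As a purely illustrative sketch of this 'humble AI' idea (the distance check, thresholds and fallback rule here are assumptions, not a prescribed method), the machine learning model can be bypassed whenever an incoming example sits too far from the data it was trained on:

```python
# Illustrative sketch of "humble AI": score with the ML model only when the input
# resembles the training data; otherwise fall back to a simpler, conservative rule.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))                 # toy training feature vectors
y_train = (X_train.sum(axis=1) > 1).astype(int)     # toy fraud labels

model = GradientBoostingClassifier().fit(X_train, y_train)

# Define "similar to the training data" via distance to nearest training neighbours.
nn = NearestNeighbors(n_neighbors=5).fit(X_train)
train_dist = nn.kneighbors(X_train)[0].mean(axis=1)
threshold = np.quantile(train_dist, 0.99)           # 99th percentile of in-sample distance

def humble_score(x):
    x = np.asarray(x).reshape(1, -1)
    dist = nn.kneighbors(x)[0].mean()
    if dist <= threshold:
        return model.predict_proba(x)[0, 1], "ml_model"
    # Outside the model's experience: downgrade to a simple, auditable rule.
    return float(x.sum() > 1), "fallback_rule"

print(humble_score([0.2, 0.1, 0.4]))     # in-distribution -> ML score
print(humble_score([25.0, -30.0, 9.0]))  # far from training data -> fallback rule
```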

Once these three pillars are in place, organisations can be confident that the decisions they make are sound digital choices, and that all their models follow the same framework.
Measures to enforce responsible AI
There is no question that building responsible AI models takes time and is painstaking work. But the meticulous scrutiny is a necessary, ongoing process to ensure AI is used responsibly. This scrutiny must include regulation, audit and advocacy.
Regulations play an important role in setting the standard of conduct and rule of law for use of algorithms. In the end, however, regulations are either met or not, and demonstrating alignment with regulation requires audit.
A framework for creating auditable models and modelling processes is needed to demonstrate compliance with regulation. These audit materials include the model development process, algorithms used, bias protection tests and demonstration of the use of reasonable scoring.
Model development process audits are currently conducted in haphazard ways. New blockchain-based model development audit systems are being introduced to enforce and record immutable model development standards, testing methods and results. Furthermore, they record the detailed contributions made by data scientists, and the approvals given by management teams, throughout the development cycle. They serve both as a system of record justifying how the model was built responsibly and as the guidebook for monitoring the model in operation, to ensure compliance with responsible AI standards in real-life environments.
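The exact mechanics differ between systems, but the underlying idea can be sketched as an append-only, hash-chained log of development events; the field names and events below are illustrative assumptions rather than any specific product:

```python
# Minimal sketch of an append-only, hash-chained audit log for model development.
# Real systems add digital signatures, distributed consensus and access control.
import hashlib
import json
import time

class ModelAuditLog:
    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        """Append an event, chaining it to the hash of the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append({**payload, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute every hash; tampering with an earlier entry breaks the chain."""
        prev = "genesis"
        for e in self.entries:
            payload = {k: e[k] for k in ("timestamp", "event", "prev_hash")}
            digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = ModelAuditLog()
log.record({"step": "bias_test", "author": "data_scientist_1", "result": "passed"})
log.record({"step": "model_approval", "author": "model_governance", "decision": "approved"})
print(log.verify())  # True unless an earlier entry has been altered
```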
In future, organisations that say they are 'doing their best' with data and machine learning will not be doing enough. The rise of AI advocates will further scrutinise and challenge the responsible use of machine learning, to prevent the real suffering that wrong outcomes from AI systems can inflict.
Responsible AI will soon be the expectation and the standard across boards of directors and around the world, with advocacy groups keeping use of AI in check. Organisations must stay ahead of this curve. They should do so by strengthening and setting standards of AI explainability, accountability and ethics, to ensure they are behaving responsibly when making digital decisions.
This article was published in The Fintech Magazine #19, Pages 76-77