EXCLUSIVE: “7 myths we tell ourselves about AI and why they matter” – Clara Durodie, Cognitive Finance in ‘The Fintech Magazine’
If the robots really are coming, they can expect a very confused reception, says Clara Durodie, Founder & CEO of Cognitive Finance
People think they know a lot about artificial intelligence, or AI, but they don’t. Many businesses, when we first meet them, don’t have a good grasp of what it is, and wildly under- or over-estimate how it can be used.
Some of the misconceptions are to do with definition. Industry practitioners often incorrectly see AI and machine learning as two separate fields, whereas the latter is in fact a subset of the former. There are further subsets of AI, such as natural language processing and computer vision. Not really understanding what you’re talking about leads to confusion: you won’t know what questions to ask of vendors, or which people you need to hire. You don’t know what you don’t know. This knowledge gap gets amplified through the organisation and leads to problems such as the incorrect allocation of resources. For example, we audited the machine learning projects of a large global bank and identified that 80 per cent of the work across offices was duplication.
From a business perspective, AI has proven to be a tool for growth and profitability – when used properly. But AI is complex, and in reaching for shortcuts we sometimes arrive at conclusions that are false. We hear incorrect statements and half-truths: they are myths. And when we keep repeating them, they start to sound like truths.
This is why I’d like to highlight seven myths we tell ourselves about AI and why it matters that we correct what we think we know.
1. ‘The AI revolution is enterprise-wide’
Sadly, it often isn’t. In the majority of cases, there is no central point for collecting all the data an organisation has; departments’ and offices’ data continue to be stored in silos. Simply put, if you don’t have complete data, you don’t have AI systems you can rely on. This silo mentality often comes hand in hand with a sense of territorialism. Rather than protecting their own silos – human resources data versus customer services data, for instance – departments should build a sharing culture across the organisation, so that each can leverage the information the others hold.
2. ‘We’re ready to transform the organisation’
You’re probably not. You have to change the culture of your business first.
You can rectify this by educating your staff about what AI actually is, ideally starting with data analytics courses, moving to more advanced ML and tailored courses. The key is to include everyone at all levels – from chair of the board to the receptionist who answers the phone.
Why?
Because ensuring people understand how AI works – and that they can still have a job when they upskill – is the best way to tackle the fear of change and of robots taking their jobs. You can allay those fears by instilling a feeling of belonging and inclusivity; by listening to people’s ideas; and by encouraging critical thinking. This all-included education also opens staff to career mobility within your company.
Don’t be surprised if your receptionist discovers a vocation for data and becomes a skilled member of your data analytics team. You can outsource the re-skilling of your staff, but be wary of relying on providers that don’t know your business-specific challenges: mass-sell, one-size-fits-all academic programmes are not always suitable.
“In the majority of cases, there is no central point of collecting all the data an organisation has – and, if you don’t have data, you don’t have AI”
3. ‘The lowest hanging fruit’
Many say the way forward is to start with a small machine learning project, to prove it works without spending too much. This is a false economy. AI has proven to be a powerful business tool, so digital transformation needs a well-thought-through AI strategy at its core. This means rethinking the entire organisation, starting with data strategy.
Starting with the lowest hanging fruit, instead of an encompassing AI strategy, can easily turn into an expensive exercise, leading to duplication of work and resource drain, without any real business improvements.
4. ‘I don’t care about the technology, I only care about the impact’
That’s a quote from the chief economist of a financial institution. It’s a short-sighted position to take.
You have to have an interest in, and understanding of, the technology, because that’s what will help take your business forward. How can you use something well if you don’t care how it operates? Ideally, the board should aim to bring in one or two members with both industry and technology experience to drive meaningful discussions. The job of a board is to ensure the CEO delivers on strategy and to provide meaningful oversight. How can you do that if you don’t know what questions to ask?
At the very least, the board should have a dedicated technology advisory committee and put technology permanently on the board agenda. This isn’t a one-off.
5. ‘AI is just about automation’
Not really, but I understand why you might think so. We are currently in the ‘narrow AI’ stage where an algorithm is designed to perform a single task without human assistance, and any knowledge gleaned from completing that task is not applied to another process.
Which means that, so far, AI has largely been used as an efficiency or optimisation tool. But thinking about it solely in terms of efficiency is limiting, and completely the wrong way to inform long-term investment and business strategy; AI technologies can deliver much more than this. Predictive analytics enables optimisation by predicting what is going to happen, while prescriptive analytics goes further and makes decisions on your behalf. The more you allow a machine to take over decisions, the more you move away from narrow AI towards more advanced forms.
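The distinction between predictive and prescriptive analytics can be sketched in a few lines of code. This is a toy illustration only – the customer-churn scenario, the function names and the scoring weights are all hypothetical, standing in for a real trained model:

```python
# Predictive vs prescriptive analytics, as a toy churn example.
# All names and weights here are illustrative, not a real model.

def predict_churn_risk(months_inactive: int, complaints: int) -> float:
    """Predictive step: estimate how likely a customer is to leave.
    (A stand-in for a trained model; the weights are made up.)"""
    score = 0.1 * months_inactive + 0.2 * complaints
    return min(score, 1.0)

def prescribe_action(churn_risk: float) -> str:
    """Prescriptive step: the system acts on the prediction itself,
    rather than handing a number back to a human."""
    if churn_risk > 0.7:
        return "offer retention discount"
    if churn_risk > 0.3:
        return "schedule follow-up call"
    return "no action"

risk = predict_churn_risk(months_inactive=4, complaints=2)
action = prescribe_action(risk)
```

The prediction alone still leaves the decision with a person; the prescriptive layer is where the machine starts taking decisions over, which is the shift away from narrow AI that the article describes.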
6. ‘AI is objective and fully trustworthy’
No.
AI is not objective and is only as good as the data you train it on. Just like humans, AI systems are vulnerable to underlying biases, such as gender and race bias. Good governance of AI must therefore be part of an organisation’s risk management and corporate governance strategies and, as such, directors need strong digital literacy skills to understand it and help prevent any potential negative impacts.
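A minimal sketch makes the point concrete. Assuming entirely made-up historical hiring records, a naive model that simply learns the majority outcome per group will faithfully reproduce whatever skew those records contain – the bias comes from the data, not from any malice in the algorithm:

```python
from collections import Counter

# Hypothetical historical decisions: group_a was mostly hired,
# group_b was mostly rejected. The data, not the code, carries the bias.
historical_decisions = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "hired"),
    ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "rejected"),
    ("group_b", "hired"),
]

def train(records):
    """Learn the most common historical outcome for each group."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(historical_decisions)
# Identical candidates from different groups now get different outcomes,
# purely because of the skew in the training data.
```

Real systems are far more sophisticated, but the failure mode is the same: train on biased decisions and the model encodes the bias, which is why governance and scrutiny of training data belong in the risk management framework.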
7. ‘We need an AI centre of excellence’
While generally well-intentioned, creating a centre of excellence for AI can often drive a divide between it and the rest of the company – are you implying the rest of the staff aren’t excellent? Simply naming such units ‘AI testing labs’ instead can encourage a more inclusive environment and discussion around an organisation’s digital transformation. From a practical point of view, a siloed approach to both R&D and data is also more likely to create duplication.
AI in financial services affects people’s lives. It reshapes economic and social rules. This makes AI in our industry even more complex. As practitioners, we need to learn new heuristics – to be comfortable to operate with an open mindset that enables us to learn something new every day.
We need to cultivate in ourselves and our teams a level of intellectual humility combined with intellectual curiosity. This mindset enables us to ask the right questions, make sound decisions and break down myths.
This article was published in The Fintech Magazine Issue 25, Page 36-37