EXCLUSIVE: “Hype & Hope” – Edward Achtner, HSBC; Wei You Pan, MongoDB and Dan Pears, Capgemini in ‘The Fintech Magazine’
It’s still a potential gamechanger, but there’s been a more sober discussion around generative AI at the start of 2024. Dan Pears, an advisor with Capgemini; HSBC’s AI chief Edward Achtner; and Wei You Pan, an expert in financial services data platforms at MongoDB, reflect on reaction and reality.
2023 was indisputably the Year of Generative AI; the year when ChatGPT exploded onto the scene and catapulted a niche subfield of AI out of its usual realm of computer science and into almost every conversation. From individuals to businesses, everyone began to recognise how their worlds could be transformed (or upended) by AI.
In fact, the term was mentioned so often that Collins Dictionary named ‘AI’ its Word of the Year.
Aware of a hype storm brewing, technology experts such as Edward Achtner, head of HSBC’s Office of Applied Artificial Intelligence (OAAI), cautioned that a measured and thoughtful approach was needed, given the significant risks and the amount of work still required to get it right.
Yet, at the peak of the hype cycle, conviction that GenAI was ‘the’ ticket to success ran high. Conversations on how it could be used to boost revenue, add capabilities, fix data systems, and improve decision-making dominated boardroom discussions. Reports from McKinsey, Bain & Company, Accenture and others all claimed AI would add at least $1 trillion of value for banks and financial institutions.
Industry magazines were filled with mentions of companies diving into pilot projects. Players of all sizes eagerly backed AI service providers on the stock market and the latest and greatest GenAI startups, which were able to line their coffers with billion-dollar raises despite the global fundraising slump.
In stark contrast, GenAI has had a muted start to 2024. Many pilots have been put on hold, and much of yesteryear’s euphoric energy has hardened into a recognition that businesses need to slow down and properly grasp the limitations of this technology if they hope to join the new cohort of firms propelled stratospherically by it.
For those familiar with applied machine learning (ML), large language models (LLMs), and the already extensive use of AI within financial services, such as HSBC’s Edward Achtner, the sudden fixation on it as if it were an exotic commodity was baffling.
“Sure, there was a revolution in some respect over the last 12 to 18 months in terms of the capability and the accessibility of AI, but a new phenomenon it was certainly not,” he says. “HSBC alone has 1,000 AI applications, with the oldest dating back a decade.”
“A lot of the challenges that are in generative AI are probably also in traditional AI development”
For Dan Pears, who consults with financial and insurance firms as Capgemini’s vice president for insights and data practice, the realisation that interest had ratcheted to an excessive level came when it went from ‘an interesting toy to something front and centre of almost every single conversation I was having with my clients’. Much of that was down to one product: OpenAI’s ChatGPT.
Thanks to the unprecedented levels of media attention on its customer-facing interface and surprisingly human-like output, ChatGPT seeded concern about generative AI eclipsing existing products or services, and motivated executives not just within IT departments but up and down the chain of command to begin clamouring for an AI implementation agenda.
Unfortunately, the buzz gave a somewhat inflated idea of the technology’s current capabilities, which in turn prompted decision-makers to push on the innovation gas pedal and spend time and capital on projects to transform their company’s data with GenAI… only to realise that 2 + 2 didn’t equal 4 so easily.
According to research by American firm Everest Group on the GenAI pilots of 50 Fortune 500 companies, 90 per cent won’t be moving into production anytime soon, if at all, and frustrations are high. Everest’s research found that a huge percentage of companies not only discovered that generative AI was the wrong technology to address their business need, but that they had better technologies already developed and in place. Similarly, a chunk of Dan Pears’ client list is made up of teams who inadvertently bit off more than they could chew.
“One of the two biggest project types that we’re working on at the minute is with clients who have finished, or almost finished, their first sets of experimentation and are beginning to realise that AI, or generative AI, is not the expected magic wand,” he says. “What they’re finding instead, off the back of their first forays into the technology, is that a lot of remediation work is required: they still need to make sure the data is fit for purpose and for consumption, they still need to make sure the governance and lineage is there, and they still need to make sure that their architecture can, in fact, scale to support this sort of capability in production.
“In short, they realise they have considerable course-correction work to do.”
The Hurdles to Adopting Generative AI
With its straightforward ability to learn and execute preset tasks using predetermined algorithms and rules from a closed dataset, traditional AI showed the industry that it could be a fantastically reliable task performer, especially in business areas that rely on deterministic or predictive models.
In contrast, the freshly emerging generative or ‘creative’ AI does not so much perform tasks as produce words and images in response to a prompt, a capability it hones by digesting vast datasets in order to learn to identify underlying patterns.
For companies with a long-standing history of AI proficiency, 2023 was focussed on the smart discovery and testing of tools, like Goldman Sachs with its AI-based tool to write code, or Citigroup using GenAI to assess the impact of new US capital rules. But for those without, the journey more than likely included lessons such as learning that generative AI is not a genie-in-a-bottle tool capable of magically organising a company’s data estate into groundbreaking tech for them.
Another, arguably more important, lesson was grasping generative AI’s riskier characteristics: its lack of governance models for output validation, which muddies its reliability; its need to be trained on external, cloud-hosted datasets, which turns it into a data-leak hazard; and its propensity for generating hallucinations, which it props up with fictionalised sources.
“A lot of the challenges that are in generative AI are probably also in traditional AI development – data quality, volume of data, and so on,” observes Wei You Pan, director of financial industry solutions for developer data platform provider MongoDB. “Also other aspects, like the bias in data, governance, policy.
“The one that is a bit more glaring in generative AI is the issue of hallucination, whereby the generative AI comes up with answers that seem to be right, but actually may be far from the truth. There are a few techniques that can be applied to deal with that. One that people often talk about is prompt engineering, where you try to give more context to the generative AI model. And if you want to go a bit further, you can capture a lot of the relevant data and then supplement this contextual prompting using an approach called retrieval-augmented generation.
“I believe we are still looking at a three- to five-year window before generative AI will be scaled across the financial services industry”
“MongoDB, for example, could be applied in this context, because this contextual data could be stored in the database, together with vector embeddings and our new vector indexing and search capabilities, and that could be sent to the various generative AI models through a framework.
“We work with various orchestration frameworks, like LlamaIndex, LangChain, and so on, to help orchestrate this framework in a user-friendly manner.”
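To make the pattern Pan describes more concrete, the sketch below shows roughly how contextual documents stored in MongoDB alongside their vector embeddings might be retrieved and fed to a generative model through an orchestration framework such as LangChain. It is a minimal illustration only, not a description of any bank’s production setup: the connection string, database, collection and index names are hypothetical, and it assumes the langchain-mongodb and langchain-openai integration packages are installed.

```python
# A minimal retrieval-augmented generation (RAG) sketch, under assumed names.
# Contextual data and its vector embeddings live in a MongoDB collection;
# LangChain orchestrates retrieval and passes the context to a generative model.
from pymongo import MongoClient
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_mongodb import MongoDBAtlasVectorSearch

# Hypothetical cluster, database, collection and index names, for illustration only
client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
collection = client["bank_kb"]["policy_documents"]

vector_store = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
    index_name="policy_vector_index",
)

# 1. Retrieve the chunks most relevant to the question...
question = "What does our policy say about validating model output?"
docs = vector_store.similarity_search(question, k=4)
context = "\n\n".join(doc.page_content for doc in docs)

# 2. ...then supplement the prompt with that context before calling the model,
#    which is the essence of retrieval-augmented generation.
llm = ChatOpenAI(model="gpt-4o-mini")
answer = llm.invoke(
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```

The appeal of this approach, as Pan suggests, is that proprietary data stays in the firm’s own database and only the retrieved snippets reach the model at query time, which helps ground the output and reduce hallucination.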
The risks associated with GenAI nevertheless make early adoption tricky, especially for industries that, like financial services, need technologies with an exacting level of precision: predictable, repeatable, reliable and, crucially, auditable. However, experts are confident that, with continued effort, generative AI will be able to produce ethical products.
As Edward Achtner sums it up: “It will take time for generative to get there, but at some point in the future, whether it be months or years away, I am confident that it will.”
So, what’s his vision for future GenAI uses in banking, then?
Achtner is putting his money on hybrid use cases that combine traditional AI with creative AI to handle tasks spanning analytics and hyper-personalised content creation, along with the development of sophisticated ‘co-pilots’, which will work alongside staff members in a supporting role, boosting their efficiency, capacity and capability.
Companies can, of course, turn to third parties to help build their first generative applications – experts like MongoDB, whose general-purpose developer data platform is proving to be a nifty asset for firms hoping to get their foot in the door of GenAI.
“When trying to build generative or even traditional AI applications, having the ability to leverage an external database like our own can be invaluable,” explains MongoDB’s Wei You Pan, “because it allows for the storage of the organised data and skills central to GenAI. To assist, we make a point of adding features like natural-language co-pilots, which can be asked to generate visuals and vector queries, and user interfaces that allow developers to issue queries to the database through the UI.”
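For developers querying such a store directly rather than through an orchestration framework, a semantic lookup in MongoDB Atlas can be expressed as a vector search aggregation stage. The snippet below is a hedged sketch under assumed names: the collection, index and embedding field are hypothetical, and the query vector would come from whatever embedding model the firm has chosen.

```python
# A hedged sketch of a direct vector search against MongoDB Atlas, using
# hypothetical collection, index and field names for illustration only.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
collection = client["bank_kb"]["policy_documents"]

# query_embedding would be produced by the same embedding model used at ingest time
query_embedding = [0.012, -0.047, 0.103]  # truncated placeholder vector

pipeline = [
    {
        "$vectorSearch": {
            "index": "policy_vector_index",   # hypothetical Atlas vector index
            "path": "embedding",              # field holding the stored embeddings
            "queryVector": query_embedding,
            "numCandidates": 100,             # breadth of the approximate search
            "limit": 5,                       # number of matches returned
        }
    },
    # Keep only the text and a relevance score for the calling application
    {"$project": {"_id": 0, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(round(doc["score"], 3), doc["text"][:80])
```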
Managing AI Strategy
According to Dan Pears, the biggest lesson of all is realising that 2024 is the right time to finalise a company’s strategy. “I believe we are still looking at a three- to five-year window before generative will be scaled across the financial services industry, but as the hype fades and the understanding settles that GenAI doesn’t solve the problems that already existed in getting analytics to scale, firms will realise that generative won’t materialise without a measured approach and strategy.”
Dan Pears’ advice on how to do this is simple. “Whatever the size of the organisation, take a step back to think clearly, first, about the continuum of innovation from proof of concept to production and, secondly, about the company’s risk profile and its attitude to innovation. The firm’s appetite for change, and for the cost of that change, is extremely important because it is going to be at the centre of this technology, a source of contention, and a determinant of what will move forward.”
“The education of employees is among the most important elements of an engagement strategy, so [they] can really embrace generative AI as a means of helping them be their absolute best”
Pears also recommends drilling down into the answers to questions like: ‘What problem is the business trying to tackle? Who is the primary consumer of the technology? Which business process will host that AI technique? How will the impact of implementing the technology be measured? How will the value of the technology be monitored and maintained? What will be the total cost of the innovation? And how much are we willing to pay for this technology?’
In tandem, HSBC’s Edward Achtner also urges companies to prepare the workforce for a generative future.
“The education of employees is among the most important elements of an engagement strategy, so that they’re not only comfortable but can really embrace generative as a means of helping them be their absolute best within their personal and professional development,” he says.
“We’ve done three things around that. One is to have published the HSBC AI and Big Data Standards and Principles, for not only our employees, but the world to see. Two, we’ve launched bank AI literacy programmes to train technologists and non-technologists alike on responsible and ethical AI product development. And third, to rise to the incredible demand that we’re getting from our clients, as well as our teammates, we now have an accredited AI ambassador programme.”
Wei You Pan completely agrees that non-IT users need to be trained ‘to understand that not everything produced by a generative AI model is accurate’.
“The workforce needs to have the expertise to see what could be wrong – be it an error, a bias, or a discrepancy in policies, regulation, or compliance – and know what to do next to ensure business success,” he says.
“If you rely on a machine to automate this process, without a human to inspect the results, I think we could be faced with quite terrible results that could lead to non-compliance. People need to have the expertise to see what could be wrong, and maybe have a framework to feed this back. They need to be trained in policies and regulatory compliance, so that they can use the results in a way that is fair and compliant.”
“Our relationship with GenAI will be one of augmentation by nature,” suggests Dan Pears, “with a verification step that is altogether different from the traditional definitions of verification at work, but which is being applied in domains everywhere as we speak.
“As an organisation that has worked in this field for some time, even we are still coming to grips with how this defines and changes the way we operate with clients. But I also believe that’s part of GenAI’s magic.”
This article was published in The Fintech Magazine Issue 31, Pages 62-64.