EXCLUSIVE: “The Danger of LLMs” – Stuart Thomas in ‘The Fintech Magazine’
Writer Stuart Thomas is a tech fanatic but even he is cautious about letting generative AI loose in financial services. Here’s why.
I am hugely conflicted about AI.
On one hand, I would love nothing more than to hook my life up to ChatGPT and have it suggest and automate things, from my weekly meal plans to responding to emails in my own style and voice.
I think that the positive implications of large language models (LLMs) and generative AI like ChatGPT are astronomical. But, as more and more financial services firms start to flirt with the idea of integrating this groundbreaking technology into their products, I’d like to put the brakes on, turn the engine off, take a deep breath and discuss a darker side of ChatGPT that is often overlooked and could have some very bad implications. And that is bias.
I believe that the average user wouldn’t even question the bias of AI, and is likely to assume its impartiality because of how we perceive current examples of technology – usually a reactive machine that responds to user input with a predefined response. At the start of the year, however, in the early days of the AI hype, it was already becoming obvious to some that the most popular generative AI models had a bias.
A quick Google search turns up story after story, and example after example, of ChatGPT showing not just a little, but significant bias, across all manner of topics. One example that stood out to me at the time was when a user wanted ChatGPT to write a poem about Donald Trump, to which ChatGPT responded by telling the user that it was unable to create a poem about Trump due to ‘diverse opinions’.
However, when it was asked to generate a poem about Joe Biden, it created a poem of Shakespearian proportions: “Joe Biden, leader of the land, With a steady hand and a heart of a man.” The gushing continued: “Your words of hope and empathy, Provide comfort to the nation.”
Remember, this epic saga (which continued for several verses) was written by ChatGPT using the same prompt the user had given for Donald Trump. This shows a clear and specific political bias. But don’t just take this one example as proof that bias exists, because it gets worse. This year, a collaborative academic study between the Technical University of Munich and the University of Hamburg showed irrefutably that ChatGPT does indeed have a political leaning. The study conducted three experiments using 630 political statements. The results indicated that ChatGPT has a political orientation that leans toward a pro-environmental, left-libertarian ideology.
What’s even more scary is that this left-wing political bias is consistent across multiple languages, including English, Spanish, Dutch, and German. The authors of the study argue that the adoption of such technology is highly dependent on users’ trust in its accuracy and truthfulness, and that political bias in AI outputs has far-reaching implications, especially for the political decisions users make within democracies.
“Although your own personal environmental impact is indeed an important thing to be mindful of, the last thing I’d want is my banking application to start giving me attitude because I’ve enjoyed too many burgers this month”
The developers’ personal bias isn’t the only source of this inbuilt bias. Yes, they are able to block certain responses – which is why, if you ask ChatGPT ‘what’s the best way to get rid of a body’, it won’t respond with a step-by-step guide – but a key point about LLMs is that they are also able to learn from user inputs. The study above theorises that, even if the bias is as small as declining to write a poem about one particular president, over time this could lead to radicalisation of both the AI and its users. By replying to users with pro-left rhetoric, it would begin to influence those using it, however insignificant that might seem at the time.
Eventually, after exposure to consistently left-leaning responses, there’s the possibility that users could gradually begin to mirror the political stance of ChatGPT, at which point they start feeding pro-left inputs back into the LLM. This would, essentially, create a feedback loop that gets stronger each time it passes from the AI to the user and back again, becoming more and more biased. And, clearly, if it can swing one way with its bias, it can swing the other way, too.
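To make that feedback loop concrete, here is a minimal, purely illustrative simulation – the model, the update rule and every number in it are my own assumptions, not measurements of any real LLM. It treats the model’s bias and the average user’s stance as single numbers that nudge each other a little on every exchange:

```python
# Toy simulation of a bias feedback loop between an LLM and its users.
# Everything here is a modelling assumption for illustration only:
# 'bias' and 'stance' are scalars in [-1, 1] (negative = left-leaning),
# and each side drifts a small step towards the other every exchange.

def simulate_feedback_loop(model_bias=-0.1, user_stance=0.0,
                           influence=0.05, exchanges=50):
    """Return the trajectory of (model_bias, user_stance) over repeated exchanges."""
    history = [(model_bias, user_stance)]
    for _ in range(exchanges):
        # The user's stance drifts towards the model's bias...
        user_stance += influence * (model_bias - user_stance)
        # ...and the model, retrained on user inputs, drifts towards the users.
        model_bias += influence * (user_stance - model_bias)
        # A mild reinforcement term: agreement amplifies the shared lean.
        shared = (model_bias + user_stance) / 2
        model_bias = max(-1.0, min(1.0, model_bias + 0.01 * shared))
        user_stance = max(-1.0, min(1.0, user_stance + 0.01 * shared))
        history.append((model_bias, user_stance))
    return history

trajectory = simulate_feedback_loop()
print(f"start: bias={trajectory[0][0]:+.2f}, users={trajectory[0][1]:+.2f}")
print(f"end:   bias={trajectory[-1][0]:+.2f}, users={trajectory[-1][1]:+.2f}")
```

Even starting from a tiny lean and a neutral user base, the two values converge and then drift further in the same direction together – which is exactly the self-reinforcing dynamic the study warns about.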
The bias that ChatGPT shows has extended far beyond politics.
Environment, race, gender – if you can name it, you can be damn sure that there are a dozen examples or more of ChatGPT showing some bias about it. And, in fact, the developers recognise this.
A recent addition to ChatGPT actually lets you fine-tune its responses, and the developers have even suggested that you clarify the following when asking for responses: “Should ChatGPT have opinions on topics or should it remain neutral?”
I think the fact that this is even an option is the most peculiar and dangerous thing. ChatGPT is not a human, so why should it be able to respond with a biased opinion? Let’s move back to finance and AI because, now that we understand that it has a distinct bias, what dangers could this bias pose if we let ChatGPT loose on our financial services?
Well, let’s just theorise here that ChatGPT begins to advise our monthly spend.
The studies have already shown it has a pro-environment bias, so who’s to say that it wouldn’t look at the purchases you’ve made and begin to shame you if your carbon footprint gets too high in a month? Perhaps you’ve eaten too much meat or too many avocados. Perhaps you’ve spent too much at petrol stations or taken one too many flights that year. Although your own personal environmental impact is indeed an important thing to be mindful of, the last thing I would want is my banking application to turn around and start giving me attitude because I’ve enjoyed too many burgers this month.
Worse still, imagine we let it actually control our spending: “Sorry sir, you can’t buy this food today, you’ve met your monthly carbon footprint.”
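To be clear, no bank does this – but the logic of that refusal would be trivially easy to write. Here is a sketch, in which every threshold, spending category, field name and emission factor is invented for the sake of illustration:

```python
# Hypothetical sketch of the 'carbon-budget' purchase gate described above.
# All names, categories and emission factors are invented for illustration;
# no real banking product works this way (yet).

from dataclasses import dataclass

# Invented per-category emission factors (kg CO2e per currency unit spent).
EMISSION_FACTORS = {"meat": 2.5, "fuel": 3.0, "flights": 4.5, "groceries": 0.8}

MONTHLY_CARBON_BUDGET_KG = 300.0  # arbitrary cap for the example

@dataclass
class Purchase:
    category: str
    amount: float  # currency units

def footprint(purchases: list[Purchase]) -> float:
    """Estimate total kg CO2e for a month's purchases."""
    return sum(EMISSION_FACTORS.get(p.category, 1.0) * p.amount for p in purchases)

def authorise(history: list[Purchase], new: Purchase) -> bool:
    """Approve the purchase only if it keeps the month under the carbon cap."""
    return footprint(history + [new]) <= MONTHLY_CARBON_BUDGET_KG

month = [Purchase("meat", 60.0), Purchase("fuel", 50.0)]
print(authorise(month, Purchase("flights", 40.0)))  # False: budget exceeded
```

The point of the sketch is how little stands between a ‘helpful insight’ and a hard block: the same footprint estimate powers both, and only the final comparison decides which one you get.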
That’s just my theorised hellscape of a banking app, and not a reality (not yet). But there are some examples of financial institutions in the wild that have already begun to introduce LLMs into their products. Kasisto, for example, has developed KAI-GPT, a conversational AI technology designed for use specifically in banking contexts, and which can be customised to individual financial institutions. It’s primarily meant to offer financial information to customers and employees, but my concern here is still the potential for bias.
It could, for example, be tuned to push customers towards certain products that are targets for the institution. You could argue that’s exactly what the company’s own employees and salespeople would do, but because people potentially view AI as more impartial than a real human, the biased advice it gives users could be received far more readily than the same pitch delivered by a human.
I think we all have a distinct tendency to turn off our ears the moment that a salesperson starts talking about ‘additional products’ or ‘extra services’, but if a finance AI in my banking app begins making recommendations for banking products like loans, credit cards, etc, would the conversion rate for those products be higher?
The statistics aren’t available yet, but I’d happily put money on it. Think of it as the ultimate cold caller. It has a product to sell, and it already knows everything about you to aid it with its sales pitch.
By being inside your banking app, it would know your financial background, where you live, what car you drive, where you last went on holiday, what you ate for breakfast, what model of fridge you’ve got. AI could end up being the ultimate salesperson by selling you products without you even realising it’s selling you something.
So, is this biased super-seller AI a good thing? Yes, of course, it is. It’s amazing for businesses. It’ll save on staffing costs, create more conversions and customers and, most importantly, it’ll probably pair up the right products with the right people (when not pushing a target). But is it morally right?
That, I’m less certain about.
I think that with all of the bias current models show and the dangers that come with it, it would be foolish to be anything but extremely cautious when considering how best to utilise AI within finance, even if there are some apparent benefits to begin with.
This article was published in The Fintech Magazine Issue 29, Pages 14-15