Monday, April 21, 2025

EXCLUSIVE: Responsible AI in the Insurance Industry, Intelligent AI’s Anthony Peake speaks on Algorithms and Data

Aniqah Majid, Fintech Finance

“The reason I started Intelligent AI was Grenfell Tower.” Anthony Peake, CEO of risk management insurtech Intelligent AI, is reflecting on the 2017 fire in which botched risk assessments and flammable cladding led to the deaths of 72 people. At the time, Peake was working extensively with insurers and looking into fire service call-out data. The fire services had been called out to Grenfell 15 times in the year before the blaze, yet no claims were made by residents: half of them did not have insurance, and the other half could not afford it.

“You had two different views of risk, one side completely blind (insurers), the other side (fire services) data-rich. And none of the insurers at that time were looking at fire service call-out data.” A profound miscarriage of justice underscores the events of Grenfell, most damningly the failure to use crucial risk data that was readily available to insurers.

 

With growing reliance on immediate automated services, AI is fast becoming an integral part of insurance 

In parallel with its growing presence is the call for more robust regulation and oversight, ensuring that automation is doing what it should be doing. To enforce the use of responsible AI, companies must handle data in ways that do not put policyholders’ information at risk, from the point of consent through to the settlement of a claim. To achieve this, insurers are tasked with the ever-complex practice of remaining transparent about their data sourcing and unteaching bias from one- and two-step decision algorithms. Since last year, heavy-hitters like Google and Microsoft have committed to pushing more ethical digital practices, providing advice on data security and AI development.

From front-facing customer advice to back-end claims processing, insurance would be a redundant service without data analytics. IBM found that 95% of insurers rely on third-party risk and customer data, while only 45% supplement that with real-time data. The wealth of data is there. According to a 2021 Accenture survey, seven out of ten (69%) consumers would share data such as medical records and driving habits with insurers if it meant more personalised pricing.

 

The issue remains that insurers have more data than they know what to do with

Intelligent AI identifies risk in commercial property. With its Digital Twins solution, the insurtech draws on everything from survey data to satellite and 3D mapping data to provide a 360-degree view of risk to business and property owners. In 2021, the company teamed up with tech organisation Digital Catapult as part of Innovate UK’s ethical AI project.

“We worked with Digital Catapult, and we put together a whole AI ethics framework,” Peake told FF News. “I and some of the consortium members were very concerned that, when you unlock a lot of data, you create profiles of organisations, in our case commercial businesses. Quite often you could unlock data that an insurer did not have, and present a profile of an organisation as riskier than the insurer initially thought.”

With the initiative developed by Innovate UK, a big area of focus was making sure the framework did not discriminate against the types of companies it collected data on. For large businesses, insurers often look past potential risk because they are guaranteed reliable returns. This advantage does not extend to SMEs, which, owing to their smaller size, are seen as riskier.

 

Though the road to unbiased claims processing is paved with good intentions, regulators underestimate the sophistication of this feat

Algorithms work by locating patterns; they are slow to take individual context into account. One such case was investigated in a UC Berkeley study, which found that Black American homeowners were charged mortgage interest rates 5.3 basis points higher than their white counterparts. Interestingly, the algorithmic lenders studied matched interest rates to location, not race, so the resulting bias was indirect. This kind of oversight shows that algorithm-based technology is still in its nascent phase.
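The indirect, location-driven bias described above can be illustrated with a toy simulation (a hypothetical sketch, not the Berkeley study’s methodology or data): a pricing rule that never sees group membership still produces different average rates when group and postcode are correlated.

```python
import random

random.seed(0)

# Toy population: group membership correlates with postcode. The pricer
# never sees the group, but pricing on postcode reproduces the disparity.
def make_applicant():
    group = random.choice(["A", "B"])
    # In this synthetic data, group B applicants are likelier to live in
    # postcode "X2", which the pricer treats as higher risk.
    weights = [0.8, 0.2] if group == "A" else [0.3, 0.7]
    postcode = random.choices(["X1", "X2"], weights=weights)[0]
    return group, postcode

def rate(postcode):
    # Location-based pricing: no direct use of group membership.
    return 3.0 if postcode == "X1" else 3.5  # interest rate, percent

applicants = [make_applicant() for _ in range(10_000)]
avg = {
    g: sum(rate(p) for gg, p in applicants if gg == g)
       / sum(1 for gg, _ in applicants if gg == g)
    for g in ("A", "B")
}
# avg["B"] ends up higher than avg["A"] even though the pricer is
# "blind" to group: location acts as a proxy.
```

Because postcode stands in for group membership here, simply removing the protected attribute from the inputs does not remove the disparity; this is why audits look at outcomes per group, not just at which fields a model uses.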

Whether an algorithm is working from unbiased data is particularly important in insurance, as very few insurance products exist to cover companies when their algorithms go wrong.

However, the use of algorithms is unavoidable in the insurance industry, and insurers should not be apprehensive about using them. In the same UC Berkeley study, it was found that fintech lenders were less likely to discriminate against minority applicants than traditional lenders were, as the online application process bolstered competition, and made it easier for underrepresented groups to compare pricing. 

“Traditionally a risk engineer would go to a factory, and one of the questions they would ask is, ‘How far away is the nearest fire station?’ A human would say it’s 5-10 minutes away,” said Peake.

 

“What we can do is pull out data from the fire services, knowing exactly where everything is, and can calculate the latitude and longitude of exactly that.”
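The kind of calculation Peake describes, finding the distance from a property to its nearest fire station from coordinates, can be sketched with the standard haversine formula (the station names and coordinates below are hypothetical, not Intelligent AI’s actual data or code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two latitude/longitude points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical property and fire station coordinates (central London)
property_loc = (51.5138, -0.0984)
stations = [
    ("Station A", 51.5203, -0.0978),
    ("Station B", 51.4975, -0.1357),
]

# Pick the station with the smallest great-circle distance
nearest = min(stations, key=lambda s: haversine_km(*property_loc, s[1], s[2]))
```

Straight-line distance is only a first approximation; a production system would presumably combine it with road networks and response-time data, but even this simple calculation replaces a guess with a measurable figure.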

The current landscape is coloured with the threats of climate change and rapidly growing enterprise; responsible AI-backed risk management can no longer be an afterthought in commercial property insurance. This need filters into all sectors in the insurance industry. Lemonade is currently unteaching biases from their algorithms and Aviva is providing the blended option of either digital or human-led claims processing; insurers are finding balance with AI to deliver services that are large-scale and transparent. 

With the introduction of PSD2, which pushes for companies to open up their data to third parties, AI in the industry is becoming more evenly regulated. The Solvency II directive insists that insurers make their AI-based systems and performance results as digestible as possible for consumers. Transparency ensures that consumers are in lock-step with insurers when it comes to risk management and claims processing, both in finding solutions and understanding what they’re signing up for. 

“As long as AI is built on an ethical framework, and we understand that if you take old data and build new models, you will end up with old biases on a large scale. The benefits of faster and more accurate real-time data affect both customers and the insurer,” Peake says. “In our ethical framework, we have ensured that you need to be fully aware of such bias and ensure that this is taken into account in the training data and also that people have a right to query the results, have a clear point of contact, and a simple way to ask for corrections.”

 
