EXCLUSIVE: ‘Judgement Calls’ – Anthony Peake, IntelligentAI in ‘The Insurtech Magazine’
IntelligentAI’s Anthony Peake was motivated by a tragedy to improve the quality of data used to identify risk. But with that comes great responsibility
“The reason I started IntelligentAI was Grenfell Tower.”
Anthony Peake, co-founder and CEO of the risk underwriting platform, is talking about the night in June 2017 when fire ripped through a 24-storey block of flats in West London, leaving 72 people dead and exposing a shocking catalogue of safety failures. From the combustible cladding that fed the flames to the woeful lack of readily available, detailed information on the building’s construction and layout, which hampered rescue efforts, the data that might have flagged Grenfell as a disaster waiting to happen had never been collated or, more importantly, shared. Insurers were as much in the dark as anyone else.
Like many watching the tragedy unfold on live TV, Peake, who’s spent a lifetime in IT, was first appalled and then angry that information technology could have been used to avert this and other disasters, but wasn’t.
At the time of the Grenfell Tower fire, Peake was already involved in analysing fire service call-out data on behalf of insurers elsewhere, and he was curious to see the records for the block. It emerged that the service had been called to the tower 15 times during the previous year, yet no claims were made by residents, half of whom did not have insurance while the other half could not afford it, so there was no obvious data trail to inform any underwriting process.
“You had two different views of risk, one side completely blind (insurers), the other side (fire services) data-rich. And none of the insurers at that time were looking at fire service call-out data,” says Peake.
And so, with IntelligentAI co-founder and insurance professional Neil Strickland, he set about creating a risk underwriting solution that ensured they did, as part of a much wider data-mapping exercise.
IntelligentAI plugs the knowledge gaps for commercial property insurers by using a digital twin approach: AI first cleanses, then analyses and compiles information from more than 300 data sources to create a virtual 3-D mirror image of a property. This visual representation of a traditional ‘statement of value’ – the file of information underpinning risk assessments, which, according to IntelligentAI, is often only 40 per cent complete – assesses the status not only of the property itself but also of the surrounding area in which it sits.
This comprehensive approach means that autonomous machines are surfacing and making judgment calls on ever more granular data in complex situations, and while better data is of obvious benefit in terms of preventing loss of life and reducing the liability on insurers’ books, it also potentially raises questions over its ethical use.
With growing reliance on immediate automated services, AI is fast becoming an integral part of insurance, but in parallel to its growing presence is the call for more robust regulation and surveillance of autonomous processes.
IBM found that while 71 per cent of insurers have data-centric products and services in their portfolios, many still lack a cohesive data strategy. And yet the information they have access to is only likely to grow. According to a 2021 Accenture survey, almost seven out of ten (69 per cent) consumers would share their data, including medical records and driving habits, with insurers if it meant more personalised pricing.
The responsible application of AI in determining outcomes based on this data requires insurers to do two things: ensure transparency in their sourcing of data and monitor their decision-making algorithms for bias. Since last year, heavy-hitters like Google and Microsoft have committed to encouraging more ethical digital practices, providing advice on data security and AI development to the rest of the industry.
And, in 2021, IntelligentAI teamed up with tech organisation Digital Catapult as part of Innovate UK’s ethical AI project, which aims to increase support for companies like Peake’s as they develop and deploy AI technologies in a way that does not cause unintentional harm to any individual or to society.
“I and some of the consortium members were very concerned that, when you unlock a lot of data, you create profiles of organisations, in our case commercial businesses,” says Peake. “And, in that process, quite often you could unlock data that an insurer did not have, and present a profile of an organisation as riskier than the insurer initially thought.”
A big area of focus for the project was to make sure that, under the framework, AI did not discriminate against smaller companies in particular. For large businesses, insurers will often look past potential risk because reliable returns are all but guaranteed; that advantage rarely extends to SMEs.
Though the road to unbiased claims processing is paved with good intentions, regulators may underestimate just how sophisticated a feat it is.
Algorithms work by locating patterns, and they are slow to take individual context into account. One case, in private real estate, investigated in a study by UC Berkeley, illustrates the point. It found that black American homeowners paid mortgage interest rates 5.3 basis points higher than their white counterparts. The algorithmic lenders studied matched interest rates to location, not race, so the resulting bias was indirect, but the unfair impact in the real world was obvious. Such blind spots show that algorithm-based technology is still in its nascent phase.
Whether an algorithm is working with unbiased data is just as important in commercial insurance.
The current insurance landscape is being propelled by multiple forces – both internal and external – towards the adoption of AI-backed risk management. But there is an urgent need to address any biases hidden in legacy datasets used to train predictive models, otherwise policyholders and wider society will begin to question whether their claims are being treated fairly, consistently and honestly.
Lemonade has taken the bull by the horns and is currently unteaching biases from its algorithms, while Aviva offers a blended option of either digital or human-led claims processing. So insurers are finding a balance with AI, delivering services that are both large-scale and transparent, while – largely with the introduction of PSD2, which pushes companies to open up their data to third parties – use of the technology is also becoming more evenly regulated. The Solvency II directive, for example, insists that insurers make their AI-based systems and performance results as digestible as possible for consumers. Transparency ensures that consumers are in lock-step with insurers when it comes to risk management and claims processing, both in finding solutions and understanding what they’re signing up for.
Peake says the industry must understand that ‘if you take old data and build new models, you will end up with old biases on a large scale’, but that can be addressed by embracing an ethical framework.
“In our ethical framework, we have ensured that you need to be fully aware of such bias and ensure that this is taken into account in the training data,” says Peake. “And also that people have a right to query the results, have a clear point of contact, and a simple way to ask for corrections.”