Taming AI technology for use in underwriting

Risk segregation is a core principle of insurance underwriting, and the application of artificial intelligence (AI) tools can help define potential risks more accurately. However, claims that AI tools produce biased decisions are growing louder, and regulators are listening closely.

This is what happened to Goldman Sachs in November 2019 after a client’s tweet went viral, accusing the firm of sexism when calculating credit limits for the Apple Card, which the bank developed and issues. The male customer claimed that he received a credit limit 20 times higher than his wife’s, despite the couple filing joint tax returns, living in a community-property state and having been married for a long time. Apple’s customer service representatives blamed the algorithm for the decision.

The regulator announced it would conduct an investigation to determine whether the bank had violated New York law and “ensure all consumers are treated equally regardless of sex”.

Although Goldman Sachs explained that it does not actually know an applicant’s gender during the application process, this did little to undo the damage. More importantly, perhaps, the bank was further criticised for being unable to explain what drove the credit decision.

Warnings that AI tools are susceptible to bias are not new. Bias can arise if the dataset used to train the application is not sufficiently diverse. Insurers may also use proxy variables, such as zip codes, that inadvertently discriminate against certain groups.
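
To make the proxy point concrete, here is a minimal sketch with entirely hypothetical data. No protected attribute is fed into the pricing rule, yet because zip code correlates with group membership, the “group-blind” surcharge still lands unevenly on the two groups.

```python
# Illustrative sketch (hypothetical data): how a proxy variable such as a
# zip code can smuggle a protected attribute into a pricing rule even
# though the protected attribute itself is never used as an input.
import pandas as pd

# Synthetic portfolio: zip code correlates strongly with group membership.
applicants = pd.DataFrame({
    "zip":   ["10001"] * 80 + ["10002"] * 80,
    "group": ["A"] * 70 + ["B"] * 10 + ["A"] * 10 + ["B"] * 70,
})

# A "group-blind" surcharge keyed only on zip code...
surcharge_by_zip = {"10001": 0.00, "10002": 0.15}
applicants["surcharge"] = applicants["zip"].map(surcharge_by_zip)

# ...still produces very different average prices per group, because the
# zip code acts as a proxy for group membership.
print(applicants.groupby("group")["surcharge"].mean())
# group A pays ~0.019 on average, group B pays ~0.131
```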

AI could transform underwriting

Nevertheless, the CEO of US start-up insurer Lemonade, Daniel Schreiber, defended the use of AI in the insurance sector in a recent blog post. Schreiber believes that AI’s capacity to crunch data will eventually result in each individual contributing to an insurance pool in direct proportion to the risk they represent.

Schreiber rightly draws out the possibility of being in a risk class of just one. But what if it is the wrong class? What if the machine is missing the risk-relevant data on the individual? AI guarantees neither the absence of bias nor the absence of error. The more homogeneous one’s life is, the more likely it is that AI will be fit for risk assessment purposes. But this is the opposite of what Schreiber hopes to show: AI is more likely to be accurate when a person belongs to a very large group, not when assessed as an individual.
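
A minimal simulation (with assumed parameters, not real insurance data) illustrates why large groups matter here: the observed loss ratio of a small “risk class” is a noisy estimate of the true underlying risk, while a large pool’s loss ratio converges towards it.

```python
# Sketch: the spread of observed loss ratios shrinks as pools grow,
# roughly like 1/sqrt(pool_size) (law of large numbers).
import numpy as np

rng = np.random.default_rng(0)
true_claim_prob = 0.05      # each insured has a 5% chance of a claim
claim_severity = 10_000.0   # fixed claim size, for simplicity
premium = true_claim_prob * claim_severity  # actuarially fair premium

for pool_size in (1, 10, 1_000, 100_000):
    # Simulate 2,000 independent pools of this size and record the
    # spread of their loss ratios (claims paid / premium collected).
    claims = rng.binomial(pool_size, true_claim_prob, size=2_000) * claim_severity
    loss_ratios = claims / (pool_size * premium)
    print(f"pool of {pool_size:>6}: loss ratio std = {loss_ratios.std():.3f}")

# The standard deviation collapses as pools grow: a "risk class of one"
# has an essentially meaningless observed loss ratio.
```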

Further, Schreiber suggests that, to ensure the impartiality of its outputs, an AI tool should be judged by its outcomes.

This is similar to what Goldman Sachs recommended to clients. In response to the criticism, the bank tweeted that “if you believe that your credit line does not adequately reflect your credit history (…) we want to hear from you”. It does seem unfair that clients must prove the bank’s assessment is unfair without even knowing what the assessment was based on.

But Schreiber suggested another way to test the quality of an AI assessment: the Uniform Loss Ratio (ULR) test. The loss ratio measures the amount an insurance company pays out in claims relative to the premium it collects. If an insurer charges all customers a rate proportionate to the risk they pose, this ratio should be constant across its customer base, Schreiber argues. “Once we aggregate people into sizable groupings – say by gender, ethnicity or religion – the law of large numbers should kick in, and we should see a consistent loss ratio across such cohorts”.

What Schreiber is suggesting is that if one group (e.g. black customers) can be shown to pay the same premium per unit of risk as another subdivision of that class (e.g. white customers), then all is well. While this may sound reasonable, for the approach to be truly fair the ULR would have to hold across all group designations simultaneously. A black male must have the same ULR as a white female, and the same ULR as a 95-year-old black Jewish transgender person, which in turn must match that of a 21-year-old white Catholic male with three cars and seven guns who never goes out in daylight and has no employment history – cash transactions only.
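
The gap between the two tests can be shown with a few synthetic numbers. In the sketch below (invented figures, chosen for illustration), every marginal cohort passes the uniformity check, yet the intersectional cells are badly mispriced.

```python
# Sketch: uniform loss ratios group by group do not imply uniform loss
# ratios across intersections of groups.
import pandas as pd

cells = pd.DataFrame({
    "gender":  ["M", "M", "F", "F"],
    "group":   ["A", "B", "A", "B"],
    "premium": [100_000.0] * 4,
    "claims":  [50_000.0, 90_000.0, 90_000.0, 50_000.0],
})

def loss_ratio(df, by):
    # Loss ratio per cohort: total claims paid / total premium collected.
    g = df.groupby(by)[["claims", "premium"]].sum()
    return g["claims"] / g["premium"]

print(loss_ratio(cells, "gender"))             # M: 0.7, F: 0.7 -> "uniform"
print(loss_ratio(cells, "group"))              # A: 0.7, B: 0.7 -> "uniform"
print(loss_ratio(cells, ["gender", "group"]))  # cells range from 0.5 to 0.9
```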

Proof of a fair price would be almost impossible if it is based on data and “black box” AI algorithms. In the end, the only good test of a fair price is that an informed customer is willing to pay it. This may not sound very new age or techy, but this test has to count for more than the ULR or any other. A ‘market reasoning’ answer to technical problems is the right answer for any market. Insurers have been applying market reasoning for hundreds of years, so why stop now? They are good at it.

Regulators want transparency

Regulators remain sceptical. As early as 2016, the UK’s Financial Conduct Authority (FCA) identified two areas where the use of Big Data has the potential to leave some consumers worse off. Big Data changes the extent of risk segmentation, so that some categories of customers may find it harder to obtain insurance. The FCA is also concerned that Big Data might enhance firms’ ability to identify opportunities to charge certain customers more.

A leaked EU White Paper, which circulated among data privacy experts in January 2020, noted that while human decision-making is also prone to mistakes and biases, the same level of bias in an AI system could affect and discriminate against many people, without the social control mechanisms that govern human behaviour.

The draft suggests that a regulatory framework should define legal requirements for developers and users of AI, of both a preventative ex ante and an ex post character. Preventative ex ante requirements aim to reduce the risks created by AI before products or services that rely on it are placed on the market or provided (e.g. process requirements, including transparency and accountability, that shape the design of AI systems). Ex post requirements address situations in which harm has materialised and aim either to facilitate enforcement or to provide possibilities of redress or other types of remedy (e.g. requirements on redress and remedies).

Concrete examples of ex ante requirements include:

•    accountability and transparency requirements for developers to disclose the design parameters of the AI system, the metadata of the datasets used for training and information on the audits conducted,
•    general design principles for developers to reduce the risks posed by the AI system, and
•    requirements for users regarding the quality and diversity of the data used to train AI systems.

Ex post requirements could include:
•    requirements on liability for harm or damage caused by a product or service relying on AI, as well as
•    requirements on enforcement and redress for individuals, according to the draft document.

In a January 20, 2020 article in the Financial Times, Alphabet CEO Sundar Pichai wrote that “companies such as ours cannot just build promising new technology and let market forces decide how it will be used”.

Pichai added that there is “no question in my mind that artificial intelligence needs to be regulated”. He further suggested that existing rules such as Europe’s General Data Protection Regulation (GDPR) can serve as a strong foundation. “Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”. 
