How science literature can help to identify the next asbestos

Data modelling of emerging risks could facilitate product launches and improve safety levels – if we use the right information.

Emerging risks have, by definition, always been a challenge for companies and the insurance industry. Being new, there’s relatively little data with which to assess the probability of such risks materialising, and the severity and frequency of the subsequent ill effects.

As companies and industries change ever-more quickly, the pace of innovation also increases, and so arguably does the emergence of new risks. As a result, the traditional challenges posed by emerging risks have, at least in some respects, increased.

We know from experience that the products, behaviours and technologies we assume to be safe and adopt as part of our everyday lives are not without their risks. For example, while the toxic properties of asbestos are now well-known, it was once considered a 'wonder material'.

It would be beneficial all-round if significant new liability risks could be identified sooner – in other words, if we knew what the next asbestos would be. Unfortunately, the identification and quantification of emerging risks by businesses, regulators and insurers has not always been particularly well-informed, robust or transparent.

Emerging risks are commonplace in environmental and personal injury liability insurance. Aside from asbestos-related injuries, well-known examples include hand-arm vibration syndrome (HAVS) and noise-induced hearing loss (NIHL). Both produced losses well ahead of loss projection curves based on historical data (ie, the losses arrived sooner, and were far greater, than expected), making them harder for insurers to manage.

New disease types, such as prion disease and secondary Raynaud’s, are actually very rare. Most emerging liability risks concern newly recognised causes of well-known injuries such as dementia, heart disease and allergies.

Different language

In order to understand and quantify emerging liability risks, insurers sometimes borrow the language of natural catastrophe modelling, which is widely adopted across the industry. This is not always helpful, however, because the two types of risk differ in many respects.

Natural catastrophes are time-bound, highly localised and mitigated according to building codes. The foreseeable location and severity of natural events can be read from geographical features such as un-eroded rock faces and flood plains, from the assets at risk (eg, property values) and from biological responses (eg, the age of the trees). For such risks, it makes some sense to talk about return periods.

Liability emerging risks, on the other hand, have a different set of characteristics:

• Once established by precedent, liability exposure persists year on year until the supply of losses is exhausted.

• The cause and effect may be separated by an unknown time delay, perhaps decades (eg, bladder cancer).

• Liability may be retrospective, and influenced by changes in expert or legal consensus (eg, noise-induced hearing loss).

• Injury may be transmitted to future generations (eg, epigenetic toxins).

• Hazard exposure changes with the take-up of new technologies, driven by market forces (eg, hand-held power tools).

• Resilience changes with lifestyle (eg, obesity-related diabetes).

• Mitigation varies with medical skill, the availability of care and social responses such as welfare.

In short, data on the losses themselves and the factors that govern the eventual loss profile may take years to establish. To make quantification of such risks even more challenging, liability emerging risks do not recur – they are one-offs.

Science literature

So how are emerging liability risks to be identified and evaluated?

This task is challenging, in part, because expert opinion is remarkably conservative: new evidence and insights from scientific research take a long time to become consensus. Yet expert consensus may be needed to establish legal causation. By understanding the weak points in the current expert consensus, we can assess the opportunity for new insights to emerge.

There is usually plenty of warning, however. For example, hand-arm vibration syndrome, which first came to light as a result of chain-saw use, was identified 14 years before the first compensation payment for industrial use was made in the UK. The idea that the syndrome was restricted to chain-saw use was obviously flawed.

A lot of clues lie in the scientific literature. By mining peer-reviewed scientific journals, it is potentially possible to identify the next generation of catastrophe liability risks for businesses long before screening claims activity would reveal them.

Data mining without expertise, however, generates false positives.

Mining the world’s science literature abstracts for odds ratios and relative risks is an option, but without understanding the methodology behind each such summary estimate, the data miner ends up pooling values that cannot be meaningfully pooled and flags far more false positives than is useful.
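
To make the pooling problem concrete, here is a minimal sketch in Python of what naive abstract-mining amounts to: inverse-variance pooling of published odds ratios. All study figures below are invented for illustration. Cochran’s Q, a standard heterogeneity statistic, is what would warn a careful analyst that these studies are not estimating the same effect and should not be pooled.

```python
import math

# Sketch only: (log odds ratio, standard error) pairs scraped from
# hypothetical abstracts; the figures are invented for illustration.
studies = [(math.log(1.8), 0.20),   # case-control, self-reported exposure
           (math.log(0.9), 0.15),   # cohort, measured exposure
           (math.log(3.5), 0.45)]   # small cross-sectional study

# Naive inverse-variance pooling of the log odds ratios.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q: values much larger than (k - 1) indicate the studies do
# not share a common effect, so the pooled figure is not meaningful.
q = sum(w * (lor - pooled) ** 2 for (lor, _), w in zip(studies, weights))

print(f"Pooled OR: {math.exp(pooled):.2f}  "
      f"Cochran's Q: {q:.1f} (df = {len(studies) - 1})")
```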

Even worse, the freely available abstracts very often do not accurately represent the data contained in the subscription-only body of the text. For instance, there could be 20 null estimates of risk in the body of the paper, but only one positive result entered into the abstract. Data mining of research abstracts could be useful as an indicator of a topic worth following up, but not much else. Expert assessment is always needed.

As a contemporary example, bisphenol A (BPA) is often cited as a potential human toxin with a vast array of possible associated harms, many of which bear no mechanistic relationship to each other and are logically inconsistent. The data come mostly from poorly designed studies in humans and from rodent experiments at massive, unrepresentative doses. Unsurprisingly, after taking into account the weaknesses of the published work, the world’s foremost risk assessment authorities express very little concern. (The BPA case has a 19% probability of convincing a UK court as to generic causation.) Despite these rather obvious methodological flaws, uninterpretable BPA research continues to flourish and garner public concern.

Evaluation

For those emerging risks that seem more likely than not to become real losses, the question then is: how big could these losses be? If the answer is big enough, the more detailed questions are worth resolving. Sometimes the potential loss is so big that action needs to be taken even if the rational case for successful claims is marginal.

If correctly assessed, scientific studies provide data that can be used to calculate how big an emerging liability problem could become. Public health authorities make their estimates of the rate of new harm in an analogous way, by reference to information in the same scientific publications that draw attention to the new problem. With some modification, liability insurers can combine the same data with data from other studies to estimate the frequency and severity of liability losses.

For example, if the emerging causation knowledge relates to a well-known injury, the scientific studies tell you the likely age profile of the injured, the impairment profile, the effectiveness of medicine and the cost of medical support, and sometimes indicate the latency between exposure to the hazard and manifestation of the injury. These factors relate to the severity and timing of loss.

In addition, science very often tells you how many relevant diagnoses there are in any year, how many people are exposed to the hazard and at what level and, if there is a breach of duty, how probable it is that the breach would be found in the history of those who have the injury or disease. If there is a specific sign that the injury was caused by that hazard, science tells you how often such a sign is found in the injured, which addresses the probability of making out specific causation. These factors relate to the frequency of a good liability claim among those who are injured.
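
As a minimal illustration, the sketch below combines these frequency and severity factors into a single expected annual loss. Every figure is hypothetical, invented purely to show the shape of the calculation; in practice each input would be drawn from the scientific studies described above.

```python
# Sketch only: all figures are hypothetical, for illustration.
annual_diagnoses = 12_000      # new diagnoses of the injury per year
attributable_fraction = 0.15   # share of cases attributable to the hazard
p_breach_in_history = 0.40     # chance a breach of duty appears in a claimant's history
p_specific_causation = 0.50    # chance of making out specific causation (eg, a diagnostic sign)
mean_severity_gbp = 85_000     # average damages plus medical support per successful claim

expected_claims = (annual_diagnoses * attributable_fraction
                   * p_breach_in_history * p_specific_causation)
expected_annual_loss = expected_claims * mean_severity_gbp

print(f"Expected viable claims per year: {expected_claims:,.0f}")
print(f"Expected annual loss: £{expected_annual_loss:,.0f}")
```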

By combining the data in bespoke deterministic models, the potential size and variance of a given liability exposure can be estimated. Where the data is uncertain, so is the size estimate. In short, science enables predictive pricing. While insurers have used such modelling for many years to quantify natural catastrophes, its application to emerging liability risks is less commonplace.
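
One way to express that uncertainty is to run the same deterministic structure many times with the inputs drawn from plausible ranges rather than fixed at point values, yielding a distribution of loss instead of a single number. Again, every range and figure below is invented for illustration:

```python
import random

def simulate_annual_loss():
    # All distributions are hypothetical, standing in for uncertain inputs.
    attributable_fraction = random.uniform(0.05, 0.25)  # uncertain epidemiology
    p_breach = random.uniform(0.2, 0.6)                 # breach found in history
    p_causation = random.uniform(0.3, 0.7)              # specific causation made out
    severity = random.lognormvariate(11.3, 0.4)         # ~GBP 80k median, heavy tail
    return 12_000 * attributable_fraction * p_breach * p_causation * severity

losses = sorted(simulate_annual_loss() for _ in range(100_000))
mean = sum(losses) / len(losses)
p95 = losses[int(0.95 * len(losses))]
print(f"Mean annual loss: £{mean:,.0f}; 95th percentile: £{p95:,.0f}")
```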

Business enablement

Re: Liability (Oxford) Ltd has begun circulating mathematical models of specific emerging liability scenarios. The quality assessment of science publications is now a standardised practice. By restricting data inputs to so-called “best evidence”, as opposed to evidence diluted by countless poor and logically inconsistent studies, the models provide justifiable estimates of the liability exposure. Each jurisdiction has its own version of each model.

Analysis of the literature also provides realistic estimates of how much longer it would take for generic and specific causation to be sufficiently evidenced. What cannot be modelled is the time taken for experts to agree a new consensus – such is the political nature of scientific expertise – or the interest that would be shown by the claims-making industry. There is a balance to be struck between risk and reward.

Of course, a model is not an answer – it’s just a guide. But the use of such predictive modelling can help businesses to become more comfortable with uncertainty, thereby enabling them to develop and launch new products.

Contrary to some expectations, the earlier and more accurate identification of certain risks would not lead to the exclusion of such risks from insurance policies – instead it should make it more feasible for these risks to be insured.

In turn, we might see the development of more named-peril insurance policies, such as for electronic cigarettes, processed meat, shift work and so on. Policies would hopefully become more specific, better informed, and far more tailored to companies’ specific risk profiles.


For more information, please contact Andrew Auty on:

andrew@reliabilityoxford.co.uk

+44 (0)18 6524 4727