“Every mass litigation that we investigated was preceded by peer-reviewed science by some number of years. We realised that a hypothesis that some product, chemical, substance or exposure can result in bodily injury first emerges years before general acceptance within the scientific community, before public awareness and before the first litigation.” Praedicat aims to identify these trends and pinpoint liability exposures years before they hit re/insurers.
The identification engine will employ a text-based data mining system that aggregates the results of peer-reviewed scientific literature. The system examines new hypotheses regarding potential casualty exposures and applies analytics and scoring to these results to determine how the science will evolve over time.
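Praedicat's actual engine is proprietary, so the mechanics are not public. Purely as a sketch of the general idea, a literature-scoring step might look something like the following; the term list, weights and scoring formula here are illustrative assumptions, not the company's method.

```python
# Minimal, hypothetical sketch of literature scoring.
# RISK_TERMS and its weights are invented for illustration only.
import re
from collections import Counter

RISK_TERMS = {
    "carcinogenic": 3.0,
    "toxicity": 2.0,
    "exposure": 1.0,
    "injury": 1.5,
}

def score_abstract(text: str) -> float:
    """Weighted count of risk-related terms in one abstract."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return sum(weight * words[term] for term, weight in RISK_TERMS.items())

def rank_abstracts(abstracts: dict) -> list:
    """Rank papers (id -> abstract text) by descending risk score."""
    scored = {pid: score_abstract(text) for pid, text in abstracts.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

A real system would of course go far beyond keyword weights (entity extraction, citation tracking, time series over publication dates), but the scan-score-collate shape is the same.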
The company’s stated goal is to “develop a comprehensive and scalable solution for identifying emerging risks, supported by footprints in the scientific literature”, with the ultimate objective being a world that is “cleaner, safer and healthier”.
It is our experience that the basic hypothesis is sound – new liability losses are usually preceded by several years of scientific input. In our own work on vibration white finger, for example, the ‘scientific’ date of knowledge came 14 years before any cases were settled in the UK. The Radar service adopted the same “science comes first” hypothesis in 2002, based on the results of a project for the Association of British Insurers and Lloyd’s.
As to the identification methodology, similar data mining approaches have been adopted by several regulatory authorities in the EU. Compared with simply asking the experts, text mining systems have flagged up only half of the valid concerns while, at the same time, generating hundreds of time-consuming false positives. Reviewers have concluded that, unfortunately, science papers are often written in a way which fails to properly emphasise the known limitations of the methodology, misses potential connections with real emerging issues, and includes buzz words chosen to improve search hit rates. These limitations are apparent to subject experts, but perhaps not to software.
For the Radar service we find that articles need to be read in full before any liability-related opinion about them can be safely drawn. It should be borne in mind that most of the relevant science research is aimed at a precautionary view of public health, rather than ‘reasonable’ liability. It is usually reported with that editorial slant.
What’s really great about a software-based search/identify routine is the number of articles that can be systematically scanned, scored and collated. Many insurers are equipped to do this for themselves. The key thing, though, is to score according to liability exposure potential.
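To make that distinction concrete: scoring by liability exposure potential means adjusting a raw literature score for insurer-relevant factors. The factors and weights below are hypothetical illustrations of the idea (strong science about a niche exposure may matter less than weaker science about a mass-market product), not any vendor's actual model.

```python
# Illustrative only: factor names, caps and weights are assumptions.
import math

def liability_potential(science_score: float,
                        exposed_population: int,
                        years_on_market: int,
                        causation_clarity: float) -> float:
    """Combine a literature-derived science score with liability-side
    factors to approximate exposure potential for an insurer."""
    reach = math.log10(max(exposed_population, 1))  # breadth of exposure
    duration = min(years_on_market, 30) / 30        # capped latency proxy
    return science_score * reach * (0.5 + 0.5 * duration) * causation_clarity
```

Under this toy weighting, a mass-market product with decades of exposure history outranks a newer, niche exposure even when the underlying science scores are identical.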
It would be great fun to investigate this service in detail and to help insurers to form a view of its strengths and weaknesses.