October 2nd, 2018 | Richard Dixon
Confession corner: I am a frustrated weather forecaster. It was my career aim that led me into studying meteorology at university before I moved into the world of research and then into the research and building of catastrophe models. Having worked in catastrophe modelling for a while now, I have come to realise how similar weather forecasting and catastrophe modelling can be, despite their different time horizons. Let me try to explain…
At the risk of gross simplification, weather forecasters and catastrophe modellers both use models to work out likely weather/catastrophe outcomes, always with a keen eye on the low probability, high severity events. Missing those can lead to red faces for weather forecasters, but also for your Chief Risk Officer when guidance from your risk team means you haven’t, for example, bought enough reinsurance! Interestingly, there is an approach to catastrophe modelling where being more like a weather forecaster might benefit us - or at least provide an alternative risk assessment method.
The diagram to the right contrasts a weather forecasting model with a catastrophe model. Both essentially use a “best” view of the risk. In a weather forecasting model, this is often the weather model run at high resolution with as many observations of the atmosphere as possible. For the catastrophe model, this is often the “mean” state of the model to give a best view of risk: shown with red lines below.
However, both catastrophe models and weather forecasting models allow for uncertainty in the modelling process. Catastrophe models typically give us the range of uncertainty of a loss around all our events and return periods with the uncertainty in the events’ occurrence, hazard and vulnerability response knitted into the uncertainty output (blue lines). Weather models’ uncertainty is represented by ensembles (again, blue lines). The ensembles represent running the model with slightly different starting conditions to produce a range of forecast outcomes that widen with time, depending on how chaotic the atmospheric situation is.
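The ensemble idea above can be sketched in a few lines of code. This is a toy illustration only: it uses the logistic map as a hypothetical stand-in for chaotic atmospheric dynamics (the names `logistic_step` and `run_ensemble`, and all parameter values, are my own inventions, not anything from a real forecasting system). Each ensemble member starts from a slightly perturbed initial condition, and the spread between members grows as the simulation runs forward, just as forecast ensembles widen with lead time.

```python
import random

def logistic_step(x, r=3.9):
    # One step of the logistic map, a toy stand-in for chaotic dynamics.
    # With r = 3.9 the map is chaotic, so nearby trajectories diverge.
    return r * x * (1 - x)

def run_ensemble(x0=0.3, n_members=10, n_steps=20, perturbation=1e-4, seed=42):
    """Run an ensemble from slightly perturbed initial conditions and
    record the spread (max - min across members) at each step."""
    rng = random.Random(seed)
    members = [x0 + rng.uniform(-perturbation, perturbation)
               for _ in range(n_members)]
    spreads = []
    for _ in range(n_steps):
        members = [logistic_step(x) for x in members]
        spreads.append(max(members) - min(members))
    return spreads

spreads = run_ensemble()
# Early spread is tiny; the spread grows dramatically as members decorrelate.
print(f"spread after 1 step:   {spreads[0]:.6f}")
print(f"spread after 20 steps: {spreads[-1]:.6f}")
```

The point of the sketch is simply that a tiny uncertainty in the starting conditions becomes a wide range of outcomes, which is exactly why forecasters look at the whole ensemble rather than a single run.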
For instance, here one of the ensembles suggests the peak in the gusts expected in the forecast is notably higher than the other ensembles. In both cases, the risk assessor (the analyst in the catastrophe modelling team or the weather forecaster in the meteorological example) needs to examine the data to present an assessment of risk to their audience and to take into account the underlying uncertainty from the model.
What I find different, but maybe a glimpse into how catastrophe modelling could evolve, is in model usage. In the catastrophe modelling world - largely by dint of what is available and the cost of licensing multiple models - risk modelling teams will typically use only one or two models. In the forecasting world it’s fair to say that the meteorological community has, for many years, had a wider range of modelling centres from which data is available compared to our selection of catastrophe models. Off the top of my head I can think of global forecasting models available from the UK (Met Office), Germany (DWD), France (Arpege), US (GFS), Canada (CMC) and Europe (ECMWF) available to any weather forecaster.
With such a raft of models available, is using multiple models the “done thing” when it comes to weather forecasting, or do, for example, the Met Office like to stick to their guns and use their own model above all else when it comes to forecasting? I was fortunate enough to speak to Paul Gundersen, Chief of Forecasting Operations at the Met Office, about their use of models in the forecasting process. On the use of multiple models, Paul comments: “We find using the higher-performing high resolution deterministic forecasting models (and their consistency/history over successive runs) gives us a better idea of the realistic spread of solutions than an ensemble suite from any one given model”.
So, at least in the forecasting world, using multiple models and their associated uncertainty will typically outperform using just one model. Should we be following suit, where possible?
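To make the multi-model idea concrete, here is a minimal sketch of blending two catastrophe models' views of loss at a set of return periods. Everything here is hypothetical: the loss figures are invented for illustration, and a simple weighted average is only one of many ways a risk team might combine model views in practice.

```python
# Hypothetical losses ($m) at a set of return periods from two cat models.
# All numbers are illustrative only; Model B has a deliberately heavier tail.
return_periods = [10, 50, 100, 250]
model_a = {10: 20.0, 50: 80.0, 100: 130.0, 250: 200.0}
model_b = {10: 25.0, 50: 90.0, 100: 160.0, 250: 320.0}

def blend(models, weights):
    """Weighted blend of per-return-period losses across models."""
    blended = {}
    for rp in return_periods:
        blended[rp] = sum(w * m[rp] for m, w in zip(models, weights))
    return blended

blended = blend([model_a, model_b], weights=[0.5, 0.5])
for rp in return_periods:
    print(f"{rp:>4}-year loss: A={model_a[rp]:.0f}, "
          f"B={model_b[rp]:.0f}, blend={blended[rp]:.0f}")
```

Even this crude blend shows the value of a second set of eyes: Model B's heavier tail pulls the combined 250-year loss well above what Model A alone would suggest.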
Given that in catastrophe modelling we are in the game of understanding and allowing for extremes in our decision-making process, I also asked Paul how forecasters typically react to a specific model throwing out a small number of severe events in their ensembles: similar, maybe, to one cat model having a particularly intense tail. Paul replies: “If a small percentage of one model’s ensembles pick up a severe event and retain it for at least a couple of forecast runs then we’ll run with that low probability in the forecast, even if it doesn’t appear in another model’s ensembles”. So: having one model with “noise” in the tail where others may not necessarily show this is something the forecasters won’t typically ignore.
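Paul's rule of thumb - keep a low-probability severe outcome if a small fraction of one model's ensemble shows it and it persists over successive runs - can be sketched as a simple check. The function names, thresholds, and gust values below are all my own hypothetical choices, not anything from Met Office practice.

```python
def severe_fraction(members, threshold):
    """Fraction of ensemble members at or above a severity threshold."""
    return sum(1 for m in members if m >= threshold) / len(members)

def retain_low_probability(runs, threshold, min_runs=2, min_fraction=0.05):
    """Retain a low-probability severe outcome in the forecast if at least
    `min_fraction` of members exceed `threshold` in `min_runs` consecutive
    forecast runs."""
    streak = 0
    for members in runs:
        if severe_fraction(members, threshold) >= min_fraction:
            streak += 1
            if streak >= min_runs:
                return True
        else:
            streak = 0
    return False

# Illustrative peak-gust forecasts (mph) from 10 members over 3 successive runs.
runs = [
    [55, 60, 58, 62, 57, 59, 61, 56, 63, 85],  # one member shows a severe outlier
    [54, 61, 59, 60, 58, 57, 62, 55, 64, 88],  # the outlier persists
    [56, 59, 60, 61, 57, 58, 63, 54, 62, 60],  # the outlier drops out
]
print(retain_low_probability(runs, threshold=80))  # True: persisted for 2 runs
```

The analogue for a catastrophe modeller would be treating a heavy tail in one model's output as signal worth carrying into the risk assessment, rather than dismissing it as noise because the other model doesn't show it.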
I certainly feel we can learn from the forecasters’ approach. Per Paul’s comments above, at least in the forecasting world, slavishly following the results from one model may not be optimal in understanding the risk – including the tail. But, of course, cost is often the issue in catastrophe modelling. Do you increase your costs and bring in the benefit of having Model B for a second set of eyes and possibly something in the tail that Model A may not have?
This of course doesn’t invalidate using a single vendor as a global modelling solution: far from it. Such solutions are paramount to a global understanding of a geographically diverse portfolio, but it’s in understanding each peril individually that the “weather forecaster” approach may well be a useful alternative: multiple sets of eyes from multiple vendors, if price permits.
This is where the new era in catastrophe modelling I see unfolding around us makes this “forecaster” approach increasingly feasible. I could have written this article anywhere, but it seems appropriate to write about this on this blog for Simplitium, who are hosting typically lower-cost models via the Oasis framework that might enable us as risk-takers to take on board two, three or four risk models for a particular peril and territory in the future.
Is it time to ditch single-model frameworks and start thinking more like weather forecasters?
About Richard Dixon:
Richard has spent the last 17 years in the insurance industry building and researching catastrophe models at a model vendor and then evaluating them whilst working for brokers and reinsurers. He is now a consultant to the insurance industry at CatInsight, specialising in model evaluation. Most recently he was Head of Catastrophe Research at Hiscox, being responsible for their internal "View of Risk". Prior to working in the insurance industry, he obtained a PhD in meteorology at Reading University, specialising in extra-tropical cyclones. For more information, visit Richard Dixon's blog: www.catinsight.co.uk.
ModEx is the cat risk modelling platform for the (re)insurance industry, delivered by Simplitium and powered by Oasis LMF.
ModEx delivers flexible, scalable and secure cat risk modelling services via a hosted and fully managed environment. The platform provides the (re)insurance industry with a cost-effective and reliable way to meet their cat risk modelling requirements. The ModEx solution caters for the full suite of perils and allows model providers and other (re)insurance systems to meet the individual needs of customers through a robust and secure shared services platform.
The platform is fully independent and transparent, and provides (re)insurance firms with the tools to gain a deeper understanding of their risk by accessing the best models from large and niche model developers from around the world. For further information, please visit www.simplitium.com/modex.