Speaker(s): Mathias Raschke (R+V Versicherung)
In this contribution, statistical aspects of models for aggregated losses from natural catastrophes are discussed in the sense of a scientific peer review. It is shown, with the aid of an example of earthquake models, that important stochastic issues of spatial statistics are not well understood. The misinterpretation of a residual as a random component leads to an overestimation of aggregated event losses according to the principle of area equivalence. Furthermore, the uncertainty quantification (if present) frequently does not consider statistical error propagation. In contrast, the logic-tree approach is applied in some catastrophe models even though this approach has not been validated in a scientific sense. We doubt that the logic-tree approach with subjective weightings is an appropriate tool to quantify an objective estimation error. The third statistical aspect is the overfitting of models. Many natural catastrophe models do not address this issue well: significance tests or statistical model selection are rarely used, and the consequences (larger standard errors) are not recognized. Finally, we consider the opportunities for, and current practice of, statistical testing of loss distributions generated by catastrophe models. The validation of model results frequently does not fulfil statistical requirements, and established statistical tests are not very useful in the case of limited loss data. A recently developed test procedure that overcomes this weakness is presented. To sum up, statistically oriented users of catastrophe models, such as actuaries, should encourage the natural hazard and risk communities, as well as the model vendors, to improve catastrophe models with regard to statistical and stochastic issues.
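The point about overfitting and statistical model selection can be illustrated with a generic sketch that is not taken from the contribution itself: comparing two candidate loss severity models by the Akaike information criterion (AIC), which penalizes extra parameters. The exponential/lognormal choice, the sample size, and the simulated data are purely illustrative assumptions.

```python
import math
import random

def aic(log_likelihood, k):
    # Akaike information criterion: 2k - 2*logL, lower is better
    return 2 * k - 2 * log_likelihood

random.seed(42)
# hypothetical single-event loss sample (arbitrary units), drawn here
# from an exponential severity purely for illustration
losses = [random.expovariate(1.0) for _ in range(30)]
n = len(losses)

# candidate 1: exponential severity, one fitted parameter (MLE rate = 1/mean)
rate = n / sum(losses)
ll_exp = n * math.log(rate) - rate * sum(losses)

# candidate 2: lognormal severity, two fitted parameters (MLE mu, sigma)
logs = [math.log(x) for x in losses]
mu = sum(logs) / n
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
ll_ln = (-sum(logs) - n * math.log(sigma)
         - 0.5 * n * math.log(2 * math.pi) - 0.5 * n)

print(f"AIC exponential (k=1): {aic(ll_exp, 1):.1f}")
print(f"AIC lognormal   (k=2): {aic(ll_ln, 2):.1f}")
# the extra lognormal parameter must buy at least one unit of
# log-likelihood, otherwise AIC favours the simpler model
```

The same logic applies to catastrophe model components: each additional parameter must justify its contribution to the fit, and ignoring this inflates standard errors.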
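The difficulty of testing model-generated loss distributions against limited loss data can also be sketched generically. The following is not the test procedure presented in the contribution, whose details the abstract does not give; it is a standard one-sample Kolmogorov-Smirnov comparison, with an illustrative lognormal model and ten hypothetical annual losses, showing how large the critical value (and hence how low the test's power) becomes for small samples.

```python
import math
import random

def lognorm_cdf(x, mu, sigma):
    # CDF of a lognormal distribution with parameters mu, sigma
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    # one-sample Kolmogorov-Smirnov statistic D_n = sup |F_n - F|
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, f - i / n, (i + 1) / n - f)
    return d

random.seed(1)
# hypothetical record: only 10 annual losses, generated here from a
# heavier-tailed process than the candidate model assumes
observed = [random.lognormvariate(0.0, 1.5) for _ in range(10)]
model_cdf = lambda x: lognorm_cdf(x, 0.0, 1.0)  # candidate model distribution

d = ks_statistic(observed, model_cdf)
# asymptotic 5% critical value approx. 1.36 / sqrt(n)
crit = 1.36 / math.sqrt(len(observed))
print(f"D = {d:.3f}, 5% critical value = {crit:.3f}, reject = {d > crit}")
```

With ten observations the critical value is about 0.43, so even a substantially misspecified model is unlikely to be rejected; this is the weak point of conventional tests that the presented procedure is said to address.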