Techniques for Explainable Artificial Intelligence in Insurance
Artificial intelligence (AI) can already perform important actuarial tasks, such as extracting relevant information from text documents and images for underwriting or claims processing. Since the importance of AI in insurance will continue to grow in the foreseeable future, it is essential that decisions and calculations made with AI remain comprehensible. To this end, we demonstrate Explainable AI (XAI) methods that increase the transparency of AI algorithms and will enrich the work of actuaries in the future. However, like the underlying algorithms they are meant to investigate, XAI methods themselves rely on certain assumptions and can be prone to errors and attacks. We therefore close with an outlook on recent developments toward models with built-in explanations, which do not require an additional model to generate explanations post hoc.
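To make the notion of a post-hoc, model-agnostic XAI method concrete, the following sketch computes permutation feature importance for a toy claims-severity model. Everything here is illustrative and not taken from the paper: the feature names, the synthetic data, and the fixed linear "black box" standing in for a fitted model are all hypothetical assumptions.

```python
import random

random.seed(0)

# Hypothetical toy data: [driver_age, vehicle_age, annual_mileage] -> claim severity.
def true_severity(x):
    age, veh_age, mileage = x
    return 100.0 - 0.5 * age + 2.0 * veh_age + 0.01 * mileage

X = [[random.uniform(18, 80), random.uniform(0, 20), random.uniform(1_000, 30_000)]
     for _ in range(500)]
y = [true_severity(x) + random.gauss(0, 1) for x in X]

# A fixed scoring function standing in for any fitted black-box model.
def model_predict(x):
    return 100.0 - 0.5 * x[0] + 2.0 * x[1] + 0.01 * x[2]

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

baseline = mse(y, [model_predict(x) for x in X])

def permutation_importance(feature_idx, n_repeats=5):
    """Average increase in MSE when one feature column is randomly shuffled.

    A large increase means the model relies heavily on that feature;
    near zero means the feature is unimportant to the model's predictions.
    """
    increases = []
    for _ in range(n_repeats):
        col = [x[feature_idx] for x in X]
        random.shuffle(col)
        X_perm = [x[:feature_idx] + [c] + x[feature_idx + 1:]
                  for x, c in zip(X, col)]
        increases.append(mse(y, [model_predict(x) for x in X_perm]) - baseline)
    return sum(increases) / len(increases)

for i, name in enumerate(["driver_age", "vehicle_age", "annual_mileage"]):
    print(f"{name}: {permutation_importance(i):.2f}")
```

Because the technique only needs predictions, not model internals, the same loop would work unchanged with a gradient-boosted tree or neural network in place of `model_predict`, which is precisely what makes such methods attractive for auditing black-box actuarial models.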