Making Machine Learning Techniques Interpretable for Predictive Modelling

Uploaded: September 24, 2020
Description:

Predictive modelling in insurance has been performed for many years by actuaries with the help of statistical models (e.g. Generalized Linear Models, GLMs). The advantage of statistical models is that the final result (e.g. the multiplicative form of a GLM) is usually easily interpretable, not only by quants but also by non-quants.
Machine learning techniques are now increasingly popular in the insurance industry and have many applications. While advanced techniques (e.g. random forests or neural networks) usually have better predictive power than statistical models, their main drawback is that they are black boxes: their results are difficult to understand and interpret, which does not always provide sufficient comfort to make business decisions.
Fortunately, several techniques have been developed over the past few years to better understand the results of machine learning models.
In this presentation, we will introduce (with a focus on practical use rather than mathematical details) the concepts of feature importance, partial dependence plots (PDP), individual conditional expectation (ICE), Shapley values, the H-statistic for interactions, … and explain how they can be used to extract insights from data in insurance applications (through adequate feature selection, feature engineering and interpretation of results).
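As a brief illustration of how some of these tools look in practice (a minimal sketch, not taken from the presentation itself), the code below uses scikit-learn to compute permutation feature importance and to draw PDP and ICE curves for a random forest; the synthetic dataset and model settings are illustrative assumptions, standing in for real policy-level data.

# Minimal sketch (illustrative, not from the presentation): permutation
# feature importance plus PDP/ICE curves with scikit-learn.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

# Synthetic stand-in for a policy-level dataset (assumption: 5 rating factors).
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature importance via permutation: shuffle one feature at a time and
# measure how much the model's score degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance = {result.importances_mean[i]:.3f}")

# PDP (the average effect of feature 0) and ICE (one curve per observation);
# kind="both" overlays the individual ICE curves on the partial dependence.
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="both")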
These interpretability tools make the use of machine learning techniques much more relevant in insurance, as they allow practitioners to improve predictive power while understanding the drivers of the results, which is fundamental to making sound business decisions.
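For decomposing individual predictions into per-feature drivers, Shapley values are commonly computed with the third-party shap package. The sketch below (an assumption about tooling, not the presenter's own code) refits the same illustrative model as above and produces a global summary of feature contributions.

# Hedged sketch: Shapley values for a tree ensemble via the shap package.
import shap  # third-party library, installed separately
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Same illustrative synthetic data and model as in the previous sketch.
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer is shap's fast exact path for tree-based models; it
# decomposes each prediction into additive per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive predictions, and in which direction.
shap.summary_plot(shap_values, X)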
