Explainable Machine Learning
Machine learning techniques have become increasingly popular in the financial industry, mainly because of:
- their potential to capture complex interactions in the data
- their potential to produce better predictive models than traditional statistical approaches
- their ability to capture non-linear relationships across a range of inputs
Machine learning techniques have been viewed as useful additions to the actuary's modelling toolkit that could enable insurers to process and learn from more data.
Nevertheless, these models, often described as 'black boxes', can be hard to interpret, audit and debug, which in turn makes it harder to trust and act on their predictions.
During this presentation we will introduce techniques (e.g. partial dependence plots (PDP), individual conditional expectation (ICE) curves, accumulated local effects (ALE), interaction measures, Shapley values…) that can be used to better understand and interpret machine learning models and their results, showing why they need not be viewed as 'black box models'.
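To give a flavour of the first two of these techniques, PDP and ICE curves can be computed in a few lines. The sketch below assumes a scikit-learn-style model with a `.predict` method; the data and the `ice_and_pdp` helper are illustrative stand-ins, not part of any specific library.

```python
# A minimal sketch of ICE curves and a partial dependence plot (PDP).
# The data are synthetic stand-ins for e.g. policyholder features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# A known data-generating process: strong linear effect of feature 0,
# non-linear effect of feature 1.
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=500)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def ice_and_pdp(model, X, feature, grid):
    """ICE: one prediction curve per observation as `feature` is swept
    over `grid`; PDP: the pointwise average of those curves."""
    ice = np.empty((X.shape[0], len(grid)))
    for j, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = v          # force the feature to the grid value
        ice[:, j] = model.predict(X_mod)
    return ice, ice.mean(axis=0)       # (ICE curves, PDP curve)

# Sweep feature 0 over its central 90% range to avoid extrapolation.
grid = np.quantile(X[:, 0], np.linspace(0.05, 0.95, 20))
ice, pdp = ice_and_pdp(model, X, feature=0, grid=grid)
```

Plotting `pdp` against `grid` recovers the (roughly linear, increasing) marginal effect of feature 0; the spread of the individual `ice` curves around it reveals heterogeneity and interactions that the average alone would hide.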
We will build on these techniques to identify ways of gaining sufficient comfort in a model to support business decisions and to explain its impact to stakeholders.
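Shapley values, one of the techniques mentioned above, can also be computed exactly on a toy model by enumerating feature coalitions. The model, baseline and `shapley_values` helper below are purely illustrative (exact enumeration is only feasible for a handful of features; practical tools approximate it).

```python
# Exact Shapley values for a tiny model, by enumerating all coalitions.
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Shapley value of each feature: its average marginal contribution
    over all orderings, via the standard coalition-weighted sum."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [p for p in range(n_features) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = (factorial(r) * factorial(n_features - r - 1)
                     / factorial(n_features))
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy model f(x) = x0 + 2*x1 + x0*x2, explained at x = (1, 1, 1) with
# baseline 0: features outside the coalition are set to the baseline.
x = [1.0, 1.0, 1.0]
def value_fn(S):
    z = [x[i] if i in S else 0.0 for i in range(3)]
    return z[0] + 2 * z[1] + z[0] * z[2]

phi = shapley_values(value_fn, 3)
```

The additive feature 1 gets exactly its coefficient (phi = 2), while the x0*x2 interaction is split equally between features 0 and 2, and the contributions sum to the prediction minus the baseline — the 'efficiency' property that makes Shapley values attractive for attribution.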