Responsible ML with Insurance Applications

Welcome to our lecture. It covers the following main topics:

  • Statistical learning, model comparison, and calibration assessment (Christian)
  • Explainability (Michael)

From time to time, we will update the material linked below. You can also clone the repository:

    git clone https://github.com/lorentzenchr/responsible_ml_material.git

Christian's Material

Slides

Slides (pdf)

Main reference

Tobias Fissler, Christian Lorentzen, and Michael Mayer. “Model Comparison and Calibration Assessment: User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice”. arXiv preprint (2022). doi: 10.48550/ARXIV.2202.12780.

Python and R code for the tutorial

Michael's Material

Slides

Slides XAI (pdf)

Lecture notes

Note that the Python and R outputs differ.

Python notebooks (ipynb)

  1. Introduction
  2. Explaining Models
  3. Improving Explainability

R output (HTML)

  1. Introduction
  2. Explaining Models
  3. Improving Explainability

Setup

  • Python: We use Python 3.11 and the packages specified here.
  • R: We use R 4.3 and up-to-date versions of tidyverse, lubridate, splitTools, withr, caret, mgcv, ranger, lightgbm, xgboost, MetricsWeighted, hstats, shapviz, patchwork, OpenML, farff, insuranceData, keras. For visualizing neural nets, we also need the GitHub package "deepviz". Follow these instructions on installing keras with TensorFlow.
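
For readers setting this up locally, the steps above can be sketched as a shell session. This is a minimal sketch, not an official install script: the requirements filename and the "andrie/deepviz" GitHub path are assumptions not confirmed by this README, so check the linked package specification and instructions first.

```shell
# Clone the course repository (URL from this README).
git clone https://github.com/lorentzenchr/responsible_ml_material.git
cd responsible_ml_material

# Python: create an isolated Python 3.11 environment and install the
# pinned packages. The filename "requirements.txt" is an assumption;
# the README links the actual package specification.
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# R: install the CRAN packages listed above from within R.
Rscript -e 'install.packages(c("tidyverse", "lubridate", "splitTools",
  "withr", "caret", "mgcv", "ranger", "lightgbm", "xgboost",
  "MetricsWeighted", "hstats", "shapviz", "patchwork", "OpenML",
  "farff", "insuranceData", "keras"))'

# "deepviz" is a GitHub-only package; the repository path below is an
# assumption, so verify it before installing.
Rscript -e 'remotes::install_github("andrie/deepviz")'
```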

Additional Literature

Model evaluation and scoring functions

Explainability

  • C. Lorentzen and M. Mayer. “Peeking into the Black Box: An Actuarial Case Study for Interpretable Machine Learning”. In: SSRN Manuscript ID 3595944 (2020). doi: 10.2139/ssrn.3595944.
  • M. Mayer, D. Meier, and M. V. Wüthrich. “SHAP for Actuaries: Explain Any Model”. In: SSRN Manuscript ID 4389797 (2023). doi: 10.2139/ssrn.4389797.
  • Christoph Molnar. Interpretable Machine Learning. 1st ed. Raleigh, North Carolina: Lulu.com, 2019. isbn: 978-0-244-76852-2. url: https://christophm.github.io/interpretable-ml-book.

Books on responsible ML or AI

  • Alyssa Simpson Rochwerger and Wilson Pang. Real World AI: A Practical Guide for Responsible Machine Learning. Lioncrest Publishing, 2021
  • Patrick Hall, James Curtis, and Parul Pandey. Machine Learning for High-Risk Applications. O’Reilly Media, Inc., 2022