
Responsible Use Guidelines for Explainable Machine Learning

A proposal for a 180-minute hands-on tutorial at ACM FAT* 2020, Barcelona, Spain.

All tutorial code and materials are available here: https://github.com/h2oai/xai_guidelines. All materials may be reused and repurposed, even for commercial applications, with proper attribution of the authors.

For the tutorial outline, please see: responsible_xai.pdf.

To use the code examples for this tutorial:

  1. Navigate to https://aquarium.h2o.ai.
  2. Click Create a new account below the login form and follow the Aquarium instructions to create a new account.
  3. Check the registered email inbox and use the temporary password sent there to log in to Aquarium.
  4. Click Browse Labs in the upper left.
  5. Find Open Source MLI Workshop and click View Details.
  6. Click Start Lab and wait for several minutes as a cloud server is provisioned for you.
  7. Once your server is ready, click on the Jupyter URL at the bottom of your screen.
  8. Enter the token h2o in the Password or token text box on the Jupyter login page.
  9. Click the xai_guidelines folder. (For those interested, the patrick_hall_mli folder contains resources from a 2018 FAT* tutorial.)
  10. You now have access to the tutorial materials. You may browse them at your own pace or wait for instructions, and you can return to them at any time using your Aquarium login.
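The notebooks in the xai_guidelines folder cover post-hoc explanation techniques such as partial dependence. As a rough illustration of the idea (this is not the tutorial's actual code, and the model and data below are invented for the sketch), one-dimensional partial dependence can be computed with nothing but the standard library:

```python
# Illustrative sketch only: partial dependence of a toy model on one feature.
# The "model" stands in for any fitted predictor; data is a hypothetical sample.

def model(x1, x2):
    # Hypothetical fitted model with an interaction between x1 and x2.
    return 2.0 * x1 + 0.5 * x1 * x2

# Observed (x1, x2) pairs standing in for a training sample.
data = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.0), (3.0, 1.0)]

def partial_dependence(grid_value):
    # Fix x1 at grid_value, average predictions over the observed x2 values.
    preds = [model(grid_value, x2) for _, x2 in data]
    return sum(preds) / len(preds)

# Trace out the partial dependence curve for x1 over a small grid.
curve = [(v, partial_dependence(v)) for v in (0.0, 1.0, 2.0, 3.0)]
```

Plotting `curve` shows how the model's average prediction changes as the feature of interest varies, which is the intuition the tutorial notebooks build on with real models and libraries.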

To view preliminary example code:

Preliminary tutorial slides: Guidelines for Responsible Explainable ML

Tutorial Instructors:

Patrick Hall: Patrick Hall is senior director for data science products at H2O.ai, where he focuses on increasing trust and understanding in machine learning through interpretable models, post-hoc explanations, model debugging, and bias testing and remediation. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and research and development roles at SAS Institute. Find out more about Patrick on GitHub, LinkedIn, or Twitter.

Navdeep Gill: Navdeep Gill is a senior data scientist and engineer at H2O.ai. Navdeep is a founding member of the interpretability team at H2O.ai and has worked on various other projects there, including the open source h2o, automl, and h2o4gpu machine learning libraries. Before joining H2O.ai, Navdeep worked at Cisco, focusing on data science and software development, and before that he was a neuroscience researcher. Find out more about Navdeep on GitHub, LinkedIn, or Twitter.

Nick Schmidt: Nick Schmidt is the director of the AI Practice at BLDS, a leading fair-lending advisory firm. At BLDS, Nick concentrates on creating real-world ethical AI systems for some of the largest financial institutions in the world. Prior to BLDS, Nick worked as an analyst and consultant at several well-respected economic and financial firms. Find out more about Nick on LinkedIn.
