This repository contains the data, code, and figures for the paper: AUTOLYCUS: Exploiting Explainable Artificial Intelligence (XAI) for Model Extraction Attacks against Interpretable Models (to be published in PETS 2024).
- You can experiment with the proposed attack using the provided Jupyter notebooks (.ipynb files); a minimal explainer-query sketch appears after this list.
- Datasets can be loaded directly in the code or added manually to the 'data' folder. Make sure they follow a format similar to the existing datasets so they remain compatible with the explainers (see the loading sketch after this list).
- Some experimental result plots from the development stage can be found in 'LIME/plots' and 'SHAP/plots'.
- 'requirements.txt' lists the libraries and the version numbers used in our work.
- 'packages_full.txt' lists all packages in the environment in which we conducted our experiments.
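The exact column layout expected by the notebooks depends on the dataset files shipped in 'data'; as a rough, hypothetical illustration (the file name 'example_dataset.csv' and the 'label' column are placeholders, not files from this repository), a tabular CSV with named feature columns and a single label column is the kind of format that works with tabular explainers:

```python
import pandas as pd

# Hypothetical layout: feature columns followed by a 'label' column.
# Adjust the path and column names to match the files actually in 'data/'.
df = pd.read_csv("data/example_dataset.csv")
X = df.drop(columns=["label"])   # feature matrix
y = df["label"]                  # class labels
feature_names = list(X.columns)
class_names = sorted(y.astype(str).unique())
```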
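As a minimal sketch of the kind of explainer query the notebooks build on (not the paper's actual attack), the snippet below trains a small interpretable stand-in model and asks LIME for a local explanation of one prediction; the iris dataset and decision tree are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in "target" model: a small interpretable classifier.
data = load_iris()
X, y = data.data, data.target
target = DecisionTreeClassifier(max_depth=3).fit(X, y)

# LIME explainer over the same tabular feature space.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# One query: the model's prediction plus a local explanation for a single sample.
exp = explainer.explain_instance(X[0], target.predict_proba, num_features=4)
print(target.predict(X[:1]), exp.as_list())
```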
We welcome suggestions for streamlining the code and improving the attack. While standardizing the code, we noticed a number of bugs and runtime issues, which we hope to resolve in time. For further questions or suggestions about the code, email abdullahcaglar.oksuz@case.edu.