Code for the paper "Can we trust LLM Self-Explanations for Entity Resolution?".
To install ELLMER locally, run:

```shell
pip install .
```
To replicate the experiments, first download the DeepMatcher datasets to your local disk, then run the Python eval script.
You can choose the LLM via the `--model_type` parameter:

- `--model_type azure_openai` for OpenAI models deployed on Azure
- `--model_type llama2` for a local Llama2-13B model
- `--model_type falcon` for a local Falcon model
You can choose how many samples the evaluation should account for (`--samples` parameter) and the explanation granularity (`--granularity` parameter, accepted values are `token` and `attribute`).
You can choose one or more datasets for the evaluation (`--datasets` parameter) by passing the names of one or more directories inside `base_dir`.
```shell
python scripts/eval.py --base_dir path/to/deepmatcher_datasets --model_type azure_openai --datasets beers --samples 5 --granularity token
```
Other optional parameters can be specified in the script.
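To make the two granularity options concrete, here is a minimal illustrative sketch (not ELLMER code, and the record shown is invented) of the difference between attribute-level and token-level explanation units for an entity-resolution record:

```python
# Illustrative sketch only: how explanation units differ by granularity.
# A record from a hypothetical "beers" dataset, as attribute -> value pairs.
record = {"name": "Pale Ale", "brewery": "Sierra Nevada", "abv": "5.6"}

# Attribute granularity: one explanation unit per attribute.
attribute_units = list(record.keys())

# Token granularity: one explanation unit per whitespace token in the values.
token_units = [tok for value in record.values() for tok in value.split()]

print(attribute_units)  # ['name', 'brewery', 'abv']
print(token_units)      # ['Pale', 'Ale', 'Sierra', 'Nevada', '5.6']
```

With `--granularity attribute`, the LLM's self-explanation is scored over whole attributes; with `--granularity token`, it is scored over individual tokens.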
If you extend or use this work, please cite:
```bibtex
@article{teofili2024ellmer,
  title={Can we trust LLM Self-Explanations for Entity Resolution?},
  author={Teofili, Tommaso and Firmani, Donatella and Koudas, Nick and Merialdo, Paolo and Srivastava, Divesh},
  year={2024}
}
```