📢 Mar. 29, 2024: We have released the ConspEmoLLM and ConspLLM models and the code!
You can use the models in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map='auto')
```
Then construct your input following the prompts in the paper, and generate the predictions:

```python
# Tokenize the prompt (use the prompt templates from the paper).
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

generate_ids = model.generate(inputs["input_ids"], max_length=256)
response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
print(response)
```
Batch inference. The input data format needs to follow data/test.json.

```bash
bash src/run_inference.sh
```
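If you prefer to drive batch inference from Python rather than the shell script, the loop below is a minimal sketch. The `"text"` field name and the batch size are assumptions for illustration; check data/test.json for the actual schema.

```python
import json

def load_prompts(path):
    # Read the test set; the "text" key is an assumption --
    # inspect data/test.json for the actual field name.
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return [r["text"] for r in records]

def batched(items, batch_size):
    # Yield successive chunks of at most batch_size prompts.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Sketch of the generation loop (model/tokenizer loaded as shown above):
# for batch in batched(load_prompts("data/test.json"), 8):
#     inputs = tokenizer(batch, return_tensors="pt", padding=True).to(model.device)
#     outputs = model.generate(inputs["input_ids"], max_length=256)
#     print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```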
Fine-tuning:

```bash
bash src/run_sft.sh
```
ConspEmoLLM and ConspLLM are licensed under [MIT]. Please see the MIT file for more details.
- Raw COCO data
- Raw LOCOAnnotations
If you use the ConspEmoLLM series in your work, please cite our paper:
```bibtex
@article{liu2024conspemollm,
  title={ConspEmoLLM: Conspiracy Theory Detection Using an Emotion-Based Large Language Model},
  author={Liu, Zhiwei and Liu, Boyang and Thompson, Paul and Yang, Kailai and Jain, Raghav and Ananiadou, Sophia},
  journal={arXiv preprint arXiv:2403.06765},
  year={2024}
}
```