Repository for the paper "Alleviating Contextual Misguidance: Response-Aware Prompt Compression for Long-Context Question Answering".
Because tokenizers differ across models, we provide separate code for each model (Llama3.1-8B and Qwen2.5-7B).
To run the baselines from the paper, please refer to /codes. For example, to run LOOC with Qwen2.5-7B on LITM:
python LOOC/codes/LOOC_get_corpus_qwen.py --model-path your/hf/model/path --data-path your/input/data/path --save-path your/save/path
This produces: 1. importance scores for each segment in the prompt, and 2. compressed corpora under different ranking orders.
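To illustrate how the two outputs relate, here is a minimal sketch: given per-segment importance scores, keep the highest-scoring segments within a budget, then emit the kept segments in different ranking orders. All function and variable names here are illustrative, not the repository's actual API, and word count stands in for real tokenization.

```python
# Hypothetical sketch: compress a segmented prompt using importance
# scores, then arrange the surviving segments in a chosen ranking order.
# Word count is a crude stand-in for a model tokenizer.

def compress(segments, scores, budget):
    """Keep the highest-scoring segments until the word budget is exhausted."""
    ranked = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)
    kept, used = [], 0
    for i in ranked:
        cost = len(segments[i].split())  # proxy for token count
        if used + cost <= budget:
            kept.append(i)
            used += cost
    return kept

def order(kept, scores, mode="original"):
    """Arrange kept segments: original position, or descending/ascending score."""
    if mode == "original":
        return sorted(kept)
    return sorted(kept, key=lambda i: scores[i], reverse=(mode == "descending"))

segments = ["The capital of France is Paris.",
            "Bananas are rich in potassium.",
            "Paris hosted the 1900 Olympics."]
scores = [0.9, 0.1, 0.6]
kept = compress(segments, scores, budget=12)
print([segments[i] for i in order(kept, scores, "descending")])
```

Varying the `mode` argument is what yields the "different ranking orders" of the compressed corpus mentioned above.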
Prompts and the inference results can be downloaded here.
For LITM, data are generated following https://github.com/nelson-liu/lost-in-the-middle. Please follow their repository to generate your own long-context corpus.
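As a rough sketch of what the lost-in-the-middle generation produces, each example places the single gold (answer-bearing) document at varying positions among distractor documents, giving one long-context prompt per position. The function and field names below are illustrative assumptions, not the upstream repository's code.

```python
# Hypothetical sketch of LITM-style data generation: insert the gold
# document at each requested position among fixed distractors.

def build_contexts(gold, distractors, positions):
    """Return one document list per requested gold-document position."""
    contexts = []
    for pos in positions:
        docs = list(distractors)
        docs.insert(pos, gold)  # gold lands at index `pos`
        contexts.append(docs)
    return contexts

gold = "Doc G: the answer is here."
distractors = [f"Doc {i}: irrelevant text." for i in range(4)]
for ctx in build_contexts(gold, distractors, positions=[0, 2, 4]):
    print(ctx.index(gold), len(ctx))
```

Sweeping the gold position from the start to the end of the context is what exposes the "lost in the middle" effect the benchmark measures.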
For LongBench, data and templates are downloaded from https://github.com/THUDM/LongBench.