Dong Yan1,2, Jian Liang1,2*, Ran He1,2, Tieniu Tan1,2,3
1School of Artificial Intelligence, University of Chinese Academy of Sciences
2NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences
3Nanjing University
yandong2025@ia.ac.cn, liangjian92@gmail.com
- [2026/04] The code of TRACE-RPS was released!
- [2026/02] Code is under preparation. Stay tuned!
- [2026/01] TRACE-RPS paper is accepted to ICLR 2026!
Recent studies have shown that large language models (LLMs) can infer private user attributes (e.g., age, location, gender) from user-generated text shared online, enabling rapid and large-scale privacy breaches. Existing anonymization-based defenses are coarse-grained, lacking word-level precision in anonymizing privacy-leaking elements. Moreover, they are inherently limited: even after user text is altered to hide sensitive cues, attribute inference can still occur through a model's reasoning capabilities. To address these limitations, we propose a unified defense framework that combines fine-grained anonymization (TRACE) with inference-preventing optimization (RPS). TRACE leverages attention mechanisms and inference chain generation to identify and anonymize privacy-leaking textual elements, while RPS employs a lightweight two-stage optimization strategy to induce model rejection behaviors, thereby preventing attribute inference. Evaluations across diverse LLMs show that TRACE-RPS reduces attribute inference accuracy from around 50% to below 5% on open-source models. In addition, our approach offers strong cross-model generalization, robustness to prompt variations, and favorable utility-privacy tradeoffs.
The table below summarizes our main evaluations:
Create a conda environment and install the dependencies:

```shell
git clone https://github.com/Jasper-Yan/TRACE-RPS.git
cd TRACE-RPS
conda create -n trace_rps python=3.10
conda activate trace_rps
pip install -r requirements.txt
```

Before running TRACE or inference with closed-source models, fill in your API settings in `credentials.py`.
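The exact field names in `credentials.py` depend on the repo; the sketch below only illustrates the kind of settings to fill in (the variable names here are hypothetical, not the file's actual contents):

```python
# credentials.py -- illustrative layout only; the field names in the repo's
# actual credentials.py may differ. Fill in your own values before running
# TRACE or inference with closed-source models (e.g., GPT-4o).
OPENAI_API_KEY = "sk-..."                       # hypothetical: your API key
OPENAI_API_BASE = "https://api.openai.com/v1"   # hypothetical: API endpoint
```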
Please check the `task_config.path` and `task_config.outpath` fields in the YAML files under `configs/reddit/` before running.
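A small pre-flight check can catch misconfigured paths before a long run starts. The helper below is a sketch, not part of the repo: it assumes the YAML parses to a dict whose `task_config` mapping contains `path` and `outpath` keys, as the field names above suggest.

```python
# Sketch of a pre-flight check for task_config.path / task_config.outpath.
# Assumes the parsed YAML yields a dict like {"path": ..., "outpath": ...}
# for the task_config section; adjust to the real config structure.
from pathlib import Path


def check_task_config(task_config: dict) -> list[str]:
    """Return a list of problems with the path fields; an empty list means OK."""
    problems = []
    in_path = task_config.get("path")
    out_path = task_config.get("outpath")
    if not in_path:
        problems.append("task_config.path is not set")
    elif not Path(in_path).exists():
        problems.append(f"task_config.path does not exist: {in_path}")
    if not out_path:
        problems.append("task_config.outpath is not set")
    return problems
```

Running this on the parsed config before launching `trace.py` or `inference.py` fails fast instead of partway through a run.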
Run TRACE anonymization:

```shell
python anonymization/trace.py --config_path configs/reddit/defense/trace_synthpai.yaml
```

Run RPS:

```shell
python rps/rps.py --config_path configs/reddit/defense/rps.yaml
```

Run attribute inference:

```shell
python inference.py --config_path configs/reddit/inference_synthpai/llama3_8b.yaml
```

You can also switch to other configs in the same folders (e.g., GPT-4o, LLaMA2).
Run the evaluation:

```shell
python inference.py --config_path configs/reddit/eval/reddit_eval.yaml
```

This work builds on llmprivacy and llm-adaptive-attacks. We sincerely thank the authors and contributors of these excellent open-source projects.
If you find our work helpful, please consider citing:
```bibtex
@article{yan2026stop,
  title={Stop Tracking Me! Proactive Defense Against Attribute Inference Attack in LLMs},
  author={Yan, Dong and Liang, Jian and He, Ran and Tan, Tieniu},
  journal={arXiv preprint arXiv:2602.11528},
  year={2026}
}
```

This project is licensed under the MIT License.