
Differentially-Private-Fine-tuning-of-Language-Models

This repo contains the code for 'Differentially Private Fine-tuning of Language Models', published as a conference paper at ICLR 2022. Please find the instructions in the subfolders.
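For readers unfamiliar with the general setup, the sketch below shows what differentially private fine-tuning of a language model looks like with DP-SGD (per-example gradient clipping plus Gaussian noise). This is not the code from this repository; the choice of the Opacus library, the model, and the hyperparameters are illustrative assumptions only, and the authors' actual implementations live in the subfolders.

```python
# Minimal, generic DP-SGD fine-tuning sketch (NOT this repo's code).
# Assumptions: Opacus for per-example gradient clipping + Gaussian noise,
# a small RoBERTa classifier, and toy random data so the loop runs end to end.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification
from opacus import PrivacyEngine

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy data: random token ids and labels, only to make the example self-contained.
input_ids = torch.randint(0, 50000, (64, 128))
labels = torch.randint(0, 2, (64,))
dataloader = DataLoader(TensorDataset(input_ids, labels), batch_size=8)

# Wrap model/optimizer/dataloader so each step clips per-example gradients
# and adds calibrated Gaussian noise before the parameter update.
privacy_engine = PrivacyEngine()
model, optimizer, dataloader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=dataloader,
    noise_multiplier=1.0,  # noise scale (illustrative value)
    max_grad_norm=1.0,     # per-example clipping threshold (illustrative value)
)

model.train()
for batch_ids, batch_labels in dataloader:
    optimizer.zero_grad()
    out = model(input_ids=batch_ids, labels=batch_labels)
    out.loss.backward()
    optimizer.step()

# Report the privacy budget spent so far at a chosen delta.
print(f"epsilon spent: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

In practice the noise multiplier, clipping norm, and number of steps are chosen to meet a target (epsilon, delta) privacy budget; refer to the subfolders for the configurations used in the paper.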

Contact

Please feel free to raise any issues. The authors' contact information can be found here. Feel free to drop us an email if you have any questions (yuda3@mail2.sysu.edu.cn for questions about language understanding).

Citation

@inproceedings{yu2022differentially,
  title={Differentially private fine-tuning of language models},
  author={Yu, Da and Naik, Saurabh and Backurs, Arturs and Gopi, Sivakanth and Inan, Huseyin A and Kamath, Gautam and Kulkarni, Janardhan and Lee, Yin Tat and Manoel, Andre and Wutschitz, Lukas and others},
  year={2022},
  booktitle={International Conference on Learning Representations (ICLR)}
}
