18 changes: 9 additions & 9 deletions README.md
@@ -23,6 +23,13 @@ This code release is aimed at two target audiences:
2. Differential Privacy researchers will find this easy to experiment and tinker
with, allowing them to focus on what matters.


## Latest updates

2024-12-18: We updated this [tutorial](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb) to show how [LoRA](https://arxiv.org/abs/2106.09685) and the [peft](https://huggingface.co/docs/peft/en/index) library can be used in conjunction with DP-SGD.
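
For orientation, here is a minimal sketch of that combination, assuming a standard Hugging Face model and toy data; the LoRA hyperparameters, `target_modules`, and DP settings are illustrative assumptions rather than values taken from the tutorial:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification
from opacus import PrivacyEngine

# Wrap a pretrained encoder with LoRA adapters; the base weights are frozen
# and only the small adapter matrices (plus the classification head) train.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
lora_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.0,
                         target_modules=["query", "value"])
model = get_peft_model(model, lora_config)

# Only the trainable parameters go to the optimizer.
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Toy batch of token ids so the sketch is self-contained; use a real tokenized dataset.
dataset = TensorDataset(torch.randint(0, 1000, (32, 16)), torch.randint(0, 2, (32,)))
train_loader = DataLoader(dataset, batch_size=8)

# DP-SGD: per-sample clipping and noise are applied to the trainable parameters only.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)
# ...then train as usual with the returned model, optimizer, and loader.
```

Because only the adapter weights are clipped and noised, far fewer parameters are touched than in full fine-tuning, which is the main appeal of pairing LoRA with DP-SGD.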

2024-08-20: We introduced [Fast Gradient Clipping](https://arxiv.org/abs/2009.03106) and [Ghost Clipping](https://arxiv.org/abs/2110.05679) to Opacus, significantly reducing the memory requirements of DP-SGD. Please refer to our [blogpost](https://pytorch.org/blog/clipping-in-opacus/) for more information.
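
For a sense of what the new mode looks like in code, here is a rough sketch based on that blogpost; the extra `criterion` argument, `grad_sample_mode="ghost"`, and the four-value return are assumptions taken from the post and should be checked against the current API:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and data so the sketch is self-contained.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
train_loader = DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
# Passing the loss and grad_sample_mode="ghost" activates Ghost Clipping,
# so per-sample gradients are never materialized in memory.
model, optimizer, criterion, train_loader = privacy_engine.make_private(  # return order per the blogpost
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    criterion=criterion,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)

for x, y in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)  # the wrapped loss drives the two backward passes internally
    loss.backward()
    optimizer.step()
```

The key point is that per-sample gradient norms are computed without storing per-sample gradients, so peak memory stays close to that of non-private training.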

## Installation

The latest release of Opacus can be installed via `pip`:
@@ -76,23 +83,16 @@ shows an end-to-end run using Opacus. The
[examples](https://github.com/pytorch/opacus/tree/main/examples/) folder
contains more such examples.

### Migrating to 1.0

Opacus 1.0 introduced many improvements to the library, but also some breaking
changes. If you've been using Opacus 0.x and want to update to the latest
release, please use this
[Migration Guide](https://github.com/pytorch/opacus/blob/main/Migration_Guide.md)

## Learn more

### Interactive tutorials

We've built a series of IPython-based tutorials as a gentle introduction to
training models with privacy and using various Opacus features.

- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
- [Building an Image Classifier with Differential Privacy](https://github.com/pytorch/opacus/blob/main/tutorials/building_image_classifier.ipynb)
- [Training a differentially private LSTM model for name classification](https://github.com/pytorch/opacus/blob/main/tutorials/building_lstm_name_classifier.ipynb)
- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
- [Opacus Guide: Introduction to advanced features](https://github.com/pytorch/opacus/blob/main/tutorials/intro_to_advanced_features.ipynb)
- [Opacus Guide: Grad samplers](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_grad_sampler.ipynb)
- [Opacus Guide: Module Validator and Fixer](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_module_validator.ipynb)
@@ -119,12 +119,12 @@ Consider citing the report if you use Opacus in your papers, as follows:
If you want to learn more about DP-SGD and related topics, check out our series
of blogposts and talks:

- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
- [Differential Privacy Series Part 1 | DP-SGD Algorithm Explained](https://medium.com/pytorch/differential-privacy-series-part-1-dp-sgd-algorithm-explained-12512c3959a3)
- [Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus](https://medium.com/pytorch/differential-privacy-series-part-2-efficient-per-sample-gradient-computation-in-opacus-5bf4031d9e22)
- [PriCon 2020 Tutorial: Differentially Private Model Training with Opacus](https://www.youtube.com/watch?v=MWPwofiQMdE&list=PLUNOsx6Az_ZGKQd_p4StdZRFQkCBwnaY6&index=52)
- [Differential Privacy on PyTorch | PyTorch Developer Day 2020](https://www.youtube.com/watch?v=l6fbl2CBnq0)
- [Opacus v1.0 Highlights | PyTorch Developer Day 2021](https://www.youtube.com/watch?v=U1mszp8lzUI)
- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)

## FAQ

6 changes: 3 additions & 3 deletions docs/faq.md
@@ -13,8 +13,8 @@ Yes! Opacus is open-source for public use, and it is licensed under the [Apache

## How can I report a bug or ask a question?

You can report bugs by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`.
You can report bugs or ask questions by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
<!-- You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`. -->

## I'd like to contribute to Opacus. How can I do that?

@@ -76,7 +76,7 @@ If these interventions don’t help (or the model starts to converge but its pri

## How to deal with out-of-memory errors?

Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with batch size of 1 at least. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial, and we built `virtual_step` to make this possible while still memory efficient (see *what is virtual batch size* in these FAQs).
Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with a batch size of at least 1. Then, check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean you still train with a small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial. To this end, we built [Fast Gradient Clipping](https://pytorch.org/blog/clipping-in-opacus/) and `virtual_step` (see *what is virtual batch size* in these FAQs) to make DP-SGD memory efficient.
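
For illustration, here is a short sketch of capping the physical batch size with `BatchMemoryManager`, the Opacus 1.x counterpart of `virtual_step`; the toy model and the hyperparameters are placeholders:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.utils.batch_memory_manager import BatchMemoryManager

model = nn.Linear(16, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
train_loader = DataLoader(dataset, batch_size=64)  # logical batch size seen by the accountant

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=train_loader,
    noise_multiplier=1.0, max_grad_norm=1.0,
)

# At most 16 samples are resident at once; optimizer.step() only becomes a real
# update once a full logical batch of 64 has been accumulated.
with BatchMemoryManager(
    data_loader=train_loader, max_physical_batch_size=16, optimizer=optimizer
) as memory_safe_loader:
    for x, y in memory_safe_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

The privacy guarantee is unchanged because the accountant still sees the logical batch size; only the peak memory drops.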

## What does epsilon=1.1 really mean? How about delta?

2 changes: 1 addition & 1 deletion tutorials/README.md
@@ -1,5 +1,5 @@
# Tutorials
This folder contains multiple tutorials to get you started on training differentially private models!
This folder contains multiple tutorials to get you started on training differentially private models! We recommend `building_text_classifier.ipynb` to experiment with the latest Opacus features such as Fast Gradient Clipping, LoRA, and fine-tuning Hugging Face Transformers.

Note that you may not have all the required packages. You can install Opacus's dev version, which will
bring in all the packages required by these tutorials:
23 changes: 12 additions & 11 deletions website/pages/tutorials/index.js
@@ -20,7 +20,9 @@ const React = require('react');

const CWD = process.cwd();

const CompLibrary = require(`${CWD}/node_modules/docusaurus/lib/core/CompLibrary.js`);
const CompLibrary = require(
`${CWD}/node_modules/docusaurus/lib/core/CompLibrary.js`,
);
const Container = CompLibrary.Container;
const MarkdownBlock = CompLibrary.MarkdownBlock;

@@ -69,7 +71,8 @@ class TutorialHome extends React.Component {
<a
href="https://bit.ly/per-sample-gradient-computing-opacus-layers"
target="_blank">
Efficient Per-Sample Gradient Computation for More Layers in Opacus
Efficient Per-Sample Gradient Computation for More Layers in
Opacus
</a>
</li>
<li>
@@ -81,13 +84,18 @@
</li>
</ol>
<h4>Videos*</h4>
<p>* Note that Opacus API has changed over time and some of the code samples and demos in the videos may not work. The concepts presented in the videos though are concrete and still valid.</p>
<p>
* Note that the Opacus API has changed over time and some of the
code samples and demos in the videos may not work. The concepts
presented in the videos, however, remain valid.
</p>
<ol>
<li>
<a
href="https://www.youtube.com/watch?v=U1mszp8lzUI"
target="_blank">
PyTorch Developer Day 2021: Fast and Flexible Differential Privacy Framework for PyTorch
PyTorch Developer Day 2021: Fast and Flexible Differential
Privacy Framework for PyTorch
</a>
</li>
<li>
@@ -114,13 +122,6 @@
Differentially Private Deep Learning In 20 Lines Of Code
</a>
</li>
<li>
<a
href="https://blog.openmined.org/pysyft-opacus-federated-learning-with-differential-privacy/"
target="_blank">
PySyft + Opacus: Federated Learning With Differential Privacy
</a>
</li>
</ol>
</div>
</Container>
@@ -0,0 +1,5 @@
GradSampleModuleFastGradientClipping
====================================

.. automodule:: opacus.grad_sample.grad_sample_module_fast_gradient_clipping
:members:
1 change: 1 addition & 0 deletions website/sphinx/source/index.rst
@@ -13,6 +13,7 @@ Opacus API Reference

privacy_engine
grad_sample_module
grad_sample_module_fast_gradient_clipping
optim/optimizers
data_loader
accounting/accounting
@@ -0,0 +1,5 @@
DistributedDPOptimizerFastGradientClipping
==========================================

.. automodule:: opacus.optimizers.ddpoptimizer_fast_gradient_clipping
:members:
@@ -0,0 +1,5 @@
DPOptimizerFastGradientClipping
===============================

.. automodule:: opacus.optimizers.optimizer_fast_gradient_clipping
:members:
3 changes: 2 additions & 1 deletion website/sphinx/source/optim/optimizers.rst
@@ -3,7 +3,8 @@ Optimizers
.. toctree::

dp_optimizer
dp_optimizer_fast_gradient_clipping
dp_per_layer_optimizer
dp_ddp_optimizer
dp_ddp_optimizer_fast_gradient_clipping
dp_ddp_per_layer_optimizer

5 changes: 5 additions & 0 deletions website/sphinx/source/utils/fast_gradient_clipping_utils.rst
@@ -0,0 +1,5 @@
Fast Gradient Clipping Utils
============================

.. automodule:: opacus.utils.fast_gradient_clipping_utils
:members:
1 change: 1 addition & 0 deletions website/sphinx/source/utils/utils.rst
@@ -6,3 +6,4 @@ Utils
tensor_utils
packed_sequences
uniform_sampler
fast_gradient_clipping_utils