Running the code above prints one value, but if you run it again you get a different value. What is going on here?
Every time the pipeline is run, [`torch.randn`](https://pytorch.org/docs/stable/generated/torch.randn.html) uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time.
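At the tensor level, a minimal sketch in plain PyTorch (independent of any pipeline) shows the same behavior: `torch.randn` returns different values on every call unless you fix the random state, for example with a seeded `torch.Generator`:

```python
import torch

# each call samples fresh Gaussian noise from an uncontrolled random state
print(torch.randn(2))
print(torch.randn(2))  # different values

# a Generator is an explicit random state; seeding it makes the noise reproducible
generator = torch.Generator(device="cpu").manual_seed(0)
print(torch.randn(2, generator=generator))

# re-seeding the same Generator reproduces exactly the same values
generator.manual_seed(0)
print(torch.randn(2, generator=generator))
```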
<Tip>
💡 It might be a bit unintuitive at first to pass `Generator` objects to the pipeline instead of just integer values representing the seed, but this is the recommended design when dealing with probabilistic models in PyTorch, as `Generator`s are *random states* that can be passed to multiple pipelines in a sequence.
</Tip>
### GPU
Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed, because matrix multiplication, which diffusion pipelines require a lot of, is less deterministic on a GPU than on a CPU. For example, if you run the same code example above on a GPU:
```python
import torch
import numpy as np
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler, and move the pipeline to the GPU
ddim = DDIMPipeline.from_pretrained(model_id)
ddim.to("cuda")

# create a GPU generator with a fixed seed
generator = torch.Generator(device="cuda").manual_seed(0)

# run the pipeline for just two steps and return a numpy array
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```
The result is not the same even though you're using an identical seed because the GPU uses a different random number generator than the CPU.
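You can see this directly with a minimal sketch in plain PyTorch (assuming a CUDA device is available): the same seed drives two different random number streams on the CPU and the GPU.

```python
import torch

# identical seeds, but two different RNG implementations
cpu_generator = torch.Generator(device="cpu").manual_seed(0)
print(torch.randn(3, generator=cpu_generator))

if torch.cuda.is_available():
    cuda_generator = torch.Generator(device="cuda").manual_seed(0)
    # sampling directly on the GPU yields different values despite the identical seed
    print(torch.randn(3, generator=cuda_generator, device="cuda"))
```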
To circumvent this problem, 🧨 Diffusers has a [`~diffusers.utils.torch_utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The `randn_tensor` function is used everywhere inside the pipeline, allowing the user to **always** pass a CPU `Generator` even if the pipeline is run on a GPU.
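A sketch of how that helper can be called directly (the import path comes from the reference above; the exact keyword arguments shown here are an assumption):

```python
import torch
from diffusers.utils.torch_utils import randn_tensor

shape = (1, 3, 32, 32)
generator = torch.manual_seed(0)  # a CPU generator
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# the noise is sampled on the CPU from the seeded generator and only then moved to `device`
noise = randn_tensor(shape, generator=generator, device=device)
print(noise.device)
```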
You'll see the results are much closer now!
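For example, here is a sketch that reuses the pipeline example above but keeps the `Generator` on the CPU (assuming a CUDA device is available):

```python
import torch
import numpy as np
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler, and move the pipeline to the GPU
ddim = DDIMPipeline.from_pretrained(model_id)
ddim.to("cuda")

# create a CPU generator for reproducibility; notice it is *not* placed on the GPU
generator = torch.manual_seed(0)

# run the pipeline for just two steps and return a numpy array
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```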
<Tip>
💡 If reproducibility is important, we recommend always passing a CPU generator.
The performance loss is often negligible, and you'll generate much more similar values than if the pipeline had been run on a GPU.
</Tip>
Finally, more complex pipelines such as [`UnCLIPPipeline`] are often extremely susceptible to precision error propagation. Don't expect similar results across different GPU hardware or PyTorch versions. In this case, you'll need to run on exactly the same hardware and PyTorch version for full reproducibility.