From 535505ad7e64aee8dc5a1858b010a07a4382139c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=2E=20Tolga=20Cang=C3=B6z?= Date: Mon, 30 Oct 2023 19:45:19 +0300 Subject: [PATCH 1/5] Fix typos, improve, update --- docs/README.md | 37 ++++++++++++++---------------- docs/source/en/_toctree.yml | 10 ++++---- docs/source/en/index.md | 2 +- docs/source/en/installation.md | 8 +++++++ docs/source/en/quicktour.md | 34 ++++++++++++++++----------- docs/source/en/stable_diffusion.md | 9 ++++---- 6 files changed, 56 insertions(+), 44 deletions(-) diff --git a/docs/README.md b/docs/README.md index fd0a3a58b0aa..1fbb264dd652 100644 --- a/docs/README.md +++ b/docs/README.md @@ -71,7 +71,7 @@ The `preview` command only works with existing doc files. When you add a complet Accepted files are Markdown (.md). Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting -the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/_toctree.yml) file. +the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml) file for English documentation for instance. ## Renaming section headers and moving sections @@ -81,14 +81,14 @@ Therefore, we simply keep a little map of moved sections at the end of the docum So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file: -``` +```md Sections that were moved: [ Section A ] ``` and of course, if you moved it to another file, then: -``` +```md Sections that were moved: [ Section A ] @@ -109,8 +109,8 @@ although we can write them directly in Markdown. Adding a new tutorial or section is done in two steps: -- Add a new file under `docs/source`. This file can either be ReStructuredText (.rst) or Markdown (.md). -- Link that file in `docs/source/_toctree.yml` on the correct toc-tree. +- Add a new file under `docs/source/`. This file can either be ReStructuredText (.rst) or Markdown (.md). +- Link that file in `docs/source//_toctree.yml` on the correct toc-tree. Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four. @@ -119,7 +119,7 @@ depending on the intended targets (beginners, more advanced users, or researcher When adding a new pipeline: -- create a file `xxx.md` under `docs/source/api/pipelines` (don't hesitate to copy an existing file as template). +- Create a file `xxx.md` under `docs/source//api/pipelines` (don't hesitate to copy an existing file as template). - Link that file in (*Diffusers Summary*) section in `docs/source/api/pipelines/overview.md`, along with the link to the paper, and a colab notebook (if available). - Write a short overview of the diffusion model: - Overview with paper & authors @@ -128,9 +128,7 @@ When adding a new pipeline: - Possible an end-to-end example of how to use it - Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows: -```py -## XXXPipeline - +``` [[autodoc]] XXXPipeline - all - __call__ @@ -138,7 +136,7 @@ When adding a new pipeline: This will include every public method of the pipeline that is documented, as well as the `__call__` method that is not documented by default. 
If you just want to add additional methods that are not documented, you can put the list of all methods to add in a list that contains `all`. -```py +``` [[autodoc]] XXXPipeline - all - __call__ @@ -148,7 +146,7 @@ This will include every public method of the pipeline that is documented, as wel - disable_xformers_memory_efficient_attention ``` -You can follow the same process to create a new scheduler under the `docs/source/api/schedulers` folder +You can follow the same process to create a new scheduler under the `docs/source//api/schedulers` folder. ### Writing source documentation @@ -164,7 +162,7 @@ provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will `pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description. -The same works for methods so you can either use \[\`XXXClass.method\`\] or \[~\`XXXClass.method\`\]. +The same works for methods so you can either use \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\]. #### Defining arguments in a method @@ -172,7 +170,7 @@ Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its description: -```py +``` Args: n_layers (`int`): The number of layers of the model. ``` @@ -182,7 +180,7 @@ after the argument. Here's an example showcasing everything so far: -```py +``` Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. @@ -197,16 +195,16 @@ For optional arguments or arguments with defaults we follow the following syntax following signature: ```py -def my_function(x: str = None, a: float = 1): +def my_function(x: str=None, a: float=3.14): ``` then its documentation should look like this: -```py +``` Args: x (`str`, *optional*): This argument controls ... - a (`float`, *optional*, defaults to 1): + a (`float`, *optional*, defaults to `3.14`): This argument is used to ... ``` @@ -235,14 +233,14 @@ building the return. Here's an example of a single value return: -```py +``` Returns: `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token. ``` Here's an example of a tuple return, comprising several objects: -```py +``` Returns: `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs: - ** loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` -- @@ -268,4 +266,3 @@ We have an automatic script running with the `make style` command that will make This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's recommended to commit your changes before running `make style`, so you can revert the changes done by that script easily. 
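For instance, a complete docstring that combines the argument and return conventions above might look like the following (the function and its parameters are made up for illustration and are not part of the library):

```py
import PIL.Image


def resize_image(image: PIL.Image.Image, size: int = 512) -> PIL.Image.Image:
    r"""
    Resizes an image to a square of side length `size`.

    Args:
        image (`PIL.Image.Image`):
            The image to resize.
        size (`int`, *optional*, defaults to `512`):
            The side length in pixels of the resized image.

    Returns:
        `PIL.Image.Image`: The resized image.
    """
    return image.resize((size, size))
```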
- diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index f2adb148cc28..6b8bcd47742e 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -11,8 +11,8 @@ - sections: - local: tutorials/tutorial_overview title: Overview - - local: using-diffusers/write_own_pipeline - title: Understanding models and schedulers + - local: tutorials/write_own_pipeline + title: Understanding pipelines, models and schedulers - local: tutorials/autopipeline title: AutoPipeline - local: tutorials/basic_training @@ -253,7 +253,7 @@ - local: api/pipelines/musicldm title: MusicLDM - local: api/pipelines/paint_by_example - title: PaintByExample + title: Paint By Example - local: api/pipelines/paradigms title: Parallel Sampling of Diffusion Models - local: api/pipelines/pix2pix_zero @@ -298,7 +298,7 @@ - local: api/pipelines/stable_diffusion/ldm3d_diffusion title: LDM3D Text-to-(RGB, Depth) - local: api/pipelines/stable_diffusion/adapter - title: Stable Diffusion T2I-adapter + title: Stable Diffusion T2I-Adapter - local: api/pipelines/stable_diffusion/gligen title: GLIGEN (Grounded Language-to-Image Generation) title: Stable Diffusion @@ -313,7 +313,7 @@ - local: api/pipelines/text_to_video_zero title: Text2Video-Zero - local: api/pipelines/unclip - title: UnCLIP + title: unCLIP - local: api/pipelines/latent_diffusion_uncond title: Unconditional Latent Diffusion - local: api/pipelines/unidiffuser diff --git a/docs/source/en/index.md b/docs/source/en/index.md index f4cf2e2114ec..ce6e79ee44d1 100644 --- a/docs/source/en/index.md +++ b/docs/source/en/index.md @@ -45,4 +45,4 @@ The library has three main components:
Technical descriptions of how 🤗 Diffusers classes and methods work.
- \ No newline at end of file + diff --git a/docs/source/en/installation.md b/docs/source/en/installation.md index ee15fb56384d..3bf1d46fd0c7 100644 --- a/docs/source/en/installation.md +++ b/docs/source/en/installation.md @@ -50,6 +50,14 @@ pip install diffusers["flax"] transformers +## Install with conda + +After activating your virtual environment, with `conda` (maintained by the community): + +```bash +conda install -c conda-forge diffusers +``` + ## Install from source Before installing ๐Ÿค— Diffusers from source, make sure you have PyTorch and ๐Ÿค— Accelerate installed. diff --git a/docs/source/en/quicktour.md b/docs/source/en/quicktour.md index 3cf6851e4683..0122ef911a85 100644 --- a/docs/source/en/quicktour.md +++ b/docs/source/en/quicktour.md @@ -26,7 +26,7 @@ The quicktour will show you how to use the [`DiffusionPipeline`] for inference, -The quicktour is a simplified version of the introductory ๐Ÿงจ Diffusers [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) to help you get started quickly. If you want to learn more about ๐Ÿงจ Diffusers goal, design philosophy, and additional details about it's core API, check out the notebook! +The quicktour is a simplified version of the introductory ๐Ÿงจ Diffusers [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) to help you get started quickly. If you want to learn more about ๐Ÿงจ Diffusers' goal, design philosophy, and additional details about its core API, check out the notebook! @@ -76,7 +76,7 @@ The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and s >>> pipeline StableDiffusionPipeline { "_class_name": "StableDiffusionPipeline", - "_diffusers_version": "0.13.1", + "_diffusers_version": "0.21.4", ..., "scheduler": [ "diffusers", @@ -173,11 +173,11 @@ The model configuration is a ๐ŸงŠ frozen ๐ŸงŠ dictionary, which means those para Some of the most important parameters are: -* `sample_size`: the height and width dimension of the input sample. -* `in_channels`: the number of input channels of the input sample. -* `down_block_types` and `up_block_types`: the type of down- and upsampling blocks used to create the UNet architecture. -* `block_out_channels`: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. -* `layers_per_block`: the number of ResNet blocks present in each UNet block. +* `sample_size`: The height and width dimension of the input sample. +* `in_channels`: The number of input channels of the input sample. +* `down_block_types` and `up_block_types`: The type of down- and upsampling blocks used to create the UNet architecture. +* `block_out_channels`: The number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. +* `layers_per_block`: The number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a `batch` axis because the model can receive multiple random noises, a `channel` axis corresponding to the number of input channels, and a `sample_size` axis for the height and width of the image: @@ -191,7 +191,7 @@ To use the model for inference, create the image shape with random Gaussian nois torch.Size([1, 3, 256, 256]) ``` -For inference, pass the noisy image to the model and a `timestep`. 
The `timestep` indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the `sample` method to get the model output: +For inference, pass the noisy image and a `timestep` to the model. The `timestep` indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the `sample` method to get the model output: ```py >>> with torch.no_grad(): @@ -215,18 +215,23 @@ For the quicktour, you'll instantiate the [`DDPMScheduler`] with it's [`~diffuse ```py >>> from diffusers import DDPMScheduler ->>> scheduler = DDPMScheduler.from_config(repo_id) +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) >>> scheduler DDPMScheduler { "_class_name": "DDPMScheduler", - "_diffusers_version": "0.13.1", + "_diffusers_version": "0.21.4", "beta_end": 0.02, "beta_schedule": "linear", "beta_start": 0.0001, "clip_sample": true, "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, "num_train_timesteps": 1000, "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", "trained_betas": null, "variance_type": "fixed_small" } @@ -234,21 +239,22 @@ DDPMScheduler { -๐Ÿ’ก Notice how the scheduler is instantiated from a configuration. Unlike a model, a scheduler does not have trainable weights and is parameter-free! +๐Ÿ’ก Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: -* `num_train_timesteps`: the length of the denoising process or in other words, the number of timesteps required to process random Gaussian noise into a data sample. -* `beta_schedule`: the type of noise schedule to use for inference and training. -* `beta_start` and `beta_end`: the start and end noise values for the noise schedule. +* `num_train_timesteps`: The length of the denoising process or in other words, the number of timesteps required to process random Gaussian noise into a data sample. +* `beta_schedule`: The type of noise schedule to use for inference and training. +* `beta_start` and `beta_end`: The start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler's [`~diffusers.DDPMScheduler.step`] method: model output, `timestep`, and current `sample`. ```py >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample >>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) ``` The `less_noisy_sample` can be passed to the next `timestep` where it'll get even less noisier! Let's bring it all together now and visualize the entire denoising process. diff --git a/docs/source/en/stable_diffusion.md b/docs/source/en/stable_diffusion.md index f9407c3266c1..06eb5bf15f23 100644 --- a/docs/source/en/stable_diffusion.md +++ b/docs/source/en/stable_diffusion.md @@ -16,7 +16,7 @@ specific language governing permissions and limitations under the License. Getting the [`DiffusionPipeline`] to generate images in a certain style or include what you want can be tricky. Often times, you have to run the [`DiffusionPipeline`] several times before you end up with an image you're happy with. 
But generating something out of nothing is a computationally intensive process, especially if you're running inference over and over again. -This is why it's important to get the most *computational* (speed) and *memory* (GPU RAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. +This is why it's important to get the most *computational* (speed) and *memory* (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the [`DiffusionPipeline`]. @@ -108,6 +108,7 @@ pipeline.scheduler.compatibles diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, @@ -115,7 +116,7 @@ pipeline.scheduler.compatibles ] ``` -The Stable Diffusion model uses the [`PNDMScheduler`] by default which usually requires ~50 inference steps, but more performant schedulers like [`DPMSolverMultistepScheduler`], require only ~20 or 25 inference steps. Use the [`ConfigMixin.from_config`] method to load a new scheduler: +The Stable Diffusion model uses the [`PNDMScheduler`] by default which usually requires ~50 inference steps, but more performant schedulers like [`DPMSolverMultistepScheduler`], require only ~20 or 25 inference steps. Use the [`~ConfigMixin.from_config`] method to load a new scheduler: ```python from diffusers import DPMSolverMultistepScheduler @@ -155,13 +156,13 @@ def get_inputs(batch_size=1): Start with `batch_size=4` and see how much memory you've consumed: ```python -from diffusers.utils import make_image_grid +from diffusers.utils import make_image_grid images = pipeline(**get_inputs(batch_size=4)).images make_image_grid(images, 2, 2) ``` -Unless you have a GPU with more RAM, the code above probably returned an `OOM` error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the [`~DiffusionPipeline.enable_attention_slicing`] function: +Unless you have a GPU with more vRAM, the code above probably returned an `OOM` error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. 
All you have to do is configure the pipeline to use the [`~DiffusionPipeline.enable_attention_slicing`] function: ```python pipeline.enable_attention_slicing() From 045d80dfa75a6625ff3016d759365fe7852bd9ee Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=2E=20Tolga=20Cang=C3=B6z?= <46008593+standardAI@users.noreply.github.com> Date: Mon, 30 Oct 2023 21:04:45 +0300 Subject: [PATCH 2/5] Update _toctree.yml --- docs/source/en/_toctree.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index 6b8bcd47742e..0a983a3a9e47 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -11,7 +11,7 @@ - sections: - local: tutorials/tutorial_overview title: Overview - - local: tutorials/write_own_pipeline + - local: using-diffusers/write_own_pipeline title: Understanding pipelines, models and schedulers - local: tutorials/autopipeline title: AutoPipeline From ff3ab3d5e486c67718761b3a6f34d3cde22a83bd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=2E=20Tolga=20Cang=C3=B6z?= <46008593+standardAI@users.noreply.github.com> Date: Wed, 1 Nov 2023 10:04:13 +0300 Subject: [PATCH 3/5] Update docs/README.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/README.md b/docs/README.md index 1fbb264dd652..63692266f225 100644 --- a/docs/README.md +++ b/docs/README.md @@ -71,7 +71,7 @@ The `preview` command only works with existing doc files. When you add a complet Accepted files are Markdown (.md). Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting -the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml) file for English documentation for instance. +the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml) file. ## Renaming section headers and moving sections From 569fdf4c0fdb4c3ef9c4f45d0629ab6a1e21afd4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=2E=20Tolga=20Cang=C3=B6z?= <46008593+standardAI@users.noreply.github.com> Date: Wed, 1 Nov 2023 10:05:38 +0300 Subject: [PATCH 4/5] Update docs/README.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/README.md b/docs/README.md index 63692266f225..30e5d430765e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -109,7 +109,7 @@ although we can write them directly in Markdown. Adding a new tutorial or section is done in two steps: -- Add a new file under `docs/source/`. This file can either be ReStructuredText (.rst) or Markdown (.md). +- Add a new Markdown (.md) file under `docs/source/`. - Link that file in `docs/source//_toctree.yml` on the correct toc-tree. Make sure to put your new file under the proper section. 
It's unlikely to go in the first section (*Get Started*), so From 16e38672e274de826bc14046fddbed7391a519b3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=2E=20Tolga=20Cang=C3=B6z?= <46008593+standardAI@users.noreply.github.com> Date: Wed, 1 Nov 2023 10:13:55 +0300 Subject: [PATCH 5/5] Apply Grammarly fixes --- docs/source/en/quicktour.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/docs/source/en/quicktour.md b/docs/source/en/quicktour.md index 0122ef911a85..c5ead9829cdc 100644 --- a/docs/source/en/quicktour.md +++ b/docs/source/en/quicktour.md @@ -133,7 +133,7 @@ Then load the saved weights into the pipeline: >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) ``` -Now you can run the pipeline as you would in the section above. +Now, you can run the pipeline as you would in the section above. ### Swapping schedulers @@ -173,11 +173,11 @@ The model configuration is a ๐ŸงŠ frozen ๐ŸงŠ dictionary, which means those para Some of the most important parameters are: -* `sample_size`: The height and width dimension of the input sample. -* `in_channels`: The number of input channels of the input sample. -* `down_block_types` and `up_block_types`: The type of down- and upsampling blocks used to create the UNet architecture. -* `block_out_channels`: The number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. -* `layers_per_block`: The number of ResNet blocks present in each UNet block. +* `sample_size`: the height and width dimension of the input sample. +* `in_channels`: the number of input channels of the input sample. +* `down_block_types` and `up_block_types`: the type of down- and upsampling blocks used to create the UNet architecture. +* `block_out_channels`: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. +* `layers_per_block`: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a `batch` axis because the model can receive multiple random noises, a `channel` axis corresponding to the number of input channels, and a `sample_size` axis for the height and width of the image: @@ -210,7 +210,7 @@ Schedulers manage going from a noisy sample to a less noisy sample given the mod -For the quicktour, you'll instantiate the [`DDPMScheduler`] with it's [`~diffusers.ConfigMixin.from_config`] method: +For the quicktour, you'll instantiate the [`DDPMScheduler`] with its [`~diffusers.ConfigMixin.from_config`] method: ```py >>> from diffusers import DDPMScheduler @@ -245,9 +245,9 @@ DDPMScheduler { Some of the most important parameters are: -* `num_train_timesteps`: The length of the denoising process or in other words, the number of timesteps required to process random Gaussian noise into a data sample. -* `beta_schedule`: The type of noise schedule to use for inference and training. -* `beta_start` and `beta_end`: The start and end noise values for the noise schedule. +* `num_train_timesteps`: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. +* `beta_schedule`: the type of noise schedule to use for inference and training. +* `beta_start` and `beta_end`: the start and end noise values for the noise schedule. 
To predict a slightly less noisy image, pass the following to the scheduler's [`~diffusers.DDPMScheduler.step`] method: model output, `timestep`, and current `sample`. @@ -257,7 +257,7 @@ To predict a slightly less noisy image, pass the following to the scheduler's [` torch.Size([1, 3, 256, 256]) ``` -The `less_noisy_sample` can be passed to the next `timestep` where it'll get even less noisier! Let's bring it all together now and visualize the entire denoising process. +The `less_noisy_sample` can be passed to the next `timestep` where it'll get even less noisy! Let's bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a `PIL.Image`: @@ -311,10 +311,10 @@ Sit back and watch as a cat is generated from nothing but noise! ๐Ÿ˜ป ## Next steps -Hopefully you generated some cool images with ๐Ÿงจ Diffusers in this quicktour! For your next steps, you can: +Hopefully, you generated some cool images with ๐Ÿงจ Diffusers in this quicktour! For your next steps, you can: * Train or finetune a model to generate your own images in the [training](./tutorials/basic_training) tutorial. * See example official and community [training or finetuning scripts](https://github.com/huggingface/diffusers/tree/main/examples#-diffusers-examples) for a variety of use cases. -* Learn more about loading, accessing, changing and comparing schedulers in the [Using different Schedulers](./using-diffusers/schedulers) guide. -* Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher quality images with the [Stable Diffusion](./stable_diffusion) guide. +* Learn more about loading, accessing, changing, and comparing schedulers in the [Using different Schedulers](./using-diffusers/schedulers) guide. +* Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the [Stable Diffusion](./stable_diffusion) guide. * Dive deeper into speeding up ๐Ÿงจ Diffusers with guides on [optimized PyTorch on a GPU](./optimization/fp16), and inference guides for running [Stable Diffusion on Apple Silicon (M1/M2)](./optimization/mps) and [ONNX Runtime](./optimization/onnx).
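As a reference for the denoising loop the quicktour builds toward above, a minimal sketch of that loop might look like the following; it assumes the `model`, `scheduler`, and `noisy_sample` objects created in the earlier snippets and leaves out the image-display step:

```py
>>> import torch
>>> import tqdm

>>> sample = noisy_sample

>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)):
...     # 1. predict the noise residual at the current timestep
...     with torch.no_grad():
...         residual = model(sample, t).sample
...
...     # 2. compute the slightly less noisy sample for the previous timestep
...     sample = scheduler.step(residual, t, sample).prev_sample
```

Each iteration feeds the scheduler's output back in as the next input, so after the last timestep `sample` holds the fully denoised image tensor.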