From aedeea77b6eed8d64b0f0861deb1572deda56d89 Mon Sep 17 00:00:00 2001 From: omahs <73983677+omahs@users.noreply.github.com> Date: Mon, 24 Nov 2025 09:53:35 +0100 Subject: [PATCH] fix typos --- docs/hub/spaces-sdks-docker-dash.md | 4 ++-- docs/hub/speechbrain.md | 2 +- docs/xet/api.md | 2 +- docs/xet/deduplication.md | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/hub/spaces-sdks-docker-dash.md b/docs/hub/spaces-sdks-docker-dash.md index a7bdc9405..f64dd9105 100644 --- a/docs/hub/spaces-sdks-docker-dash.md +++ b/docs/hub/spaces-sdks-docker-dash.md @@ -22,7 +22,7 @@ When you create a Dash Space, you'll get a few key files to help you get started ### 1. app.py -This is the main app file that defines the core logic of your project. Dash apps are often structured as modules, and you can optionally seperate your layout, callbacks, and data into other files, like `layout.py`, etc. +This is the main app file that defines the core logic of your project. Dash apps are often structured as modules, and you can optionally separate your layout, callbacks, and data into other files, like `layout.py`, etc. Inside of `app.py` you will see: @@ -36,7 +36,7 @@ Inside of `app.py` you will see: Here, we define our server variable, which is used to run the app in production. 4. `app.layout = ` - The starter app layout is defined as a list of Dash components, an indivdual Dash component, or a function that returns either. + The starter app layout is defined as a list of Dash components, an individual Dash component, or a function that returns either. The `app.layout` is your initial layout that will be updated as a single-page application by callbacks and other logic in your project. 
diff --git a/docs/hub/speechbrain.md b/docs/hub/speechbrain.md index 8122c731a..df940e775 100644 --- a/docs/hub/speechbrain.md +++ b/docs/hub/speechbrain.md @@ -14,7 +14,7 @@ All models on the Hub come up with the following features: ## Using existing models -`speechbrain` offers different interfaces to manage pretrained models for different tasks, such as `EncoderClassifier`, `EncoderClassifier`, `SepformerSeperation`, and `SpectralMaskEnhancement`. These classes have a `from_hparams` method you can use to load a model from the Hub +`speechbrain` offers different interfaces to manage pretrained models for different tasks, such as `EncoderClassifier`, `SepformerSeparation`, and `SpectralMaskEnhancement`. These classes have a `from_hparams` method you can use to load a model from the Hub Here is an example to run inference for sound recognition in urban sounds. diff --git a/docs/xet/api.md b/docs/xet/api.md index 70c08a26b..0a9516f90 100644 --- a/docs/xet/api.md +++ b/docs/xet/api.md @@ -29,7 +29,7 @@ Suppose a hash value is: Then before converting to a string it will first have its bytes reordered to: `[7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8, 23, 22, 21, 20, 19, 18, 17, 16, 31, 30, 29, 28, 27, 26, 25, 24]` -So the string value of the the provided hash [0..32] is **NOT** `000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`. +So the string value of the provided hash [0..32] is **NOT** `000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`. It is: `07060504030201000f0e0d0c0b0a090817161514131211101f1e1d1c1b1a1918`. ## Endpoints diff --git a/docs/xet/deduplication.md b/docs/xet/deduplication.md index 1ff30209d..1f00cefa1 100644 --- a/docs/xet/deduplication.md +++ b/docs/xet/deduplication.md @@ -112,7 +112,7 @@ Not all chunks are eligible for global deduplication queries to manage system lo 2. 
**Hash pattern matching**: Chunks are eligible if: the last 8 bytes of the hash interpreted as a little-endian 64 bit integer % 1024 == 0. **Recommendations:** -**Spacing constraints**: The global dedupe API is optimized to return information about nearby chunks when there is a match. Consider only issueing a request to an eligible chunk every ~4MB of data. +**Spacing constraints**: The global dedupe API is optimized to return information about nearby chunks when there is a match. Consider only issuing a request to an eligible chunk every ~4MB of data. #### Query Process
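The byte reordering from the `docs/xet/api.md` hunk above (each 8-byte group of the 32-byte hash reversed before hex-encoding) can be sketched as follows; the function name is illustrative:

```python
def reorder_hash_bytes(h: bytes) -> bytes:
    # Reverse each 8-byte group of the 32-byte hash, matching the
    # index order [7, 6, 5, 4, 3, 2, 1, 0, 15, 14, ...] in the docs.
    assert len(h) == 32
    return b"".join(h[i:i + 8][::-1] for i in range(0, 32, 8))

hash_bytes = bytes(range(32))  # the [0..32] example hash
print(reorder_hash_bytes(hash_bytes).hex())
# → 07060504030201000f0e0d0c0b0a090817161514131211101f1e1d1c1b1a1918
```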
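The eligibility rule from the `docs/xet/deduplication.md` hunk (last 8 bytes of the chunk hash read as a little-endian 64-bit integer, modulo 1024) can be sketched as follows; the function name is illustrative:

```python
import struct

def is_global_dedup_eligible(chunk_hash: bytes) -> bool:
    # Interpret the trailing 8 bytes as a little-endian u64; the chunk
    # is eligible for a global dedupe query when that value % 1024 == 0.
    (tail,) = struct.unpack("<Q", chunk_hash[-8:])
    return tail % 1024 == 0

# An all-zero hash is eligible (0 % 1024 == 0); a tail of 1 is not.
print(is_global_dedup_eligible(bytes(32)))                       # → True
print(is_global_dedup_eligible(bytes(24) + b"\x01" + bytes(7)))  # → False
```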