From de13626f58a2361049456c8b4fd45d2dadae2fc7 Mon Sep 17 00:00:00 2001 From: Christopher Tee Date: Fri, 3 May 2024 14:32:39 -0400 Subject: [PATCH] docs(building-blocks): Fix typos and missing hyperlinks --- docs/docs/building-blocks/1-language_models.md | 2 +- docs/docs/building-blocks/3-modules.md | 4 ++-- docs/docs/building-blocks/7-assertions.md | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/docs/building-blocks/1-language_models.md b/docs/docs/building-blocks/1-language_models.md index 13595ff083..ddf5f815e0 100644 --- a/docs/docs/building-blocks/1-language_models.md +++ b/docs/docs/building-blocks/1-language_models.md @@ -10,7 +10,7 @@ Let's first make sure you can set up your language model. DSPy support clients f ## Setting up the LM client. -You can just call the constructor that connects to the LM. Then, use `dspy.configure` to declare this as the dexfault LM. +You can just call the constructor that connects to the LM. Then, use `dspy.configure` to declare this as the default LM. For example, to use OpenAI language models, you can do it as follows. diff --git a/docs/docs/building-blocks/3-modules.md b/docs/docs/building-blocks/3-modules.md index dad351596c..c4563f0040 100644 --- a/docs/docs/building-blocks/3-modules.md +++ b/docs/docs/building-blocks/3-modules.md @@ -6,7 +6,7 @@ sidebar_position: 3 A **DSPy module** is a building block for programs that use LMs. -- Each built-in module abstracts a **prompting technique** (like chain of thought or ReAct). Crucially, they are generalized to handle any [DSPy Signature]. +- Each built-in module abstracts a **prompting technique** (like chain of thought or ReAct). Crucially, they are generalized to handle any [DSPy Signature](https://dspy-docs.vercel.app/docs/building-blocks/signatures). - A DSPy module has **learnable parameters** (i.e., the little pieces comprising the prompt and the LM weights) and can be invoked (called) to process inputs and return outputs. 
@@ -17,7 +17,7 @@ A **DSPy module** is a building block for programs that use LMs. Let's start with the most fundamental module, `dspy.Predict`. Internally, all other DSPy modules are just built using `dspy.Predict`. -We'll assume you are already at least a little familiar with [DSPy signatures], which are declarative specs for defining the behavior of any module we use in DSPy. +We'll assume you are already at least a little familiar with [DSPy signatures](https://dspy-docs.vercel.app/docs/building-blocks/signatures), which are declarative specs for defining the behavior of any module we use in DSPy. To use a module, we first **declare** it by giving it a signature. Then we **call** the module with the input arguments, and extract the output fields! diff --git a/docs/docs/building-blocks/7-assertions.md b/docs/docs/building-blocks/7-assertions.md index 71ad163136..ae6a49859c 100644 --- a/docs/docs/building-blocks/7-assertions.md +++ b/docs/docs/building-blocks/7-assertions.md @@ -30,7 +30,7 @@ Specifically, when a constraint is not met: - Past Output: your model's past output that did not pass the validation_fn - Instruction: your user-defined feedback message on what went wrong and what possibly to fix -If the error continues past the `max_backtracking_attempts`, then `dspy.Assert` will halt the pipeline execution, altering you with an `dspy.AssertionError`. This ensures your program doesn't continue executing with “bad” LM behavior and immediately highlights sample failure outputs for user assessment. +If the error continues past the `max_backtracking_attempts`, then `dspy.Assert` will halt the pipeline execution, alerting you with a `dspy.AssertionError`. This ensures your program doesn't continue executing with “bad” LM behavior and immediately highlights sample failure outputs for user assessment. - **dspy.Suggest vs. dspy.Assert**: `dspy.Suggest` on the other hand offers a softer approach. 
It maintains the same retry backtracking as `dspy.Assert` but instead serves as a gentle nudger. If the model outputs cannot pass the model constraints after the `max_backtracking_attempts`, `dspy.Suggest` will log the persistent failure and continue execution of the program on the rest of the data. This ensures the LM pipeline works in a "best-effort" manner without halting execution.