From df07ba0c88d2a97b6e025d060046388c9eb5190f Mon Sep 17 00:00:00 2001
From: YangZhaoo <41366656+YangZhaoo@users.noreply.github.com>
Date: Sun, 20 Oct 2024 17:23:01 +0800
Subject: [PATCH] doc typo

Maybe there is a missing f here
---
 docs/docs/building-blocks/1-language_models.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/building-blocks/1-language_models.md b/docs/docs/building-blocks/1-language_models.md
index 2a1bc2db83..126ee51f11 100644
--- a/docs/docs/building-blocks/1-language_models.md
+++ b/docs/docs/building-blocks/1-language_models.md
@@ -110,7 +110,7 @@ For any LM, you can configure any of the following attributes at initialization
 gpt_4o_mini = dspy.LM('openai/gpt-4o-mini', temperature=0.9, max_tokens=3000, stop=None, cache=False)
 ```
 
-By default LMs in DSPy are cached. If you repeat the same call, you will get the same outputs. But you can turn of caching by setting `cache=False` while declaring `dspy.LM` object.
+By default LMs in DSPy are cached. If you repeat the same call, you will get the same outputs. But you can turn off caching by setting `cache=False` while declaring `dspy.LM` object.
 
 ### Using locally hosted LMs
 