From 2bc141af9ee80239f1338b8f62096693062a75f2 Mon Sep 17 00:00:00 2001 From: Rafael Vasquez Date: Wed, 1 Oct 2025 15:18:40 -0400 Subject: [PATCH 1/2] Copy edit typos Signed-off-by: Rafael Vasquez --- docs/lab-1.5/README.md | 2 +- docs/lab-1/README.md | 2 +- docs/lab-2/README.md | 2 +- docs/lab-3/README.md | 2 +- docs/lab-5/README.md | 4 ++-- docs/lab-6/README.md | 6 +++--- 6 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/lab-1.5/README.md b/docs/lab-1.5/README.md index 8c76e9f..a2a6e1f 100644 --- a/docs/lab-1.5/README.md +++ b/docs/lab-1.5/README.md @@ -39,7 +39,7 @@ The first response may take a minute to process. This is because `ollama` is spi ![batman](../images/openwebui_who_is_batman.png) -You may notice that your answer is slighty different then the screen shot above. This is expected and nothing to worry about! +You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about! ## Conclusion diff --git a/docs/lab-1/README.md b/docs/lab-1/README.md index 7eac259..894517e 100644 --- a/docs/lab-1/README.md +++ b/docs/lab-1/README.md @@ -45,7 +45,7 @@ The first response may take a minute to process. This is because `ollama` is spi ![who is batman](../images/anythingllm_who_is_batman.png) -You may notice that your answer is slighty different then the screen shot above. This is expected and nothing to worry about! +You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about! 
## Conclusion diff --git a/docs/lab-2/README.md b/docs/lab-2/README.md index d93fac4..df189b9 100644 --- a/docs/lab-2/README.md +++ b/docs/lab-2/README.md @@ -29,7 +29,7 @@ First, use ollama to list the models that you currently have downloaded: ``` ollama list ``` -And you'll see a list similiar to the following: +And you'll see a list similar to the following: ``` ollama list NAME ID SIZE MODIFIED diff --git a/docs/lab-3/README.md b/docs/lab-3/README.md index 9870f95..73f3113 100644 --- a/docs/lab-3/README.md +++ b/docs/lab-3/README.md @@ -90,7 +90,7 @@ How would you respond to client who has had their freight lost as a representati ![lost freight](../images/anythingllm_lost_freight.png) -That's not a satisfactory or interesting response, right? We need to interate on it, and provide more context about the client, like what they may have lost. **Tip: always think about adding more context!** +That's not a satisfactory or interesting response, right? We need to iterate on it, and provide more context about the client, like what they may have lost. **Tip: always think about adding more context!** ``` The freight they lost was an industrial refrigerator, from Burbank, California to Kanas City, MO. ``` diff --git a/docs/lab-5/README.md b/docs/lab-5/README.md index 4f9f508..d0ad761 100644 --- a/docs/lab-5/README.md +++ b/docs/lab-5/README.md @@ -6,10 +6,10 @@ logo: images/ibm-blue-background.png ## Configuration and Sanity Check -Open up AnyThingLLM, and you should see something like the following: +Open up AnythingLLM, and you should see something like the following: ![default screen](../images/anythingllm_open_screen.png) -If you see this that means AnythingLLM is installed correctly, and we can continue configuration, if not, please find a workshop TA or +If you see this, that means AnythingLLM is installed correctly, and we can continue configuration. If not, please find a workshop TA or raise your hand we'll be there to help you ASAP. 
Next as a sanity check, run the following command to confirm you have the [granite4:micro](https://ollama.com/library/granite4) diff --git a/docs/lab-6/README.md b/docs/lab-6/README.md index df6fec5..6015de5 100644 --- a/docs/lab-6/README.md +++ b/docs/lab-6/README.md @@ -4,7 +4,7 @@ description: Learn how to build a simple local RAG logo: images/ibm-blue-background.png --- -## Retrieval-Augmented Generation overview +## Retrieval-Augmented Generation Overview The LLMs we're using for these labs have been trained on billions of parameters, but they haven't been trained on everything, and the smaller models have less general knowledge to work with. For example, even the latest models are trained with aged data, and they couldn't know about current events or the unique data your use-case might need. @@ -30,7 +30,7 @@ ollama pull granite3.3:2b If you didn't know, the supported languages with `granite3.3:2b` now include: -- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages. +- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12 languages. And the Capabilities also include: @@ -60,7 +60,7 @@ For example: At first glance, the list looks pretty good. But if you know your IBM CEOs, you'll notice that it misses a few of them, and sometimes adds new names that weren't ever IBM CEOs! (Note: the larger granite3.3:8b does a much better job on the IBM CEOs, you can try it later) -But we can provide the small LLM with a RAG document that supplements the model's missing informaiton with a correct list, so it will generate a better answer. +But we can provide the small LLM with a RAG document that supplements the model's missing information with a correct list, so it will generate a better answer. 
Click on the "New Chat" icon to clear the context. Then download a small text file with the correct list of IBM CEOs to your Downloads folder: From db80cd29462400496bff43143228de9415adddc3 Mon Sep 17 00:00:00 2001 From: Rafael Vasquez Date: Wed, 1 Oct 2025 15:35:16 -0400 Subject: [PATCH 2/2] Copy edit typos Signed-off-by: Rafael Vasquez --- docs/lab-7/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/lab-7/README.md b/docs/lab-7/README.md index 6117db6..d36901d 100644 --- a/docs/lab-7/README.md +++ b/docs/lab-7/README.md @@ -51,7 +51,7 @@ and brittle prompts with structured, maintainable, robust, and efficient AI work * Easily integrate the power of LLMs into legacy code-bases (mify). * Sketch applications by writing specifications and letting `mellea` fill in the details (generative slots). -* Get started by decomposing your large unwieldy prompts into structured and maintainable mellea problems. +* Get started by decomposing your large unwieldy prompts into structured and maintainable Mellea problems. ## Let's setup Mellea to work locally @@ -119,7 +119,7 @@ With this more advance example we now have the ability to customize the email to personalized for the recipient. But this is just a more programmatic prompt engineering, lets see where Mellea really shines. -### Simple email with boundries and requirements +### Simple email with boundaries and requirements 1. The first step with the power of Mellea, is adding requirements to something like this email, take a look at this first example: