Merged
2 changes: 1 addition & 1 deletion docs/lab-1.5/README.md
@@ -39,7 +39,7 @@ The first response may take a minute to process. This is because `ollama` is spi

![batman](../images/openwebui_who_is_batman.png)

You may notice that your answer is slighty different then the screen shot above. This is expected and nothing to worry about!
You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

## Conclusion

2 changes: 1 addition & 1 deletion docs/lab-1/README.md
@@ -45,7 +45,7 @@ The first response may take a minute to process. This is because `ollama` is spi

![who is batman](../images/anythingllm_who_is_batman.png)

You may notice that your answer is slighty different then the screen shot above. This is expected and nothing to worry about!
You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

## Conclusion

2 changes: 1 addition & 1 deletion docs/lab-2/README.md
@@ -29,7 +29,7 @@ First, use ollama to list the models that you currently have downloaded:
```
ollama list
```
And you'll see a list similiar to the following:
And you'll see a list similar to the following:
```
ollama list
NAME ID SIZE MODIFIED
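For anyone scripting around `ollama`, the tabular listing above is easy to post-process. Here is a minimal Python sketch, assuming output shaped like the example (the model ID in the sample is fabricated):

```python
import re

def parse_ollama_list(output: str) -> list[dict]:
    """Parse `ollama list`-style tabular output into a list of dicts.

    Assumes a header row (NAME, ID, SIZE, MODIFIED) and columns
    separated by two or more spaces, as in the listing above.
    """
    lines = [ln for ln in output.strip().splitlines() if ln.strip()]
    header = re.split(r"\s{2,}", lines[0].strip())
    return [dict(zip(header, re.split(r"\s{2,}", ln.strip()))) for ln in lines[1:]]

# Hypothetical sample resembling the listing above (the ID is made up):
sample = (
    "NAME              ID              SIZE      MODIFIED\n"
    "granite4:micro    abcdef123456    2.1 GB    2 days ago\n"
)
models = parse_ollama_list(sample)
print(models[0]["NAME"])  # granite4:micro
```

Splitting on runs of two or more spaces keeps multi-word fields like "2.1 GB" and "2 days ago" intact, which a naive single-space split would break apart.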
2 changes: 1 addition & 1 deletion docs/lab-3/README.md
@@ -90,7 +90,7 @@ How would you respond to client who has had their freight lost as a representati

![lost freight](../images/anythingllm_lost_freight.png)

That's not a satisfactory or interesting response, right? We need to interate on it, and provide more context about the client, like what they may have lost. **Tip: always think about adding more context!**
That's not a satisfactory or interesting response, right? We need to iterate on it, and provide more context about the client, like what they may have lost. **Tip: always think about adding more context!**

```
The freight they lost was an industrial refrigerator, from Burbank, California to Kansas City, MO.
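The iteration loop described above — keep the base question, append each new piece of context — can be sketched as plain string assembly. The helper below is purely illustrative, not part of AnythingLLM or the lab tooling:

```python
def build_prompt(task: str, context: list[str]) -> str:
    """Compose a prompt from a base task plus accumulated context notes."""
    if not context:
        return task
    notes = "\n".join(f"- {note}" for note in context)
    return f"{task}\n\nAdditional context:\n{notes}"

prompt = build_prompt(
    "How would you respond, as a representative of the company, "
    "to a client who has had their freight lost?",
    [
        "The lost freight was an industrial refrigerator.",
        "It was shipped from Burbank, California to Kansas City, MO.",
    ],
)
print(prompt)
```

Each round of iteration just appends another note to the context list, which mirrors what you do by hand in the chat window.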
4 changes: 2 additions & 2 deletions docs/lab-5/README.md
@@ -6,10 +6,10 @@ logo: images/ibm-blue-background.png

## Configuration and Sanity Check

Open up AnyThingLLM, and you should see something like the following:
Open up AnythingLLM, and you should see something like the following:
![default screen](../images/anythingllm_open_screen.png)

If you see this that means AnythingLLM is installed correctly, and we can continue configuration, if not, please find a workshop TA or
If you see this, it means AnythingLLM is installed correctly and we can continue configuration. If not, please find a workshop TA or
raise your hand; we'll be there to help you ASAP.

Next, as a sanity check, run the following command to confirm you have the [granite4:micro](https://ollama.com/library/granite4)
6 changes: 3 additions & 3 deletions docs/lab-6/README.md
@@ -4,7 +4,7 @@ description: Learn how to build a simple local RAG
logo: images/ibm-blue-background.png
---

## Retrieval-Augmented Generation overview
## Retrieval-Augmented Generation Overview
The LLMs we're using for these labs have been trained on billions of parameters, but they haven't been trained on everything, and the smaller models have less general knowledge to work with.
For example, even the latest models are trained with aged data, and they couldn't know about current events or the unique data your use-case might need.
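The idea can be sketched in a few lines: retrieve the stored passage most relevant to the question, then prepend it to the prompt so the model answers from supplied facts instead of possibly stale training data. This toy example uses naive keyword overlap where a real RAG pipeline would use embeddings and a vector store:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most word tokens with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

passages = [
    "Arvind Krishna became CEO of IBM in April 2020.",
    "The first transatlantic telegraph cable was completed in 1858.",
]
question = "Who is the current CEO of IBM?"
context = retrieve(question, passages)
augmented = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(augmented)
```

The model then sees the retrieved fact inside its prompt, which is exactly what AnythingLLM does for you behind the scenes when you attach a document to a workspace.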

@@ -30,7 +30,7 @@ ollama pull granite3.3:2b

If you didn't know, the supported languages with `granite3.3:2b` now include:

- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12 languages.

And the Capabilities also include:

@@ -60,7 +60,7 @@ For example:

At first glance, the list looks pretty good. But if you know your IBM CEOs, you'll notice that it misses a few of them, and sometimes adds new names that weren't ever IBM CEOs!
(Note: the larger granite3.3:8b does a much better job on the IBM CEOs; you can try it later.)
But we can provide the small LLM with a RAG document that supplements the model's missing informaiton with a correct list, so it will generate a better answer.
But we can provide the small LLM with a RAG document that supplements the model's missing information with a correct list, so it will generate a better answer.

Click on the "New Chat" icon to clear the context. Then download a small text file with the correct list of IBM CEOs to your Downloads folder:

4 changes: 2 additions & 2 deletions docs/lab-7/README.md
@@ -51,7 +51,7 @@ and brittle prompts with structured, maintainable, robust, and efficient AI work
* Easily integrate the power of LLMs into legacy code-bases (mify).
* Sketch applications by writing specifications and letting `mellea` fill in
the details (generative slots).
* Get started by decomposing your large unwieldy prompts into structured and maintainable mellea problems.
* Get started by decomposing your large unwieldy prompts into structured and maintainable Mellea problems.

## Let's set up Mellea to work locally

@@ -119,7 +119,7 @@ With this more advanced example, we now have the ability to customize the email to be
personalized for the recipient. But this is just more programmatic prompt engineering; let's see where
Mellea really shines.
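Where it shines is in making requirements explicit and checkable instead of buried in prose. The loop below is a conceptual sketch of that idea in plain Python; the stub generator and the check functions are hypothetical and do not use Mellea's actual API:

```python
from typing import Callable

def generate_with_requirements(
    generate: Callable[[], str],
    requirements: list[Callable[[str], bool]],
    max_attempts: int = 3,
) -> str:
    """Call `generate` until every requirement check passes, retrying on failure."""
    for _ in range(max_attempts):
        draft = generate()
        if all(check(draft) for check in requirements):
            return draft
    raise RuntimeError("requirements not met after retries")

# Stub standing in for an LLM call; a real session would sample a model.
drafts = iter(["hey.", "Dear client, we sincerely apologize for the lost freight."])
email = generate_with_requirements(
    generate=lambda: next(drafts),
    requirements=[
        lambda text: text.startswith("Dear"),  # formal salutation required
        lambda text: "apologize" in text,      # must contain an apology
    ],
)
print(email)  # Dear client, we sincerely apologize for the lost freight.
```

The first draft fails the salutation check and is discarded; the second satisfies both requirements and is returned. Structured validate-and-retry like this is what separates a maintainable workflow from a brittle one-shot prompt.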

### Simple email with boundries and requirements
### Simple email with boundaries and requirements

1. The first step with the power of Mellea is adding requirements to something like this email. Take a look at this first
example: