diff --git a/docs/README.md b/docs/README.md
index 3f6e38f..10a8e36 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -29,8 +29,8 @@ Our overarching goals of this workshop is as follows:
| [Lab 2: Chatting with Your Local AI](lab-2/README.md) | Get acquainted with your local LLM |
| [Lab 3: Prompt Engineering](lab-3/README.md) | Learn about prompt engineering techniques |
| [Lab 4: Applying What You Learned](lab-4/README.md) | Refine your prompting skills |
-| [Lab 5: Building a local AI Assistant](lab-5/README.md) | Build a Granite coding assistant |
-| [Lab 6: Coding with an AI Assistant](lab-6/README.md) | Write code using Continue and Granite |
+| [Lab 5: Using AnythingLLM for a local RAG](lab-5/README.md) | Build a local RAG with AnythingLLM |
+| [Lab 6: Using Open-WebUI for a local RAG](lab-6/README.md) | Build a local RAG with Open-WebUI |
| [Lab 7: Using Mellea to help with Generative Computing](lab-7/README.md) | Learn how to leverage Mellea for Advanced AI situations |
Thank you SO MUCH for joining us in this workshop! If you have any questions or feedback,
diff --git a/docs/images/History_of_IBM_summary.png b/docs/images/History_of_IBM_summary.png
index 51394b7..0b43ae8 100644
Binary files a/docs/images/History_of_IBM_summary.png and b/docs/images/History_of_IBM_summary.png differ
diff --git a/docs/images/anythingllm_llm_config.png b/docs/images/anythingllm_llm_config.png
index b48b703..77c8f02 100644
Binary files a/docs/images/anythingllm_llm_config.png and b/docs/images/anythingllm_llm_config.png differ
diff --git a/docs/images/ceo_list_with_rag.png b/docs/images/ceo_list_with_rag.png
index 8d52cdd..461b3be 100644
Binary files a/docs/images/ceo_list_with_rag.png and b/docs/images/ceo_list_with_rag.png differ
diff --git a/docs/images/openwebui_main_screen.png b/docs/images/openwebui_main_screen.png
index 7fe6f5c..230cf32 100644
Binary files a/docs/images/openwebui_main_screen.png and b/docs/images/openwebui_main_screen.png differ
diff --git a/docs/images/openwebui_model_selection.png b/docs/images/openwebui_model_selection.png
index 56980b1..67f2d84 100644
Binary files a/docs/images/openwebui_model_selection.png and b/docs/images/openwebui_model_selection.png differ
diff --git a/docs/images/openwebui_open_screen.png b/docs/images/openwebui_open_screen.png
index f08f25f..7b39ae6 100644
Binary files a/docs/images/openwebui_open_screen.png and b/docs/images/openwebui_open_screen.png differ
diff --git a/docs/images/openwebui_rag_source.png b/docs/images/openwebui_rag_source.png
new file mode 100644
index 0000000..6c7656f
Binary files /dev/null and b/docs/images/openwebui_rag_source.png differ
diff --git a/docs/images/openwebui_who_is_batman.png b/docs/images/openwebui_who_is_batman.png
index fc9e686..fdb3f48 100644
Binary files a/docs/images/openwebui_who_is_batman.png and b/docs/images/openwebui_who_is_batman.png differ
diff --git a/docs/images/rag_doc_added.png b/docs/images/rag_doc_added.png
index 00aa6ac..9b791e1 100644
Binary files a/docs/images/rag_doc_added.png and b/docs/images/rag_doc_added.png differ
diff --git a/docs/images/small_llm_ceo_list.png b/docs/images/small_llm_ceo_list.png
index c800634..4ae3278 100644
Binary files a/docs/images/small_llm_ceo_list.png and b/docs/images/small_llm_ceo_list.png differ
diff --git a/docs/lab-1.5/README.md b/docs/lab-1.5/README.md
index 2a59acf..8c76e9f 100644
--- a/docs/lab-1.5/README.md
+++ b/docs/lab-1.5/README.md
@@ -11,23 +11,23 @@ Let's start by configuring [Open-WebUI](../pre-work/README.md#installing-open-we
-First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
+First, if you haven't already, download the Granite 4 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
```bash
-ollama pull granite3.1-dense:8b
+ollama pull granite4:micro
```
+!!! note
+    If the `granite4:micro` model isn't available yet, you can use `granite3.3:2b` or `granite3.3:8b` instead.
!!! note
- The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can read about the model we're using [here](https://ollama.com/library/granite4). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
-Open up Open-WebUI (assuming you've run `open-webui serve`):
+Open up Open-WebUI (assuming you've run `open-webui serve`) by visiting this URL in your browser: [http://localhost:8080/](http://localhost:8080/)
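+
+!!! note
+    If another application is already using port 8080, Open-WebUI won't start cleanly. You should be able to pick a different port with something like `open-webui serve --port 8081` (run `open-webui serve --help` to confirm the flag on your version).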
-
+
-If you see something similar, Open-WebUI is installed correctly! Continue on, if not, please find a workshop TA or raise your hand for some help.
+If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.
-Click *Getting Started*. Fill out the next screen and click the *Create Admin Account*. This will be your login for your local machine. Remember that this because it will be your Open-WebUI configuration login information if want to dig deeper into it after this workshop.
+Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine. Remember it, because it will be your Open-WebUI configuration login if you want to dig deeper after this workshop.
-
-
-You should see the Open-WebUI main page now, with `granite3.1-dense:latest` right there in the center!
+You should see the Open-WebUI main page now, with `granite4:micro` right there in the center!

@@ -43,7 +43,7 @@ You may notice that your answer is slighty different then the screen shot above.
## Conclusion
-**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
+**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite4:micro` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
diff --git a/docs/lab-1/README.md b/docs/lab-1/README.md
index 0bd5a9e..7eac259 100644
--- a/docs/lab-1/README.md
+++ b/docs/lab-1/README.md
@@ -6,22 +6,24 @@ logo: images/ibm-blue-background.png
## Setup
-Let's start by configuring [AnythingLLM installed](../pre-work/README.md#anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.
+Let's start by configuring [AnythingLLM](../pre-work/README.md#installing-anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.
-First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
+First, if you haven't already, download the Granite 4 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
```bash
-ollama pull granite3.1-dense:8b
+ollama pull granite4:micro
```
+!!! note
+    If the `granite4:micro` model isn't available yet, you can use `granite3.3:2b` or `granite3.3:8b` instead.
!!! note
- The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can read about the model we're using [here](https://ollama.com/library/granite4). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
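+
+Once the pull completes, you can double-check that the model is available locally:
+
+```bash
+ollama list
+```
+
+`granite4:micro` should appear in the list it prints.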
Open the AnythingLLM desktop application and either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama` but you can always change it in the future.

-Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3.1-dense:8b` model you downloaded. You'd be able to see all the models you have access to through `ollama` here.
+Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite4:micro` model you downloaded. You'll be able to see all the models you have access to through `ollama` here.

@@ -47,7 +49,7 @@ You may notice that your answer is slighty different then the screen shot above.
## Conclusion
-**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
+**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite4:micro` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
diff --git a/docs/lab-2/README.md b/docs/lab-2/README.md
index efb0f25..d93fac4 100644
--- a/docs/lab-2/README.md
+++ b/docs/lab-2/README.md
@@ -25,8 +25,29 @@ Batman's top 10 enemies are, or what was the most creative way Batman saved the
-This is an example of of using the CLI with vanilla ollama:
+This is an example of using the CLI with vanilla ollama:
+First, use ollama to list the models that you currently have downloaded:
+```bash
+ollama list
+```
+And you'll see a list similar to the following:
+```
+NAME              ID              SIZE      MODIFIED
+granite3.3:2b     07bd1f170855    1.5 GB    About a minute ago
+granite3.3:8b     fd429f23b909    4.9 GB    2 minutes ago
+granite4:micro    b99795f77687    2.1 GB    23 hours ago
+```
+Next, use Ollama to run one of the models:
+
+```bash
+ollama run granite4:micro
+```
+And ask it questions, like this:
+```
+Who is Batman?
+```
+And it returns something like this:
```
-$ ollama run granite3.1-dense
>>> Who is Batman?
Batman is a fictional superhero created by artist Bob Kane and writer Bill Finger. He first appeared in Detective Comics #27,
published by DC Comics in 1939. Born as Bruce Wayne, he becomes Batman to fight crime after witnessing the murder of his parents
@@ -37,7 +58,7 @@ characters in the world of comics and popular culture.
```
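+You can also skip the interactive session; `ollama run` accepts a one-shot prompt as an argument, which is handy for scripting (inside the interactive session, type `/bye` to exit):
+
+```bash
+ollama run granite4:micro "Who is Batman?"
+```
+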
```
->>> What was Batman's top 10 enemies?
+>>> Who were Batman's top 10 enemies?
Batman has faced numerous villains over the years, but here are ten of his most notable adversaries:
1. The Joker - One of Batman's archenemies, The Joker is a criminal mastermind known for his chaotic and psychopathic behavior.
diff --git a/docs/lab-4/README.md b/docs/lab-4/README.md
index 1cb1920..f4ae1ef 100644
--- a/docs/lab-4/README.md
+++ b/docs/lab-4/README.md
@@ -212,7 +212,7 @@ The best part of this prompt is that you can take the output and extend or short
## Conclusion
-Well done! By completing these exercises, you're well on your way to being a prompt expert. In [Lab 5](https://ibm.github.io/opensource-ai-workshop/lab-5/), we'll move towards code-generation and learn how to use a local coding assistant.
+Well done! By completing these exercises, you're well on your way to being a prompt expert. In [Lab 5](https://ibm.github.io/opensource-ai-workshop/lab-5/), we'll build a local RAG with AnythingLLM.
diff --git a/docs/lab-5/README.md b/docs/lab-5/README.md
index c417c2b..0e3bda4 100644
--- a/docs/lab-5/README.md
+++ b/docs/lab-5/README.md
@@ -12,19 +12,22 @@ Open up AnyThingLLM, and you should see something like the following:
-If you see this that means AnythingLLM is installed correctly, and we can continue configuration, if not, please find a workshop TA or
-raise your hand we'll be there to help you ASAP.
+If you see this, AnythingLLM is installed correctly and we can continue configuration. If not, please find a workshop TA or
+raise your hand, and we'll be there to help you ASAP.
-Next as a sanity check, run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
+Next as a sanity check, run the following command to confirm you have the [granite4:micro](https://ollama.com/library/granite4)
-model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.
+model downloaded in `ollama`. The download may take a bit; if it's slow, we should have a way to copy the model directly onto your laptop.
```bash
-ollama pull granite3.1-dense:8b
+ollama pull granite4:micro
```
+!!! note
+    If the `granite4:micro` model isn't available yet, you can use `granite3.3:2b` or `granite3.3:8b` instead.
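+
+Once the model is downloaded, you can inspect it yourself; `ollama show` prints details such as the parameter count, context length, and supported capabilities:
+
+```bash
+ollama show granite4:micro
+```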
-If you didn't know, the supported languages with `granite3.1-dense` now include:
+If you didn't know, the supported languages with `granite4` now include:
-- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
+- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.
And the Capabilities also include:
+- Thinking
- Summarization
- Text classification
- Text extraction
@@ -33,6 +36,7 @@ And the Capabilities also include:
- Code related tasks
- Function-calling tasks
- Multilingual dialog use cases
+- Fill-in-the-middle
- Long-context tasks including long document/meeting summarization, long document QA, etc.
Next click on the `wrench` icon, and open up the settings. For now we are going to configure the global settings for `ollama`
@@ -40,7 +44,7 @@ but you may want to change it in the future.

-Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3.1-dense:8b` model. (You should be able to
+Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite4:micro` model. (You should be able to
see all the models you have access to through `ollama` there.)

@@ -62,7 +66,7 @@ it knows _something_.
-Now you may notice that the answer is slighty different then the screen shot above. That's expected and nothing to worry about. If
-you have more questions about it raise your hand and one of the helpers would love to talk you about it.
+Now you may notice that the answer is slightly different from the screenshot above. That's expected and nothing to worry about. If
+you have more questions about it, raise your hand and one of the helpers would love to talk with you about it.
-Congratulations! You have AnythingLLM running now, configured to work with `granite3.1-dense` and `ollama`!
+Congratulations! You have AnythingLLM running now, configured to work with `granite4:micro` and `ollama`!
## Creating your own local RAG
@@ -88,6 +92,11 @@ Not great right? Well now we need to give it a way to look up this data, luckly,
copy of the budget pdf [here](https://github.com/user-attachments/files/18510560/budget_fy2024.pdf).
Go ahead and save it to your local machine, and be ready to grab it.
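+
+If you prefer the terminal, you can grab the same file with `curl`:
+
+```bash
+curl -L -o budget_fy2024.pdf https://github.com/user-attachments/files/18510560/budget_fy2024.pdf
+```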
+!!! note
+    Granite 4 was trained on newer data than the models this lab was originally written for, so it may already know the 2024 budget figures without RAG. If you find that's the case, try the question about 2025 instead, using the 2025 full-year budget from the link below.
+
+
+
-Now spin up a **New Workspace**, (yes, please a new workspace, it seems that sometimes AnythingLLM has
-issues with adding things, so a clean environment is always easier to teach in) and call it
-something else.
+Now spin up a **New Workspace** (yes, a new workspace; AnythingLLM sometimes has
+issues with adding things, so a clean environment is easier to teach in) and call it
+something else.
diff --git a/docs/lab-6/README.md b/docs/lab-6/README.md
index 370ab24..df6fec5 100644
--- a/docs/lab-6/README.md
+++ b/docs/lab-6/README.md
@@ -30,7 +30,7 @@ ollama pull granite3.3:2b
If you didn't know, the supported languages with `granite3.3:2b` now include:
-- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
+- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.
And the Capabilities also include:
@@ -82,6 +82,10 @@ The answer should now be correct. (For example, always before it forgets John Ak

+If you look near the bottom of the answer, you can see the RAG source it used, along with some clickable options, including the tokens per second and the total token count.
+
+
+
-We can also find and download information to pdf from Wikipedia:
+We can also download Wikipedia articles as PDFs and use them for RAG:
For example: [History of IBM](https://en.wikipedia.org/wiki/History_of_IBM)
@@ -91,7 +95,7 @@ Then use this History_of_IBM.pdf as a RAG by clicking on the + and select "Histo
Next, use the Open-WebUI to ask more questions about IBM, or have it summarize the document itself. For example:
```bash
-Write a short 300 word summary of the History_of_IBM.pdf
+Write a short 150 word summary of the History_of_IBM.pdf
```

diff --git a/docs/pre-work/README.md b/docs/pre-work/README.md
index 8cd6657..a4ad482 100644
--- a/docs/pre-work/README.md
+++ b/docs/pre-work/README.md
@@ -14,9 +14,7 @@ These are the required applications and general installation notes for this work
- [Python](#installing-python)
- [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
-- [Visual Studio Code](#installing-visual-studio-code) **(Recommended)** or [any Jetbrains IDE](#installing-jetbrains). This workshop uses VSCode.
- [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based.
-- [Continue](#installing-continue) - An IDE extension for AI code assistants.
## Installing Python
@@ -38,6 +36,9 @@ brew install python@3.11
Please confirm that your `python --version` is at least `3.11+` for the best experience.
+!!! note
+    Python 3.11 and 3.12 work best; Python 3.13 has trouble with Open-WebUI at the moment.
+
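+If you have several Python versions installed, one way to make sure the labs get a supported interpreter is to create your virtual environment with an explicit version (a sketch; adjust the interpreter name to match your setup):
+
+```bash
+# Create and activate a virtual environment pinned to Python 3.11
+python3.11 -m venv venv
+source venv/bin/activate
+python --version  # should report Python 3.11.x
+```
+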
## Installing Ollama
Most users can simply download Ollama from its [website](https://ollama.com/download).
@@ -57,23 +58,7 @@ brew install ollama
```
!!! note
- You can save time by starting the model download used for the lab in the background by running `ollama pull granite3.1-dense:8b` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
-
-## Installing Visual Studio Code
-
-You can download and install VSCode from their [website](https://code.visualstudio.com/Download) based on your operating system..
-
-!!! note
- You only need one of VSCode or Jetbrains for this lab.
-
-## Installing Jetbrains
-
-Download and install the IDE of your choice [here](https://www.jetbrains.com/ides/#choose-your-ide).
-If you'll be using `python` (like this workshop does), pick [PyCharm](https://www.jetbrains.com/pycharm/).
-
-## Installing Continue
-
-Choose your IDE on their [website](https://www.continue.dev/) and install the extension.
+ You can save time by starting the model download used for the lab in the background by running `ollama pull granite4:micro` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
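+
+If you want to double-check that the `ollama` server is up, you can hit its local API (this assumes Ollama's default port, 11434):
+
+```bash
+curl http://localhost:11434/
+# a running server responds with: Ollama is running
+```
+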
## Installing AnythingLLM