4 changes: 2 additions & 2 deletions docs/README.md
@@ -29,8 +29,8 @@ Our overarching goals of this workshop are as follows:
| [Lab 2: Chatting with Your Local AI](lab-2/README.md) | Get acquainted with your local LLM |
| [Lab 3: Prompt Engineering](lab-3/README.md) | Learn about prompt engineering techniques |
| [Lab 4: Applying What You Learned](lab-4/README.md) | Refine your prompting skills |
| [Lab 5: Building a local AI Assistant](lab-5/README.md) | Build a Granite coding assistant |
| [Lab 6: Coding with an AI Assistant](lab-6/README.md) | Write code using Continue and Granite |
| [Lab 5: Using AnythingLLM for a local RAG](lab-5/README.md) | Build a local RAG with AnythingLLM |
| [Lab 6: Using Open-WebUI for a local RAG](lab-6/README.md) | Build a local RAG with Open-WebUI |
| [Lab 7: Using Mellea to help with Generative Computing](lab-7/README.md) | Learn how to leverage Mellea for Advanced AI situations |

Thank you SO MUCH for joining us in this workshop! If you have any questions or feedback,
Binary file modified docs/images/History_of_IBM_summary.png
Binary file modified docs/images/anythingllm_llm_config.png
Binary file modified docs/images/ceo_list_with_rag.png
Binary file modified docs/images/openwebui_main_screen.png
Binary file modified docs/images/openwebui_model_selection.png
Binary file modified docs/images/openwebui_open_screen.png
Binary file added docs/images/openwebui_rag_source.png
Binary file modified docs/images/openwebui_who_is_batman.png
Binary file modified docs/images/rag_doc_added.png
Binary file modified docs/images/small_llm_ceo_list.png
16 changes: 8 additions & 8 deletions docs/lab-1.5/README.md
@@ -11,23 +11,23 @@ Let's start by configuring [Open-WebUI](../pre-work/README.md#installing-open-webui) and `ollama` to talk to one another.
First, if you haven't already, download the Granite 4 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:

```bash
ollama pull granite3.1-dense:8b
ollama pull granite4:micro
```
!!! note
If the `granite4:micro` model isn't available yet, you can choose `granite3.3:2b` or `granite3.3:8b`
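
To confirm the pull succeeded and that `ollama` is reachable before you continue, a quick check like this can help (a minimal sketch, assuming Ollama's default API port of 11434):

```bash
# List locally downloaded models; granite4:micro (or your fallback) should appear
ollama list

# Optional: query Ollama's HTTP API directly; it returns the same list as JSON
curl http://localhost:11434/api/tags
```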

!!! note
The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite4). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.

Open up Open-WebUI (assuming you've run `open-webui serve`):
Open up Open-WebUI (assuming you've run `open-webui serve`) by visiting this URL in your browser: [http://localhost:8080/](http://localhost:8080/)

![default screen](../images/openwebui_open_screen.png)
![user setup screen](../images/openwebui_user_setup_screen.png)

If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.
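
If instead nothing loads at that URL, make sure the server is actually running. A sketch of the usual sequence (assuming the default port of 8080):

```bash
# Start the Open-WebUI server and leave this terminal open
open-webui serve

# Then browse to http://localhost:8080/
```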

Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine. Remember it, because it will be your Open-WebUI configuration login if you want to dig deeper after this workshop.

![user setup screen](../images/openwebui_user_setup_screen.png)

You should see the Open-WebUI main page now, with `granite3.1-dense:latest` right there in the center!
You should see the Open-WebUI main page now, with `granite4:micro` right there in the center!

![main screen](../images/openwebui_main_screen.png)

@@ -43,7 +43,7 @@ You may notice that your answer is slightly different than the screenshot above.

## Conclusion

**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite4:micro` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!

<script data-goatcounter="https://tracker.asgharlabs.io/count"
async src="//tracker.asgharlabs.io/count.js"></script>
12 changes: 7 additions & 5 deletions docs/lab-1/README.md
@@ -6,22 +6,24 @@ logo: images/ibm-blue-background.png

## Setup

Let's start by configuring [AnythingLLM installed](../pre-work/README.md#anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.
Let's start by configuring [AnythingLLM](../pre-work/README.md#installing-anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.

First, if you haven't already, download the Granite 4 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:

```bash
ollama pull granite3.1-dense:8b
ollama pull granite4:micro
```
!!! note
If the `granite4:micro` model isn't available yet, you can choose `granite3.3:2b` or `granite3.3:8b`
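
If you do need the fallback named in the note above, the pull looks the same; a sketch (pick whichever size fits your machine):

```bash
# Pull a smaller or larger Granite 3.3 model instead of granite4:micro
ollama pull granite3.3:2b   # or: ollama pull granite3.3:8b
```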

!!! note
The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite4). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.

Open the AnythingLLM desktop application and either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama` but you can always change it in the future.

![wrench icon](../images/anythingllm_wrench_icon.png)

Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3.1-dense:8b` model you downloaded. You'd be able to see all the models you have access to through `ollama` here.
Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite4:micro` model you downloaded. You'll be able to see all the models you have access to through `ollama` here.

![llm configuration](../images/anythingllm_llm_config.png)
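
If no models show up in that dropdown, it usually means AnythingLLM can't reach `ollama`. A quick sanity check (a sketch, assuming Ollama's default base URL of `http://127.0.0.1:11434`):

```bash
# AnythingLLM talks to Ollama over its HTTP API; verify that it responds
curl http://127.0.0.1:11434/api/version
```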

@@ -47,7 +49,7 @@ You may notice that your answer is slightly different than the screenshot above.

## Conclusion

**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite4:micro` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!

<script data-goatcounter="https://tracker.asgharlabs.io/count"
async src="//tracker.asgharlabs.io/count.js"></script>
25 changes: 23 additions & 2 deletions docs/lab-2/README.md
@@ -25,8 +25,29 @@ Batman's top 10 enemies are, or what was the most creative way Batman saved the

This is an example of using the CLI with vanilla `ollama`:

First, use `ollama` to list the models you currently have downloaded:
```
ollama list
```
And you'll see a list similar to the following:
```
ollama list
NAME ID SIZE MODIFIED
granite3.3:2b 07bd1f170855 1.5 GB About a minute ago
granite3.3:8b fd429f23b909 4.9 GB 2 minutes ago
granite4:micro b99795f77687 2.1 GB 23 hours ago
```
Next, use Ollama to run one of the models:

```
ollama run granite4:micro
```
And ask it questions, like this:
```
Who is Batman?
```
And it returns something like this:
```
>>> Who is Batman?
Batman is a fictional superhero created by artist Bob Kane and writer Bill Finger. He first appeared in Detective Comics #27,
published by DC Comics in 1939. Born as Bruce Wayne, he becomes Batman to fight crime after witnessing the murder of his parents
@@ -37,7 +58,7 @@ characters in the world of comics and popular culture.
```
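
If you prefer a one-shot answer over the interactive session, you can also pass the prompt as an argument (a sketch; output will vary from run to run):

```bash
ollama run granite4:micro "Who is Batman?"
```

The `>>>` prompts below continue the same interactive session.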

```
>>> What was Batman's top 10 enemies?
>>> Who were Batman's top 10 enemies?
Batman has faced numerous villains over the years, but here are ten of his most notable adversaries:

1. The Joker - One of Batman's archenemies, The Joker is a criminal mastermind known for his chaotic and psychopathic behavior.
2 changes: 1 addition & 1 deletion docs/lab-4/README.md
@@ -212,7 +212,7 @@ The best part of this prompt is that you can take the output and extend or shorten

## Conclusion

Well done! By completing these exercises, you're well on your way to being a prompt expert. In [Lab 5](https://ibm.github.io/opensource-ai-workshop/lab-5/), we'll move towards code-generation and learn how to use a local coding assistant.
Well done! By completing these exercises, you're well on your way to being a prompt expert. In [Lab 5](https://ibm.github.io/opensource-ai-workshop/lab-5/), we'll show how to use local RAG with AnythingLLM.

<script data-goatcounter="https://tracker.asgharlabs.io/count"
async src="//tracker.asgharlabs.io/count.js"></script>
21 changes: 15 additions & 6 deletions docs/lab-5/README.md
@@ -12,19 +12,22 @@ Open up AnythingLLM, and you should see something like the following:
If you see this, AnythingLLM is installed correctly and we can continue configuration. If not, please find a workshop TA or
raise your hand; we'll be there to help you ASAP.

Next as a sanity check, run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
Next, as a sanity check, run the following command to confirm you have the [granite4:micro](https://ollama.com/library/granite4)
model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly onto your laptop.

```bash
ollama pull granite3.1-dense:8b
ollama pull granite4:micro
```
!!! note
If the `granite4:micro` model isn't available yet, you can use `granite3.3:2b` or `granite3.3:8b`

If you didn't know, the supported languages with `granite3.1-dense` now include:
If you didn't know, the supported languages with `granite4` now include:

- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

And the Capabilities also include:

- Thinking
- Summarization
- Text classification
- Text extraction
@@ -33,14 +36,15 @@ And the Capabilities also include:
- Code related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Fill-in-the-middle
- Long-context tasks including long document/meeting summarization, long document QA, etc.

Next, click on the `wrench` icon and open up the settings. For now, we are going to configure the global settings for `ollama`,
but you may want to change them in the future.

![wrench icon](../images/anythingllm_wrench_icon.png)

Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3.1-dense:8b` model. (You should be able to
Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite4:micro` model. (You should be able to
see all the models you have access to through `ollama` there.)

![llm configuration](../images/anythingllm_llm_config.png)
@@ -62,7 +66,7 @@ it knows _something_.
Now you may notice that the answer is slightly different than the screenshot above. That's expected and nothing to worry about. If
you have more questions about it, raise your hand and one of the helpers would love to talk to you about it.

Congratulations! You have AnythingLLM running now, configured to work with `granite3.1-dense` and `ollama`!
Congratulations! You have AnythingLLM running now, configured to work with `granite4:micro` and `ollama`!

## Creating your own local RAG

@@ -88,6 +92,11 @@ Not great, right? Well, now we need to give it a way to look up this data. Luckily, we have a
copy of the budget PDF [here](https://github.com/user-attachments/files/18510560/budget_fy2024.pdf).
Go ahead and save it to your local machine, and be ready to grab it.

!!! note
Granite 4 was trained on newer data than this lab originally assumed, so it may already know the FY2024 budget. If you find that's the case, try the same question about 2025 using the FY2025 budget linked below.

[budget_fy2025.pdf](https://www.whitehouse.gov/wp-content/uploads/2024/03/budget_fy2025.pdf)
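
If you prefer the command line, a download sketch like this should work (URLs as given above; `-L` follows redirects):

```bash
curl -L -o budget_fy2024.pdf https://github.com/user-attachments/files/18510560/budget_fy2024.pdf
```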

Now spin up a **New Workspace** (yes, please a new workspace; it seems that sometimes AnythingLLM has
issues with adding things, so a clean environment is always easier to teach in) and call it
something else.
8 changes: 6 additions & 2 deletions docs/lab-6/README.md
@@ -30,7 +30,7 @@ ollama pull granite3.3:2b

If you didn't know, the supported languages with `granite3.3:2b` now include:

- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

And the Capabilities also include:

@@ -82,6 +82,10 @@ The answer should now be correct. (For example, always before it forgets John Akers

![CEO list with RAG](../images/ceo_list_with_rag.png)

If you look near the bottom of the answer, you can see the RAG source that it used, along with some options you can click on, including information on the tokens per second and total tokens.

![Open-WebUI Source](../images/openwebui_rag_source.png)

We can also find and download information as a PDF from Wikipedia.
For example: [History of IBM](https://en.wikipedia.org/wiki/History_of_IBM)

Expand All @@ -91,7 +95,7 @@ Then use this History_of_IBM.pdf as a RAG by clicking on the + and select "Histo

Next, use Open-WebUI to ask more questions about IBM, or have it summarize the document itself. For example:
```
Write a short 300 word summary of the History_of_IBM.pdf
Write a short 150 word summary of the History_of_IBM.pdf
```
![Summary of IBM History](../images/History_of_IBM_summary.png)

23 changes: 4 additions & 19 deletions docs/pre-work/README.md
@@ -14,9 +14,7 @@ These are the required applications and general installation notes for this workshop.

- [Python](#installing-python)
- [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
- [Visual Studio Code](#installing-visual-studio-code) **(Recommended)** or [any Jetbrains IDE](#installing-jetbrains). This workshop uses VSCode.
- [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based.
- [Continue](#installing-continue) - An IDE extension for AI code assistants.

## Installing Python

@@ -38,6 +36,9 @@ brew install python@3.11

Please confirm that your `python --version` is at least `3.11+` for the best experience.

!!! note
Python 3.11 and 3.12 work best; Python 3.13 has trouble with Open-WebUI at the moment.
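
A quick way to confirm which interpreter you're on (on some systems the binary is `python3` rather than `python`):

```bash
python --version    # or: python3 --version
# Expect: Python 3.11.x or 3.12.x for this workshop
```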

## Installing Ollama

Most users can simply download Ollama from its [website](https://ollama.com/download).
@@ -57,23 +58,7 @@ brew install ollama
```

!!! note
You can save time by starting the model download used for the lab in the background by running `ollama pull granite3.1-dense:8b` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.

## Installing Visual Studio Code

You can download and install VSCode from their [website](https://code.visualstudio.com/Download) based on your operating system..

!!! note
You only need one of VSCode or Jetbrains for this lab.

## Installing Jetbrains

Download and install the IDE of your choice [here](https://www.jetbrains.com/ides/#choose-your-ide).
If you'll be using `python` (like this workshop does), pick [PyCharm](https://www.jetbrains.com/pycharm/).

## Installing Continue

Choose your IDE on their [website](https://www.continue.dev/) and install the extension.
You can save time by starting the model download used for the lab in the background by running `ollama pull granite4:micro` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
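
A sketch of what that looks like in practice (two terminals; skip `ollama serve` if Ollama already runs as a background service):

```bash
# Terminal 1: start the Ollama server if it isn't already running
ollama serve

# Terminal 2: pull the workshop model while you finish the rest of the setup
ollama pull granite4:micro
```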

## Installing AnythingLLM
