42 changes: 18 additions & 24 deletions docs/README.md
@@ -6,17 +6,14 @@ logo: images/ibm-blue-background.png

## Open Source AI workshop

Welcome to the Open Source AI workshop! Thank you for trusting us to help you learn about this
new and exciting space. In this workshop, you'll gain the skills and confidence to effectively use LLMs locally through simple exercises and experimentation, and learn best practices for leveraging open source AI.

The overarching goals of this workshop are as follows:

* Learn about Open Source AI and its general use cases.
* Use an open source LLM that is built in a verifiable and legal way.
* Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.

!!! tip
This workshop may seem short, but a lot of working with AI is exploration and engagement.
@@ -31,21 +28,19 @@ Our overarching goals of this workshop is as follows:

| Lab | Description |
| :--- | :--- |
| [Lab 0: Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
| [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
| [Lab 2: Using the local LLM](lab-2/README.md) | Test some general prompt templates |
| [Lab 3: Engineering prompts](lab-3/README.md) | Learn and apply Prompt Engineering concepts |
| [Lab 4: Using AnythingLLM for a local RAG](lab-4/README.md) | Build a simple local RAG |
| [Lab 5: Building an AI co-pilot](lab-5/README.md) | Build a coding assistant |
| [Lab 6: Using your coding co-pilot](lab-6/README.md) | Use your coding assistant for tasks |

Thank you SO MUCH for joining us in this workshop! If you have any thoughts or questions at any point,
the TAs would love to answer them for you. If you found any issues or bugs, don't hesitate
to open a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
[Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) and we'll get to it
ASAP.

## Compatibility

@@ -60,4 +55,3 @@ This workshop has been tested on the following platforms:
* [JJ Asghar](https://github.com/jjasghar)
* [Gabe Goodhart](https://github.com/gabe-l-hart)
* [Ming Zhao](https://github.com/mingxzhao)

Binary file modified docs/images/anythingllm_open_screen.png
58 changes: 12 additions & 46 deletions docs/lab-1.5/README.md
@@ -5,50 +5,26 @@ logo: images/ibm-blue-background.png
---

!!! warning
This is **optional**. You don't need Open-WebUI if you have AnythingLLM already running.

Now that you have [Open-WebUI installed](../pre-work/README.md#installing-open-webui), let's configure `ollama` and Open-WebUI to talk to one another. The following screenshots will be from a Mac, but the gist of this should be the same on Windows and Linux.
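
If `ollama` isn't already running in the background, a minimal way to bring both services up from a terminal might look like the sketch below (assuming the default ports, 11434 for `ollama` and 8080 for Open-WebUI; the desktop `ollama` app usually starts the server for you):

```bash
# Start the ollama server in the background (skip if the desktop app already runs it)
ollama serve &

# Start Open-WebUI; by default it serves on http://localhost:8080
open-webui serve
```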

Open up Open-WebUI (assuming you've run `open-webui serve` and nothing else), and you should see something like the following:

![default screen](../images/openwebui_open_screen.png)

If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.

Before clicking the "Getting Started" button, make sure that `ollama` has
`granite3.1-dense` pulled down.
Before clicking the *Getting Started* button, make sure that `ollama` has `granite3.1-dense` downloaded:

```bash
ollama pull granite3.1-dense:8b
```
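
To confirm the model finished downloading, you can list the models `ollama` has locally (a quick check, assuming a standard `ollama` install):

```bash
# granite3.1-dense:8b should appear in this list once the pull completes
ollama list
```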

The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.

Click the "Getting Started" button, and fill out the next screen, and click the
"Create Admin Account". This will be your login for your local machine, remember this because
it will also be the Open-WebUI configuration user if want to dig deeper into it after this workshop.
Click *Getting Started*. Fill out the next screen and click the *Create Admin Account*. This will be your login for your local machine. Remember that this because it will be your Open-WebUI configuration login information if want to dig deeper into it after this workshop.

![user setup screen](../images/openwebui_user_setup_screen.png)

@@ -57,22 +33,12 @@ the center!

![main screen](../images/openwebui_main_screen.png)

Test it out! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.
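
If you'd rather sanity-check the model from a terminal instead, one option is to call `ollama`'s REST API directly (a sketch assuming the default port 11434):

```bash
# Ask the model a question over the API; "stream": false returns one JSON response
curl http://localhost:11434/api/generate -d '{
  "model": "granite3.1-dense:8b",
  "prompt": "Who is Batman?",
  "stream": false
}'
```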

The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
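
If you're curious what's loaded at any given moment, you can peek from a terminal (assuming a default `ollama` setup):

```bash
# Show which models are currently loaded into memory, and for how long
ollama ps
```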

![batman](../images/openwebui_who_is_batman.png)

You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!
54 changes: 14 additions & 40 deletions docs/lab-1/README.md
@@ -4,67 +4,41 @@ description: Steps to configure AnythingLLM for usage
logo: images/ibm-blue-background.png
---

Now that you've got [AnythingLLM installed](../pre-work/README.md#anythingllm), we need to configure it with `ollama`. The following screenshots are taken from a Mac, but the gist of this should be the same on Windows and Linux.

Open up AnyThingLLM, and you should see something like the following:
![default screen](../images/anythingllm_open_screen.png)

First, if you haven't already, download the Granite 3.1 model. Open up a terminal and run the following command:

```bash
ollama pull granite3.1-dense:8b
```
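
Before moving on, it may be worth confirming the `ollama` server itself is up and reachable (a quick sketch assuming the default port 11434):

```bash
# A running server answers with its version, e.g. {"version":"0.5.7"}
curl http://localhost:11434/api/version
```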

The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
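
If you'd rather inspect the model without leaving the terminal, `ollama` can also print its metadata locally (assuming the model pulled above):

```bash
# Prints details such as architecture, parameter count, and context length
ollama show granite3.1-dense:8b
```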

Either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama`, but you can always change it in the future.

![wrench icon](../images/anythingllm_wrench_icon.png)

Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3-dense:8b` model. (You should be able to
see all the models you have access to through `ollama` there.)
Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3-dense:8b` model you downloaded. You'd be able to see all the models you have access to through `ollama` here.

![llm configuration](../images/anythingllm_llm_config.png)

Click the "Back to workspaces" button where the wrench was. And Click "New Workspace."
Click the *Back to workspaces* button (where the 🔧 was) and head back to the homepage.

Click *New Workspace*.

![new workspace](../images/anythingllm_new_workspace.png)

Give it a name (e.g. the event you're attending today):

![naming new workspace](../images/anythingllm_naming_workspace.png)

Now, let's test our connection _through_ AnythingLLM! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.

![who is batman](../images/anythingllm_who_is_batman.png)

You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!