Merged
6 changes: 2 additions & 4 deletions docs/README.md
@@ -24,13 +24,11 @@ Our overarching goals of this workshop is as follows:
| Lab | Description |
| :--- | :--- |
| [Lab 0: Workshop Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
-| [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
-| [Lab 1.5: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
+| [Lab 1: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
| [Lab 2: Chatting with Your Local AI](lab-2/README.md) | Get acquainted with your local LLM |
| [Lab 3: Prompt Engineering](lab-3/README.md) | Learn about prompt engineering techniques |
| [Lab 4: Applying What You Learned](lab-4/README.md) | Refine your prompting skills |
-| [Lab 5: Using AnythingLLM for a local RAG](lab-5/README.md) | Build a Granite coding assistant |
-| [Lab 6: Using Open-WebUI for a local RAG](lab-6/README.md) | Write code using Continue and Granite |
+| [Lab 5: Using Open-WebUI for a local RAG](lab-6/README.md) | Write code using Continue and Granite |
| [Lab 7: Using Mellea to help with Generative Computing](lab-7/README.md) | Learn how to leverage Mellea for Advanced AI situations |

Thank you SO MUCH for joining us in this workshop! If you have any questions or feedback,
8 changes: 1 addition & 7 deletions docs/pre-work/README.md
@@ -14,7 +14,7 @@ These are the required applications and general installation notes for this work

- [Python](#installing-python)
- [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
-- [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based.
+- [Open WebUI](#installing-open-webui)

## Installing Python

@@ -60,12 +60,6 @@ brew install ollama
!!! note
    You can save time by starting the model download used for the lab in the background by running `ollama pull granite4:micro` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
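The background download described in that note can be sketched as follows (assuming Ollama is installed and `granite4:micro` is the model name the lab uses):

```shell
# Download the Granite model used in the lab ahead of time,
# in its own terminal, so it is cached before you need it
ollama pull granite4:micro

# If the pull fails with a connection error, the server isn't
# running yet; start it first, then retry the pull:
# ollama serve &
```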

-## Installing AnythingLLM
-
-Download and install it from their [website](https://anythingllm.com/desktop) based on your operating system. We'll configure it later in the workshop.
-
-!!! note
-    You only need one of AnythingLLM or Open-WebUI for this lab.

## Installing Open-WebUI
