diff --git a/docs/README.md b/docs/README.md
index 10a8e36..7c2e498 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -24,13 +24,11 @@ Our overarching goals of this workshop is as follows:
 | Lab | Description |
 | :--- | :--- |
 | [Lab 0: Workshop Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
-| [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
-| [Lab 1.5: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
+| [Lab 1: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
 | [Lab 2: Chatting with Your Local AI](lab-2/README.md) | Get acquainted with your local LLM |
 | [Lab 3: Prompt Engineering](lab-3/README.md) | Learn about prompt engineering techniques |
 | [Lab 4: Applying What You Learned](lab-4/README.md) | Refine your prompting skills |
-| [Lab 5: Using AnythingLLM for a local RAG](lab-5/README.md) | Build a Granite coding assistant |
-| [Lab 6: Using Open-WebUI for a local RAG](lab-6/README.md) | Write code using Continue and Granite |
+| [Lab 5: Using Open-WebUI for a local RAG](lab-6/README.md) | Build a local RAG with Open-WebUI and Granite |
 | [Lab 7: Using Mellea to help with Generative Computing](lab-7/README.md) | Learn how to leverage Mellea for Advanced AI situations |
 
 Thank you SO MUCH for joining us in this workshop! If you have any questions or feedback,
diff --git a/docs/pre-work/README.md b/docs/pre-work/README.md
index a4ad482..30b4bd3 100644
--- a/docs/pre-work/README.md
+++ b/docs/pre-work/README.md
@@ -14,7 +14,7 @@ These are the required applications and general installation notes for this work
 - [Python](#installing-python)
 - [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
-- [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based. 
+- [Open WebUI](#installing-open-webui) - A browser-based interface for working with your local LLM.
 
 ## Installing Python
 
@@ -60,12 +60,6 @@ brew install ollama
 
 !!! note
     You can save time by starting the model download used for the lab in the background by running `ollama pull granite4:micro` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
 
-## Installing AnythingLLM
-
-Download and install it from their [website](https://anythingllm.com/desktop) based on your operating system. We'll configure it later in the workshop.
-
-!!! note
-    You only need one of AnythingLLM or Open-WebUI for this lab.
 
 ## Installing Open-WebUI