diff --git a/docs/README.md b/docs/README.md
index 71994f7..f6002bb 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -6,17 +6,14 @@ logo: images/ibm-blue-background.png
 
 ## Open Source AI workshop
 
-Welcome to our workshop! Thank you for trusting us to help you learn about this
-new and exciting space. There is a lot going on here, and we want to give you
-enough to be able to feel confident in consuming LLM(s) and ideally find success
-quickly. In this workshop we'll be using a local AI Model for code completion,
-and learning best practices leveraging an Open Source LLM.
+Welcome to the Open Source AI workshop! Thank you for trusting us to help you learn about this
+new and exciting space. In this workshop, you'll gain the skills and confidence to use LLMs locally
+through simple exercises and experimentation, and you'll learn best practices for leveraging open source AI.
 
-Our overarching goals of this workshop is as follows:
+The overarching goals of this workshop are as follows:
 
-* Understand what Open Source AI is, and its general use cases
-* How to use an Open Source AI model that is built in a verifiable and legal way
-* Learn about Prompt Engineering, how to leverage a local LLM in starter daily tasks
+* Learn about Open Source AI and its general use cases.
+* Use an open source LLM that is built in a verifiable and legal way.
+* Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.
 
 !!! tip
     This workshop may seem short, but a lot of working with AI is exploration and engagement.
@@ -31,21 +28,19 @@ Our overarching goals of this workshop is as follows:
 
 | Lab | Description |
 | :--- | :--- |
-| [Lab 0: Pre-work](pre-work/README.md) | Pre-work and set up for the workshop |
-| [Lab 1: Building a local AI co-pilot](lab-1/README.md) | Let's get VSCode and our local AI working together |
-| [Lab 2: Using the local AI co-pilot](lab-2/README.md) | Let's learn about how to use a local AI co-pilot |
-| [Lab 3: Configuring AnythingLLM](lab-3/README.md) | Let's configure AnythingLLM or Open-WebUI |
-| [Lab 3.5: Configuring Open-WebUI](lab-3.5/README.md) | Let's configure Open-WebUI or AnythingLLM |
-| [Lab 4: Prompt engineering overview](lab-4/README.md) | Let's learn about leveraging and engaging with the `granite3.1-dense` model |
-| [Lab 5: Useful prompts and use cases](lab-5/README.md) | Let's get some good over arching prompts and uses cases with `granite3.1-dense` model |
-| [Lab 6: Using AnythingLLM for a local RAG](lab-6/README.md) | Let's build a local RAG and use `granite3.1-dense` to talk to it |
-
-!!! success
-    Thank you SO MUCH for joining us on this workshop, if you have any thoughts or questions
-    the TAs would love answer them for you. If you found any issues or bugs, don't hesitate
-    to put a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
-    [Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) in and we'll get to it
-    ASAP.
+| [Lab 0: Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
+| [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
+| [Lab 2: Using the local LLM](lab-2/README.md) | Test some general prompt templates |
+| [Lab 3: Engineering prompts](lab-3/README.md) | Learn and apply Prompt Engineering concepts |
+| [Lab 4: Using AnythingLLM for a local RAG](lab-4/README.md) | Build a simple local RAG |
+| [Lab 5: Building an AI co-pilot](lab-5/README.md) | Build a coding assistant |
+| [Lab 6: Using your coding co-pilot](lab-6/README.md) | Use your coding assistant for tasks |
+
+Thank you SO MUCH for joining us in this workshop! If you have any thoughts or questions at any point,
+the TAs would love to answer them for you. If you found any issues or bugs, don't hesitate
+to open a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
+[Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) and we'll get to it
+ASAP.
 
 ## Compatibility
 
@@ -60,4 +55,3 @@ This workshop has been tested on the following platforms:
 
 * [JJ Asghar](https://github.com/jjasghar)
 * [Gabe Goodhart](https://github.com/gabe-l-hart)
 * [Ming Zhao](https://github.com/mingxzhao)
-
diff --git a/docs/images/anythingllm_open_screen.png b/docs/images/anythingllm_open_screen.png
index 0f8f5f6..10c7a0a 100644
Binary files a/docs/images/anythingllm_open_screen.png and b/docs/images/anythingllm_open_screen.png differ
diff --git a/docs/lab-1.5/README.md b/docs/lab-1.5/README.md
index e9fa459..e352dfa 100644
--- a/docs/lab-1.5/README.md
+++ b/docs/lab-1.5/README.md
@@ -5,50 +5,26 @@ logo: images/ibm-blue-background.png
 ---
 
 !!! warning
-    This should be noted that this is optional. You don't need Open-WebUI if you have AnythingLLM already running. This is **optional**.
+    This is **optional**. You don't need Open-WebUI if you have AnythingLLM already running.
 
-Now that you've gotten [Open-WebUI installed](../pre-work/README.md#open-webui) we need to configure it with `ollama` and Open-WebUI
-to talk to one another. The following screenshots will be from a Mac, but the gist of this should be the same on Windows and Linux.
+Now that you have [Open-WebUI installed](../pre-work/README.md#installing-open-webui), let's configure Open-WebUI and `ollama` to talk to one another. The following screenshots are from a Mac, but the gist of this should be the same on Windows and Linux.
+
+Open up Open-WebUI (assuming you've run `open-webui serve` and nothing else), and you should see something like the following:
 
-Open up Open-WebUI (assuming all you have done is `open-webui serve` and
-nothing else), and you should see something like the following:
 ![default screen](../images/openwebui_open_screen.png)
 
-If you see this that means Open-WebUI is installed correctly, and we can continue configuration, if not, please find a workshop TA or
-raise your hand we'll be there to help you ASAP.
+If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.
 
-Before clicking the "Getting Started" button, make sure that `ollama` has
-`granite3.1-dense` pulled down.
+Before clicking the *Getting Started* button, make sure that `ollama` has `granite3.1-dense` downloaded:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
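+
+If you want to double-check that the download worked, you can ask `ollama` to list everything it has stored locally:
+
+```bash
+# List locally downloaded models; granite3.1-dense:8b should appear
+# in the output once the pull has finished.
+ollama list
+```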
-Run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
-model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.
-
-If you didn't know, the supported languages with `granite3.1-dense` now include:
-
-- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
-
-And the Capabilities also include:
-
-- Summarization
-- Text classification
-- Text extraction
-- Question-answering
-- Retrieval Augmented Generation (RAG)
-- Code related tasks
-- Function-calling tasks
-- Multilingual dialog use cases
-- Long-context tasks including long document/meeting summarization, long document QA, etc.
-
 !!! note
-    We need to figure out a way to copy the models into ollama without downloading.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
 
-Click the "Getting Started" button, and fill out the next screen, and click the
-"Create Admin Account". This will be your login for your local machine, remember this because
-it will also be the Open-WebUI configuration user if want to dig deeper into it after this workshop.
+Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine. Remember it, because it will also be your Open-WebUI configuration login if you want to dig deeper into Open-WebUI after this workshop.
 
 ![user setup screen](../images/openwebui_user_setup_screen.png)
 
@@ -57,22 +33,12 @@ the center!
 
 ![main screen](../images/openwebui_main_screen.png)
 
-Ask it a question, see that it works as you expect...may I suggest:
+Test it out! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.
 
-```
-Who is Batman?
-```
+The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
 
 ![batman](../images/openwebui_who_is_batman.png)
 
-Now you may notice that the answer is slighty different then the screen shot above. That's expected and nothing to worry about. If
-you have more questions about it raise your hand and one of the helpers would love to talk you about it.
-
-Congratulations! You have Open-WebUI running now, configured to work with `granite3.1-dense` and `ollama`!
-
-!!! note
-    This was done on your local machine, take a moment and realize if you
-    needed to create a shared AI enviroment, this could be easily leveraged
-    here. This is very out of scope of this workshop, but the TAs can help if
-    you have some general questions around running this in this "space."
+You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!
+
+**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!
diff --git a/docs/lab-1/README.md b/docs/lab-1/README.md
index b4187f1..eeeaf78 100644
--- a/docs/lab-1/README.md
+++ b/docs/lab-1/README.md
@@ -4,67 +4,41 @@ description: Steps to configure AnythingLLM for usage
 logo: images/ibm-blue-background.png
 ---
 
-Now that you've gotten [AnythingLLM installed](../pre-work/README.md#anythingllm) we need to configure it with `ollama` and AnythingLLM
-to talk to one another. The following screenshots will be from a Mac, but the gist of this should be the same on Windows and Linux.
+Now that you've got [AnythingLLM installed](../pre-work/README.md#anythingllm), we need to configure it with `ollama`. The following screenshots are taken from a Mac, but the gist of this should be the same on Windows and Linux.
 
-Open up AnyThingLLM, and you should see something like the following:
-![default screen](../images/anythingllm_open_screen.png)
-
-If you see this that means AnythingLLM is installed correctly, and we can continue configuration, if not, please find a workshop TA or
-raise your hand we'll be there to help you ASAP.
-
-Next as a sanity check, run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
-model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.
+First, if you haven't already, download the Granite 3.1 model. Open up a terminal and run the following command:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
 
-If you didn't know, the supported languages with `granite3.1-dense` now include:
-
-- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
-
-And the Capabilities also include:
-
-- Summarization
-- Text classification
-- Text extraction
-- Question-answering
-- Retrieval Augmented Generation (RAG)
-- Code related tasks
-- Function-calling tasks
-- Multilingual dialog use cases
-- Long-context tasks including long document/meeting summarization, long document QA, etc.
-
 !!! note
-    We need to figure out a way to copy the models into ollama without downloading, conference wifi is never fast enough.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
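+
+Before wiring up the GUI, you can optionally confirm that the `ollama` server itself is answering requests. This is a minimal sanity check, assuming the `ollama` server is running (it listens on port 11434 by default):
+
+```bash
+# Ask the local ollama server for a short, non-streamed completion.
+curl http://localhost:11434/api/generate -d '{
+  "model": "granite3.1-dense:8b",
+  "prompt": "Say hello in one short sentence.",
+  "stream": false
+}'
+```
+
+If this returns JSON with a `response` field, the model is being served correctly.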
I like the "Who is Batman?" question, as a sanity check on connections and that -it knows _something_. +Now, let's test our connection AnythingLLM! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is. -![who is batman](../images/anythingllm_who_is_batman.png) +The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster. -Now you may notice that the answer is slighty different then the screen shot above. That's expected and nothing to worry about. If -you have more questions about it raise your hand and one of the helpers would love to talk you about it. +![who is batman](../images/anythingllm_who_is_batman.png) -Congratulations! You have AnythingLLM running now, configured to work with `granite3.1-dense` and `ollama`! +You may notice that your answer is slighty different then the screen shot above. This is expected and nothing to worry about! +**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab! diff --git a/docs/lab-2/README.md b/docs/lab-2/README.md index 593d094..67bac05 100644 --- a/docs/lab-2/README.md +++ b/docs/lab-2/README.md @@ -1,110 +1,88 @@ --- -title: Useful Prompts and Use Cases +title: Using the local AI co-pilot description: Some general useful prompt templates logo: images/ibm-blue-background.png --- -Now here comes the fun part, and exploration for your Prompt Engineering (PE) journey. -Be sure you have AnythingLLM (or Open-WebUI) available, and open in a _new_ Workspace. -The testing "Who is Batman?" workspace should be left alone for this. -Maybe call it "Learning Prompt Engineering" or the like, just like below. +Now, here comes the fun exploration for your Prompt Engineering (PE) journey. + +Open a brand _new_ Workspace in AnythingLLM (or Open-WebUI) called "Learning Prompt Engineering". ![](../images/anythingllm_learning_pe.png) ## Zero, Single, Multi Shot prompting -Now that we've played with a couple different versions of prompts, lets talk about the differences between them: +Let's talk about different types of prompts with examples. -- Zero Shot: No previous data or guidelines given before completing request. - - Our "brain storming prompt" was a zero shot prompt, it just started with "do this thing." Then we built off of it, and turned it into a Single Shot prompt. -- One Shot: One piece of data or guideline given before completing request. - - Our email option was a One Shot/Single Shot prompt, because we gave more context on the email and referenced the situation. You'll notice that this is where you'll normally start. -- Few Shot: Multiple pieces of data or guidelines given before completing request. - - Finally our resume one is a Few Shot, because hopefully you did some back and forth to build out a great blurb about yourself, and how you can be ready for this next great job. +### Zero-shot Prompting -## Brain storming prompt +These prompts don't have any previous data, structure, or guidelines provided with the request. Here's an example you can try out: -Now lets try our first real prompt, copy the following into the message box: ``` -I'm looking to explore [subject] in a [format]. -Do you have any suggestions on [topics] I can cover? +I want to explore pasta making recipes. +Do you have any suggestions for recipes that are unique and challenging? 
-This is a good "brain storming idea" prompt. Fill in `[subject]`, `[format]`, and `[topics]` for liking,
-I'll be running:
-```
-I'm looking to explore pasta making recipes. Do you
-have any suggestions on recipes that are unique and challanging?
-```
-
-As you can see granite-3.1 comes back with some very challenging options:
+As you can see, this Granite model comes back with some very challenging options:
 
 ![pasta challenges](../images/anythingllm_pasta_challenges.png)
 
-Now if you put the same question in does it give you the same? Or is it different?
+Try it for yourself: did you get a different response? Would you be satisfied with it?
+
+I'm a fan of the "Homemade Ravioli" option in my response, so I'll ask for the recipe to make that. In the message box, in the same _thread_:
 
-I'm a fan of Homemade Ravioli, so lets ask what the recipe is for that, in the message box in this _thread_ I'll write
-out:
 ```
-I do like some homemade ravioli, what is the spinach
-ricotta and cheese recipe you suggest?
+I do like some homemade ravioli.
+What is the spinach, ricotta and cheese recipe you suggest?
 ```
 
 ![homemade ravioli](../images/anythingllm_homemade_ravioli.png)
 
-Now this may seem odd, or even pointless, but hopefully you can start seeing that if you treat the prompt like
-a conversation that you interate on, you can talk back and forth with the granite-3.1 and find interesting
-nuggets of knowledge.
+These simple back-and-forth questions are examples of zero-shot prompts. Try testing out the model with simple prompts like this about any subject you can think of. Next, we'll start to add some complexity to our prompts.
 
-## Client or Customer email generation
+## One-Shot and Multi-Shot Prompting
 
-Next create a new "thread" so the context window resets, and lets try something everyone has probably already
-done, but give you a "mad libs" prompt that can help just churn them out for you.
+First, create a new "thread" so the context window resets. You can think of a *context window* as the amount of information a model can "remember".
 
 ![new thread](../images/anythingllm_new_thread.png)
 
-Take the following prompt, and fill it out to your content. Have some fun with it :)
+In the following examples, we'll add more guidance in our prompts. By providing **one** example or structure, we achieve *one-shot prompting*. Take the provided prompts, and replace the [words] in brackets with your own choices. Have fun with it!
+
 ```
-I want you to act as a customer support assistant who
-is [characteristic]. How would you respond to [text]
-as a representative of our [type] company?
+I want you to act as a customer support assistant who is [characteristic].
+How would you respond to [text] as a representative of our [type] company?
 ```
 
 My version will be:
 
 ```
-I want you to act as a customer support assistant who
-is an expert in shipping logistics. How would you respond
-to client who has had their freight lost as a
-representative of our company?
+I want you to act as a customer support assistant who is an expert in shipping logistics.
+How would you respond to a client who has had their freight lost as a representative of our company?
 ```
 
 ![lost freight](../images/anythingllm_lost_freight.png)
 
-Oh, that's not nearly enough, or interesting right? Well it's because we haven't interated on it, we just wrote a "client" with no context, or what they may
-have lost. So lets see if we can fill it out more:
+That's not a satisfactory or interesting response, right? We need to iterate on it, and provide more context about the client, like what they may have lost. **Tip: always think about adding more context!**
+
 ```
-The freight they lost was an industrial refrigerator,
-from Burbank, California to Kanas City, MO. I need you to
-write out an apology letter, with reference to the
-shipping order, of #00234273 and the help line of 18003472845,
-with a discount code of OPPSWEDIDITAGAIN for 15% off
-shipping their next order.
-Mention that sometimes the trucks have accidents and
-need to be repaired and we should be able to reach
-out in a couple weeks.
+The freight they lost was an industrial refrigerator, from Burbank, California to Kansas City, MO.
+I need you to write out an apology letter, with reference to the shipping order #00234273
+and the help line of 1-800-347-2845, with a discount code of OPPSWEDIDITAGAIN for 15% off
+shipping their next order. Mention that sometimes the trucks have accidents and need
+to be repaired and we should be able to reach out in a couple weeks.
 ```
 
 ![better lost freight](../images/anythingllm_better_lost_freight.png)
 
-So much better! With more context, and more of a back story to what you are asking for, building off the intial prompt, we got something
-that with just a small tweaks we can email to our client.
+So much better! By providing more context and more insight into what you are expecting in a response, we can greatly improve the quality of our responses with small tweaks.
+
+By providing **multiple** examples, you're achieving *multi-shot prompting*!
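+
+To make multi-shot prompting concrete, here's a small, made-up example you can paste into a new thread. The labeled examples teach the model the exact output format we want before it sees the real request:
+
+```
+Classify the sentiment of each customer message as POSITIVE, NEGATIVE, or NEUTRAL.
+
+Message: "My order arrived two days early, thank you!"
+Sentiment: POSITIVE
+
+Message: "The box was crushed and the invoice was missing."
+Sentiment: NEGATIVE
+
+Message: "My package just shipped this morning."
+Sentiment: NEUTRAL
+
+Message: "I've been on hold for an hour and nobody can find my refund."
+Sentiment:
+```
+
+The model should answer `NEGATIVE`, following the pattern set by the three examples. The messages above are invented for illustration; swap in examples from your own domain.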
 
-## Your work history prompt
+## Work History Prompt
 
-You probably have your resume on this machine we are working on right? Lets take it and build a "blurb" about your skill set and who you are
-and maybe if you are feeling adventurous you can even get a cover letter out of it. (Don't forget to start a new thread!)
+You might have your resume on the laptop you're working on. If you do, you can take it and build a summary about your skill set and who you are. If you are really adventurous, you can even try to make the model write you a cover letter! *Don't forget to start a new thread!*
+
+Here's a prompt to help you get started; you can fill in the [words] again.
 
-Here's a prompt to help you getting started:
 ```
 The following text is my resume for my career up until my most recent job. I am [your job now] with
@@ -118,29 +96,27 @@
 skill set, and my previous expertise
 ```
 
 ![](../images/anythingllm_resume.png)
 
-Now for mine, it wasn't great, but it at least give me somethings to work off of. Again, this is just a start, but you can build off of this blurb and
-see what you can actually accomplish.
+My response has room for improvement, but it gives me something to work with. Try to build off of and modify this blurb until you're happy with the quality of the response you receive. Think outside of the box!
 
 ## Summarization Prompt
 
-Something you'll discover quickly is that leveraging your local AI model to summarize long documents and/or emails can help figure out if you
-actually need to read the details of something. Showing the age of the author here, but remember [CliffNotes](https://en.wikipedia.org/wiki/CliffsNotes)? Yep, you have your own
-built in CliffNotes bot with AI.
+Summarizing long documents or emails is a very popular use case for your local AI model.
+
+The author of this workshop is probably older than you, but remember [CliffsNotes](https://en.wikipedia.org/wiki/CliffsNotes)? Well, you have your own built-in CliffsNotes bot on your laptop now!
 
-Here's a prompt to help you set up your AI model to put it "head space" this was inspired
-from [this website](https://narrato.io/blog/get-precise-insights-with-30-chatgpt-prompts-for-summary-generation/):
+Here's a prompt that puts your AI model in the right "head space" for summarization, inspired
+by [this website](https://narrato.io/blog/get-precise-insights-with-30-chatgpt-prompts-for-summary-generation/):
 
 ```
-Generate an X-word summary of the following document,
+Generate an [X]-word summary of the following document,
 highlighting key insights, notable quotes, and the
 overall tone of the core point of it. Be sure to add
 any specific call to actions or things that need to be
 done by a specific date.
 ```
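+
+Once you've filled in the word count, paste the document itself after the prompt. If you'd rather try this outside the GUI, the same prompt works through the `ollama` CLI. A rough sketch, where `notes.txt` is a hypothetical file on your machine:
+
+```bash
+# Feed the summarization prompt plus the contents of notes.txt
+# to the model in a single one-off run.
+ollama run granite3.1-dense:8b "Generate a 100-word summary of the following document,
+highlighting key insights, notable quotes, and the overall tone of the core point of it:
+$(cat notes.txt)"
+```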
 
-## Role playing prompt
+## Role-Playing Prompt
 
-If you noticed in the previous lab we talked about leveraging a single prompt to build a
-"single shot" role playing if you skipped it, we'll be going over it again here.
+If you're familiar with the role-playing game Dungeons & Dragons, this exercise is for you!
 
 ```
 Generate a self-contained dungeon adventure for a party of 4 adventurers,
 with a clear objective, unique challenges, and a memorable boss encounter,
 all designed to be completed in a single session of gameplay
 ```
 
-The student took inspiration from [this website](https://www.the-enchanted-scribe.com/post/6-steps-one-prompt-using-chatgpt-to-generate-one-shot-d-d-adventures), which goes deeper in depth, and can build out the
-whole thing for you if you want.
+This prompt takes inspiration from [this website](https://www.the-enchanted-scribe.com/post/6-steps-one-prompt-using-chatgpt-to-generate-one-shot-d-d-adventures), which goes more in-depth, and can build out a whole game for you if you want.
 
-The best part of this prompt is that you can take the output and extend or contract
-the portions it starts with, and tailor the story to your adventurers needs!
+The best part of this prompt is that you can take the output and extend or shorten
+the portions it starts with, and tailor the story to your adventurers' needs!
 
-## Other ideas?
+## Other Ideas?
 
 We'd love to add more to this workshop for future students, if you've come up with something
 clever or maybe someone beside you has and you'd like to save it for others we'd love
-a [Pull Request](https://github.com/IBM/opensource-ai-workshop/tree/main/docs/lab-5) of it.
+a [Pull Request](https://github.com/IBM/opensource-ai-workshop/tree/main/docs/lab-2) of it.
diff --git a/docs/pre-work/README.md b/docs/pre-work/README.md
index 76d9afb..6146fa2 100644
--- a/docs/pre-work/README.md
+++ b/docs/pre-work/README.md
@@ -6,113 +6,81 @@ logo: images/ibm-blue-background.png
 
 These are the required applications and general installation notes for this workshop.
 
-Please have the "Required" software installed, and then choose from the Student's choice section
-per your preferences. If you don't know what to select choose the **SUGGESTED** options.
+**Ollama** and **Python** are required for this workshop, but you can choose an IDE and GUI interface from the options provided. If you don't know what to select, just go with the recommended options!
 
-## Required
+*Remember, you can **always** ask the teacher for help if you get stuck on any step!*
 
-- [Ollama](#ollama) - This application allows you to locally host an LLM model on your computer.
-- [Python](#python) - If you don't already have a proficiently in a language, please follow the `python` steps.
+## Required Software
 
-## Student's Choice
-- [Visual Studio Code](#visual-studio-code) - **SUGGESTED** We'll be walking through an extension to VSCode in this workshop.
-- [One of the Jetbrains IDEs](#jetbrains) - You can choose the one you want, if the wifi is bad, please reach out to a TA to give you a USB stick.
-- [AnythingLLM](#anythingllm) - **SUGGESTED** This will be a GUI interface to your LLM(s).
-- [Open WebUI](#open-webui) - This is a browser based GUI for your LLM(s).
+- [Python](#installing-python)
+- [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
+- An IDE - either [Visual Studio Code](#installing-visual-studio-code) **(Recommended)** or [any Jetbrains IDE](#installing-jetbrains). This workshop uses VSCode.
+- A GUI - either [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based.
 
-## Ollama
+## Installing Python
 
-#### Mac installation steps
+There are multiple ways to install Python; you can follow the [beginner's guide](https://wiki.python.org/moin/BeginnersGuide/Download) based on your operating system.
 
-##### Download via the Ollama website
+### Using Homebrew (Mac)
 
-[Download Ollama](https://ollama.com/download/Ollama-darwin.zip) via the website.
+Install [Homebrew](https://brew.sh/) using the following command:
 
-Unzip the folder, and move the Ollama app to your applications folder.
-
-##### Terminal Installation
-
-Open up a terminal, and install [homebrew](https://brew.sh/).
 
 ```bash
 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
 ```
 
-After the installation is complete, install [ollama](https://ollama.com) via `brew`.
+Then, install Python via `brew`:
 
 ```bash
-brew install ollama
+brew install python@3.11
 ```
 
-### Windows installation steps
+Please confirm that your `python --version` is at least `3.11+` for the best experience.
 
-Install ollama via the website [here](https://ollama.com/download/windows).
+## Installing Ollama
 
-## Visual Studio Code
+Most users can simply download Ollama from its [website](https://ollama.com/download).
 
-#### Mac installation steps
+### Using Homebrew (Mac)
 
-Open up a terminal, and install [homebrew](https://brew.sh/), if you didn't install this during the Ollama step.
+Install [Homebrew](https://brew.sh/) using the following command:
 
 ```bash
 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
 ```
 
-After the installation is complete, install [vscode](https://code.visualstudio.com/) via `brew`.
+Then, install [Ollama](https://ollama.com) via `brew`:
 
 ```bash
-brew install --cask visual-studio-code
+brew install ollama
 ```
 
-## Jetbrains
-
-Head on over to [here](https://www.jetbrains.com/ides/#choose-your-ide) and
-download the IDE if you haven't already. If you are leveraging `python` like
-this workshop will be, you should pick
-[PyCharm](https://www.jetbrains.com/pycharm/)
-
-## Python
-
-Python is a whole programming language. There are multiple ways to install it, and
-[here is the official website](https://www.python.org). Please take a moment and if you can't run
-the following command, reach out to a teaching assistant or instructor to help you
-get resolved.
-
 !!! note
-    If you have an older version of python, or default "OS" versions of python, you'll need to update.
+    You can save time by starting the model download used for the lab in the background by running `ollama pull granite3.1-dense:8b` in its own terminal.
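+
+Before moving on, it's worth a quick check that both required tools are on your PATH. Exact version numbers will vary, but both commands should print a version rather than erroring out:
+
+```bash
+# Both commands should print a version if the installs worked.
+python3 --version
+ollama --version
+```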
 
-```bash
-python --version
-Python 3.11.4
-```
-
-#### Mac installation steps
-
-##### Terminal Installation
+## Installing Visual Studio Code
 
-If you need to install Python via `brew` please do the following:
-```bash
-brew install python@3.11
-```
+You can download and install VSCode from their [website](https://code.visualstudio.com/Download) based on your operating system.
 
-Please confirm that your `python --version` is at least `3.11+` for the best experience.
+!!! note
+    You only need one of VSCode or Jetbrains for this lab.
 
-## AnythingLLM
+## Installing Jetbrains
 
-Head on over [here](https://anythingllm.com/desktop) choose the correct version
-for your Operating System. We will configure it later in the workshop.
+Download and install the IDE of your choice [here](https://www.jetbrains.com/ides/#choose-your-ide).
+If you'll be using `python` (like this workshop does), pick [PyCharm](https://www.jetbrains.com/pycharm/).
 
-## Open-WebUI
+## Installing AnythingLLM
 
-If you have decided to run the Web Based/Browser based way to interact with your LLM(s) [open-webui](https://github.com/open-webui/open-webui)
-is a fine if not _the_ defacto choice.
+Download and install it from their [website](https://anythingllm.com/desktop) based on your operating system. We'll configure it later in the workshop.
 
 !!! note
-    You only need to pick one of AnythingLLM or Open-WebUI, though you could
-    pick both, it's really up to you!
+    You only need one of AnythingLLM or Open-WebUI for this lab.
 
-Assuming you've set up [Python](#python) above, you'll need the following commands
-to get it installed.
+## Installing Open-WebUI
+
+Assuming you've set up [Python](#installing-python) above, use the following commands to install Open-WebUI:
 
 ```bash
 cd ~
 pip install open-webui
 open-webui serve
 ```
 
-With this you should have the applications you need, let's start the workshop!
+Now that you have all of the tools you need, let's start building our local AI co-pilot.
+
+**Head over to [Lab 1](/docs/lab-1/README.md) if you have AnythingLLM or [Lab 1.5](/docs/lab-1.5/README.md) for Open-WebUI.**