Instructions on how to set up and run your own LLM.
Noah Pursell noahapursell@gmail.com
Ollama is a tool that allows you to download and run Large Language Models (LLMs) on your computer.
- Download Ollama from the Ollama website. Make sure you choose the right version for your OS.

- Run the downloaded executable.

- Verify the installation by opening a new terminal on your computer and running
ollama. If Ollama is installed correctly, you will see its usage message listing the available commands.
After installing Ollama, you can download an LLM. There are many different ones to choose from. Larger LLMs generally perform better but require more compute and memory to run. If you are running on a laptop without a dedicated GPU, I recommend using a model with fewer than 3 billion parameters.
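To make that size guidance concrete: Ollama's default model downloads are typically quantized to around 4 bits per weight, so you can roughly estimate a model's footprint from its parameter count. A small back-of-the-envelope sketch (the 4-bit figure is an assumption about the default quantization, and the helper name is mine):

```python
def estimate_model_size_gb(params_billions: float, bits_per_param: float = 4.0) -> float:
    """Rough on-disk/in-memory footprint of a quantized model's weights."""
    bytes_per_param = bits_per_param / 8
    return params_billions * 1e9 * bytes_per_param / 1e9

# deepseek-r1:1.5b at ~4-bit quantization: roughly 0.75 GB of weights
print(round(estimate_model_size_gb(1.5), 2))  # → 0.75
```

Actual memory use is higher (runtime overhead, context cache), but this explains why sub-3B models are comfortable on a GPU-less laptop while 70B models are not.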
- View Ollama's LLM Library.
- Choose an LLM. In this tutorial, we will use deepseek-r1, 1.5b. This means we are using a 1.5 billion parameter deepseek model.
- Copy the LLM model reference. In this case, it is
deepseek-r1:1.5b
- Go to a new terminal on your computer.
- Run
ollama pull <model reference>. In my case, I ran ollama pull deepseek-r1:1.5b. This will download the model.
- Upon a successful download, you will see a success message.

After downloading the model, you can run it. Running the model will allow you to chat with it. For now, we will run it in the terminal. Later, we will show you how to interact with a model in a pretty, ChatGPT-like UI.
- In a terminal, run
ollama run <model reference>. In my case, I ran ollama run deepseek-r1:1.5b. This will prompt you for a message to start the conversation.

- You can now chat with the LLM.

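The ollama run command is a convenience wrapper: under the hood, Ollama runs a local HTTP server (by default on port 11434) that you can call directly. A minimal Python sketch, assuming Ollama is running and deepseek-r1:1.5b has been pulled (the helper names here are mine, not part of Ollama):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON reply instead of a stream
    }

def chat_once(model: str, prompt: str) -> str:
    """Send one chat message to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Requires a running Ollama server, e.g.:
# print(chat_once("deepseek-r1:1.5b", "Why is the sky blue?"))
```

This is the same API that UIs like Open WebUI talk to behind the scenes.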
Now that we can talk with our LLM, it would be nice not to have to use the terminal. One tool we can use is Open WebUI, which gives us a ChatGPT-like interface for our local LLM.
There are several ways to install Open WebUI, including Docker, pip, Kustomize, and Helm. In this tutorial, we will install Open WebUI using pip.
To install Open WebUI with pip, we must first create a new Python virtual environment (venv). This venv must use Python 3.12. Ensure you have Python 3 installed; check your version with python --version. Instructions for creating and activating a virtual environment differ between Windows and macOS, so follow the appropriate steps below for your operating system.
Windows
- Open Command Prompt.
- Navigate to your project directory and run:
python -m venv openwebui-env
- Activate the virtual environment:
openwebui-env\Scripts\activate
macOS
- Open terminal.
- Navigate to your project directory and run:
python3 -m venv openwebui-env
- Activate the virtual environment:
source openwebui-env/bin/activate
After activating the virtual environment, you can install Open WebUI using pip.
- Install the package:
pip install open-webui
- Start the server:
open-webui serve
- Wait for the startup process to complete.

- On your machine, open the local Open WebUI site in a browser (by default, http://localhost:8080).

- Click "Get Started"
- Create an account.

- Chat with your model!

In this demo, we have used Ollama to download and run our own LLM, and Open WebUI to interact with it through a functional UI.
Going forward, feel free to explore different model types and different Open WebUI functionalities (like tools). Additionally, learn how to connect your code to your local LLM with frameworks like LangChain.
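One small gotcha when wiring your own code to a reasoning model like deepseek-r1: its replies include the model's chain of thought wrapped in <think>...</think> tags, which you usually want to strip before using the answer. A minimal helper (the function name is mine, and the sample reply is invented for illustration):

```python
import re

def strip_think(reply: str) -> str:
    """Remove deepseek-r1 style <think>...</think> reasoning blocks from a reply."""
    return re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL).strip()

raw = "<think>The user greeted me.</think>Hello! How can I help?"
print(strip_think(raw))  # → Hello! How can I help?
```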