ModelCraft is an automated PowerShell script that simplifies the installation and configuration of Ollama, Open WebUI, and a set of pre-configured AI models. It is aimed at developers and AI enthusiasts who want a local AI environment up and running quickly with minimal effort.
- 🔹 One-Click Installation: Automates the setup of Ollama, Open WebUI, and required models.
- 🔹 Pre-Configured Models: Installs models like DeepSeek-R1:8B and qwen2.5:7b.
- 🔹 Docker Integration: Seamless setup of Open WebUI using Docker.
- 🔹 Automatic Port Management: Dynamically finds available ports to avoid conflicts.
👉 Automated Ollama Installation: Checks for existing installation and installs if not present.
👉 Open WebUI Setup: Automatically pulls and configures Open WebUI in Docker.
👉 Model Pulling: Automatically pulls AI models like DeepSeek-R1:8B and qwen2.5:7b.
👉 Dynamic Port Allocation: Finds available ports to run services without conflicts.
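For reference, the installation check boils down to a few standard cmdlets. The following is a minimal sketch of that flow, assuming the default paths and URLs from the configuration shown below; the exact function names and installer flags used by Initialize.ps1 may differ.

```powershell
# Sketch only: check for an existing Ollama install and run the installer if it is missing.
$ollamaPath      = "C:\Program Files\Ollama\ollama.exe"
$ollamaInstaller = "https://ollama.com/download/OllamaSetup.exe"

if (Test-Path $ollamaPath) {
    Write-Host "Ollama already installed at $ollamaPath"
} else {
    $setup = Join-Path $env:TEMP "OllamaSetup.exe"
    Invoke-WebRequest -Uri $ollamaInstaller -OutFile $setup       # download the installer
    Start-Process -FilePath $setup -ArgumentList "/SILENT" -Wait  # /SILENT is an assumption; the real script may pass different flags
}
```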
- PowerShell 5.0+ (Pre-installed on Windows 10 and later)
- Docker Desktop (Ensure Docker is installed and running)
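You can verify both prerequisites from a PowerShell prompt before running the script, for example:

```powershell
$PSVersionTable.PSVersion   # should report 5.0 or higher
docker info                 # errors out if Docker Desktop is not installed or not running
```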
```powershell
git clone https://github.com/your-repo/modelcraft.git
cd modelcraft
.\Initialize.ps1
```
This script will:
- Check and install Ollama if not already installed.
- Set up Open WebUI via Docker.
- Pull the specified AI models.
The script is pre-configured with default values:
```powershell
$config = @{
    ollamaInstaller = "https://ollama.com/download/OllamaSetup.exe"
    ollamaPath      = "C:\Program Files\Ollama\ollama.exe"
    dockerImage     = "ghcr.io/open-webui/open-webui:main"
    containerName   = "open-webui"
    webUIPort       = 8080
    dataVolume      = "open-webui"
    models          = @('DeepSeek-R1:8B', 'qwen2.5:7b')
}
```
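The `models` array drives which models are pulled; adding another tag from the Ollama library (for example `llama3.2:3b`) extends the installation. A simplified version of the pull loop might look like this (a sketch, not the script's exact code):

```powershell
# Sketch: pull each configured model with the Ollama CLI (paths and names taken from $config above).
foreach ($model in $config.models) {
    & $config.ollamaPath pull $model
    if ($LASTEXITCODE -ne 0) {
        Write-Warning "Failed to pull $model"
    }
}
```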
- Install-Ollama: Downloads and installs Ollama.
- Install-OpenWebUI: Pulls the Open WebUI Docker image and runs the container.
- Pull-Models: Pulls AI models using Ollama.
- Get-AvailablePort: Finds an available port for running the WebUI.
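As an illustration of the last two helpers, a port probe and container launch could be written roughly as follows. This is a sketch based on the default configuration, not the script's actual implementation; the real Open WebUI run command may include additional options.

```powershell
# Sketch of Get-AvailablePort: walk upward from the preferred port until one is free.
function Get-AvailablePort {
    param([int]$StartPort = 8080)
    $port = $StartPort
    while (Get-NetTCPConnection -LocalPort $port -State Listen -ErrorAction SilentlyContinue) {
        $port++
    }
    return $port
}

# Sketch of how Install-OpenWebUI might use it (values from $config above).
$port = Get-AvailablePort -StartPort $config.webUIPort
docker pull $config.dockerImage
docker run -d -p "$($port):8080" `
    -v "$($config.dataVolume):/app/backend/data" `
    --name $config.containerName `
    --restart always `
    $config.dockerImage
```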
Once the script completes, open Open WebUI in your browser. Port 8080 is the default; if the script had to pick a different port to avoid a conflict, use the port it reports instead:
http://localhost:8080
You can interact with the pulled models via the Ollama CLI:

```powershell
ollama run DeepSeek-R1:8B
ollama run qwen2.5:7b
```
- Docker Not Found: Ensure Docker Desktop is installed and running.
- Port Conflicts: The script assigns an alternative port automatically, but it is worth checking whether another application already occupies port 8080 (see the snippet after this list).
- Manual Model Installation: If model pulling fails, run the pulls manually:

  ```powershell
  ollama pull DeepSeek-R1:8B
  ollama pull qwen2.5:7b
  ```
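To find out which process is listening on port 8080, as mentioned in the port-conflict note above, one option is:

```powershell
# Show the process currently listening on port 8080, if any.
Get-NetTCPConnection -LocalPort 8080 -State Listen -ErrorAction SilentlyContinue |
    ForEach-Object { Get-Process -Id $_.OwningProcess }
```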
We welcome contributions! Follow these steps:
- Fork the repository.
- Create a feature branch: `git checkout -b feature-new-model`
- Commit your changes: `git commit -m "Add new model support"`
- Push your branch and submit a pull request: `git push origin feature-new-model`
This project is MIT Licensed. See the LICENSE file for more details.
📧 Email: support@modelcraft.ai
ModelCraft simplifies the process of AI model management. Get started now and streamline your AI workflows! 🚀