
Commit bd174aa

committed
chore(docs): add documentation about import
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
1 parent 95b6c9b commit bd174aa

File tree

2 files changed: +263, -25 lines changed


docs/content/getting-started/models.md

Lines changed: 262 additions & 24 deletions
@@ -1,23 +1,46 @@
 +++
 disableToc = false
-title = "Install and Run Models"
-weight = 4
-icon = "rocket_launch"
+title = "Setting Up Models"
+weight = 2
+icon = "hub"
+description = "Learn how to install, configure, and manage models in LocalAI"
 +++

-To install models with LocalAI, you can:
+This section covers everything you need to know about installing and configuring models in LocalAI. You'll learn multiple methods to get models running.

-- Browse the Model Gallery from the Web Interface and install models with a couple of clicks. For more details, refer to the [Gallery Documentation]({{% relref "features/model-gallery" %}}).
-- Specify a model from the LocalAI gallery during startup, e.g., `local-ai run <model_gallery_name>`.
-- Use a URI to specify a model file (e.g., `huggingface://...`, `oci://`, or `ollama://`) when starting LocalAI, e.g., `local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf`.
-- Specify a URL to a model configuration file when starting LocalAI, e.g., `local-ai run https://gist.githubusercontent.com/.../phi-2.yaml`.
-- Manually install the models by copying the files into the models directory (`--models`).
+## Prerequisites

-## Run and Install Models via the Gallery
+- LocalAI installed and running (see [Quickstart]({{% relref "getting-started/quickstart" %}}) if you haven't set it up yet)
+- Basic understanding of command line usage

-To run models available in the LocalAI gallery, you can use the WebUI or specify the model name when starting LocalAI. Models can be found in the gallery via the Web interface, the [model gallery](https://models.localai.io), or the CLI with: `local-ai models list`.
+## Method 1: Using the Model Gallery (Easiest)

-To install a model from the gallery, use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:
+The Model Gallery is the simplest way to install models. It provides pre-configured models ready to use.
+
+### Via WebUI
+
+1. Open the LocalAI WebUI at `http://localhost:8080`
+2. Navigate to the "Models" tab
+3. Browse available models
+4. Click "Install" on any model you want
+5. Wait for installation to complete
+
+For more details, refer to the [Gallery Documentation]({{% relref "features/model-gallery" %}}).
+
+### Via CLI
+
+```bash
+# List available models
+local-ai models list
+
+# Install a specific model
+local-ai models install llama-3.2-1b-instruct:q4_k_m
+
+# Start LocalAI with a model from the gallery
+local-ai run llama-3.2-1b-instruct:q4_k_m
+```
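The same gallery installation can also be triggered over LocalAI's HTTP API. A minimal sketch, assuming the `/models/apply` and `/models/jobs` endpoints described in the Gallery documentation and a server on the default port (the model id is illustrative):

```bash
# Ask a running LocalAI instance to install a gallery model.
# The exact id comes from `local-ai models list` or the Gallery documentation.
curl -s http://localhost:8080/models/apply \
  -H "Content-Type: application/json" \
  -d '{"id": "llama-3.2-1b-instruct:q4_k_m"}'

# The response contains a job UUID that can be polled for download progress.
curl -s http://localhost:8080/models/jobs/<job-uuid>
```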
+
+To run models available in the LocalAI gallery, you can use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:

 ```bash
 local-ai run hermes-2-theta-llama-3-8b
@@ -31,7 +54,82 @@ local-ai models install hermes-2-theta-llama-3-8b

 Note: The galleries available in LocalAI can be customized to point to a different URL or a local directory. For more information on how to set up your own gallery, see the [Gallery Documentation]({{% relref "features/model-gallery" %}}).

-## Run Models via URI
+### Browse Online
+
+Visit [models.localai.io](https://models.localai.io) to browse all available models in your browser.
+
+## Method 1.5: Import Models via WebUI
+
+The WebUI provides a powerful model import interface that supports both simple and advanced configuration:
+
+### Simple Import Mode
+
+1. Open the LocalAI WebUI at `http://localhost:8080`
+2. Click "Import Model"
+3. Enter the model URI (e.g., `https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct-GGUF`)
+4. Optionally configure preferences:
+   - Backend selection
+   - Model name
+   - Description
+   - Quantizations
+   - Embeddings support
+   - Custom preferences
+5. Click "Import Model" to start the import process
+
+### Advanced Import Mode
+
+For full control over model configuration:
+
+1. In the WebUI, click "Import Model"
+2. Toggle to "Advanced Mode"
+3. Edit the YAML configuration directly in the code editor
+4. Use the "Validate" button to check your configuration
+5. Click "Create" or "Update" to save
+
+The advanced editor includes:
+- Syntax highlighting
+- YAML validation
+- Format and copy tools
+- Full configuration options
+
+This is especially useful for:
+- Custom model configurations
+- Fine-tuning model parameters
+- Setting up complex configurations
+- Editing existing model configurations
+
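For illustration, this is the kind of minimal configuration you might paste into the advanced editor, written here as a file created by hand instead. The file name, model name, and quantization are hypothetical; the field names follow the configuration examples later on this page:

```bash
# Hypothetical by-hand equivalent of a WebUI import:
# write a minimal model configuration into the models directory.
mkdir -p models
cat > models/qwen3-vl-8b-instruct.yaml <<'EOF'
name: qwen3-vl-8b-instruct
backend: llama-cpp
parameters:
  model: Qwen3-VL-8B-Instruct-Q4_K_M.gguf
context_size: 4096
EOF
```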
+## Method 2: Installing from Hugging Face
+
+LocalAI can directly install models from Hugging Face:
+
+```bash
+# Install and run a model from Hugging Face
+local-ai run huggingface://TheBloke/phi-2-GGUF
+```
+
+The format is `huggingface://<repository>/<model-file>` (the `<model-file>` part is optional).
+
+### Examples
+
+```bash
+local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
+```
+
+## Method 3: Installing from OCI Registries
+
+### Ollama Registry
+
+```bash
+local-ai run ollama://gemma:2b
+```
+
+### Standard OCI Registry
+
+```bash
+local-ai run oci://localai/phi-2:latest
+```
+
+### Run Models via URI

 To run models via URI, specify a URI to a model file or a configuration file when starting LocalAI. Valid syntax includes:

@@ -51,18 +149,45 @@ local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
 local-ai run oci://localai/phi-2:latest
 ```

-## Run Models Manually
+## Method 4: Manual Installation

-Follow these steps to manually run models using LocalAI:
+For full control, you can manually download and configure models.

-1. **Prepare Your Model and Configuration Files**:
-   Ensure you have a model file and, if necessary, a configuration YAML file. Customize model defaults and settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation]({{% relref "advanced" %}}).
+### Step 1: Download a Model

-2. **GPU Acceleration**:
-   For instructions on GPU acceleration, visit the [GPU Acceleration]({{% relref "features/gpu-acceleration" %}}) page.
+Download a GGUF model file. Popular sources:

-3. **Run LocalAI**:
-   Choose one of the following methods to run LocalAI:
+- [Hugging Face](https://huggingface.co/models?search=gguf)
+
+Example:
+
+```bash
+mkdir -p models
+
+wget https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q4_K_M.gguf \
+  -O models/phi-2.Q4_K_M.gguf
+```
+
+### Step 2: Create a Configuration File (Optional)
+
+Create a YAML file to configure the model:
+
+```yaml
+# models/phi-2.yaml
+name: phi-2
+parameters:
+  model: phi-2.Q4_K_M.gguf
+  temperature: 0.7
+context_size: 2048
+threads: 4
+backend: llama-cpp
+```
+
+Customize model defaults and settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation]({{% relref "advanced" %}}).
+
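Once a configuration like the one above is in place, the `name` field is what API requests refer to. A quick sketch, assuming LocalAI is already running on the default port with the `phi-2` configuration loaded:

```bash
# Query the model by the `name` defined in its YAML configuration.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "phi-2",
    "messages": [{"role": "user", "content": "Write a haiku about local inference."}]
  }'
```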
+### Step 3: Run LocalAI
+
+Choose one of the following methods to run LocalAI:

 {{< tabs >}}
 {{% tab title="Docker" %}}
@@ -74,7 +199,6 @@ cp your-model.gguf models/

 docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4

-
 curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
 "model": "your-model.gguf",
 "prompt": "A long time ago in a galaxy far, far away",
@@ -121,10 +245,8 @@ git clone https://github.com/go-skynet/LocalAI

 cd LocalAI

-
 cp your-model.gguf models/

-
 docker compose up -d --pull always

 curl http://localhost:8080/v1/models
@@ -154,6 +276,11 @@ For Kubernetes deployment, see the [Kubernetes installation guide]({{% relref "i

 LocalAI binary releases are available on [GitHub](https://github.com/go-skynet/LocalAI/releases).

+```bash
+# With binary
+local-ai --models-path ./models
+```
+
 {{% notice tip %}}
 If installing on macOS, you might encounter a message saying:

@@ -174,4 +301,115 @@ For instructions on building LocalAI from source, see the [Build from Source gui
 {{% /tab %}}
 {{< /tabs >}}

+### GPU Acceleration
+
+For instructions on GPU acceleration, visit the [GPU Acceleration]({{% relref "features/gpu-acceleration" %}}) page.
+
 For more model configurations, visit the [Examples Section](https://github.com/mudler/LocalAI-examples/tree/main/configurations).
+
+## Understanding Model Files
+
+### File Formats
+
+- **GGUF**: Modern format, recommended for most use cases
+- **GGML**: Older format, still supported but deprecated
+
+### Quantization Levels
+
+Models come in different quantization levels (a quality vs. size trade-off):
+
+| Quantization | Size | Quality | Use Case |
+|-------------|------|---------|----------|
+| Q8_0 | Largest | Highest | Best quality, requires more RAM |
+| Q6_K | Large | Very High | High quality |
+| Q4_K_M | Medium | High | Balanced (recommended) |
+| Q4_K_S | Small | Medium | Lower RAM usage |
+| Q2_K | Smallest | Lower | Minimal RAM, lower quality |
+
+### Choosing the Right Model
+
+Consider:
+
+- **RAM available**: Larger models need more RAM
+- **Use case**: Different models excel at different tasks
+- **Speed**: Smaller quantizations are faster
+- **Quality**: Larger quantizations (e.g., Q8_0) produce better output
+
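As a rough, illustrative rule of thumb for sizing, the weights alone need about `parameters × bits-per-weight / 8` bytes of RAM, plus overhead for the context window:

```bash
# Back-of-the-envelope estimate for a 7B-parameter model at ~4.8 bits/weight (roughly Q4_K_M).
awk 'BEGIN { printf "%.1f GB\n", 7e9 * 4.8 / 8 / 1e9 }'   # ~4.2 GB, before context overhead
```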
+## Model Configuration
+
+### Basic Configuration
+
+Create a YAML file in your models directory:
+
+```yaml
+name: my-model
+parameters:
+  model: model.gguf
+  temperature: 0.7
+  top_p: 0.9
+context_size: 2048
+threads: 4
+backend: llama-cpp
+```
+
+### Advanced Configuration
+
+See the [Model Configuration]({{% relref "advanced/model-configuration" %}}) guide for all available options.
+
+## Managing Models
+
+### List Installed Models
+
+```bash
+# Via API
+curl http://localhost:8080/v1/models
+
+# Via CLI
+local-ai models list
+```
+
+### Remove Models
+
+Simply delete the model file and configuration from your models directory:
+
+```bash
+rm models/model-name.gguf
+rm models/model-name.yaml # if it exists
+```
+
+## Troubleshooting
+
+### Model Not Loading
+
+1. **Check backend**: Ensure the required backend is installed
+
+   ```bash
+   local-ai backends list
+   local-ai backends install llama-cpp # if needed
+   ```
+
+2. **Check logs**: Enable debug mode
+
+   ```bash
+   DEBUG=true local-ai
+   ```
+
+3. **Verify file**: Ensure the model file is not corrupted
+
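For the third point, one way to verify a download is to compare checksums, assuming the upstream repository publishes them (file name taken from the manual-installation example above):

```bash
# Compare against the SHA-256 listed on the model's Hugging Face page, if one is published.
sha256sum models/phi-2.Q4_K_M.gguf
```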
+### Out of Memory
+
+- Use a smaller quantization (Q4_K_S or Q2_K)
+- Reduce `context_size` in configuration
+- Close other applications to free RAM
+
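A small illustrative tweak along those lines, assuming the `models/phi-2.yaml` example from earlier and GNU `sed`:

```bash
# Halve the context window in the example configuration (values are illustrative).
sed -i 's/^context_size: .*/context_size: 1024/' models/phi-2.yaml
```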
+### Wrong Backend
+
+Check the [Compatibility Table]({{% relref "reference/compatibility-table" %}}) to ensure you're using the correct backend for your model.
+
+## Best Practices
+
+1. **Start small**: Begin with smaller models to test your setup
+2. **Use quantized models**: Q4_K_M is a good balance for most use cases
+3. **Organize models**: Keep your models directory organized
+4. **Backup configurations**: Save your YAML configurations
+5. **Monitor resources**: Watch RAM and disk usage

docs/content/getting-started/quickstart.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 +++
 disableToc = false
 title = "Quickstart"
-weight = 3
+weight = 1
 url = '/basics/getting_started/'
 icon = "rocket_launch"
 +++
