--help` for more information on each command.
### Basic Usage
```bash
-# Start LocalAI with default settings
./local-ai run
-# Start with custom model path and address
./local-ai run --models-path /path/to/models --address :9090
-# Start with GPU acceleration
./local-ai run --f16
```
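The flags above also have environment-variable equivalents (covered in the next section). A minimal sketch of that flag/env-var duality as a launcher script might use it — the fallback values here are placeholders for illustration, not LocalAI's documented defaults:

```shell
# Resolve settings the way a wrapper script might: prefer the
# LOCALAI_* environment variable, fall back to a placeholder default.
models_path="${LOCALAI_MODELS_PATH:-/path/to/models}"
address="${LOCALAI_ADDRESS:-:8080}"
echo "would run: ./local-ai run --models-path $models_path --address $address"
```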
### Environment Variables
```bash
-# Using environment variables
export LOCALAI_MODELS_PATH=/path/to/models
export LOCALAI_ADDRESS=:9090
export LOCALAI_F16=true
@@ -165,7 +145,6 @@ export LOCALAI_F16=true
### Advanced Configuration
```bash
-# Start with multiple models, watchdog, and P2P enabled
./local-ai run \
--models model1.yaml model2.yaml \
--enable-watchdog-idle \
@@ -176,6 +155,6 @@ export LOCALAI_F16=true
## Related Documentation
-- See [Advanced Usage]({{%relref "docs/advanced/advanced-usage" %}}) for configuration examples
-- See [VRAM and Memory Management]({{%relref "docs/advanced/vram-management" %}}) for memory management options
+- See [Advanced Usage]({{%relref "advanced/advanced-usage" %}}) for configuration examples
+- See [VRAM and Memory Management]({{%relref "advanced/vram-management" %}}) for memory management options
diff --git a/docs/content/docs/reference/compatibility-table.md b/docs/content/reference/compatibility-table.md
similarity index 92%
rename from docs/content/docs/reference/compatibility-table.md
rename to docs/content/reference/compatibility-table.md
index 87c9dc5fe1bb..2511afcd5986 100644
--- a/docs/content/docs/reference/compatibility-table.md
+++ b/docs/content/reference/compatibility-table.md
@@ -8,29 +8,26 @@ url = "/model-compatibility/"
Besides llama-based models, LocalAI is also compatible with other architectures. The table below lists all the backends, the compatible model families, and the associated repositories.
-{{% alert note %}}
+{{% notice note %}}
-LocalAI will attempt to automatically load models which are not explicitly configured for a specific backend. You can specify the backend to use by configuring a model with a YAML file. See [the advanced section]({{%relref "docs/advanced" %}}) for more details.
+LocalAI will attempt to automatically load models which are not explicitly configured for a specific backend. You can specify the backend to use by configuring a model with a YAML file. See [the advanced section]({{%relref "advanced" %}}) for more details.
-{{% /alert %}}
+{{% /notice %}}
## Text Generation & Language Models
-{{< table "table-responsive" >}}
| Backend and Bindings | Compatible models | Completion/Chat endpoint | Capability | Embeddings support | Token stream support | Acceleration |
|----------------------------------------------------------------------------------|-----------------------|--------------------------|---------------------------|-----------------------------------|----------------------|--------------|
-| [llama.cpp]({{%relref "docs/features/text-generation#llama.cpp" %}}) | LLama, Mamba, RWKV, Falcon, Starcoder, GPT-2, [and many others](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#description) | yes | GPT and Functions | yes | yes | CUDA 11/12, ROCm, Intel SYCL, Vulkan, Metal, CPU |
+| [llama.cpp]({{%relref "features/text-generation#llama.cpp" %}}) | LLaMA, Mamba, RWKV, Falcon, Starcoder, GPT-2, [and many others](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#description) | yes | GPT and Functions | yes | yes | CUDA 11/12, ROCm, Intel SYCL, Vulkan, Metal, CPU |
| [vLLM](https://github.com/vllm-project/vllm) | Various GPTs and quantization formats | yes | GPT | no | no | CUDA 12, ROCm, Intel |
| [transformers](https://github.com/huggingface/transformers) | Various GPTs and quantization formats | yes | GPT, embeddings, Audio generation | yes | yes* | CUDA 11/12, ROCm, Intel, CPU |
| [exllama2](https://github.com/turboderp-org/exllamav2) | GPTQ | yes | GPT only | no | no | CUDA 12 |
| [MLX](https://github.com/ml-explore/mlx-lm) | Various LLMs | yes | GPT | no | no | Metal (Apple Silicon) |
| [MLX-VLM](https://github.com/Blaizzy/mlx-vlm) | Vision-Language Models | yes | Multimodal GPT | no | no | Metal (Apple Silicon) |
| [langchain-huggingface](https://github.com/tmc/langchaingo) | Any text generators available on HuggingFace through API | yes | GPT | no | no | N/A |
-{{< /table >}}
## Audio & Speech Processing
-{{< table "table-responsive" >}}
| Backend and Bindings | Compatible models | Completion/Chat endpoint | Capability | Embeddings support | Token stream support | Acceleration |
|----------------------------------------------------------------------------------|-----------------------|--------------------------|---------------------------|-----------------------------------|----------------------|--------------|
| [whisper.cpp](https://github.com/ggml-org/whisper.cpp) | whisper | no | Audio transcription | no | no | CUDA 12, ROCm, Intel SYCL, Vulkan, CPU |
@@ -45,28 +42,23 @@ LocalAI will attempt to automatically load models which are not explicitly confi
| [silero-vad](https://github.com/snakers4/silero-vad) with [Golang bindings](https://github.com/streamer45/silero-vad-go) | Silero VAD | no | Voice Activity Detection | no | no | CPU |
| [neutts](https://github.com/neuphonic/neuttsair) | NeuTTSAir | no | Text-to-speech with voice cloning | no | no | CUDA 12, ROCm, CPU |
| [mlx-audio](https://github.com/Blaizzy/mlx-audio) | MLX | no | Text-to-speech | no | no | Metal (Apple Silicon) |
-{{< /table >}}
## Image & Video Generation
-{{< table "table-responsive" >}}
| Backend and Bindings | Compatible models | Completion/Chat endpoint | Capability | Embeddings support | Token stream support | Acceleration |
|----------------------------------------------------------------------------------|-----------------------|--------------------------|---------------------------|-----------------------------------|----------------------|--------------|
| [stablediffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) | stablediffusion-1, stablediffusion-2, stablediffusion-3, flux, PhotoMaker | no | Image | no | no | CUDA 12, Intel SYCL, Vulkan, CPU |
| [diffusers](https://github.com/huggingface/diffusers) | SD, various diffusion models,... | no | Image/Video generation | no | no | CUDA 11/12, ROCm, Intel, Metal, CPU |
| [transformers-musicgen](https://github.com/huggingface/transformers) | MusicGen | no | Audio generation | no | no | CUDA, CPU |
-{{< /table >}}
## Specialized AI Tasks
-{{< table "table-responsive" >}}
| Backend and Bindings | Compatible models | Completion/Chat endpoint | Capability | Embeddings support | Token stream support | Acceleration |
|----------------------------------------------------------------------------------|-----------------------|--------------------------|---------------------------|-----------------------------------|----------------------|--------------|
| [rfdetr](https://github.com/roboflow/rf-detr) | RF-DETR | no | Object Detection | no | no | CUDA 12, Intel, CPU |
| [rerankers](https://github.com/AnswerDotAI/rerankers) | Reranking API | no | Reranking | no | no | CUDA 11/12, ROCm, Intel, CPU |
| [local-store](https://github.com/mudler/LocalAI) | Vector database | no | Vector storage | yes | no | CPU |
| [huggingface](https://huggingface.co/docs/hub/en/api) | HuggingFace API models | yes | Various AI tasks | yes | yes | API-based |
-{{< /table >}}
## Acceleration Support Summary
@@ -87,6 +79,6 @@ LocalAI will attempt to automatically load models which are not explicitly confi
- **Quantization**: 4-bit, 5-bit, 8-bit integer quantization support
- **Mixed Precision**: F16/F32 mixed precision support
-Note: any backend name listed above can be used in the `backend` field of the model configuration file (See [the advanced section]({{%relref "docs/advanced" %}})).
+Note: any backend name listed above can be used in the `backend` field of the model configuration file (See [the advanced section]({{%relref "advanced" %}})).
- \* Only for CUDA and OpenVINO CPU/XPU acceleration.
+ \* Only for CUDA and OpenVINO CPU/XPU acceleration.
diff --git a/docs/content/docs/reference/nvidia-l4t.md b/docs/content/reference/nvidia-l4t.md
similarity index 100%
rename from docs/content/docs/reference/nvidia-l4t.md
rename to docs/content/reference/nvidia-l4t.md
diff --git a/docs/content/docs/whats-new.md b/docs/content/whats-new.md
similarity index 88%
rename from docs/content/docs/whats-new.md
rename to docs/content/whats-new.md
index 320d0dca198e..f3b57c17898b 100644
--- a/docs/content/docs/whats-new.md
+++ b/docs/content/whats-new.md
@@ -10,7 +10,6 @@ Release notes have been now moved completely over Github releases.
You can see the release notes [here](https://github.com/mudler/LocalAI/releases).
-# Older release notes
## 04-12-2023: __v2.0.0__
@@ -74,7 +73,7 @@ From this release the `llama` backend supports only `gguf` files (see {{< pr "94
### Image generation enhancements
-The [Diffusers]({{%relref "docs/features/image-generation" %}}) backend got now various enhancements, including support to generate images from images, longer prompts, and support for more kernels schedulers. See the [Diffusers]({{%relref "docs/features/image-generation" %}}) documentation for more information.
+The [Diffusers]({{%relref "features/image-generation" %}}) backend now has various enhancements, including support for generating images from images, longer prompts, and more kernel schedulers. See the [Diffusers]({{%relref "features/image-generation" %}}) documentation for more information.
### Lora adapters
@@ -137,7 +136,7 @@ The full changelog is available [here](https://github.com/go-skynet/LocalAI/rele
## 🔥🔥🔥🔥 12-08-2023: __v1.24.0__ 🔥🔥🔥🔥
-This is release brings four(!) new additional backends to LocalAI: [🐶 Bark]({{%relref "docs/features/text-to-audio#bark" %}}), 🦙 [AutoGPTQ]({{%relref "docs/features/text-generation#autogptq" %}}), [🧨 Diffusers]({{%relref "docs/features/image-generation" %}}), 🦙 [exllama]({{%relref "docs/features/text-generation#exllama" %}}) and a lot of improvements!
+This release brings four(!) new backends to LocalAI: [🐶 Bark]({{%relref "features/text-to-audio#bark" %}}), 🦙 [AutoGPTQ]({{%relref "features/text-generation#autogptq" %}}), [🧨 Diffusers]({{%relref "features/image-generation" %}}), 🦙 [exllama]({{%relref "features/text-generation#exllama" %}}), and a lot of improvements!
### Major improvements:
@@ -149,23 +148,23 @@ This is release brings four(!) new additional backends to LocalAI: [🐶 Bark]({
### 🐶 Bark
-[Bark]({{%relref "docs/features/text-to-audio#bark" %}}) is a text-prompted generative audio model - it combines GPT techniques to generate Audio from text. It is a great addition to LocalAI, and it's available in the container images by default.
+[Bark]({{%relref "features/text-to-audio#bark" %}}) is a text-prompted generative audio model - it combines GPT techniques to generate Audio from text. It is a great addition to LocalAI, and it's available in the container images by default.
It can also generate music, see the example: [lion.webm](https://user-images.githubusercontent.com/5068315/230684766-97f5ea23-ad99-473c-924b-66b6fab24289.webm)
### 🦙 AutoGPTQ
-[AutoGPTQ]({{%relref "docs/features/text-generation#autogptq" %}}) is an easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
+[AutoGPTQ]({{%relref "features/text-generation#autogptq" %}}) is an easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
-It is targeted mainly for GPU usage only. Check out the [ documentation]({{%relref "docs/features/text-generation" %}}) for usage.
+It is targeted mainly at GPU usage. Check out the [documentation]({{%relref "features/text-generation" %}}) for usage.
### 🦙 Exllama
-[Exllama]({{%relref "docs/features/text-generation#exllama" %}}) is a "A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights". It is a faster alternative to run LLaMA models on GPU.Check out the [Exllama documentation]({{%relref "docs/features/text-generation#exllama" %}}) for usage.
+[Exllama]({{%relref "features/text-generation#exllama" %}}) is "a more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights". It is a faster alternative for running LLaMA models on GPU. Check out the [Exllama documentation]({{%relref "features/text-generation#exllama" %}}) for usage.
### 🧨 Diffusers
-[Diffusers]({{%relref "docs/features/image-generation#diffusers" %}}) is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Currently it is experimental, and supports generation only of images so you might encounter some issues on models which weren't tested yet. Check out the [Diffusers documentation]({{%relref "docs/features/image-generation" %}}) for usage.
+[Diffusers]({{%relref "features/image-generation#diffusers" %}}) is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Currently it is experimental and supports only image generation, so you might encounter issues with models that haven't been tested yet. Check out the [Diffusers documentation]({{%relref "features/image-generation" %}}) for usage.
### 🔑 API Keys
@@ -201,11 +200,11 @@ Most notably, this release brings important fixes for CUDA (and not only):
* fix: select function calls if 'name' is set in the request by {{< github "mudler" >}} in {{< pr "827" >}}
* fix: symlink libphonemize in the container by {{< github "mudler" >}} in {{< pr "831" >}}
-{{% alert note %}}
+{{% notice note %}}
-From this release [OpenAI functions]({{%relref "docs/features/openai-functions" %}}) are available in the `llama` backend. The `llama-grammar` has been deprecated. See also [OpenAI functions]({{%relref "docs/features/openai-functions" %}}).
+From this release [OpenAI functions]({{%relref "features/openai-functions" %}}) are available in the `llama` backend. The `llama-grammar` has been deprecated. See also [OpenAI functions]({{%relref "features/openai-functions" %}}).
-{{% /alert %}}
+{{% /notice %}}
The full [changelog is available here](https://github.com/go-skynet/LocalAI/releases/tag/v1.23.0)
@@ -219,15 +218,15 @@ The full [changelog is available here](https://github.com/go-skynet/LocalAI/rele
* feat: backends improvements by {{< github "mudler" >}} in {{< pr "778" >}}
* feat(llama2): add template for chat messages by {{< github "dave-gray101" >}} in {{< pr "782" >}}
-{{% alert note %}}
+{{% notice note %}}
-From this release to use the OpenAI functions you need to use the `llama-grammar` backend. It has been added a `llama` backend for tracking `llama.cpp` master and `llama-grammar` for the grammar functionalities that have not been merged yet upstream. See also [OpenAI functions]({{%relref "docs/features/openai-functions" %}}). Until the feature is merged we will have two llama backends.
+From this release, to use the OpenAI functions you need the `llama-grammar` backend. A `llama` backend has been added to track `llama.cpp` master, and `llama-grammar` carries the grammar functionality that has not yet been merged upstream. See also [OpenAI functions]({{%relref "features/openai-functions" %}}). Until the feature is merged we will have two llama backends.
-{{% /alert %}}
+{{% /notice %}}
## Huggingface embeddings
-In this release is now possible to specify to LocalAI external `gRPC` backends that can be used for inferencing {{< pr "778" >}}. It is now possible to write internal backends in any language, and a `huggingface-embeddings` backend is now available in the container image to be used with https://github.com/UKPLab/sentence-transformers. See also [Embeddings]({{%relref "docs/features/embeddings" %}}).
+In this release it is now possible to specify external `gRPC` backends for LocalAI to use for inferencing {{< pr "778" >}}. It is now possible to write internal backends in any language, and a `huggingface-embeddings` backend is now available in the container image for use with https://github.com/UKPLab/sentence-transformers. See also [Embeddings]({{%relref "features/embeddings" %}}).
## LLaMa 2 has been released!
@@ -272,7 +271,7 @@ The former, ggml-based backend has been renamed to `falcon-ggml`.
### Default pre-compiled binaries
-From this release the default behavior of images has changed. Compilation is not triggered on start automatically, to recompile `local-ai` from scratch on start and switch back to the old behavior, you can set `REBUILD=true` in the environment variables. Rebuilding can be necessary if your CPU and/or architecture is old and the pre-compiled binaries are not compatible with your platform. See the [build section]({{%relref "docs/getting-started/build" %}}) for more information.
+From this release the default behavior of images has changed. Compilation is no longer triggered automatically on start; to recompile `local-ai` from scratch on start and switch back to the old behavior, set `REBUILD=true` in the environment. Rebuilding can be necessary if your CPU and/or architecture is old and the pre-compiled binaries are not compatible with your platform. See the [build section]({{%relref "installation/build" %}}) for more information.
[Full release changelog](https://github.com/go-skynet/LocalAI/releases/tag/v1.21.0)
@@ -282,8 +281,8 @@ From this release the default behavior of images has changed. Compilation is not
### Exciting New Features 🎉
-* Add Text-to-Audio generation with `go-piper` by {{< github "mudler" >}} in {{< pr "649" >}} See [API endpoints]({{%relref "docs/features/text-to-audio" %}}) in our documentation.
-* Add gallery repository by {{< github "mudler" >}} in {{< pr "663" >}}. See [models]({{%relref "docs/features/model-gallery" %}}) for documentation.
+* Add Text-to-Audio generation with `go-piper` by {{< github "mudler" >}} in {{< pr "649" >}} See [API endpoints]({{%relref "features/text-to-audio" %}}) in our documentation.
+* Add gallery repository by {{< github "mudler" >}} in {{< pr "663" >}}. See [models]({{%relref "features/model-gallery" %}}) for documentation.
### Container images
- Standard (GPT + `stablediffusion`): `quay.io/go-skynet/local-ai:v1.20.0`
@@ -295,7 +294,7 @@ From this release the default behavior of images has changed. Compilation is not
Updates to `llama.cpp`, `go-transformers`, `gpt4all.cpp` and `rwkv.cpp`.
-The NUMA option was enabled by {{< github "mudler" >}} in {{< pr "684" >}}, along with many new parameters (`mmap`,`mmlock`, ..). See [advanced]({{%relref "docs/advanced" %}}) for the full list of parameters.
+The NUMA option was enabled by {{< github "mudler" >}} in {{< pr "684" >}}, along with many new parameters (`mmap`, `mmlock`, ...). See [advanced]({{%relref "advanced" %}}) for the full list of parameters.
### Gallery repositories
@@ -319,13 +318,13 @@ or a `tts` voice with:
curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{ "id": "model-gallery@voice-en-us-kathleen-low" }'
```
-See also [models]({{%relref "docs/features/model-gallery" %}}) for a complete documentation.
+See also [models]({{%relref "features/model-gallery" %}}) for complete documentation.
### Text to Audio
Now `LocalAI` uses [piper](https://github.com/rhasspy/piper) and [go-piper](https://github.com/mudler/go-piper) to generate audio from text. This is an experimental feature, and it requires `GO_TAGS=tts` to be set during build. It is enabled by default in the pre-built container images.
-To setup audio models, you can use the new galleries, or setup the models manually as described in [the API section of the documentation]({{%relref "docs/features/text-to-audio" %}}).
+To set up audio models, you can use the new galleries, or set up the models manually as described in [the API section of the documentation]({{%relref "features/text-to-audio" %}}).
You can check the full changelog in [Github](https://github.com/go-skynet/LocalAI/releases/tag/v1.20.0)
@@ -353,7 +352,7 @@ We now support a vast variety of models, while being backward compatible with pr
### New features
- ✨ Added support for `falcon`-based model families (7b) ( [mudler](https://github.com/mudler) )
-- ✨ Experimental support for Metal Apple Silicon GPU - ( [mudler](https://github.com/mudler) and thanks to [Soleblaze](https://github.com/Soleblaze) for testing! ). See the [build section]({{%relref "docs/getting-started/build#Acceleration" %}}).
+- ✨ Experimental support for Metal Apple Silicon GPU - ( [mudler](https://github.com/mudler) and thanks to [Soleblaze](https://github.com/Soleblaze) for testing! ). See the [build section]({{%relref "installation/build#Acceleration" %}}).
- ✨ Support for token stream in the `/v1/completions` endpoint ( [samm81](https://github.com/samm81) )
- ✨ Added huggingface backend ( [Evilfreelancer](https://github.com/EvilFreelancer) )
- 📷 Stablediffusion now can output `2048x2048` images size with `esrgan`! ( [mudler](https://github.com/mudler) )
@@ -394,7 +393,7 @@ Two new projects offer now direct integration with LocalAI!
Support for OpenCL has been added while building from sources.
-You can now build LocalAI from source with `BUILD_TYPE=clblas` to have an OpenCL build. See also the [build section]({{%relref "docs/getting-started/build#Acceleration" %}}).
+You can now build LocalAI from source with `BUILD_TYPE=clblas` to get an OpenCL build. See also the [build section]({{%relref "installation/build#Acceleration" %}}).
For instructions on how to install OpenCL/CLBlast see [here](https://github.com/ggerganov/llama.cpp#blas-build).
@@ -415,16 +414,13 @@ PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name":
`llama.cpp` models now can also automatically save the prompt cache state as well by specifying in the model YAML configuration file:
```yaml
-# Enable prompt caching
-# This is a file that will be used to save/load the cache. relative to the models directory.
prompt_cache_path: "alpaca-cache"
-# Always enable prompt cache
prompt_cache_all: true
```
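For context, these two keys sit alongside the rest of the model configuration. A minimal sketch of where they belong — the model name and file below are hypothetical placeholders:

```yaml
# hypothetical model config illustrating prompt cache placement
name: alpaca
parameters:
  model: alpaca.bin
prompt_cache_path: "alpaca-cache"  # relative to the models directory
prompt_cache_all: true
```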
-See also the [advanced section]({{%relref "docs/advanced" %}}).
+See also the [advanced section]({{%relref "advanced" %}}).
## Media, Blogs, Social
@@ -437,7 +433,7 @@ See also the [advanced section]({{%relref "docs/advanced" %}}).
- 23-05-2023: __v1.15.0__ released. The `go-gpt2.cpp` backend was renamed to `go-ggml-transformers.cpp` and updated, including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. This impacts RedPajama, GptNeoX, MPT (not `gpt4all-mpt`), Dolly, GPT2 and Starcoder based models. [Binary releases available](https://github.com/go-skynet/LocalAI/releases), various fixes, including {{< pr "341" >}}.
- 21-05-2023: __v1.14.0__ released. Minor updates to the `/models/apply` endpoint, `llama.cpp` backend updated including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. `gpt4all` is still compatible with the old format.
-- 19-05-2023: __v1.13.0__ released! 🔥🔥 updates to the `gpt4all` and `llama` backend, consolidated CUDA support ( {{< pr "310" >}} thanks to @bubthegreat and @Thireus ), preliminar support for [installing models via API]({{%relref "docs/advanced#" %}}).
+- 19-05-2023: __v1.13.0__ released! 🔥🔥 updates to the `gpt4all` and `llama` backend, consolidated CUDA support ( {{< pr "310" >}} thanks to @bubthegreat and @Thireus ), preliminary support for [installing models via API]({{%relref "advanced#" %}}).
- 17-05-2023: __v1.12.0__ released! 🔥🔥 Minor fixes, plus CUDA ({{< pr "258" >}}) support for `llama.cpp`-compatible models and image generation ({{< pr "272" >}}).
- 16-05-2023: 🔥🔥🔥 Experimental support for CUDA ({{< pr "258" >}}) in the `llama.cpp` backend and Stable diffusion CPU image generation ({{< pr "272" >}}) in `master`.
diff --git a/docs/data/landing.yaml b/docs/data/landing.yaml
index 95a55f465e0f..1376f16cc6aa 100644
--- a/docs/data/landing.yaml
+++ b/docs/data/landing.yaml
@@ -40,7 +40,7 @@ hero:
ctaButton:
icon: rocket_launch
btnText: "Get Started"
- url: "/basics/getting_started/"
+ url: "/installation/"
cta2Button:
icon: code
btnText: "View on GitHub"
diff --git a/docs/go.mod b/docs/go.mod
index 35b89dd11e43..b8e4544d7d36 100644
--- a/docs/go.mod
+++ b/docs/go.mod
@@ -1,3 +1,8 @@
-module github.com/McShelby/hugo-theme-relearn.git
+module github.com/mudler/LocalAI/docs
go 1.19
+
+require (
+ github.com/McShelby/hugo-theme-relearn v0.0.0-20251117214752-f69a085322cc // indirect
+ github.com/gohugoio/hugo-mod-bootstrap-scss/v5 v5.20300.20400 // indirect
+)
diff --git a/docs/go.sum b/docs/go.sum
index e69de29bb2d1..a891abc884ad 100644
--- a/docs/go.sum
+++ b/docs/go.sum
@@ -0,0 +1,6 @@
+github.com/McShelby/hugo-theme-relearn v0.0.0-20251117214752-f69a085322cc h1:8BvuabGtqXqhT4H01SS7s0zXea0B2R5ZOFEcPugMbNg=
+github.com/McShelby/hugo-theme-relearn v0.0.0-20251117214752-f69a085322cc/go.mod h1:mKQQdxZNIlLvAj8X3tMq+RzntIJSr9z7XdzuMomt0IM=
+github.com/gohugoio/hugo-mod-bootstrap-scss/v5 v5.20300.20400 h1:L6+F22i76xmeWWwrtijAhUbf3BiRLmpO5j34bgl1ggU=
+github.com/gohugoio/hugo-mod-bootstrap-scss/v5 v5.20300.20400/go.mod h1:uekq1D4ebeXgduLj8VIZy8TgfTjrLdSl6nPtVczso78=
+github.com/gohugoio/hugo-mod-jslibs-dist/popperjs/v2 v2.21100.20000/go.mod h1:mFberT6ZtcchrsDtfvJM7aAH2bDKLdOnruUHl0hlapI=
+github.com/twbs/bootstrap v5.3.3+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0=
diff --git a/docs/hugo.toml b/docs/hugo.toml
new file mode 100644
index 000000000000..0f415f3f8ba5
--- /dev/null
+++ b/docs/hugo.toml
@@ -0,0 +1,104 @@
+baseURL = 'https://localai.io/'
+languageCode = 'en-GB'
+defaultContentLanguage = 'en'
+
+title = 'LocalAI'
+
+# Theme configuration
+theme = 'hugo-theme-relearn'
+
+# Enable Git info
+enableGitInfo = true
+enableEmoji = true
+
+[outputs]
+ home = ['html', 'rss', 'print', 'search']
+ section = ['html', 'rss', 'print']
+ page = ['html', 'print']
+
+[markup]
+ defaultMarkdownHandler = 'goldmark'
+ [markup.tableOfContents]
+ endLevel = 3
+ startLevel = 1
+ [markup.goldmark]
+ [markup.goldmark.renderer]
+ unsafe = true
+ [markup.goldmark.parser.attribute]
+ block = true
+ title = true
+
+[params]
+ # Relearn theme parameters
+ editURL = 'https://github.com/mudler/LocalAI/edit/master/docs/content/'
+ description = 'LocalAI documentation'
+ author = 'Ettore Di Giacinto'
+ showVisitedLinks = true
+ disableBreadcrumb = false
+ disableNextPrev = false
+ disableLandingPageButton = false
+ titleSeparator = '::'
+ disableSeoHiddenPages = true
+
+ # Additional theme options
+ disableSearch = false
+ disableGenerator = false
+ disableLanguageSwitchingButton = true
+
+ # Theme variant - dark/blue style
+ themeVariant = [ 'zen-dark' , 'neon', 'auto' ]
+
+ # ordersectionsby = 'weight'
+
+[languages]
+ [languages.en]
+ title = 'LocalAI'
+ languageName = 'English'
+ weight = 10
+ contentDir = 'content'
+ [languages.en.params]
+ landingPageName = ' Home'
+
+# Menu shortcuts
+[[languages.en.menu.shortcuts]]
+ name = ' GitHub'
+ identifier = 'github'
+ url = 'https://github.com/mudler/LocalAI'
+ weight = 10
+
+[[languages.en.menu.shortcuts]]
+ name = ' Discord'
+ identifier = 'discord'
+ url = 'https://discord.gg/uJAeKSAGDy'
+ weight = 20
+
+[[languages.en.menu.shortcuts]]
+ name = ' X/Twitter'
+ identifier = 'twitter'
+ url = 'https://twitter.com/LocalAI_API'
+ weight = 20
+
+
+# Module configuration for theme
+[module]
+ [[module.mounts]]
+ source = 'content'
+ target = 'content'
+ [[module.mounts]]
+ source = 'static'
+ target = 'static'
+ [[module.mounts]]
+ source = 'layouts'
+ target = 'layouts'
+ [[module.mounts]]
+ source = 'data'
+ target = 'data'
+ [[module.mounts]]
+ source = 'assets'
+ target = 'assets'
+ [[module.mounts]]
+ source = '../images'
+ target = 'static/images'
+ [[module.mounts]]
+ source = 'i18n'
+ target = 'i18n'
diff --git a/docs/layouts/partials/menu-footer.html b/docs/layouts/partials/menu-footer.html
new file mode 100644
index 000000000000..d2a822617156
--- /dev/null
+++ b/docs/layouts/partials/menu-footer.html
@@ -0,0 +1,2 @@
+© 2023-2025 Ettore Di Giacinto
+
diff --git a/docs/themes/hugo-theme-relearn b/docs/themes/hugo-theme-relearn
deleted file mode 100644
index 72f933727e11..000000000000
--- a/docs/themes/hugo-theme-relearn
+++ /dev/null
@@ -1 +0,0 @@
-9a020e7eadb7d8203f5b01b18756c72d94773ec9
\ No newline at end of file
diff --git a/docs/themes/hugo-theme-relearn b/docs/themes/hugo-theme-relearn
new file mode 160000
index 000000000000..f69a085322cc
--- /dev/null
+++ b/docs/themes/hugo-theme-relearn
@@ -0,0 +1 @@
+Subproject commit f69a085322cc70b3e14577c0823c5bc781ec6bb2
diff --git a/docs/themes/lotusdocs b/docs/themes/lotusdocs
deleted file mode 160000
index 975da91e839c..000000000000
--- a/docs/themes/lotusdocs
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 975da91e839cfdb5c20fb66961468e77b8a9f8fd