diff --git a/docs/docs/demos/chatbox-vid.mdx b/docs/docs/demos/chatbox-vid.mdx
new file mode 100644
index 000000000..00c04a007
--- /dev/null
+++ b/docs/docs/demos/chatbox-vid.mdx
@@ -0,0 +1,24 @@
+---
+title: Run a local chatbox in under 1 minute on macOS with Nitro
+---
+
+
+
+## Links
+
+- [Download Nitro](https://github.com/janhq/nitro/releases)
+- [Download Chatbox](https://github.com/Bin-Huang/chatbox)
+
+## Commands
+
+```bash title="Load model"
+curl http://localhost:3928/inferences/llamacpp/loadmodel \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "llama_model_path": "model/llama-2-7b-chat.Q5_K_M.gguf",
+ "ctx_len": 512,
+ "ngl": 100
+ }'
+```
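Keeping the payload in a shell variable makes it easy to validate before sending — a quick sketch with the same parameters as above (`python3 -m json.tool` is used here only as a local JSON checker):

```bash
# Keep the payload in a variable so it can be validated and reused.
PAYLOAD='{
  "llama_model_path": "model/llama-2-7b-chat.Q5_K_M.gguf",
  "ctx_len": 512,
  "ngl": 100
}'

# Validate the JSON locally (catches stray trailing commas).
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Then send it to a running Nitro server:
# curl http://localhost:3928/inferences/llamacpp/loadmodel \
#   -H 'Content-Type: application/json' -d "$PAYLOAD"
```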
+
+For more information, please refer to the [Nitro with Chatbox](examples/chatbox.md) documentation.
\ No newline at end of file
diff --git a/docs/docs/examples/chatbox.md b/docs/docs/examples/chatbox.md
index 2ea69e0d9..965ba80e5 100644
--- a/docs/docs/examples/chatbox.md
+++ b/docs/docs/examples/chatbox.md
@@ -16,16 +16,25 @@ To download and install Chatbox, follow the instructions available at this [link
## Using Nitro as a Backend
-1. Start Nitro server
+**1. Start Nitro server**
Open your command line tool and enter:
```
nitro
```
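Nitro listens on port 3928 by default. A small helper like the following can confirm the server is up before you load a model (a sketch: the `/healthz` path follows Nitro's health-check endpoint, and the port is an assumption if you changed the default):

```bash
# Wait for the local Nitro server before sending requests
# (3928 is Nitro's default port).
wait_for_server() {
  for _ in 1 2 3; do
    if curl -sf "http://localhost:$1/healthz" > /dev/null; then
      echo "up"
      return 0
    fi
    sleep 1
  done
  echo "down"
  return 1
}

# Usage: wait_for_server 3928
```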
-> Ensure you are using the latest version of [Nitro](new/install.md)
+**2. Download Model**
-2. Run the Model
+Use these commands to download and save the [Llama 2 7B chat model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main):
+
+```bash
+mkdir model && cd model
+wget -O llama-2-7b-model.gguf https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf?download=true
+```
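To make sure the download completed correctly before loading it, you can check for the GGUF magic bytes at the start of the file — a minimal sketch (`check_gguf` is a hypothetical helper, not part of Nitro):

```bash
# Hypothetical helper: GGUF files begin with the ASCII magic "GGUF",
# so the first four bytes are a cheap integrity check.
check_gguf() {
  if [ "$(head -c 4 "$1")" = "GGUF" ]; then
    echo "valid GGUF"
  else
    echo "not a GGUF file"
  fi
}

# Usage once the wget above finishes:
# check_gguf model/llama-2-7b-model.gguf
```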
+
+> For more GGUF models, see [TheBloke](https://huggingface.co/TheBloke).
+
+**3. Run the Model**
To load the model, use the following command:
@@ -39,13 +48,16 @@ curl http://localhost:3928/inferences/llamacpp/loadmodel \
}'
```
-3. Config chatbox
+**4. Configure Chatbox**
+
Adjust the `settings` in Chatbox to connect with Nitro. Change your settings to match the configuration shown in the image below:

-4. Chat with the Model
+**5. Chat with the Model**
Once the setup is complete, you can start chatting with the model using Chatbox. All functions of Chatbox are now enabled with Nitro as the backend.
-## Video demo
\ No newline at end of file
+## Further Usage
+
+For added convenience, you can use [Jan](https://jan.ai/), which has Nitro built in.
\ No newline at end of file
diff --git a/docs/docs/examples/jan.md b/docs/docs/examples/jan.md
new file mode 100644
index 000000000..ec22d6fab
--- /dev/null
+++ b/docs/docs/examples/jan.md
@@ -0,0 +1,18 @@
+---
+title: Nitro with Jan
+---
+
+You can use Nitro effortlessly through [Jan](https://jan.ai/), which integrates all of Nitro's functions. With Jan, no coding is required to use Nitro.
+
+
+
+
+## What is Jan?
+
+Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
+
+Jan uses open-source AI models, stores data in open file formats, and is highly customizable via extensions.
+
+For additional details, please consult the [Jan Documentation](https://jan.ai/docs).
+
+> [Download Jan](https://jan.ai/)
\ No newline at end of file
diff --git a/docs/docs/examples/llm.md b/docs/docs/examples/llm.md
deleted file mode 100644
index 3e06dde3c..000000000
--- a/docs/docs/examples/llm.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: Simple chatbot with Nitro
----
-
-This guide provides instructions to create a chatbot powered by Nitro using the GGUF model.
-
-## Step 1: Download the Model
-
-First, you'll need to download the chatbot model.
-
-1. **Navigate to the Models Folder**
- - Open your project directory.
- - Locate and open the `models` folder within the directory.
-
-2. **Select a GGUF Model**
- - Visit the Hugging Face repository at [TheBloke's Models](https://huggingface.co/TheBloke).
- - Browse through the available models.
- - Choose the model that best fits your needs.
-
-3. **Download the Model**
- - Once you've selected a model, download it using a command like the one below. Replace `` with the path of your chosen model.
-
-
-```bash title="Downloading Zephyr 7B Model"
-wget https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q5_K_M.gguf?download=true
-```
-
-## Step 2: Load model
-Now, you'll set up the model in your application.
-
-1. **Open `app.py` File**
-
- - In your project directory, find and open the app.py file.
-
-2. **Configure the Model Path**
-
- - Modify the model path in app.py to point to your downloaded model.
- - Update the configuration parameters as necessary.
-
-```bash title="Example Configuration" {2}
-dat = {
- "llama_model_path": "nitro/interface/models/zephyr-7b-beta.Q5_K_M.gguf",
- "ctx_len": 2048,
- "ngl": 100,
- "embedding": True,
- "n_parallel": 4,
- "pre_prompt": "A chat between a curious user and an artificial intelligence",
- "user_prompt": "USER: ",
- "ai_prompt": "ASSISTANT: "}
-```
-
-Congratulations! Your Nitro chatbot is now set up. Feel free to experiment with different configuration parameters to tailor the chatbot to your needs.
-
-For more information on parameter settings and their effects, please refer to Run Nitro(using-nitro) for a comprehensive parameters table.
\ No newline at end of file
diff --git a/docs/docs/new/about.md b/docs/docs/new/about.md
index 3432ffe35..e33bf9873 100644
--- a/docs/docs/new/about.md
+++ b/docs/docs/new/about.md
@@ -1,6 +1,6 @@
---
title: About Nitro
-slug: /about
+slug: /docs
---
Nitro is a high-efficiency C++ inference engine for edge computing, powering [Jan](https://jan.ai/). It is lightweight and embeddable, ideal for product integration.
diff --git a/docs/docs/new/build-source.md b/docs/docs/new/build-source.md
index c1cc0c9a7..819a141f1 100644
--- a/docs/docs/new/build-source.md
+++ b/docs/docs/new/build-source.md
@@ -72,13 +72,13 @@ Time to build Nitro!
- **On Linux:**
```bash
- make -j $(%NUMBER_OF_PROCESSORS%)
+ make -j $(nproc)
```
- **On Windows:**
```bash
- cmake --build . --config Release
+ make -j %NUMBER_OF_PROCESSORS%
```
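The `-j` flag sets build parallelism: `$(nproc)` expands to the core count on Linux, and `%NUMBER_OF_PROCESSORS%` is the equivalent Windows environment variable. A portable sketch for Unix-like shells (the macOS fallback via `sysctl` is an assumption for completeness):

```bash
# Pick a parallel job count: nproc on Linux, sysctl on macOS.
JOBS=$(nproc 2>/dev/null || sysctl -n hw.ncpu)
echo "building with $JOBS parallel jobs"
# make -j "$JOBS"
```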
## Start process
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 0488c696e..68db83ea6 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -33,10 +33,19 @@ const sidebars = {
{
type: "category",
label: "Features",
+ collapsible: false,
+ collapsed: false,
link: { type: "doc", id: "features/feat" },
items: [
"features/chat",
"features/embed",
+ ],
+ },
+ {
+ type: "category",
+ label: "Advanced Features",
+ link: { type: "doc", id: "features/feat" },
+ items: [
"features/multi-thread",
"features/cont-batch",
"features/load-unload",
@@ -46,10 +55,11 @@ const sidebars = {
},
{
type: "category",
- label: "Guides",
+ label: "Integrations",
collapsible: false,
collapsed: false,
items: [
+ "examples/jan",
"examples/chatbox",
"examples/openai-node",
"examples/openai-python",
@@ -62,7 +72,16 @@ const sidebars = {
// collapsed: false,
// items: [{ type: "doc", id: "new/architecture", label: "Architecture" }],
// },
- "new/faq"
+ {
+ type: "category",
+ label: "Demos",
+ collapsible: true,
+ collapsed: true,
+ items: [
+ "demos/chatbox-vid",
+ ],
+ },
+ "new/faq",
],
apiSidebar: [