diff --git a/docs/docs/features/chat.md b/docs/docs/features/chat.md index 8eb458231..36955b035 100644 --- a/docs/docs/features/chat.md +++ b/docs/docs/features/chat.md @@ -8,7 +8,7 @@ The Chat Completion feature in Nitro provides a flexible way to interact with an To send a single query to your chosen LLM, follow these steps: -
+<div class="code-snippet-left">
```bash title="Nitro" curl http://localhost:3928/v1/chat/completions \ @@ -27,7 +27,7 @@ curl http://localhost:3928/v1/chat/completions \
-
+<div class="code-snippet-right">
```bash title="OpenAI" curl https://api.openai.com/v1/chat/completions \ @@ -52,7 +52,7 @@ This command sends a request to your local LLM, querying about the winner of the For ongoing conversations or multiple queries, the dialog request feature is ideal. Here’s how to structure a multi-turn conversation: -
+<div class="code-snippet-left">
```bash title="Nitro" curl http://localhost:3928/v1/chat/completions \ @@ -82,7 +82,7 @@ curl http://localhost:3928/v1/chat/completions \
-
+<div class="code-snippet-right">
```bash title="OpenAI" curl https://api.openai.com/v1/chat/completions \ @@ -116,7 +116,7 @@ curl https://api.openai.com/v1/chat/completions \ Below are examples of responses from both the Nitro server and OpenAI: -
+<div class="code-snippet-left">
```js title="Nitro" { @@ -145,7 +145,7 @@ Below are examples of responses from both the Nitro server and OpenAI:
-
+<div class="code-snippet-right">
```js title="OpenAI" { diff --git a/docs/docs/features/embed.md b/docs/docs/features/embed.md index 58bde7269..0925c6a6d 100644 --- a/docs/docs/features/embed.md +++ b/docs/docs/features/embed.md @@ -14,7 +14,7 @@ To utilize the embedding feature, include the JSON parameter `"embedding": true` Here’s an example showing how to get the embedding result from the model: -
+<div class="code-snippet-left">
```bash title="Nitro" {1} curl http://localhost:3928/v1/embeddings \ @@ -28,7 +28,7 @@ curl http://localhost:3928/v1/embeddings \ ```
-
+<div class="code-snippet-right">
```bash title="OpenAI request" {1} curl https://api.openai.com/v1/embeddings \ @@ -47,7 +47,7 @@ curl https://api.openai.com/v1/embeddings \ The example response below uses the output from the model [llama2 Chat 7B Q5 (GGUF)](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main) loaded into the Nitro server. -
+<div class="code-snippet-left">
```js title="Nitro" { @@ -68,7 +68,7 @@ The example response used the output from model [llama2 Chat 7B Q5 (GGUF)](https
-
+<div class="code-snippet-right">
```js title="OpenAI" { diff --git a/docs/docs/new/about.md b/docs/docs/new/about.md index e33bf9873..ef6e32228 100644 --- a/docs/docs/new/about.md +++ b/docs/docs/new/about.md @@ -21,7 +21,7 @@ One of the significant advantages of using Nitro is its compatibility with OpenA For instance, compare the Nitro inference call: -
+<div class="code-snippet-left">
```bash title="Nitro chat completion" curl http://localhost:3928/v1/chat/completions \ @@ -44,7 +44,7 @@ curl http://localhost:3928/v1/chat/completions \
-
+<div class="code-snippet-right">
```bash title="OpenAI API chat completion" curl https://api.openai.com/v1/chat/completions \ diff --git a/docs/src/styles/components/base.scss b/docs/src/styles/components/base.scss index 851e3c6b0..c27793115 100644 --- a/docs/src/styles/components/base.scss +++ b/docs/src/styles/components/base.scss @@ -77,3 +77,11 @@ } } } + +.code-snippet-left { + @apply w-full lg:w-1/2 float-left; +} + +.code-snippet-right { + @apply w-full lg:w-1/2 float-right; +} diff --git a/docs/src/styles/tweaks/sidebar.scss b/docs/src/styles/tweaks/sidebar.scss index bf45e1c5a..aae55c085 100644 --- a/docs/src/styles/tweaks/sidebar.scss +++ b/docs/src/styles/tweaks/sidebar.scss @@ -16,7 +16,7 @@ } [class*="docItemCol_"] { - @apply px-8; + @apply lg:px-8; } // * Including custom sidebar table of content