diff --git a/docs/docs/examples/jan.md b/docs/docs/examples/jan.md
index 365050737..d4b6fd670 100644
--- a/docs/docs/examples/jan.md
+++ b/docs/docs/examples/jan.md
@@ -1,6 +1,7 @@
 ---
 title: Nitro with Jan
 description: Nitro integrates with Jan to enable a ChatGPT-like functional app, optimized for local AI.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 You can effortlessly utilize Nitro through [Jan](https://jan.ai/), as it is fully integrated with all its functions. With Jan, using Nitro becomes straightforward without the need for any coding.
diff --git a/docs/docs/examples/openai-node.md b/docs/docs/examples/openai-node.md
index f12539e0f..dfa515bdf 100644
--- a/docs/docs/examples/openai-node.md
+++ b/docs/docs/examples/openai-node.md
@@ -1,6 +1,7 @@
 ---
 title: Nitro with openai-node
 description: Nitro intergration guide for Node.js.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 You can migrate from OAI API or Azure OpenAI to Nitro using your existing NodeJS code quickly
diff --git a/docs/docs/examples/openai-python.md b/docs/docs/examples/openai-python.md
index be36d6d43..6fb54c2e8 100644
--- a/docs/docs/examples/openai-python.md
+++ b/docs/docs/examples/openai-python.md
@@ -1,6 +1,7 @@
 ---
 title: Nitro with openai-python
 description: Nitro intergration guide for Python.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 
diff --git a/docs/docs/examples/palchat.md b/docs/docs/examples/palchat.md
index fd675eb81..598a18104 100644
--- a/docs/docs/examples/palchat.md
+++ b/docs/docs/examples/palchat.md
@@ -1,6 +1,7 @@
 ---
 title: Nitro with Pal Chat
 description: Nitro intergration guide for mobile device usage.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 This guide demonstrates how to use Nitro with Pal Chat, enabling local AI chat capabilities on mobile devices.
diff --git a/docs/docs/features/chat.md b/docs/docs/features/chat.md
index 229fb8b0e..939c46b07 100644
--- a/docs/docs/features/chat.md
+++ b/docs/docs/features/chat.md
@@ -1,6 +1,7 @@
 ---
 title: Chat Completion
 description: Inference engine for chat completion, the same as OpenAI's
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 The Chat Completion feature in Nitro provides a flexible way to interact with any local Large Language Model (LLM).
diff --git a/docs/docs/features/cont-batch.md b/docs/docs/features/cont-batch.md
index 65a5f950f..d853db933 100644
--- a/docs/docs/features/cont-batch.md
+++ b/docs/docs/features/cont-batch.md
@@ -1,6 +1,7 @@
 ---
 title: Continuous Batching
 description: Nitro's continuous batching combines multiple requests, enhancing throughput.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Continuous batching boosts throughput and minimizes latency in large language model (LLM) inference. This technique groups multiple inference requests, significantly improving GPU utilization.
diff --git a/docs/docs/features/embed.md b/docs/docs/features/embed.md
index 9e19cd125..77f610981 100644
--- a/docs/docs/features/embed.md
+++ b/docs/docs/features/embed.md
@@ -1,6 +1,7 @@
 ---
 title: Embedding
 description: Inference engine for embedding, the same as OpenAI's
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Embeddings are lists of numbers (floats). To find how similar two embeddings are, we measure the [distance](https://en.wikipedia.org/wiki/Cosine_similarity) between them.
diff --git a/docs/docs/features/feat.md b/docs/docs/features/feat.md
index 51a526331..bc091a547 100644
--- a/docs/docs/features/feat.md
+++ b/docs/docs/features/feat.md
@@ -1,6 +1,7 @@
 ---
 title: Nitro Features
 description: What Nitro supports
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Nitro enhances the `llama.cpp` research base, optimizing it for production environments with advanced features:
diff --git a/docs/docs/features/load-unload.md b/docs/docs/features/load-unload.md
index ca3980069..6c4cd5ff0 100644
--- a/docs/docs/features/load-unload.md
+++ b/docs/docs/features/load-unload.md
@@ -1,6 +1,7 @@
 ---
 title: Load and Unload models
 description: Nitro loads and unloads local AI models (local LLMs).
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 ## Load model
diff --git a/docs/docs/features/multi-thread.md b/docs/docs/features/multi-thread.md
index a2ba2583b..2fe9b23d9 100644
--- a/docs/docs/features/multi-thread.md
+++ b/docs/docs/features/multi-thread.md
@@ -1,6 +1,7 @@
 ---
 title: Multithreading
 description: Nitro utilizes multithreading to optimize hardware usage.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Multithreading in programming allows concurrent task execution, improving efficiency and responsiveness. It's key for optimizing hardware and application performance.
diff --git a/docs/docs/features/prompt.md b/docs/docs/features/prompt.md
index 99418f8ac..28c498671 100644
--- a/docs/docs/features/prompt.md
+++ b/docs/docs/features/prompt.md
@@ -1,6 +1,7 @@
 ---
 title: Prompt Role Support
 description: Setting up Nitro prompts to build an AI assistant.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 System, user, and assistant prompt is crucial for effectively utilizing the Large Language Model. These prompts work together to create a coherent and functional conversational flow.
diff --git a/docs/docs/features/warmup.md b/docs/docs/features/warmup.md
index b709cfd7f..cebf61069 100644
--- a/docs/docs/features/warmup.md
+++ b/docs/docs/features/warmup.md
@@ -1,6 +1,7 @@
 ---
 title: Warming Up Model
 description: Nitro warms up the model to optimize delays.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Model warming up involves pre-running requests through an AI model to fine-tune its components for production. This step minimizes delays during initial inferences, ensuring readiness for immediate use.
diff --git a/docs/docs/new/about.md b/docs/docs/new/about.md
index 202336b1e..b49c834a4 100644
--- a/docs/docs/new/about.md
+++ b/docs/docs/new/about.md
@@ -2,6 +2,7 @@
 title: About Nitro
 slug: /docs
 description: Efficient LLM inference engine for edge computing
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Nitro is a high-efficiency C++ inference engine for edge computing, powering [Jan](https://jan.ai/). It is lightweight and embeddable, ideal for product integration.
diff --git a/docs/docs/new/architecture.md b/docs/docs/new/architecture.md
index e6aae0bd2..f23657465 100644
--- a/docs/docs/new/architecture.md
+++ b/docs/docs/new/architecture.md
@@ -1,6 +1,7 @@
 ---
 title: Architecture
 slug: /achitecture
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 ![Nitro Architecture](img/architecture.drawio.png)
diff --git a/docs/docs/new/build-source.md b/docs/docs/new/build-source.md
index 62e4e55b2..046fd7189 100644
--- a/docs/docs/new/build-source.md
+++ b/docs/docs/new/build-source.md
@@ -2,6 +2,7 @@
 title: Build From Source
 slug: /build-source
 description: Install Nitro manually
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 This guide provides step-by-step instructions for building Nitro from source on Linux, macOS, and Windows systems.
diff --git a/docs/docs/new/faq.md b/docs/docs/new/faq.md
index 0bd25f1a8..c4250cd91 100644
--- a/docs/docs/new/faq.md
+++ b/docs/docs/new/faq.md
@@ -2,6 +2,7 @@
 title: FAQs
 slug: /faq
 description: Frequently Asked Questions about Nitro
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 
diff --git a/docs/docs/new/install.md b/docs/docs/new/install.md
index 4b737c9dd..0a323a57a 100644
--- a/docs/docs/new/install.md
+++ b/docs/docs/new/install.md
@@ -2,6 +2,7 @@
 title: Installation
 slug: /install
 description: How to install Nitro
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 # Nitro Installation Guide
diff --git a/docs/docs/new/model-cycle.md b/docs/docs/new/model-cycle.md
index 06f9b3214..d06ff8dd0 100644
--- a/docs/docs/new/model-cycle.md
+++ b/docs/docs/new/model-cycle.md
@@ -1,6 +1,7 @@
 ---
 title: Model Life Cycle
 slug: /model-cycle
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 ## Load model
diff --git a/docs/docs/new/quickstart.md b/docs/docs/new/quickstart.md
index ecc192733..30326b0c4 100644
--- a/docs/docs/new/quickstart.md
+++ b/docs/docs/new/quickstart.md
@@ -2,6 +2,7 @@
 title: Quickstart
 slug: /quickstart
 description: How to use Nitro
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 ## Step 1: Install Nitro
diff --git a/docs/docusaurus.config.js b/docs/docusaurus.config.js
index 3ef4a1b8f..4df697ad5 100644
--- a/docs/docusaurus.config.js
+++ b/docs/docusaurus.config.js
@@ -36,7 +36,6 @@ const config = {
   markdown: {
     mermaid: true,
   },
-  // Plugins we added
 
   plugins: [
     "docusaurus-plugin-sass",
@@ -125,13 +124,25 @@ const config = {
         playgroundPosition: "bottom",
       },
       metadata: [
+        { name: 'description', content: 'Nitro is a high-efficiency Large Language Model inference engine for edge computing.'},
-        { name: 'keywords', content: 'Nitro, OpenAI compatible, fast inference, local AI, llm, small AI, free, open source, production ready' },
-        { property: 'og:title', content: 'Embeddable AI | Nitro' },
+        { name: 'keywords', content: 'Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama' },
+
+        // Canonical URL
+        { name: 'canonical', content: 'https://nitro.jan.ai/' },
+
+        // Robots tags
+        { name: "robots", content: "index, follow" },
+
+        // Open Graph tags
+        { property: 'og:title', content: 'Fast inference engine | Nitro' },
         { property: 'og:description', content: 'Nitro is a high-efficiency Large Language Model inference engine for edge computing.' },
+        { property: 'og:type', content: 'website'},
+
+        // Twitter card tags
         { property: 'twitter:card', content: 'summary_large_image' },
         { property: 'twitter:site', content: '@janhq_' },
-        { property: 'twitter:title', content: 'Embeddable AI | Nitro' },
+        { property: 'twitter:title', content: 'Fast inference engine | Nitro' },
         { property: 'twitter:description', content: 'Nitro is a high-efficiency Large Language Model inference engine for edge computing.' },
       ],
 
      headTags: [
diff --git a/docs/static/robots.txt b/docs/static/robots.txt
index 6f27bb66a..14267e903 100644
--- a/docs/static/robots.txt
+++ b/docs/static/robots.txt
@@ -1,2 +1,2 @@
 User-agent: *
-Disallow:
\ No newline at end of file
+Allow: /
\ No newline at end of file
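
For reference, Docusaurus emits each object in the `themeConfig.metadata` array above as one `<meta>` tag in every page's `<head>`, with the object's keys becoming the tag's attributes. The sketch below illustrates that mapping; `renderMetaTags` is a simplified stand-in written for this example, not a Docusaurus API.

```javascript
// Two entries taken from the metadata array in the patch above.
const metadata = [
  { name: 'keywords', content: 'Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama' },
  { property: 'og:title', content: 'Fast inference engine | Nitro' },
];

// Illustrative helper: turn each entry's key/value pairs into
// attributes on a single <meta> element.
function renderMetaTags(entries) {
  return entries.map(
    (entry) =>
      '<meta ' +
      Object.entries(entry)
        .map(([attr, value]) => `${attr}="${value}"`)
        .join(' ') +
      '>'
  );
}

const tags = renderMetaTags(metadata);
console.log(tags[1]);
// <meta property="og:title" content="Fast inference engine | Nitro">
```

This is why the crawler-facing keywords appear both per-page (the `keywords:` frontmatter added to each doc) and site-wide (the `metadata` entry in docusaurus.config.js): the frontmatter value overrides the global default on pages that set it.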