diff --git a/.github/ISSUE_TEMPLATE/epic-request.md b/.github/ISSUE_TEMPLATE/epic-request.md
index 486ae90b6..9056ec3dc 100644
--- a/.github/ISSUE_TEMPLATE/epic-request.md
+++ b/.github/ISSUE_TEMPLATE/epic-request.md
@@ -6,12 +6,56 @@ labels: 'type: epic'
assignees: ''
---
+## Goal
-**Problem**
+## Success Criteria
-**Success Criteria**
--
-**Tasklist**
+## Tasklist
- [ ]
+
+## API / CLI Documentation
+### API
+#### 1. Feature
+```
+GET /v1/endpoint
+```
+
+Body:
+```json
+{
+ "key": "value"
+}
+```
+**Response**
+
+Success (200):
+```json
+{
+}
+```
+Error:
+```json
+{
+}
+```
+
+### CLI
+#### 1. Feature
+```
+cortex [options] [subcommand]
+```
+Response:
+```
+```
+#### Help Command
+```
+❯ cortex ...
+Usage:
+cortex [options] [subcommand]
+Options:
+ -h,--help Print this help message and exit
+ ... ...
+
+Subcommands:
+ start Start a model by ID
+ ... ...
+```
\ No newline at end of file
diff --git a/README.md b/README.md
index 335bbc5b6..14c9d959a 100644
--- a/README.md
+++ b/README.md
@@ -26,14 +26,15 @@
Cortex is a Local AI API Platform that is used to run and customize LLMs.
Key Features:
-- Straightforward CLI (inspired by Ollama)
-- Full C++ implementation, packageable into Desktop and Mobile apps
- Pull from Huggingface, or Cortex Built-in Models
- Models stored in universal file formats (vs blobs)
- Swappable Engines (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp), future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx), [`TensorRT-LLM`](https://github.com/janhq/cortex.tensorrt-llm))
- Cortex can be deployed as a standalone API server, or integrated into apps like [Jan.ai](https://jan.ai/)
-Cortex's roadmap is to implement the full OpenAI API including Tools, Runs, Multi-modal and Realtime APIs.
+Coming soon to stable, and available now on [cortex-nightly](#beta--nightly-versions):
+- Engines Management (install specific llama-cpp version and variants)
+- Hardware detection & activation (current: Nvidia, future: AMD, Intel, Qualcomm)
+- Roadmap: implement the full OpenAI API, including Tools, Runs, Multi-modal and Realtime APIs
## Local Installation
@@ -44,19 +45,19 @@ Cortex also has a [Network Installer](#network-installer) which downloads the ne
- For Linux: Download the installer and run the following command in terminal:
@@ -74,12 +75,21 @@ Cortex also has a [Network Installer](#network-installer) which downloads the ne
After installation, you can run Cortex.cpp from the command line by typing `cortex --help`.
```
+# Run a Model
cortex pull llama3.2
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
-cortex run llama3.2
-cortex models ps
-cortex models stop llama3.2
-cortex stop
+cortex run llama3.2
+
+# Resource Management
+cortex ps                              # view active models & RAM/VRAM used
+cortex models stop llama3.2
+
+# Available on cortex-nightly:
+cortex engines install llama-cpp -m   # list versions and variants
+cortex hardware list                  # hardware detection
+cortex hardware activate
+
+cortex stop
```
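+
+With a model running, you can also call the server's OpenAI-compatible HTTP API directly. A minimal sketch (assuming the default port `39281`; see the API reference below for the full request schema):
+
+```sh
+curl http://localhost:39281/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello!"}]}'
+```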
Refer to our [Quickstart](https://cortex.so/docs/quickstart/) and
@@ -92,9 +102,7 @@ Refer to our [API documentation](https://cortex.so/api-reference) for more detai
## Models
-Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access.
-
-Currently Cortex supports pulling from:
+Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access:
- [Hugging Face](https://huggingface.co): GGUF models eg `author/Model-GGUF`
- Cortex Built-in Models
@@ -141,41 +149,15 @@ Select a model (1-9):
```
## Advanced Installation
-
-### Network Installer (Stable)
-
-Cortex.cpp is available with a Network Installer, which is a smaller installer but requires internet connection during installation to download the necessary dependencies.
-
-
-
-
-
-
+### Beta & Nightly Versions (Local Installer)
-### Beta & Nightly Versions
-
-Cortex releases 2 preview versions for advanced users to try new features early (we appreciate your feedback!)
-- Beta (early preview)
- - CLI command: `cortex-beta`
-- Nightly (released every night)
- - CLI Command: `cortex-nightly`
+Cortex releases Beta and Nightly versions for advanced users to try new features early (we appreciate your feedback!).
+- Beta (early preview): CLI command: `cortex-beta`
+- Nightly (released every night): CLI command: `cortex-nightly`
- Nightly automatically pulls the latest changes from upstream [llama.cpp](https://github.com/ggerganov/llama.cpp/) repo, creates a PR and runs tests.
- If all tests pass, the PR is automatically merged into our repo with the latest llama.cpp version.
-#### Local Installer (Default)
-#### Network Installer
+### Network Installer
+
+Cortex.cpp also provides a Network Installer, a smaller download that requires an internet connection during installation to fetch the necessary dependencies.
@@ -236,24 +220,45 @@ Cortex releases 2 preview versions for advanced users to try new features early
| MacOS | Linux |
+ | Stable (Recommended) |
+  cortex.exe
+ |
+  cortex.pkg
+ |
+  cortex.deb
+ |
| Beta (Preview) |
-  cortex-beta-windows-network-installer.exe
+  cortex.exe
|
-  cortex-beta-mac-network-installer.pkg
+  cortex.pkg
|
-  cortex-beta-linux-network-installer.deb
+  cortex.deb
|
@@ -262,19 +267,19 @@ Cortex releases 2 preview versions for advanced users to try new features early
- cortex-nightly-windows-network-installer.exe
+ cortex.exe
|
- cortex-nightly-mac-network-installer.pkg
+ cortex.pkg
|
- cortex-nightly-linux-network-installer.deb
+ cortex.deb
|
diff --git a/docs/docs/cli/config.mdx b/docs/docs/cli/config.mdx
new file mode 100644
index 000000000..471a7a04a
--- /dev/null
+++ b/docs/docs/cli/config.mdx
@@ -0,0 +1,78 @@
+---
+title: Cortex Config
+description: Cortex config command
+slug: "config"
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+# `cortex config`
+
+This command allows you to update server configurations such as CORS and allowed origins.
+
+## Usage
+:::info
+You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
+:::
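+
+For example, using the `status` subcommand documented below:
+
+```sh
+cortex --verbose config status
+```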
+
+<Tabs>
+  <TabItem value="MacOs/Linux" label="MacOs/Linux" default>
+  ```sh
+  cortex config [options] [subcommand]
+  ```
+  </TabItem>
+  <TabItem value="Windows" label="Windows">
+  ```sh
+  cortex.exe config [options] [subcommand]
+  ```
+  </TabItem>
+</Tabs>
+
+**Options**:
+
+| Option | Description | Required | Default value | Example |
+|------------------|-------------------------------------------|----------|----------------------|---------|
+| `--cors` | Toggle CORS | No | true | `on`, `off` |
+| `--allowed_origins`| Allowed origins for CORS | No | `http://localhost:39281`, `http://127.0.0.1:39281` | `http://localhost:3000` |
+| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
+
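+For example, to turn CORS off or add an allowed origin (a sketch based on the option values listed above):
+
+```sh
+cortex config --cors off
+cortex config --allowed_origins http://localhost:3000
+```
+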
+---
+# Subcommands:
+
+## `cortex config status`
+:::info
+This CLI command calls the following API endpoint:
+- [Get Configurations](/api-reference#tag/configurations/get/v1/configs)
+:::
+This command returns all server configurations.
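+
+Equivalently, you can query the endpoint directly (assuming the server is running on its default port):
+
+```sh
+curl http://localhost:39281/v1/configs
+```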
+
+**Usage**:
+
+<Tabs>
+  <TabItem value="MacOs/Linux" label="MacOs/Linux" default>
+  ```sh
+  cortex config status
+  ```
+  </TabItem>
+  <TabItem value="Windows" label="Windows">
+  ```sh
+  cortex.exe config status
+  ```
+  </TabItem>
+</Tabs>
+
+For example, it returns the following:
+
+```
++-----------------+---------------------------+
+| Config name     | Value                     |
++-----------------+---------------------------+
+| allowed_origins | http://localhost:39281    |
++-----------------+---------------------------+
+| allowed_origins | http://127.0.0.1:39281/   |
++-----------------+---------------------------+
+| cors            | true                      |
++-----------------+---------------------------+
+```
\ No newline at end of file
diff --git a/docs/docs/cli/start.mdx b/docs/docs/cli/start.mdx
index 91a8e2819..703e5f535 100644
--- a/docs/docs/cli/start.mdx
+++ b/docs/docs/cli/start.mdx
@@ -35,7 +35,7 @@ You can use the `--verbose` flag to display more detailed output of the internal
| ---------------------------- | ----------------------------------------- | -------- | ------------- | ----------------------------- |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
| `-p`, `--port ` | Port to serve the application. | No | - | `-p 39281` |
-| `--loglevel ` | Setup loglevel for cortex server, supported levels are TRACE, DEBUG, INFO, WARN, ERROR | No | - | `--loglevel DEBUG` |
+| `--loglevel <level>` | Set the log level for the cortex server. Levels in priority order: `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`; a given level also displays all higher-priority logs | No | - | `--loglevel INFO` displays ERROR, WARN and INFO logs |
diff --git a/docs/sidebars.ts b/docs/sidebars.ts
index 205d2a205..5249c743d 100644
--- a/docs/sidebars.ts
+++ b/docs/sidebars.ts
@@ -166,6 +166,7 @@ const sidebars: SidebarsConfig = {
{ type: "doc", id: "cli/cortex", label: "cortex" },
{ type: "doc", id: "cli/start", label: "cortex start" },
{ type: "doc", id: "cli/run", label: "cortex run" },
+ { type: "doc", id: "cli/config", label: "cortex config" },
// { type: "doc", id: "cli/embeddings", label: "cortex embeddings" },
// { type: "doc", id: "cli/presets", label: "cortex presets" },
{ type: "doc", id: "cli/pull", label: "cortex pull" },