This repository was archived by the owner on Jul 4, 2025. It is now read-only.
Merged
52 changes: 48 additions & 4 deletions .github/ISSUE_TEMPLATE/epic-request.md
@@ -6,12 +6,56 @@ labels: 'type: epic'
assignees: ''

---
## Goal

**Problem**

## Success Criteria

**Success Criteria**
-

**Tasklist**
## Tasklist
- [ ]

## API / CLI Documentation
### API
#### 1. Feature
```
GET /v1/endpoint
```

Body:
```json
{
"key": "value"
}
```
**Response** (200):
```json
{
}
```
**Error**:
```json
{
}
```

### CLI
#### 1. Feature
```
GET /v1/endpoint
```
Response:
```
```
#### Help Command
```
❯ cortex ...
Usage:
cortex [options] [subcommand]
Options:
-h,--help Print this help message and exit
... ...

Subcommands:
start Start a model by ID
... ...
```
119 changes: 62 additions & 57 deletions README.md
@@ -26,14 +26,15 @@
Cortex is a Local AI API Platform for running and customizing LLMs.

Key Features:
- Straightforward CLI (inspired by Ollama)
- Full C++ implementation, packageable into Desktop and Mobile apps
- Pull from Hugging Face or Cortex Built-in Models
- Models stored in universal file formats (vs blobs)
- Swappable Engines (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp), future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx), [`TensorRT-LLM`](https://github.com/janhq/cortex.tensorrt-llm))
- Cortex can be deployed as a standalone API server, or integrated into apps like [Jan.ai](https://jan.ai/)

Cortex's roadmap is to implement the full OpenAI API including Tools, Runs, Multi-modal and Realtime APIs.
Coming soon to stable; available now on [cortex-nightly](#beta--nightly-versions):
- Engines Management (install specific llama-cpp versions and variants)
- Hardware detection & activation (current: Nvidia, future: AMD, Intel, Qualcomm)
- Cortex's roadmap is to implement the full OpenAI API including Tools, Runs, Multi-modal and Realtime APIs.

## Local Installation

@@ -44,19 +45,19 @@ Cortex also has a [Network Installer](#network-installer) which downloads the necessary dependencies
<h4>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:15px; width: 15px" />
Windows:
<a href='https://app.cortexcpp.com/download/latest/windows-amd64-local'><b>cortex-windows-local-installer.exe</b></a>
<a href='https://app.cortexcpp.com/download/latest/windows-amd64-local'><b>cortex.exe</b></a>
</h4>

<h4>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
MacOS (Silicon/Intel):
<a href='https://app.cortexcpp.com/download/latest/mac-universal-local'><b>cortex-mac-local-installer.pkg</b></a>
<a href='https://app.cortexcpp.com/download/latest/mac-universal-local'><b>cortex.pkg</b></a>
</h4>

<h4>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:15px; width: 15px" />
Linux:
<a href='https://app.cortexcpp.com/download/latest/linux-amd64-local'><b>cortex-linux-local-installer.deb</b></a>
<a href='https://app.cortexcpp.com/download/latest/linux-amd64-local'><b>cortex.deb</b></a> (Coming soon: Linux installation script)
</h4>

- For Linux: Download the installer and run the following command in terminal:
@@ -74,12 +75,21 @@ Cortex also has a [Network Installer](#network-installer) which downloads the necessary dependencies
After installation, you can run Cortex.cpp from the command line by typing `cortex --help`.

```
# Run a Model
cortex pull llama3.2
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
cortex run llama3.2
cortex models ps
cortex models stop llama3.2
cortex stop
cortex run llama3.2

# Resource Management
cortex ps                             # view active models & RAM/VRAM used
cortex models stop llama3.2

# Available on cortex-nightly:
cortex engines install llama-cpp -m   # lists versions and variants
cortex hardware list                  # hardware detection
cortex hardware activate

cortex stop
```
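Once a model is running, Cortex serves an OpenAI-compatible HTTP API that the CLI commands above wrap. A minimal sketch of a chat request follows; the port 39281 matches the default origin shown by `cortex config status`, and the `/v1/chat/completions` path follows the OpenAI convention — treat both as assumptions, not a guaranteed interface:

```python
import json
from urllib import error, request

# Assumptions: Cortex is serving on its default local port 39281 and
# exposes an OpenAI-style /v1/chat/completions endpoint. Illustrative
# only; verify the actual origin with `cortex config status`.
payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = request.Request(
    "http://localhost:39281/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

def send(chat_req):
    # Returns the decoded JSON reply, or None if no server is reachable.
    try:
        with request.urlopen(chat_req, timeout=5) as resp:
            return json.loads(resp.read())
    except (error.URLError, OSError):
        return None
```

With a model started via `cortex run llama3.2`, `send(req)` would return the completion object; without a server it simply returns `None`.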

Refer to our [Quickstart](https://cortex.so/docs/quickstart/) and
@@ -92,9 +102,7 @@ Refer to our [API documentation](https://cortex.so/api-reference) for more details.

## Models

Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access.

Currently Cortex supports pulling from:
Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access:
- [Hugging Face](https://huggingface.co): GGUF models, e.g. `author/Model-GGUF`
- Cortex Built-in Models

@@ -141,41 +149,15 @@ Select a model (1-9):
```

## Advanced Installation

### Network Installer (Stable)

Cortex.cpp is available with a Network Installer, which is a smaller installer but requires internet connection during installation to download the necessary dependencies.

<h4>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:14px; width: 14px" />
Windows:
<a href='https://app.cortexcpp.com/download/latest/windows-amd64-local'><b>cortex-windows-network-installer.exe</b></a>
</h4>

<h4>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
MacOS (Universal):
<a href='https://app.cortexcpp.com/download/latest/mac-universal-network'><b>cortex-mac-network-installer.pkg</b></a>
</h4>

<h4>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:14px; width: 15px" />
Linux:
<a href='https://app.cortexcpp.com/download/latest/linux-amd64-network'><b>cortex-linux-network-installer.deb</b></a>
</h4>

### Beta & Nightly Versions (Local Installer)

### Beta & Nightly Versions

Cortex releases 2 preview versions for advanced users to try new features early (we appreciate your feedback!)
- Beta (early preview)
- CLI command: `cortex-beta`
- Nightly (released every night)
- CLI Command: `cortex-nightly`
Cortex releases Beta and Nightly versions for advanced users to try new features (we appreciate your feedback!)
- Beta (early preview): CLI command: `cortex-beta`
- Nightly (released every night): CLI command: `cortex-nightly`
- Nightly automatically pulls the latest changes from the upstream [llama.cpp](https://github.com/ggerganov/llama.cpp/) repo, creates a PR, and runs tests.
- If all tests pass, the PR is automatically merged into our repo with the latest llama.cpp version.

#### Local Installer (Default)
<table>
<tr style="text-align:center">
<td style="text-align:center"><b>Version</b></td>
@@ -188,19 +170,19 @@ Cortex releases 2 preview versions for advanced users to try new features early
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/beta/windows-amd64-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:14px; width: 14px" />
cortex-beta-windows-local-installer.exe
cortex.exe
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/beta/mac-universal-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
cortex-beta-mac-local-installer.pkg
cortex.pkg
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/beta/linux-amd64-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:14px; width: 14px" />
cortex-beta-linux-local-installer.deb
cortex.deb
</a>
</td>
</tr>
@@ -209,25 +191,27 @@ Cortex releases 2 preview versions for advanced users to try new features early
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/nightly/windows-amd64-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:14px; width: 14px" />
cortex-nightly-windows-local-installer.exe
cortex.exe
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/nightly/mac-universal-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
cortex-nightly-mac-local-installer.pkg
cortex.pkg
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/nightly/linux-amd64-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:14px; width: 14px" />
cortex-nightly-linux-local-installer.deb
cortex.deb
</a>
</td>
</tr>
</table>

#### Network Installer
### Network Installer

Cortex.cpp is available with a Network Installer, which is a smaller installer but requires internet connection during installation to download the necessary dependencies.

<table>
<tr style="text-align:center">
@@ -236,24 +220,45 @@ Cortex releases 2 preview versions for advanced users to try new features early
<td style="text-align:center"><b>MacOS</b></td>
<td style="text-align:center"><b>Linux</b></td>
</tr>
<tr style="text-align:center">
<td style="text-align:center"><b>Stable (Recommended)</b></td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/latest/windows-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:15px; width: 15px" />
cortex.exe
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/latest/mac-universal-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
cortex.pkg
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/latest/linux-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:15px; width: 15px" />
cortex.deb
</a>
</td>
</tr>
<tr style="text-align:center">
<td style="text-align:center"><b>Beta (Preview)</b></td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/beta/windows-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:14px; width: 14px" />
cortex-beta-windows-network-installer.exe
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:15px; width: 15px" />
cortex.exe
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/beta/mac-universal-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
cortex-beta-mac-network-installer.pkg
cortex.pkg
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/beta/linux-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:15px; width: 15px" />
cortex-beta-linux-network-installer.deb
cortex.deb
</a>
</td>
</tr>
@@ -262,19 +267,19 @@ Cortex releases 2 preview versions for advanced users to try new features early
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/nightly/windows-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:15px; width: 15px" />
cortex-nightly-windows-network-installer.exe
cortex.exe
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/nightly/mac-universal-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
cortex-nightly-mac-network-installer.pkg
cortex.pkg
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/nightly/linux-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:15px; width: 15px" />
cortex-nightly-linux-network-installer.deb
cortex.deb
</a>
</td>
</tr>
78 changes: 78 additions & 0 deletions docs/docs/cli/config.mdx
@@ -0,0 +1,78 @@
---
title: Cortex Config
description: Cortex config command
slug: "config"
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

# `cortex config`

This command lets you update server configurations such as CORS and allowed origins.

## Usage
:::info
You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.
:::

<Tabs>
<TabItem value="MacOs/Linux" label="MacOs/Linux">
```sh
cortex config [options] [subcommand]
```
</TabItem>
<TabItem value="Windows" label="Windows">
```sh
cortex.exe config [options] [subcommand]
```
</TabItem>
</Tabs>

**Options**:

| Option | Description | Required | Default value | Example |
|------------------|-------------------------------------------|----------|----------------------|---------|
| `--cors` | Toggle CORS | No | true | `on`, `off` |
| `--allowed_origins`| Allowed origins for CORS | No | `http://localhost:39281`, `http://127.0.0.1:39281` | `http://localhost:3000` |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |

---
## Subcommands

## `cortex config status`
:::info
This CLI command calls the following API endpoint:
- [Get Configurations](/api-reference#tag/configurations/get/v1/configs)
:::
This command returns all server configurations.

**Usage**:
<Tabs>
<TabItem value="MacOs/Linux" label="MacOs/Linux">
```sh
cortex config status
```
</TabItem>
<TabItem value="Windows" label="Windows">
```sh
cortex.exe config status
```
</TabItem>
</Tabs>

For example, it returns the following:

```
+-------------------------------------------------------------------------------------+
| Config name | Value |
+-------------------------------------------------------------------------------------+
| allowed_origins | http://localhost:39281 |
+-------------------------------------------------------------------------------------+
| allowed_origins | http://127.0.0.1:39281/ |
+-------------------------------------------------------------------------------------+
| cors | true |
+-------------------------------------------------------------------------------------+
```
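The same configuration is reachable over HTTP through the Get Configurations endpoint this command wraps. A minimal sketch, assuming the `/v1/configs` path implied by the API-reference link above and the default local port — both assumptions, not a documented guarantee:

```python
import json
from urllib import error, request

# Assumed endpoint behind `cortex config status`: GET /v1/configs on
# the default local port. Illustrative only; check the API reference.
CONFIG_URL = "http://127.0.0.1:39281/v1/configs"

def get_configs(url=CONFIG_URL):
    # Returns the server's configuration as a dict, or None if the
    # server is not reachable.
    try:
        with request.urlopen(url, timeout=5) as resp:
            return json.loads(resp.read())
    except (error.URLError, OSError):
        return None

configs = get_configs()
if configs is not None:
    print(json.dumps(configs, indent=2))
```

When the server is running, the printed JSON should mirror the `cors` and `allowed_origins` values shown in the table above.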