# Cortex.cpp

> **Note**: This repository was archived by the owner on Jul 4, 2025. It is now read-only.

<p align="center">
<img width="1280" alt="Cortex cpp's Readme Banner" src="https://github.com/user-attachments/assets/a27c0435-b3b4-406f-b575-96ac4f12244c">

> ⚠️ **Cortex.cpp is currently in active development. This outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.**

## Overview

Cortex.cpp is a local AI engine for running and customizing LLMs. Cortex can be deployed as a standalone server or integrated into apps like [Jan.ai](https://jan.ai/).

Cortex.cpp is a multi-engine runtime: it uses `llama.cpp` as the default engine but also supports the following:

- [`llamacpp`](https://github.com/janhq/cortex.llamacpp)
- [`onnx`](https://github.com/janhq/cortex.onnx)
- [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm)
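
The engines above can be inspected and managed from the CLI. A hypothetical session is sketched below; the `engines` subcommand names are assumptions based on the CLI docs, so run `cortex --help` for the authoritative list:

```shell
# List the engines Cortex knows about and their install status (assumed subcommand)
cortex engines list

# Install an alternative engine, e.g. ONNX Runtime (assumed subcommand)
cortex engines install onnx
```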

## Installation

The Local Installer packages all required dependencies, so you don’t need an internet connection during installation.

Alternatively, Cortex is available with a [Network Installer](#network-installer), which downloads the necessary dependencies from the internet during installation.

### Stable
### Windows:

<a href='https://app.cortexcpp.com/download/latest/windows-amd64-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/windows.png' style="height:14px; width: 14px" />
<b>cortex-local-installer.exe</b>
</a>

### MacOS:

<a href='https://app.cortexcpp.com/download/latest/mac-universal-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
<b>cortex-local-installer.pkg</b>
</a>

### Linux:

<a href='https://app.cortexcpp.com/download/latest/linux-amd64-local'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:14px; width: 14px" />
<b>cortex-local-installer.deb</b>
</a>

Download the installer and run the following command in a terminal:

```bash
sudo apt install ./cortex-local-installer.deb
# or
sudo apt install ./cortex-network-installer.deb
```

The binary will be installed in the `/usr/bin/` directory.

## Usage

After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For the Beta preview, run `cortex-beta --help`.
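
Beyond `--help`, a typical first session looks something like the following sketch. The model tag comes from the model table in this README; the `pull` subcommand is an assumption based on the CLI docs:

```shell
# Download a model from the Cortex Hub (pull subcommand assumed; see the CLI docs)
cortex pull llama3.1:gguf

# Start an interactive chat with it (command shown in the model table below)
cortex run llama3.1:gguf
```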

## Built-in Model Library
Cortex.cpp supports various models available on the [Cortex Hub](https://hugging

Example models:

| Model | llama.cpp<br >`:gguf` | TensorRT<br >`:tensorrt` | ONNXRuntime<br >`:onnx` | Command |
| -------------- | --------------------- | ------------------------ | ----------------------- | ----------------------------- |
| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf |
| llama3 | ✅ | ✅ | ✅ | cortex run llama3 |
| mistral | ✅ | ✅ | ✅ | cortex run mistral |
| qwen2 | ✅ | | | cortex run qwen2:7b-gguf |
| codestral | ✅ | | | cortex run codestral:22b-gguf |
| command-r | ✅ | | | cortex run command-r:35b-gguf |
| gemma | ✅ | | ✅ | cortex run gemma |
| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf |
| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 |
| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium |
| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini |
| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf |

> **Note**:
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
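
The note above can be encoded as a quick lookup. The numbers are taken directly from this README and are a rough rule of thumb, not a guarantee:

```python
def min_ram_gb(model_params_b: float) -> int:
    """Rough minimum RAM in GB for a model of the given size in billions
    of parameters, per the README's guidance (7B -> 8 GB, 14B -> 16 GB,
    32B -> 32 GB)."""
    if model_params_b <= 7:
        return 8
    if model_params_b <= 14:
        return 16
    return 32

print(min_ram_gb(7))   # → 8
```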

## Cortex.cpp CLI Commands

For complete details on CLI commands, please refer to our [CLI documentation](https://cortex.so/docs/cli).

## REST API

Cortex.cpp includes a REST API accessible at `localhost:39281`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference).
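
As a sketch, chatting with a running server might look like the following. Only the `localhost:39281` address comes from this README; the `/v1/chat/completions` path and the OpenAI-style request body are assumptions — check the API reference for the exact endpoints and fields:

```python
import json
import urllib.request

BASE_URL = "http://localhost:39281"  # default address from this README


def build_chat_request(model: str, prompt: str) -> dict:
    # OpenAI-style body; field names are assumptions -- check the API reference.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_chat(payload: dict) -> dict:
    # POSTs to an assumed /v1/chat/completions endpoint on the local server.
    req = urllib.request.Request(
        BASE_URL + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_chat_request("llama3.1:gguf", "Hello!")
# send_chat(payload)  # uncomment with a running cortex server
```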

## Advanced Installation

### Local Installer: Beta & Nightly Versions

Beta is an early preview for new versions of Cortex. It is for users who want to try new features early - we appreciate your feedback.

Nightly is our development version of Cortex. It is released every night and may contain bugs and experimental features.
</table>

### Network Installer

Cortex.cpp is available with a Network Installer, which is a smaller installer but requires internet connection during installation to download the necessary dependencies.

<table>
<b>cortex-network-installer.exe</b>
</a>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/latest/mac-universal-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
<b>cortex-network-installer.pkg</b>
</a>
</td>
<td style="text-align:center">
<a href='https://app.cortexcpp.com/download/latest/linux-amd64-network'>
<img src='https://github.com/janhq/docs/blob/main/static/img/linux.png' style="height:14px; width: 14px" />
<b>cortex-network-installer.deb</b>
</a>
</td>
</tr>
</table>
### Build from Source

#### Windows

1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
```bash
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```

4. Build Cortex.cpp inside the `build` folder:

```bash
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
```

5. Use Visual Studio with the C++ development kit to build the project using the files generated in the `build` folder.
6. Verify that Cortex.cpp is installed correctly by getting help information.

```sh
# Get the help information
cortex -h
```

#### MacOS

1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

4. Build Cortex.cpp inside the `build` folder:

```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

5. Verify that Cortex.cpp is built correctly by getting help information.

```sh
# Get the help information
cortex -h
```

#### Linux

1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

4. Build Cortex.cpp inside the `build` folder:

```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

5. Verify that Cortex.cpp is built correctly by getting help information.

```sh
# Get the help information
cortex
```

## Uninstallation

### Windows

1. Open the Windows Control Panel.
2. Navigate to `Add or Remove Programs`.
3. Search for `cortexcpp` and double-click to uninstall. (For beta and nightly builds, search for `cortexcpp-beta` or `cortexcpp-nightly`, respectively.)

### MacOS

On MacOS, an uninstaller script ships with the binary and is installed to the `/usr/local/bin/` directory. It is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds, and `cortex-nightly-uninstall.sh` for nightly builds.

Run the uninstaller script:

```bash
sudo sh cortex-uninstall.sh
```

### Linux

```bash
# For stable builds
sudo apt remove cortexcpp
```

## Contact Support

- For support, please file a [GitHub ticket](https://github.com/janhq/cortex.cpp/issues/new/choose).
- For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH).
- For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai).