diff --git a/README.md b/README.md
index c3e7c35da..7f68cad54 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
# Cortex.cpp
+
@@ -21,45 +22,56 @@
> ⚠️ **Cortex.cpp is currently in active development. This outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.**
## Overview
+
Cortex.cpp is a local AI engine for running and customizing LLMs. Cortex can be deployed as a standalone server or integrated into apps like [Jan.ai](https://jan.ai/).
Cortex.cpp is multi-engine: it uses `llama.cpp` as the default engine and also supports the following:
+
- [`llamacpp`](https://github.com/janhq/cortex.llamacpp)
- [`onnx`](https://github.com/janhq/cortex.onnx)
- [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm)
## Installation
+
The Local Installer packages all required dependencies, so you don't need an internet connection during installation.
Alternatively, Cortex is available with a [Network Installer](#network-installer), which downloads the necessary dependencies from the internet during installation.
+
### Stable
-### Windows:
+
+### Windows:
+
cortex-local-installer.exe
-### MacOS:
+### MacOS:
+
cortex-local-installer.pkg
-### Linux:
+### Linux:
+
cortex-local-installer.deb
Download the installer and run the following command in the terminal:
+
```bash
sudo apt install ./cortex-local-installer.deb
# or
sudo apt install ./cortex-network-installer.deb
```
+
The binary will be installed in the `/usr/bin/` directory.
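A quick way to confirm the install location (a sketch, assuming the stable Linux build):

```bash
# Check where the binary landed; stable builds install as `cortex`
which cortex
# expected: /usr/bin/cortex
```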
## Usage
+
After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For Beta preview, you can run `cortex-beta --help`.
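For example:

```bash
# Stable build: list available commands and options
cortex --help

# Beta preview builds install a separate binary
cortex-beta --help
```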
## Built-in Model Library
@@ -68,33 +80,36 @@ Cortex.cpp supports various models available on the [Cortex Hub](https://hugging
Example models:
-| Model | llama.cpp<br>`:gguf` | TensorRT<br>`:tensorrt` | ONNXRuntime<br>`:onnx` | Command |
-|------------------|-----------------------|------------------------------------------|----------------------------|-------------------------------|
-| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf |
-| llama3 | ✅ | ✅ | ✅ | cortex run llama3 |
-| mistral | ✅ | ✅ | ✅ | cortex run mistral |
-| qwen2 | ✅ | | | cortex run qwen2:7b-gguf |
-| codestral | ✅ | | | cortex run codestral:22b-gguf |
-| command-r | ✅ | | | cortex run command-r:35b-gguf |
-| gemma | ✅ | | ✅ | cortex run gemma |
-| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf |
-| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 |
-| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium |
-| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini |
-| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf |
+| Model | llama.cpp<br>`:gguf` | TensorRT<br>`:tensorrt` | ONNXRuntime<br>`:onnx` | Command |
+| -------------- | --------------------- | ------------------------ | ----------------------- | ----------------------------- |
+| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf |
+| llama3 | ✅ | ✅ | ✅ | cortex run llama3 |
+| mistral | ✅ | ✅ | ✅ | cortex run mistral |
+| qwen2 | ✅ | | | cortex run qwen2:7b-gguf |
+| codestral | ✅ | | | cortex run codestral:22b-gguf |
+| command-r | ✅ | | | cortex run command-r:35b-gguf |
+| gemma | ✅ | | ✅ | cortex run gemma |
+| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf |
+| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 |
+| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium |
+| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini |
+| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf |
> **Note**:
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
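For example, to run Llama 3.1 with the default `llama.cpp` engine, use the command from the table above (the `:gguf` tag selects the GGUF variant):

```bash
cortex run llama3.1:gguf
```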
## Cortex.cpp CLI Commands
+
For complete details on CLI commands, please refer to our [CLI documentation](https://cortex.so/docs/cli).
## REST API
+
Cortex.cpp includes a REST API accessible at `localhost:39281`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference).
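A request sketch, assuming the API follows the OpenAI-style `/v1/chat/completions` route (verify the exact endpoint and payload shape in the API documentation):

```bash
# Assumed endpoint and payload; see the API reference for specifics
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1:gguf",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```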
## Advanced Installation
-### Local Installer: Beta & Nightly Versions
+### Local Installer: Beta & Nightly Versions
+
Beta is an early preview for new versions of Cortex. It is for users who want to try new features early; we appreciate your feedback.
Nightly is our development version of Cortex. It is released every night and may contain bugs and experimental features.
@@ -172,6 +187,7 @@ Nightly is our development version of Brave. It is released every night and may
### Network Installer
+
Cortex.cpp is available with a Network Installer, which is a smaller installer but requires internet connection during installation to download the necessary dependencies.
### MacOS:

cortex-network-installer.pkg

### Linux:

cortex-network-installer.deb
@@ -248,6 +264,7 @@ Cortex.cpp is available with a Network Installer, which is a smaller installer b
### Build from Source
#### Windows
+
1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
@@ -257,6 +274,7 @@ cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```
+
4. Build Cortex.cpp inside the `build` folder:
```bash
@@ -264,6 +282,7 @@ mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
```
+
5. Use Visual Studio with the C++ development kit to build the project using the files generated in the `build` folder (a command-line alternative is sketched after these steps).
6. Verify that Cortex.cpp is installed correctly by getting help information.
@@ -271,7 +290,9 @@ cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcp
# Get the help information
cortex -h
```
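Step 5 above drives the build through the Visual Studio IDE. Assuming you would rather stay on the command line, CMake can invoke the same generated toolchain directly (a sketch, run from the `build` folder):

```bash
# Command-line alternative to opening the IDE; uses the generator chosen above
cmake --build . --config Release
```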
+
#### MacOS
+
1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
@@ -281,6 +302,7 @@ cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```
+
4. Build Cortex.cpp inside the `build` folder:
```bash
@@ -289,6 +311,7 @@ cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```
+
5. The `make` step above compiles Cortex.cpp; the resulting binaries are placed in the `build` folder.
6. Verify that Cortex.cpp is installed correctly by getting help information.
@@ -296,7 +319,9 @@ make -j4
# Get the help information
cortex -h
```
+
#### Linux
+
1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
@@ -306,6 +331,7 @@ cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```
+
4. Build Cortex.cpp inside the `build` folder:
```bash
@@ -314,6 +340,7 @@ cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```
+
5. The `make` step above compiles Cortex.cpp; the resulting binaries are placed in the `build` folder.
6. Verify that Cortex.cpp is installed correctly by getting help information.
@@ -323,25 +350,32 @@ cortex
```
## Uninstallation
+
### Windows
+
1. Open the Windows Control Panel.
2. Navigate to `Add or Remove Programs`.
3. Search for `cortexcpp` and double-click to uninstall. (For beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly`, respectively.)
### MacOS
+
Run the uninstaller script:
+
```bash
sudo sh cortex-uninstall.sh
```
+
For MacOS, an uninstaller script comes with the binary and is added to the `/usr/local/bin/` directory. The script is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds, and `cortex-nightly-uninstall.sh` for nightly builds.
### Linux
+
```bash
# For stable builds
sudo apt remove cortexcpp
# For beta and nightly builds (package names assumed to follow the same pattern)
sudo apt remove cortexcpp-beta
sudo apt remove cortexcpp-nightly
```
## Contact Support
+
- For support, please file a [GitHub ticket](https://github.com/janhq/cortex.cpp/issues/new/choose).
- For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH).
- For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai).