From d52e06df424dd3635c43feed563849055ddba7ff Mon Sep 17 00:00:00 2001 From: irfanpena Date: Fri, 12 Jul 2024 12:45:14 +0700 Subject: [PATCH 1/4] Update the README --- README.md | 67 +++++++++---------------------- cortex-js/README.md | 69 ++++++++++----------------------------------- 2 files changed, 29 insertions(+), 107 deletions(-) diff --git a/README.md b/README.md index 09afaa0fc..cf0ddfee7 100644 --- a/README.md +++ b/README.md @@ -11,22 +11,26 @@ > ⚠️ **Cortex is currently in Development**: Expect breaking changes and bugs! ## About -Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library. +Cortex is an OpenAI-compatible AI engine for building LLM apps. It features Docker-inspired CLI and client libraries and can be used as a standalone server or an importable library. -Cortex currently supports 3 inference engines: - -- Llama.cpp -- ONNX Runtime -- TensorRT-LLM +## Cortex Engines +Cortex supports the following engines: +- [`cortex.llamacpp`](https://github.com/janhq/cortex.llamacpp): `cortex.llamacpp` is a C++ inference library that can be dynamically loaded by any server at runtime. It provides inference for GGUF models and is optimized for performance on both CPU and GPU. +- [`cortex.onnx`](https://github.com/janhq/cortex.onnx): `cortex.onnx` is a C++ inference library for Windows that leverages `onnxruntime-genai` and uses DirectML to provide GPU acceleration across a wide range of hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs. +- [`cortex.tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm): `cortex.tensorrt-llm` is a C++ inference library designed for NVIDIA GPUs. It incorporates NVIDIA’s TensorRT-LLM for GPU-accelerated inference.
## Quicklinks - [Homepage](https://cortex.so/) - [Docs](https://cortex.so/docs/) +- [CLI Reference Docs](https://cortex.so/docs/cli) ## Quickstart ### Prerequisites -Ensure that your system meets the following requirements to run Cortex: +- **OS**: + - MacOSX 13.6 or higher. + - Windows 10 or higher. + - Ubuntu 22.04 and later. - **Dependencies**: - **Node.js**: Version 18 and above is required to run the installation. - **NPM**: Needed to manage packages. @@ -35,15 +39,8 @@ Ensure that your system meets the following requirements to run Cortex: ```bash sudo apt install openmpi-bin libopenmpi-dev ``` -- **OS**: - - MacOSX 13.6 or higher. - - Windows 10 or higher. - - Ubuntu 22.04 and later. - -> Visit [Quickstart](https://cortex.so/docs/quickstart) to get started. ### NPM -Install using NPM package: ``` bash # Install using NPM npm i -g cortexso @@ -54,7 +51,6 @@ npm uninstall -g cortexso ``` ### Homebrew -Install using Homebrew: ``` bash # Install using Brew brew install cortexso @@ -63,9 +59,10 @@ cortex run mistral # To uninstall using Brew brew uninstall cortexso ``` -> You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases). +### Installer +Download the Cortex installer on the [GitHub Releases](https://github.com/janhq/cortex/releases). -To run Cortex as an API server: +## Cortex Server ```bash cortex serve @@ -101,42 +98,6 @@ chmod +x '[path-to]/cortex/cortex-js/dist/src/command.js' npm link ``` -## Cortex CLI Commands - -The following CLI commands are currently available. -See [CLI Reference Docs](https://cortex.so/docs/cli) for more information. - -```bash - - serve Providing API endpoint for Cortex backend. - chat Send a chat request to a model. - init|setup Init settings and download cortex's dependencies. - ps Show running models and their status. - kill Kill running cortex processes. - pull|download Download a model. Working with HuggingFace model id. 
- run [options] EXPERIMENTAL: Shortcut to start a model and chat. - models Subcommands for managing models. - models list List all available models. - models pull Download a specified model. - models remove Delete a specified model. - models get Retrieve the configuration of a specified model. - models start Start a specified model. - models stop Stop a specified model. - models update Update the configuration of a specified model. - benchmark Benchmark and analyze the performance of a specific AI model using your system. - presets Show all the available model presets within Cortex. - telemetry Retrieve telemetry logs for monitoring and analysis. - embeddings Creates an embedding vector representing the input text. - engines Subcommands for managing engines. - engines get Get an engine details. - engines list Get all the available Cortex engines. - engines init Setup and download the required dependencies to run cortex engines. - configs Subcommands for managing configurations. - configs get Get a configuration details. - configs list Get all the available configurations. - configs set Set a configuration. -``` - ## Contact Support - For support, please file a GitHub ticket. - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). diff --git a/cortex-js/README.md b/cortex-js/README.md index ac26f8595..cf0ddfee7 100644 --- a/cortex-js/README.md +++ b/cortex-js/README.md @@ -11,22 +11,26 @@ > ⚠️ **Cortex is currently in Development**: Expect breaking changes and bugs! ## About -Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library. +Cortex is an OpenAI-compatible AI engine for building LLM apps. It features Docker-inspired CLI and client libraries and can be used as a standalone server or an importable library. 
-Cortex currently supports 3 inference engines: - -- Llama.cpp -- ONNX Runtime -- TensorRT-LLM +## Cortex Engines +Cortex supports the following engines: +- [`cortex.llamacpp`](https://github.com/janhq/cortex.llamacpp): `cortex.llamacpp` is a C++ inference library that can be dynamically loaded by any server at runtime. It provides inference for GGUF models and is optimized for performance on both CPU and GPU. +- [`cortex.onnx`](https://github.com/janhq/cortex.onnx): `cortex.onnx` is a C++ inference library for Windows that leverages `onnxruntime-genai` and uses DirectML to provide GPU acceleration across a wide range of hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs. +- [`cortex.tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm): `cortex.tensorrt-llm` is a C++ inference library designed for NVIDIA GPUs. It incorporates NVIDIA’s TensorRT-LLM for GPU-accelerated inference. ## Quicklinks - [Homepage](https://cortex.so/) - [Docs](https://cortex.so/docs/) +- [CLI Reference Docs](https://cortex.so/docs/cli) ## Quickstart ### Prerequisites -Ensure that your system meets the following requirements to run Cortex: +- **OS**: + - MacOSX 13.6 or higher. + - Windows 10 or higher. + - Ubuntu 22.04 and later. - **Dependencies**: - **Node.js**: Version 18 and above is required to run the installation. - **NPM**: Needed to manage packages. @@ -35,16 +39,8 @@ Ensure that your system meets the following requirements to run Cortex: ```bash sudo apt install openmpi-bin libopenmpi-dev ``` -- **OS**: - - MacOSX 13.6 or higher. - - Windows 10 or higher. - - Ubuntu 22.04 and later. - -> Visit [Quickstart](https://cortex.so/docs/quickstart) to get started.
- ### NPM -Install using NPM package: ``` bash # Install using NPM npm i -g cortexso @@ -55,7 +51,6 @@ npm uninstall -g cortexso ``` ### Homebrew -Install using Homebrew: ``` bash # Install using Brew brew install cortexso @@ -64,9 +59,10 @@ cortex run mistral # To uninstall using Brew brew uninstall cortexso ``` -> You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases). +### Installer +Download the Cortex installer on the [GitHub Releases](https://github.com/janhq/cortex/releases). -To run Cortex as an API server: +## Cortex Server ```bash cortex serve @@ -102,43 +98,8 @@ chmod +x '[path-to]/cortex/cortex-js/dist/src/command.js' npm link ``` -## Cortex CLI Commands - -The following CLI commands are currently available. -See [CLI Reference Docs](https://cortex.so/docs/cli) for more information. - -```bash - - serve Providing API endpoint for Cortex backend. - chat Send a chat request to a model. - init|setup Init settings and download cortex's dependencies. - ps Show running models and their status. - kill Kill running cortex processes. - pull|download Download a model. Working with HuggingFace model id. - run [options] EXPERIMENTAL: Shortcut to start a model and chat. - models Subcommands for managing models. - models list List all available models. - models pull Download a specified model. - models remove Delete a specified model. - models get Retrieve the configuration of a specified model. - models start Start a specified model. - models stop Stop a specified model. - models update Update the configuration of a specified model. - benchmark Benchmark and analyze the performance of a specific AI model using your system. - presets Show all the available model presets within Cortex. - telemetry Retrieve telemetry logs for monitoring and analysis. - embeddings Creates an embedding vector representing the input text. - engines Subcommands for managing engines. 
- engines get Get an engine details. - engines list Get all the available Cortex engines. - engines init Setup and download the required dependencies to run cortex engines. - configs Subcommands for managing configurations. - configs get Get a configuration details. - configs list Get all the available configurations. - configs set Set a configuration. -``` - ## Contact Support - For support, please file a GitHub ticket. - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). + From 8fb4a032d25af9db51b3168ed30ca8f85371b282 Mon Sep 17 00:00:00 2001 From: irfanpena Date: Fri, 12 Jul 2024 12:48:22 +0700 Subject: [PATCH 2/4] add link for github ticket issue --- README.md | 2 +- cortex-js/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index cf0ddfee7..fe3cbb73d 100644 --- a/README.md +++ b/README.md @@ -99,7 +99,7 @@ npm link ``` ## Contact Support -- For support, please file a GitHub ticket. +- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). diff --git a/cortex-js/README.md b/cortex-js/README.md index cf0ddfee7..fe3cbb73d 100644 --- a/cortex-js/README.md +++ b/cortex-js/README.md @@ -99,7 +99,7 @@ npm link ``` ## Contact Support -- For support, please file a GitHub ticket. +- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). 
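The READMEs in the patches above describe running Cortex as an OpenAI-compatible API server via `cortex serve`. As a rough sketch of what a client request could look like — the port (`1337`), the `/v1/chat/completions` path, and the `mistral` model id are assumptions based on the OpenAI API shape, not details confirmed by these patches:

```bash
# Hypothetical request against a locally running `cortex serve`.
# Port and endpoint path are assumptions; adjust them to the address
# printed when the server starts.
PAYLOAD='{"model": "mistral", "messages": [{"role": "user", "content": "Hello!"}]}'

curl -s --max-time 2 http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "cortex serve is not running"
```

Because the API is OpenAI-compatible, existing OpenAI client libraries pointed at the local base URL should work the same way.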
From 181c8d3cc0a710e9fbfa762bccf06d7a29e4210b Mon Sep 17 00:00:00 2001 From: irfanpena Date: Fri, 12 Jul 2024 14:05:11 +0700 Subject: [PATCH 3/4] nits --- README.md | 47 ++++++++++++++++++++++++++++++++++++++++----- cortex-js/README.md | 47 ++++++++++++++++++++++++++++++++++++++++----- 2 files changed, 84 insertions(+), 10 deletions(-) diff --git a/README.md b/README.md index fe3cbb73d..f93528417 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@ > ⚠️ **Cortex is currently in Development**: Expect breaking changes and bugs! ## About -Cortex is an OpenAI-compatible AI engine for building LLM apps. It features Docker-inspired CLI and client libraries and can be used as a standalone server or an importable library. +Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library. ## Cortex Engines Cortex supports the following engines: @@ -23,7 +23,6 @@ Cortex supports the following engines: - [Homepage](https://cortex.so/) - [Docs](https://cortex.so/docs/) -- [CLI Reference Docs](https://cortex.so/docs/cli) ## Quickstart ### Prerequisites @@ -40,6 +39,8 @@ Cortex supports the following engines: sudo apt install openmpi-bin libopenmpi-dev ``` +> Visit [Quickstart](https://cortex.so/docs/quickstart) to get started. + ### NPM ``` bash # Install using NPM @@ -59,8 +60,7 @@ cortex run mistral # To uninstall using Brew brew uninstall cortexso ``` -### Installer -Download the Cortex installer on the [GitHub Releases](https://github.com/janhq/cortex/releases). +> You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases). ## Cortex Server ```bash @@ -98,8 +98,45 @@ chmod +x '[path-to]/cortex/cortex-js/dist/src/command.js' npm link ``` +## Cortex CLI Commands + +The following CLI commands are currently available. 
+See [CLI Reference Docs](https://cortex.so/docs/cli) for more information. + +```bash + + serve Providing API endpoint for Cortex backend. + chat Send a chat request to a model. + init|setup Init settings and download cortex's dependencies. + ps Show running models and their status. + kill Kill running cortex processes. + pull|download Download a model. Working with HuggingFace model id. + run [options] EXPERIMENTAL: Shortcut to start a model and chat. + models Subcommands for managing models. + models list List all available models. + models pull Download a specified model. + models remove Delete a specified model. + models get Retrieve the configuration of a specified model. + models start Start a specified model. + models stop Stop a specified model. + models update Update the configuration of a specified model. + benchmark Benchmark and analyze the performance of a specific AI model using your system. + presets Show all the available model presets within Cortex. + telemetry Retrieve telemetry logs for monitoring and analysis. + embeddings Creates an embedding vector representing the input text. + engines Subcommands for managing engines. + engines get Get an engine details. + engines list Get all the available Cortex engines. + engines init Setup and download the required dependencies to run cortex engines. + configs Subcommands for managing configurations. + configs get Get a configuration details. + configs list Get all the available configurations. + configs set Set a configuration. +``` + ## Contact Support -- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). +- For support, please file a GitHub ticket. - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). 
+ diff --git a/cortex-js/README.md b/cortex-js/README.md index fe3cbb73d..f93528417 100644 --- a/cortex-js/README.md +++ b/cortex-js/README.md @@ -11,7 +11,7 @@ > ⚠️ **Cortex is currently in Development**: Expect breaking changes and bugs! ## About -Cortex is an OpenAI-compatible AI engine for building LLM apps. It features Docker-inspired CLI and client libraries and can be used as a standalone server or an importable library. +Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library. ## Cortex Engines Cortex supports the following engines: @@ -23,7 +23,6 @@ Cortex supports the following engines: - [Homepage](https://cortex.so/) - [Docs](https://cortex.so/docs/) -- [CLI Reference Docs](https://cortex.so/docs/cli) ## Quickstart ### Prerequisites @@ -40,6 +39,8 @@ Cortex supports the following engines: sudo apt install openmpi-bin libopenmpi-dev ``` +> Visit [Quickstart](https://cortex.so/docs/quickstart) to get started. + ### NPM ``` bash # Install using NPM @@ -59,8 +60,7 @@ cortex run mistral # To uninstall using Brew brew uninstall cortexso ``` -### Installer -Download the Cortex installer on the [GitHub Releases](https://github.com/janhq/cortex/releases). +> You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases). ## Cortex Server ```bash @@ -98,8 +98,45 @@ chmod +x '[path-to]/cortex/cortex-js/dist/src/command.js' npm link ``` +## Cortex CLI Commands + +The following CLI commands are currently available. +See [CLI Reference Docs](https://cortex.so/docs/cli) for more information. + +```bash + + serve Providing API endpoint for Cortex backend. + chat Send a chat request to a model. + init|setup Init settings and download cortex's dependencies. + ps Show running models and their status. + kill Kill running cortex processes. 
+ pull|download Download a model. Working with HuggingFace model id. + run [options] EXPERIMENTAL: Shortcut to start a model and chat. + models Subcommands for managing models. + models list List all available models. + models pull Download a specified model. + models remove Delete a specified model. + models get Retrieve the configuration of a specified model. + models start Start a specified model. + models stop Stop a specified model. + models update Update the configuration of a specified model. + benchmark Benchmark and analyze the performance of a specific AI model using your system. + presets Show all the available model presets within Cortex. + telemetry Retrieve telemetry logs for monitoring and analysis. + embeddings Creates an embedding vector representing the input text. + engines Subcommands for managing engines. + engines get Get an engine details. + engines list Get all the available Cortex engines. + engines init Setup and download the required dependencies to run cortex engines. + configs Subcommands for managing configurations. + configs get Get a configuration details. + configs list Get all the available configurations. + configs set Set a configuration. +``` + ## Contact Support -- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). +- For support, please file a GitHub ticket. - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). + From 9befce0cd2b9765d6bea70604a8adc8f2185e10b Mon Sep 17 00:00:00 2001 From: irfanpena Date: Fri, 12 Jul 2024 14:23:03 +0700 Subject: [PATCH 4/4] links --- README.md | 2 +- cortex-js/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index f93528417..660664159 100644 --- a/README.md +++ b/README.md @@ -135,7 +135,7 @@ See [CLI Reference Docs](https://cortex.so/docs/cli) for more information. 
``` ## Contact Support -- For support, please file a GitHub ticket. +- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). diff --git a/cortex-js/README.md b/cortex-js/README.md index f93528417..660664159 100644 --- a/cortex-js/README.md +++ b/cortex-js/README.md @@ -135,7 +135,7 @@ See [CLI Reference Docs](https://cortex.so/docs/cli) for more information. ``` ## Contact Support -- For support, please file a GitHub ticket. +- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai).
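Since the READMEs position Cortex as OpenAI-compatible and already require Node.js 18+, a server reply should follow the standard chat-completions shape. A small sketch of pulling the assistant message out of such a response — the sample JSON below is illustrative, not captured from a real run:

```bash
# Illustrative OpenAI-style response body; a real reply would carry
# additional fields (id, usage, etc.).
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"Hi there!"}}]}'

# Node.js is a stated prerequisite, so use it to parse the JSON.
node -e 'const body = JSON.parse(process.argv[1]); console.log(body.choices[0].message.content);' "$RESPONSE"
# prints: Hi there!
```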