From 6a77e82cb33f64877f9317b537deb33c2b674e93 Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Mon, 20 Jan 2025 00:19:52 +1100
Subject: [PATCH 01/22] added syntax highlighting and improved readability

---
 docs/docs/overview.mdx | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/docs/docs/overview.mdx b/docs/docs/overview.mdx
index 95fec519b..e8545a195 100644
--- a/docs/docs/overview.mdx
+++ b/docs/docs/overview.mdx
@@ -29,13 +29,13 @@ Key Features:
 - Swappable Inference Backends (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp), future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx), [`TensorRT-LLM`](https://github.com/janhq/cortex.tensorrt-llm))
 - Cortex can be deployed as a standalone API server, or integrated into apps like [Jan.ai](https://jan.ai/)

-Cortex's roadmap is to implement the full OpenAI API including Tools, Runs, Multi-modal and Realtime APIs.
+Cortex's roadmap includes full compatibility with the OpenAI API, including the Tools, Runs, Multi-modal, and Realtime APIs.

 ## Inference Backends
 - Default: [llama.cpp](https://github.com/ggerganov/llama.cpp): cross-platform, supports most laptops, desktops and OSes
-- Future: [ONNX Runtime](https://github.com/microsoft/onnxruntime): supports Windows Copilot+ PCs & NPUs
-- Future: [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM): supports Nvidia GPUs
+- Future: [ONNX Runtime](https://github.com/microsoft/onnxruntime): supports Windows Copilot+ PCs & NPUs and traditional machine learning models
+- Future: [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM): supports a variety of model architectures on Nvidia GPUs

 If GPU hardware is available, Cortex is GPU accelerated by default.

@@ -45,26 +45,26 @@ Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexib
 - [Cortex Built-in Models](https://cortex.so/models)

 > **Note**:
-> As a very general guide: You should have >8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
+> As a very general guide: For quantized models, you should have >8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.

 ### Cortex Built-in Models & Quantizations
 | Model /Engine  | llama.cpp             | Command                       |
 | -------------- | --------------------- | ----------------------------- |
-| phi-3.5        | ✅                    | cortex run phi3.5             |
-| llama3.2       | ✅                    | cortex run llama3.2           |
-| llama3.1       | ✅                    | cortex run llama3.1           |
-| codestral      | ✅                    | cortex run codestral          |
-| gemma2         | ✅                    | cortex run gemma2             |
-| mistral        | ✅                    | cortex run mistral            |
-| ministral      | ✅                    | cortex run ministral          |
-| qwen2          | ✅                    | cortex run qwen2.5            |
-| openhermes-2.5 | ✅                    | cortex run openhermes-2.5     |
-| tinyllama      | ✅                    | cortex run tinyllama          |
+| phi-3.5        | ✅                    | `cortex run phi3.5`           |
+| llama3.2       | ✅                    | `cortex run llama3.2`         |
+| llama3.1       | ✅                    | `cortex run llama3.1`         |
+| codestral      | ✅                    | `cortex run codestral`        |
+| gemma2         | ✅                    | `cortex run gemma2`           |
+| mistral        | ✅                    | `cortex run mistral`          |
+| ministral      | ✅                    | `cortex run ministral`        |
+| qwen2          | ✅                    | `cortex run qwen2.5`          |
+| openhermes-2.5 | ✅                    | `cortex run openhermes-2.5`   |
+| tinyllama      | ✅                    | `cortex run tinyllama`        |

 View all [Cortex Built-in Models](https://cortex.so/models).

 Cortex supports multiple quantizations for each model.
-```
+```sh
 ❯ cortex-nightly pull llama3.2
 Downloaded models:
     llama3.2:3b-gguf-q2-k

From 2de30cadb8fdfaa1a929ba818fd0bd06774095ed Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Mon, 20 Jan 2025 00:22:21 +1100
Subject: [PATCH 02/22] added api docs line to key features

---
 docs/docs/overview.mdx | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/docs/overview.mdx b/docs/docs/overview.mdx
index e8545a195..1e2005e04 100644
--- a/docs/docs/overview.mdx
+++ b/docs/docs/overview.mdx
@@ -28,6 +28,7 @@ Key Features:
 - Models stored in universal file formats (vs blobs)
 - Swappable Inference Backends (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp), future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx), [`TensorRT-LLM`](https://github.com/janhq/cortex.tensorrt-llm))
 - Cortex can be deployed as a standalone API server, or integrated into apps like [Jan.ai](https://jan.ai/)
+- Automatic API docs for your server

 Cortex's roadmap includes full compatibility with the OpenAI API, including the Tools, Runs, Multi-modal, and Realtime APIs.

From 8da4791c2d0304f03645ed2deae3e0415611f138 Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Mon, 20 Jan 2025 00:43:19 +1100
Subject: [PATCH 03/22] general improvements on format and readability

---
 docs/docs/quickstart.mdx | 63 ++++++++++++++++++++++++++--------------
 1 file changed, 42 insertions(+), 21 deletions(-)

diff --git a/docs/docs/quickstart.mdx b/docs/docs/quickstart.mdx
index 874309ad4..173e23835 100644
--- a/docs/docs/quickstart.mdx
+++ b/docs/docs/quickstart.mdx
@@ -15,13 +15,15 @@ Cortex.cpp is in active development. If you have any questions, please reach ou
 :::

 ## Local Installation
-Cortex has an Local Installer that packages all required dependencies, so that no internet connection is required during the installation process.
+
+Cortex has a **Local Installer** with all of the required dependencies, so that once you've downloaded it, no internet connection is required during the installation process.

 - [Windows](https://app.cortexcpp.com/download/latest/windows-amd64-local)
 - [Mac (Universal)](https://app.cortexcpp.com/download/latest/mac-universal-local)
 - [Linux](https://app.cortexcpp.com/download/latest/linux-amd64-local)

-## Start Cortex.cpp API Server
-This command starts the Cortex.cpp API server at `localhost:39281`.
+## Start a Cortex Server
+
+This command starts the Cortex API server at `localhost:39281`.

  ```sh
  cortex start
  ```

  ```sh
  cortex.exe start
  ```

@@ -35,48 +37,63 @@ This command starts the Cortex.cpp API server at `localhost:39281`.

-## Pull a Model & Select Quantization
+## Pull Models
+
 This command allows users to download a model from these Model Hubs:
 - [Cortex Built-in Models](https://cortex.so/models)
 - [Hugging Face](https://huggingface.co) (GGUF): `cortex pull `

 It displays available quantizations, recommends a default and downloads the desired quantization.
+
+The following two options will show you all of the available models under those names. Cortex will first search
+on its own hub for models like `llama3.3`, and on Hugging Face for hyper-specific ones like `bartowski/Meta-Llama-3.1-8B-Instruct-GGUF`.
 ```sh
-  $ cortex pull llama3.2
-  $ cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
+  cortex pull llama3.3
+  ```
+  or,
+
+  ```sh
+  cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
 ```

 ```sh
-  $ cortex pull llama3.2
-  $ cortex.exe pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
+  cortex pull llama3.3
+  ```
+  ```sh
+  cortex.exe pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
 ```

 ## Run a Model
-This command downloads the default `gguf` model format from the [Cortex Hub](https://huggingface.co/cortexso), starts the model, and chat with the model.
+
+This command downloads the default `gguf` model (if not available in your file system) from the [Cortex Hub](https://huggingface.co/cortexso),
+starts the model, and lets you chat with it.
+

 ```sh
-  cortex run llama3.2
+  cortex run llama3.3
 ```

 ```sh
-  cortex.exe run llama3.2
+  cortex.exe run llama3.3
 ```

 :::info
 All model files are stored in the `~/cortex/models` folder.
 :::

 ## Using the Model
+
 ### API
-```curl
+```sh
 curl http://localhost:39281/v1/chat/completions \
 -H "Content-Type: application/json" \
 -d '{
@@ -101,7 +118,9 @@ curl http://localhost:39281/v1/chat/completions \
 Refer to our [API documentation](https://cortex.so/api-reference) for more details.

 ## Show the System State
-This command displays the running model and the hardware system status (RAM, Engine, VRAM, Uptime)
+
+This command displays the running model and the hardware system status (RAM, Engine, VRAM, Uptime).
+

 ```sh
 cortex ps
 ```

 ```sh
 cortex.exe ps
 ```

 ## Stop a Model
+
 This command stops the running model.

 ```sh
-  cortex models stop llama3.2
+  cortex models stop llama3.3
 ```

 ```sh
-  cortex.exe models stop llama3.2
+  cortex.exe models stop llama3.3
 ```

-## Stop Cortex.cpp API Server
-This command starts the Cortex.cpp API server at `localhost:39281`.
+## Stop a Cortex Server
+
+This command stops the Cortex.cpp API server at `localhost:39281`, or whichever port you used to start Cortex.

 ```sh
 cortex stop
 ```

 ```sh
 cortex.exe stop
 ```

 ## What's Next?
-Now that Cortex.cpp is set up, here are the next steps to explore:
+Now that Cortex is set up, you can continue to any of the following sections:

-1. Adjust the folder path and configuration using the [`.cortexrc`](/docs/architecture/cortexrc) file.
-2. Explore the Cortex.cpp [data folder](/docs/architecture/data-folder) to understand how it stores data.
-3. Learn about the structure of the [`model.yaml`](/docs/capabilities/models/model-yaml) file in Cortex.cpp.
+- Adjust the folder path and configuration using the [`.cortexrc`](/docs/architecture/cortexrc) file.
+- Explore Cortex's [data folder](/docs/architecture/data-folder) to understand how data gets stored.
+- Learn about the structure of the [`model.yaml`](/docs/capabilities/models/model-yaml) file in Cortex.
From 3a1fd86c1850a086a4c92e70fb1ab8c883e83328 Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Tue, 21 Jan 2025 11:40:21 +1100
Subject: [PATCH 04/22] fixed general note regarding RAM/model size

---
 docs/docs/overview.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/overview.mdx b/docs/docs/overview.mdx
index 1e2005e04..c1386d2a8 100644
--- a/docs/docs/overview.mdx
+++ b/docs/docs/overview.mdx
@@ -46,7 +46,7 @@ Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexib
 - [Cortex Built-in Models](https://cortex.so/models)

 > **Note**:
-> As a very general guide: For quantized models, you should have >8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
+> As a very general guide: You should have >8 GB of RAM available to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 14B models.

 ### Cortex Built-in Models & Quantizations
 | Model /Engine  | llama.cpp             | Command                       |

From ef135a6ebef0169c4b8182087a9abff2655822e9 Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Thu, 23 Jan 2025 16:28:09 +1100
Subject: [PATCH 05/22] polished installation section and enhanced docker
 section

---
 docs/docs/installation.mdx         |  26 +-
 docs/docs/installation/docker.mdx  | 372 +++++++++++++++++++----------
 docs/docs/installation/linux.mdx   |  18 +-
 docs/docs/installation/mac.mdx     |  57 +++--
 docs/docs/installation/windows.mdx |  48 ++--
 5 files changed, 330 insertions(+), 191 deletions(-)

diff --git a/docs/docs/installation.mdx b/docs/docs/installation.mdx
index 80409e009..68de8e0f7 100644
--- a/docs/docs/installation.mdx
+++ b/docs/docs/installation.mdx
@@ -9,23 +9,24 @@ import TabItem from '@theme/TabItem';
 import Admonition from '@theme/Admonition';

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in our codebase.
 :::

 ## Cortex.cpp Installation
+
 ### Cortex.cpp offers four installer types
-- Network Installers download a minimal system and require an internet connection to fetch packages during installation.
-- Local Installers include all necessary packages, enabling offline installation without internet access.
-- Dockerfile Installers are used to build a Docker image for Cortex.
-- Binary files without package management.
+- **Network Installers** download a minimal script and require an internet connection to fetch packages during installation.
+- **Local Installers** include all necessary packages, enabling offline installation without internet access.
+- **Dockerfile** Installers are used to build a Docker image with Cortex ready to go.
+- **Binary files** without package management.

 ### Cortex.cpp supports three channels
-- Stable: The latest stable release on github.
-- Beta: The release candidate for the next stable release, available on github release with the tag `vx.y.z-rc1`
-- Nightly: The nightly build of the latest code on dev branch, available on [discord](https://discord.com/channels/1107178041848909847/1283654073488379904).
+- **Stable**: The latest stable release on GitHub.
+- **Beta**: The release candidate for the next stable release, available as a GitHub release with the tag `vx.y.z-rc1`.
+- **Nightly**: The nightly build of the latest commit on the dev branch, available on [Discord](https://discord.com/channels/1107178041848909847/1283654073488379904).

-For more information, please check out [different channels](#different-channels).
+For more information, please check out the [different channels](#different-channels).

 ### Download URLs

 For other versions, please look at [cortex.cpp repo](https://github.com/janhq/cortex.cpp/releases).

 ### OS
 - MacOS 12 or later
 - Windows 10 or later
-- Linux: Ubuntu 20.04 or later, Debian 11 or later (For other distributions, please use the Dockerfile installer or binary files, we have not tested on other distributions yet.)
+- Linux: Ubuntu 20.04 or later, Debian 11 or later, and any of the latest versions of Arch (for other distributions,
+please use the Dockerfile installer or binary files; we have not tested on other distributions yet).

 ### Hardware
 #### CPU

 #### GPU
 Having at least 6GB VRAM when using NVIDIA, AMD, or Intel Arc GPUs is recommended.

 :::info
 - [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) version 12.0 or higher. *Note: Cortex.cpp will automatically detect and install the required version of cudart to the user's machine.*
 :::
 #### Disk
-- At least 10GB for app storage and model download.
+- At least 10GB of free space for downloading models.

 ## Different channels

-Different channels have different features, stability levels, binary file name, app folder and data folder.
+Different channels have different features, stability levels, binary file names, and app and data folders.

 ### Stable
 - App name: `cortexcpp`
diff --git a/docs/docs/installation/docker.mdx b/docs/docs/installation/docker.mdx
index 154281be5..821298300 100644
--- a/docs/docs/installation/docker.mdx
+++ b/docs/docs/installation/docker.mdx
@@ -1,6 +1,6 @@
 ---
-title: Docker
-description: Install Cortex using Docker.
+title: Docker Installation Guide
+description: Comprehensive guide for installing and running Cortex using Docker
 ---

 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import Admonition from '@theme/Admonition';

 :::warning
-🚧 **Cortex.cpp is currently in development.** The documentation describes the intended functionality, which may not yet be fully implemented.
+🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended
+behavior of Cortex, which may not yet be fully implemented in the codebase.
 :::

-## Setting Up Cortex with Docker
+## Getting Started with Cortex on Docker

-This guide walks you through the setup and running of Cortex using Docker.
+This guide provides comprehensive instructions for installing and running Cortex in a Docker environment,
+including sensible defaults for security and performance.

 ### Prerequisites

-- Docker or Docker Desktop
-- `nvidia-container-toolkit` (for GPU support)
+Before beginning, ensure you have:
+- [Docker](https://docs.docker.com/engine/install/) (version 20.10.0 or higher) or [Docker Desktop](https://docs.docker.com/desktop/)
+- At least 8GB of RAM and 10GB of free disk space
+- For GPU support, make sure you install `nvidia-container-toolkit`.
Here is an example of how to do so on Ubuntu:
  ```bash
  # Install NVIDIA Container Toolkit
  curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
  ```
  ```bash
  # Add repository
  curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  ```
  ```bash
  # Install
  sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker
  ```

### Installation Methods

#### Method 1: Using a Pre-built Image (Recommended)

```bash
# Pull the latest stable release
docker pull menloltd/cortex:latest
```
```bash
# Or pull a specific version (recommended for production)
docker pull menloltd/cortex:nightly-1.0.1-224
```

:::info Version Tags
- `latest`: Most recent stable release
- `nightly`: Latest development build
- `x.y.z` (e.g., `1.0.1`): Specific version release
:::

#### Method 2: Building from Source

1. **Clone the repo:**
```bash
git clone https://github.com/janhq/cortex.cpp.git
cd cortex.cpp
git submodule update --init
```

2. **Build the Docker image:**

  ```bash
  docker build -t menloltd/cortex:local \
    --build-arg CORTEX_CPP_VERSION=$(git rev-parse HEAD) \
    -f docker/Dockerfile .
  ```

  ```bash
  docker build \
    --build-arg CORTEX_LLAMACPP_VERSION=0.1.34 \
    --build-arg CORTEX_CPP_VERSION=$(git rev-parse HEAD) \
    -t menloltd/cortex:local \
    -f docker/Dockerfile .
  ```

### Running Cortex (Securely)

1. **[Optional] Create a dedicated user and data directory:**
```bash
# Create a dedicated user
sudo useradd -r -s /bin/false cortex
export CORTEX_UID=$(id -u cortex)
```
```bash
# Create data directory with proper permissions
sudo mkdir -p /opt/cortex/data
sudo chown -R ${CORTEX_UID}:${CORTEX_UID} /opt/cortex
```

2. **Set up persistent storage:**
```bash
docker volume create cortex_data
```

3. 
**Launch the container:** + + + ```bash + docker run --gpus all -d \ + --name cortex \ + --user ${CORTEX_UID}:${CORTEX_UID} \ + --memory=4g \ + --memory-swap=4g \ + --security-opt=no-new-privileges \ + -v cortex_data:/root/cortexcpp:rw \ + -v /opt/cortex/data:/data:rw \ + -p 127.0.0.1:39281:39281 \ + menloltd/cortex:latest + ``` + + + ```bash + docker run -d \ + --name cortex \ + --user ${CORTEX_UID}:${CORTEX_UID} \ + --memory=4g \ + --memory-swap=4g \ + --security-opt=no-new-privileges \ + -v cortex_data:/root/cortexcpp:rw \ + -v /opt/cortex/data:/data:rw \ + -p 127.0.0.1:39281:39281 \ + menloltd/cortex:latest + ``` + + - - - ```sh - # requires nvidia-container-toolkit - docker run --gpus all -it -d --name cortex -v cortex_data:/root/cortexcpp -p 39281:39281 cortex - ``` - - - ```sh - docker run -it -d --name cortex -v cortex_data:/root/cortexcpp -p 39281:39281 cortex - ``` - - +### Verification and Testing -2. **Check Logs (Optional)** - ```bash - docker logs cortex - ``` +1. **Check container status:** +```bash +docker ps | grep cortex +docker logs cortex +``` -3. **Access the Cortex Documentation API** - - Open [http://localhost:39281](http://localhost:39281) in your browser. +Expected output should show: +``` +Cortex server starting... +Initialization complete +Server listening on port 39281 +``` -4. **Access the Container and Try Cortex CLI** - ```bash - docker exec -it cortex bash - cortex --help - ``` +2. **Test the API:** +```bash +curl http://127.0.0.1:39281/healthz +``` -### Usage +### Working with Cortex -With Docker running, you can use the following commands to interact with Cortex. Ensure the container is running and `curl` is installed on your machine. +Once your container is running, here's how to interact with Cortex. Make sure you have `curl` installed on your system. -#### 1. List Available Engines +#### 1. Check Available Engines ```bash curl --request GET --url http://localhost:39281/v1/engines --header "Content-Type: application/json" ``` -- **Example Response** - ```json - { - "data": [ - { - "description": "This extension enables chat completion API calls using the Onnx engine", - "format": "ONNX", - "name": "onnxruntime", - "status": "Incompatible" - }, - { - "description": "This extension enables chat completion API calls using the LlamaCPP engine", - "format": "GGUF", - "name": "llama-cpp", - "status": "Ready", - "variant": "linux-amd64-avx2", - "version": "0.1.37" - } - ], - "object": "list", - "result": "OK" - } - ``` +You'll see something like: +```json +{ + "data": [ + { + "description": "This extension enables chat completion API calls using the Onnx engine", + "format": "ONNX", + "name": "onnxruntime", + "status": "Incompatible" + }, + { + "description": "This extension enables chat completion API calls using the LlamaCPP engine", + "format": "GGUF", + "name": "llama-cpp", + "status": "Ready", + "variant": "linux-amd64-avx2", + "version": "0.1.37" + } + ], + "object": "list", + "result": "OK" +} +``` -#### 2. Pull Models from Hugging Face +#### 2. Download Models -- Open a terminal and run `websocat ws://localhost:39281/events` to capture download events, follow [this instruction](https://github.com/vi/websocat?tab=readme-ov-file#installation) to install `websocat`. -- In another terminal, pull models using the commands below. 
+First, set up event monitoring: +- Install `websocat` following [these instructions](https://github.com/vi/websocat?tab=readme-ov-file#installation) +- Open a terminal and run: `websocat ws://localhost:39281/events` + +Then, in a new terminal, download your desired model: ```sh - # requires nvidia-container-toolkit curl --request POST --url http://localhost:39281/v1/models/pull --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}' ``` @@ -147,36 +214,93 @@ curl --request GET --url http://localhost:39281/v1/engines --header "Content-Typ -- After pull models successfully, run command below to list models. - ```bash - curl --request GET --url http://localhost:39281/v1/models - ``` +To see your downloaded models: +```bash +curl --request GET --url http://localhost:39281/v1/models +``` -#### 3. Start a Model and Send an Inference Request +#### 3. Using the Model -- **Start the model:** - ```bash - curl --request POST --url http://localhost:39281/v1/models/start --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}' - ``` +First, start your model: +```bash +curl --request POST --url http://localhost:39281/v1/models/start --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}' +``` -- **Send an inference request:** - ```bash - curl --request POST --url http://localhost:39281/v1/chat/completions --header 'Content-Type: application/json' --data '{ - "frequency_penalty": 0.2, - "max_tokens": 4096, - "messages": [{"content": "Tell me a joke", "role": "user"}], - "model": "tinyllama:gguf", - "presence_penalty": 0.6, - "stop": ["End"], - "stream": true, - "temperature": 0.8, - "top_p": 0.95 - }' - ``` +Then, send it a query: +```bash +curl --request POST --url http://localhost:39281/v1/chat/completions --header 'Content-Type: application/json' --data '{ + "frequency_penalty": 0.2, + "max_tokens": 4096, + "messages": [{"content": "Tell me a joke", "role": "user"}], + "model": "tinyllama:gguf", + "presence_penalty": 0.6, + "stop": ["End"], + "stream": true, + "temperature": 0.8, + "top_p": 0.95 + }' +``` -#### 4. Stop a Model +#### 4. Shutting Down -- To stop a running model, use: - ```bash - curl --request POST --url http://localhost:39281/v1/models/stop --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}' - ``` +When you're done, stop the model: +```bash +curl --request POST --url http://localhost:39281/v1/models/stop --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}' +``` + +### Maintenance and Troubleshooting + +#### Common Issues + +1. **Permission Denied Errors:** +```bash +sudo chown -R ${CORTEX_UID}:${CORTEX_UID} /opt/cortex/data +docker restart cortex +``` + +2. 
**Container Won't Start:**
```bash
docker logs cortex
docker system info  # Check available resources
```

#### Cleanup

```bash
# Stop and remove container
docker stop cortex
docker rm cortex
```

```bash
# Remove data (optional)
docker volume rm cortex_data
sudo rm -rf /opt/cortex/data
```

```bash
# Remove image
docker rmi menloltd/cortex:latest
```

### Updating Cortex

```bash
# Pull latest version
docker pull menloltd/cortex:latest
```

```bash
# Stop and remove old container
docker stop cortex
docker rm cortex

# Start new container (use run command from above)
```

:::tip Best Practices
- Always use specific version tags in production
- Regularly backup your data volume
- Monitor container resources using `docker stats cortex`
- Keep your Docker installation updated
:::
diff --git a/docs/docs/installation/linux.mdx b/docs/docs/installation/linux.mdx
index a14450f47..cf7dc354d 100644
--- a/docs/docs/installation/linux.mdx
+++ b/docs/docs/installation/linux.mdx
@@ -9,7 +9,8 @@ import TabItem from '@theme/TabItem';
 import Admonition from '@theme/Admonition';

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended
+behavior of Cortex, which may not yet be fully implemented in the codebase.
 :::

 ## Cortex.cpp Installation
@@ -28,18 +29,15 @@ This instruction is for stable releases. For beta and nightly releases, please r

 1. Install cortex with one command

-- Linux debian base distros
+- Network installer for all Linux distros

   ```bash
-  # Network installer
-  curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s
-
-  # Local installer
-  curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s -- --deb_local
+  curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s
   ```

-- Other linux distros
+- Local installer for Debian-based distros

   ```bash
-  curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s
+  # Local installer
+  curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s -- --deb_local
   ```

 - Parameters

 sudo /usr/bin/cortex-uninstall.sh
 ```

 ```bash
 sudo cortex update
-```
\ No newline at end of file
+```
diff --git a/docs/docs/installation/mac.mdx b/docs/docs/installation/mac.mdx
index 51c4760a4..9f3dfef82 100644
--- a/docs/docs/installation/mac.mdx
+++ b/docs/docs/installation/mac.mdx
@@ -8,30 +8,30 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
 :::

 ## Cortex.cpp Installation
 :::info
 Before installation, make sure that you have met the [minimum requirements](/docs/installation#minimum-requirements) to run Cortex.
-This instruction is for stable releases. For beta and nightly releases, please replace `cortex` with `cortex-beta` and `cortex-nightly`, respectively.
+The instructions below are for stable releases only. For beta and nightly releases, please replace `cortex` with `cortex-beta` and `cortex-nightly`, respectively.
 :::

 1. Download the Linux installer:
-  - From release: https://github.com/janhq/cortex.cpp/releases
-  - From quick download links:
-    - Local installer `.deb`:
-      - Stable: https://app.cortexcpp.com/download/latest/mac-universal-local
-      - Beta: https://app.cortexcpp.com/download/beta/mac-universal-local
-      - Nightly: https://app.cortexcpp.com/download/nightly/mac-universal-local
-    - Network installer `.deb`:
-      - Stable: https://app.cortexcpp.com/download/latest/mac-universal-network
-      - Beta: https://app.cortexcpp.com/download/beta/mac-universal-network
-      - Nightly: https://app.cortexcpp.com/download/nightly/mac-universal-network
-    - Binary:
-      - Stable: https://app.cortexcpp.com/download/latest/mac-universal-binary
-      - Beta: https://app.cortexcpp.com/download/beta/mac-universal-binary
-      - Nightly: https://app.cortexcpp.com/download/nightly/mac-universal-binary
+- From release: https://github.com/janhq/cortex.cpp/releases
+- From quick download links:
+  - Local installer `.pkg`:
+    - Stable: https://app.cortexcpp.com/download/latest/mac-universal-local
+    - Beta: https://app.cortexcpp.com/download/beta/mac-universal-local
+    - Nightly: https://app.cortexcpp.com/download/nightly/mac-universal-local
+  - Network installer `.pkg`:
+    - Stable: https://app.cortexcpp.com/download/latest/mac-universal-network
+    - Beta: https://app.cortexcpp.com/download/beta/mac-universal-network
+    - Nightly: https://app.cortexcpp.com/download/nightly/mac-universal-network
+  - Binary:
+    - Stable: https://app.cortexcpp.com/download/latest/mac-universal-binary
+    - Beta: https://app.cortexcpp.com/download/beta/mac-universal-binary
+    - Nightly: https://app.cortexcpp.com/download/nightly/mac-universal-binary

 2. Install Cortex.cpp by double-clicking the pkg downloaded file.

 ### Data Folder

-By default, Cortex.cpp is installed in the following directory:
-```
+By default, Cortex.cpp is installed in the `bin` directory:
+
+```sh
 # Binary Location
 /usr/local/bin/cortex
 /usr/local/bin/cortex-server
 /usr/local/bin/cortex-uninstall.sh
+```

-# Application Data (Engines, Models and Logs folders)
+The application data, which includes the Engines, Models and Logs folders, is installed in your home directory.
+```sh
 /Users/<username>/cortexcpp
+```

-# Configuration File
+The configuration file, `.cortexrc`, will also be in your home directory.
+```sh
 /Users/<username>/.cortexrc
 ```

 ## Uninstall Cortex.cpp

 Run the uninstaller script:

 ```bash
 sudo sh cortex-uninstall.sh
 ```

 The script requires sudo permission.

-3. Verify that Cortex.cpp is builded correctly by getting help information.
+3. Verify that Cortex.cpp was built correctly by using the `-h` flag to call the help info.

 ```sh
 # Get the help information
 ./build/cortex -h
 ```

-## Update cortex to latest version
+## Update Cortex
+
+Cortex can be updated in-place without any additional scripts. Cortex will also let you know when a new
+version is available the next time you start a server.
+
 :::info
 The script requires sudo permission.
:::

```bash
sudo cortex update
```
diff --git a/docs/docs/installation/windows.mdx b/docs/docs/installation/windows.mdx
index 39855d44e..a5c2c2d86 100644
--- a/docs/docs/installation/windows.mdx
+++ b/docs/docs/installation/windows.mdx
@@ -9,36 +9,37 @@ import TabItem from '@theme/TabItem';
 import Admonition from '@theme/Admonition';

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended behavior of
+Cortex, which may not yet be fully implemented in the codebase.
 :::

 ## Overview
-For Windows, Cortex.cpp can be installed in two ways:
-- [Windows](#windows)
-- [Windows Subsystem for Linux (WSL)](#windows-subsystem-linux)
+For Windows, Cortex.cpp can be installed in two ways: by downloading the [Windows](#windows) installer or
+via the [Windows Subsystem for Linux (WSL)](#windows-subsystem-linux).

 ## Windows
 ### Install Cortex.cpp
 :::info
 Before installation, make sure that you have met the [minimum requirements](/docs/installation#minimum-requirements) to run Cortex.
-This instruction is for stable releases. For beta and nightly releases, please replace `cortex` with `cortex-beta` and `cortex-nightly`, respectively.
+The instructions below are for stable releases only. For beta and nightly releases, please replace `cortex` with `cortex-beta`
+and `cortex-nightly`, respectively.
 :::
 Download the windows installer:
-  - From release: https://github.com/janhq/cortex.cpp/releases
-  - From quick download links:
-    - Local installer `.deb`:
-      - Stable: https://app.cortexcpp.com/download/latest/windows-amd64-local
-      - Beta: https://app.cortexcpp.com/download/beta/windows-amd64-local
-      - Nightly: https://app.cortexcpp.com/download/nightly/windows-amd64-local
-    - Network installer `.deb`:
-      - Stable: https://app.cortexcpp.com/download/latest/windows-amd64-network
-      - Beta: https://app.cortexcpp.com/download/beta/windows-amd64-network
-      - Nightly: https://app.cortexcpp.com/download/nightly/windows-amd64-network
-    - Binary:
-      - Stable: https://app.cortexcpp.com/download/latest/windows-amd64-binary
-      - Beta: https://app.cortexcpp.com/download/beta/windows-amd64-binary
-      - Nightly: https://app.cortexcpp.com/download/nightly/windows-amd64-binary
+- From release: https://github.com/janhq/cortex.cpp/releases
+- From quick download links:
+  - Local installer `.exe`:
+    - Stable: https://app.cortexcpp.com/download/latest/windows-amd64-local
+    - Beta: https://app.cortexcpp.com/download/beta/windows-amd64-local
+    - Nightly: https://app.cortexcpp.com/download/nightly/windows-amd64-local
+  - Network installer `.exe`:
+    - Stable: https://app.cortexcpp.com/download/latest/windows-amd64-network
+    - Beta: https://app.cortexcpp.com/download/beta/windows-amd64-network
+    - Nightly: https://app.cortexcpp.com/download/nightly/windows-amd64-network
+  - Binary:
+    - Stable: https://app.cortexcpp.com/download/latest/windows-amd64-binary
+    - Beta: https://app.cortexcpp.com/download/beta/windows-amd64-binary
+    - Nightly: https://app.cortexcpp.com/download/nightly/windows-amd64-binary

 #### Data Folder

 ```
 C:\Users\<username>\.cortexrc
 ```

 ### Uninstall Cortex.cpp
 To uninstall Cortex.cpp:
 1. Open the **Control Panel**.
 1. Navigate to **Add or Remove program**.
-2. Search for cortexcpp and click **Uninstall**.
+2. Search for `cortexcpp` and click **Uninstall**.
 ## Windows Subsystem Linux
 :::info
-Windows Subsystem Linux allows running Linux tools and workflows seamlessly alongside Windows applications. For more information, please see this [article](https://learn.microsoft.com/en-us/windows/wsl/faq).
+Windows Subsystem Linux allows running Linux tools and workflows seamlessly alongside Windows applications. For more
+information, please see this [article](https://learn.microsoft.com/en-us/windows/wsl/faq).
 :::

-Follow [linux installation steps](linux) to install Cortex.cpp on Windows Subsystem Linux.
+Follow the [linux installation steps](linux) to install Cortex.cpp on WSL.

 ## Build from Source

 ```cmd
 cd cortex.cpp
 git submodule update --init
 ```
-2. Build the Cortex.cpp :
+2. Build Cortex.cpp from source:

 ```cmd
 cd engine

From c666840f015dbdcd4dfcf832692241d4f428e03e Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Mon, 27 Jan 2025 11:23:18 +1100
Subject: [PATCH 06/22] polished wording and added results examples

---
 docs/docs/basic-usage/cortex-js.md |  16 ++-
 docs/docs/basic-usage/cortex-py.md |   4 +-
 docs/docs/basic-usage/index.mdx    | 166 ++++++++++++++++++++++-------
 3 files changed, 139 insertions(+), 47 deletions(-)

diff --git a/docs/docs/basic-usage/cortex-js.md b/docs/docs/basic-usage/cortex-js.md
index 4e5a4a774..698e9e011 100644
--- a/docs/docs/basic-usage/cortex-js.md
+++ b/docs/docs/basic-usage/cortex-js.md
@@ -3,19 +3,17 @@ title: cortex.js
 description: How to use the Cortex.js Library
 ---

-[Cortex.js](https://github.com/janhq/cortex.js) is a Typescript client library that can be used to interact with the Cortex API.
-
-This is still a work in progress, and we will let the community know once a stable version is available.
-
 :::warning
-🚧 Cortex.js is currently under development, and this page is a stub for future development.
+🚧 Cortex.js is currently under development, and this page is a stub for future development.
 :::
+
+[Cortex.js](https://github.com/janhq/cortex.js) is a Typescript client library that can be used to
+interact with the Cortex API. It is a fork of the OpenAI Typescript library with additional methods for Local AI.
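+
+Because the library is a fork of the OpenAI Typescript client, a reasonable expectation is that the
+familiar OpenAI surface will work against a local Cortex server while Cortex.js stabilizes. The sketch
+below is a minimal, hedged example that points the upstream `openai` package at Cortex's
+OpenAI-compatible endpoint; the model name is one used elsewhere in these docs, and the API key value
+is a placeholder, since a local server does not validate it.
+
+```ts
+import OpenAI from "openai";
+
+// Point the stock OpenAI client at the local Cortex server.
+// Swap in the Cortex.js client once a stable release is available.
+const client = new OpenAI({
+  baseURL: "http://localhost:39281/v1",
+  apiKey: "placeholder-not-validated-locally", // placeholder; ignored by a local server
+});
+
+async function main() {
+  // Assumes the model was pulled and started beforehand,
+  // e.g. via `cortex pull tinyllama:gguf` and the /v1/models/start endpoint.
+  const completion = await client.chat.completions.create({
+    model: "tinyllama:gguf",
+    messages: [{ role: "user", content: "Tell me a joke" }],
+  });
+  console.log(completion.choices[0].message.content);
+}
+
+main();
+```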
z6mI&))u)r zeyvT*`k`iWvD3a=&XAN23e${zJi3a{+!d-OKZLP;=B)}%P;{2azt zTfn`cU(|=;YXAbiO2MV6{Z$Cz`2aOtnF~=3w{pt)uip8GrmdbI8lFRV`TPW;iPx!i~hFGis%8X1#$@<%nRlbe;|1CeE7M`A}ZD;-r&m* zg6f9k?%4zP&*=8!O4f=zK!NQSPHIH$UhD+e6j=JmK~zv zlh-tWL#>qLGyX`4O-M+g?w!GAH~A7S;nUlo_h*H=BaV^crMN?Ebt%>?=-yrFqz9f& zY@L2R9A&&r=H>Vyuhi-QrSvBdGsNNf0RrRKBVD*SSJoLi^<#F#@(Oc!?BR1j;`+0q zF0%FGPf0W-v~-l*Jc>HAjwdqol4+JY_2I)ABPQ=EQCY$T^_K9?{)Bz3*&6j;|7Op( zE0oaopDojl>Aj2$n$8cHi9dUZ_9p@ekI5{k1#P@Hl;&kVoI+gv z@M)Q`7S!)?!06I7k6o+ir&?fGUY9lRRVCjZ=0!21`RjIk^YBA|P8W`?8}sj z$ktC8i|lT`iNzJ!QvnObWMLh7wkAfcAG%wQb;eh}g>rYsSR$=^a`gSK?G1g)xYM;k zjmSj}?2kX|MfI#!0MQGE%zTs5dGLEgy&=$&o&pISGg7^lrM8(P=ka#0Kd2?%e~_*U^xvD~e$s zmiU!V0wz(*SO0rOw4LJGLaKWcnLxsQ4|gw}-^yYrnXI1|-Su-o?4X&-efq8 zrYhvqD3vH;MKYicP;Vkaft4D!R8x2fjxvkKZ&Ow({fMA(kPp@q6a6p!^gk+JP!P!yqTwU$5aav+pC8+#i z?HL*BbLz@UcH1$H)eYhD=Zltg8aPkYUm{-=|?>GD*}k{P{0$a;@_FYRvC2YKOowU>X&U8%d56%=v6@U};P6 zmw{+w<&a1qUwe_D0!~QLbuGu|$--3czj~L;!c-Is-Y7M@+82FmEVdr|e!kVGP7ZqL0a4Qp7G4OEAFL$vcAzBBDUJ>cuLR%-n9 zVL}@C=b!&R%crX7GVQzf;=yHZ6?pM8J^%YKpC2KAUu6H>@SlA`jh{pQZv6Rwb754n zw|Q@=Lm0OBsOxJqt>2;1cgK|1HD1_cn~H=tV3=XQw)vDrUFFjL$mHX9ToTOA>k*;X&X%ZJ29aJNCV^Pi$KdX};gZP2nBT6Ka26wyG7EuHVqw8hzOTQa1 zMxOL5ub9?VEO%hdsB@jNh;0uCrc$3Z0UwBa|6X?y;wxz)#LZT<~Iu%<|mOcn|YJ>1YI#cjeJ`xUqqZU7j^1P&}xd&0(Xs8E^@Kg`Gp-2K&OrNUl$(j4C;HjdnAh~qK~Sn&Ci zLp#fh?RgB%m&{zk`kdlxihU;Um#9MOjV~4klF9rdJVYpOS!?o2WDg4u(p}iD()m3+ zFkQ&b`!ky2yxqeYx?9$4I%+*K+H9_-t64anuS8Zx3GFWQ!$xz@Jp<9a)_hrEZ5)R* z%}4)&AaC@)#(O;A@P?aplk~d?c`q(5Q0qP6F)&ci@txaL3?D%c-w-)?SJv-_A8XjC zA~9Nde77x^^mN7#!dr>vNCD-!gr0%$#%dGtMk^eeYPF_ICuNd?N-KR&4z~l`it*c_ zF_5nb2xFrpO(=Q8fj?99(>~Y3-!9=cv@bVZLXnPpUeCbaR*RHl zbeiVBewjX@2P-j?ZOkN6J|-Ej$ErwG{TczDKvW%&`2L50sGNJzHd|#MN0sK>!sRVR z2Kc{9Z{vI~*Zbo5BlnzVK$AJQ)|Uh)U=WSr#TP1G2Xy?N8e-bXdDn`SC5rRx_cta? z?$M;q-lbUdO+4~C>`KP@eNCy6O~38EIVfpcMsyqH178UC=!Pc?__;JQc_B`hvjr>i zRr1u^k_B;vyv2RmCx>TxEv-z&Q{`U$(|VSpc~8hvkGLl4P7iEjBO_~#msDv-)psZ1 z6>h@0A9ZzTE!-iPKs&px%7UL%A!iZvU{><4s<{t!&gwju&yn-vF^S)#b8p=lig9JO z1*=eB-f1J`of-qgJ#}K>W74KyqU>@)*y*yI&iV2lF<%kFxOI8urrYho4GwMDxJvit z*xHEqFlYOt-4O0)hY>EAkudDI9vifl;iJ-@F%L`2S?)8$ipZ_Jbb|qSg%UU6p z_K+G~vax-x!EgLx6>w^X3&2S?w`#2#fAMpdPCECcS)wxfK5^COtL8q9?Br-;V)Hv1 zW>3Mo)ke9$T16gwe})r^;iNU-(!vY-;0v#B+D_p}Swus(l^8BES z7{3C>-2##argvbIsO0?JV3B*d`grgV7Mt9 zD9+`f4LUnJs(fUs_mYRztQ$3Th@hVP-5{b__bP96OC&gXQ`Dg6Q=p|wYlQ>n;9ALXwpvKDh^5s(jfqIALb*6ho zXt-|;Jvd%NfW$-`LpxS#nUJP7YFm0Ryg=3?X6>JqYYC9l{%*nrB&N{cw} z%QhUPr;AJ7T_32^Xy!zSDvStUM~6Qd*T__AK)}C|A320Ey#15AAt=+O&KbPcaN4%+ zA6m73!}hHIrs)P*8N%@@E_{`%R?Z z2+>XhI;mW!afcY{+R0OmEq?Ue;P*imb<5ZlPwv`B-bj83g=NeUq^yqTwzj8u59c+n z>JnsQa69jh3YC8|@ax&L|0eMbr6=CwIyS5n-gX{EQ_l3Qg5=X|o~osAByd@Wk$ebBd*ANoXl zFTB-o(DG_AVMLO_UvUyOBZ2r)6XJGr-hNrctDf0gg~$lGh^n!Jy}TgB*x-T+oLyHEs7)1Fyvs?p{m zSEF7@U41(!0zoNc0hVp;AbDFWM>C4;LCc7?z14Fs3f1PIePvjL8)+4zTdDBfw}ott zGr!E4iM&aC!L@Wiyvcb)Eqt=ka3pyTBW!WXawarQ^w6-Ere>wFAJMD%3k=H`D^mk* zs6eNzl!;Kawk)klU%S>q&CtP2JP#HqV`s%cAfXf#<+rp zeqmA9F6kyE3yb&*l@#%v;kn@oEW)?#n>I&(FYow;Znb?+z1!%IO5=sm8N434V?b17 zqCowX?`Z)vu@bxYOF!TY{#c}L1%>fiW}isl%(acYLNX^-NL%kV_LE}*1+mkMV#>Tz zO3hPX^e%qDLxzuByDOG=nnoMh_=(G>@7%|sY1bx@V=0g7$CohDBiedhx8sgZ3%eqx zmh6UUK7>~`xv1n_Y|f+TxTy82A+#}kbIfHd_AUeX3kR)6zNATeV2SHew}9REwW&AH zek$gRzxAgkUg>nJM~ubT%!rIsHSteSwCGeh@^T#%^H89C>biY95mmmE_q6?cAJTue z>Zis08Iz|pm_v*QbJ`ZG-+E!RiG*;3u>NW-+QAh*r! 
z_bb^6#C{IbkYI?6?Rq;WL2Tl9ZSGB(ONnW;=|=QRbEy1AF0ML~gvW9X&OP&-${k*j zX`pwNqs^I0S5|KC_N*{;)-;=yoZYNG_H>(Cb4O<=hIMrOqLYH?c(FBCC|G~AJEIX5 z_G~>@6salKtTA+Is+#U-qTZc$=}R`S|0UHgLuSkhR%&-8Rw-1P^P5>jNTqUG)rs7) z{OQ^&edS6Xv|CTq$vFU*BAvN8bM|>mZ}bmbGD7%PYbOuyr=^R(-LH{}ec&@0Q$?uO z^(okVgr!}Gz*A(|D0wHg2yg)~?vF{S8>}WagAVeK;|kw|H$^4HD3*w8vWd{4^jPhZfiD)YO$pz+dtvHJC4 zrh`Hq_8Mr}sa?O)RTiFtL3k+~9NfZ3OIArQh?1R|)i$GkPWjo}2Dizb*d|H{W zQX;z$gq%s{c`hipJ0TK0X}gexRHm`i_FLXW+96Uds);J{C5RsnIN-dLh2UOK1zF0? zva@iW6$pDSgXS2lcPDCV%{C%4%apBZN67?&T4kCSoz13Eda30=beFbrWM|;^87Q)| z&U1T^k6D}KOaay!Z71u5pbg}dpB6g8=|6Z)T_KhBdDqtll%k^6P8F5 zK<=YDyexGhZi`ie^*&#>3N_Cp!YsM$`X3MGh#_O-HtK7Qy}!wOtryEAuY5D=U|*N^ z8#}U!ytyv-WU5|sDJL95V?J8*Ihc?mS;Vq^nLk+g%w5*`82Hk!)Pp*?3Qq)FMHEW^ zwR{F18V?h~@SA9r?Pyg9{&MSYPuV(*>ogYd-R=3M=p<_@!)2rR3~u*Bv9d4egpXc; zp-m7m}l&wS;O>c*Zs*e^rb;E6FUd8wL zB?}&%{vdur#qBuh-)VGqcXh&GKO~ti(Rs~re%>T8rs1f1!a9HEar&@@sVudi(Yiu= zVsONdO_bf~=dt6GNOEG@A(OVKXCK!;eEqt)z|_~HSHp}YfYx=W!ZL2SY*p3jh9&7? z`bY0K6qKj{1%rg5s1KL!Zhu19%#CX~r=r6}YDavGWlJM#dw|RSVt zTi8~G;h=lGw(E)X?vKEYO~}RZa#=N$RS$&5e1;Q)1`iFuu<0}XGpaq%gNPe|>K-<~ zdUM~yusLs3*3@ZJ&%ub1!*SBHpH!>T4>Z>*8t!%C12-z3FvB+pJ3-y68r=Hf1gEaA z-SE3^Uc{qCE+XNKG*xZ#x`GzgSitytTBvY_qG8|SN;jw2CtUHHaNqM+_MfdF3N*#} z|8x?70&txBj(%sNkkNZw4N}vMQty&*SfI~2YMA0CVM%8PFN2MdU-{7vZl z1}I5K>)M8T=2zM9Wjs3YsxfK8FB~+wcxeHFOz>gBi^7VtTt#W$q7$t7P)^%N9hsT9 zaYQ-c)yq=!-NgRo286M-1;ypj36(UNOti30V{S04(?p^dmwVXjinp6Nw!nHo`8ZpX zs|RX^kG<{wuLT6V4a(`3)xETRckc}WK>;wZa+Co2R+97A#o#|W2w0r!1K}aTxw`g9 zRPw_L1saq`>H*y)$u-hDfC;2qB6G^m>mf6GM->rNyLy+8kq}*O`9(h4BTWiVvYEA% zEy)~Ro%N}j(%@QdLl1>3;K&e8KRLi8UwC;eQ?@7ipNzzQ?nj2KoX|saxS9oBH%7R* zpO2VxUnmn47YmY5Vbyg?`m^WhiEoeB=d!{iT&Vcb_$H8UgT&YGR>jvnZ5Gc;R>jT(yl( zW|tBk#cY3%jBxXI0OA{i)YHH+DFG!j2-W@wX|L$aiP2fng5=jEvzL#uC2pG^=#GXK)d=`FhfXL zQoVhW(ZtQWUQ>n@x>a%CID^%g4j@$t*4*(T8afsvl|X1-uqVDDH9*vYlY-GlE0y5u zbH9_MPMyj3o-IA5+4Y?Dzc|iSw1ri-6}QmYr{?PRQ``fB4!{p%u&YuKZu1n`sixi$ zj?cJ5DO9)jOH8L>8n+iY#q>4)mqNN~v9_RCrplNyjw|I$wkGgE$3uhUW&3`JT0y&| zC@ZX7>=b2y4yX?o+!xdB+v8~ZOg0vAgyqTN&!@dPJA_KgJa>G5%2rM5zf9CcbO{Ju3N~I{ADMLp*`C;H9DK&RDyv}?mRubz6>-Ubuv9g z^u}7>1%t+_@Ml&ax4H$_uLqvNocrT z-6_2CK+DSg*_vwCY%nfw{mPEUNQT`ACu%%u}pS>&& z-@TKYRHJ&OhI@MfwVzg9C^#5mA~@8a*DwFCBeutGW1n6hk($M@(Yx;|xC zhS?3o_ahTYNw_v+C>tqY7-94PIH6CfwQu~Mr^52JI*ax5n(c~cN-RM{>$}YjbV9Dw$)Ijb-k0Di;)eC)f>D&wOXI-?VnVV z7YNy}Mu}KpV+mO$W25Pr+rJ5!{{Fp_?$zG%PNNV$ek4+h&2Mdo;6>a=uC}xIqH7&T zw^Ep7y=G#qTeVV^Sf({zFXJU$uBzlYImEj8`E`7!r>Sb;)h|A|ZRDv|s3gW1sq+=0 zj^E1G5}mOF5RJC~j>5M^xUY7!8HyF>oJblQarZqj|q%f30PWS zol(SpKQ!9p1hB>Sd`C8|h0t>jQ8d19btQRyS0%Kg<7HW=b%`dR4t%g57i=T1h0Vi# zuc7ZOC#>cS@as~0yWuD`3kB)u&Vq4>xBc@^|N09o3VtW1GB7YtPO?{YF;-2v z-$M86DRSfFcadK){Pyk6OGm-I9rI|zb>2xrc)mI*L;E)IDDCOerv(#jZ%0i})k0w4 zD(9sRFtZnG+GI4Bqoq2Hw3(4h5u{ho<~S6%jNB?&2L*!c*r%-#SIrMN%pagRTJSCS1?CpTA!cwsQ?cF^D`1R}eqVS`y~Mq4 zf#sQ>J3f=aAM5r1BPKcwA_r!!zX@a9M)IqAm83mjw6&tG&T|sO{g;^NKVvtT6@;_x z%<+-~&Ts6(D%OmVm`bGGoSQwFhf#3d(<7mRTBo9khe{ML4a?9K880~9-jnPN8e8zZ zNlcB%7L#jdxp8DenlW53WiN7L#RYEozr{!4^gE3VIaj=18V+s_#%rJfDv~cX!whH& z+4unv+q;cU03HcHWcs|(4)tJ^)3cdr`Ll!wc6TOH=%yXeOKKJjD6c2#Aq6-RsI4=EIIg*eH3|e*Rx{BP& zK2iCh@lmD>P;TXu63lQJ8UE*Rt9&7OzDRfqQ(}Z@ZQ!;l^l2i^o^L=m z*!S!unQ~6gANTAcg<^|29PN9B$U`OaZ`UP6jl=h3h}Kno4AhU*7>R^&qk-w|{e)el zEL#Vq-w}j5OsW$!gS-v3s@D_+nt>Cb@c7}4;D2BW*clMr?k$J zjzoKp@YtKG78RC>z3{WMx1?Wu%l!US!q%8+YwAj?_C+mSDIP3Pg3BlK3>%YGhf+RGs*xjRanGL48IO?4?3U@=B~1o*GW`paTkgqS`mC+vPp7HJ4s^G3N7o5AS@Iq8xFVmV+pi&>W75pA7kFhk@ov z+9G$*8z;}{=~unA2WilAfJ$lwq-B<#!G3mSW*qKeKi)E(2;U6PeS}`?T;`5v<6>sV znw|5f)JQw>4&Kr%Q>PT@Bu19+_f9A}p!+%`pHUQM!WZ)+Vs(<~gXTd1D;2~RdOw0-7fcYUnmvqkX 
z8)hvXOS_ZW={RgOVf&<)TQ9)Ud~ukI=&-w(;j-Q@5)0Y=niU>rqQr!J0Qwwk_OhL7 zFqp=#3J73&UBZulk4`MRVMl(qhs^z_c&%F7`J_XWmqn<^ zu7XvM;{sE8x4O;Sp&pQ$a1qkcnpOfUK3^*Gv_h?wzN1lVw+zpu!Vi*#cZhEHolO!z zIK0Y9M?TPl-Bi9x%SyXwSD0~Yxo!$tUrb;c86FW|2hX39M72{)6HQ%sA1kHh7hL6HAf$Xi5 zhg|z8IiJu|VBeaVtb7W(R*O?F8hl9Z+RQ!_;GA$Qqs9Qv1GrA;au~|y+_AM{?^fRr z#3{g5XRzwq&Xw)LUmwXQ4%AAGwgYfrrl&g4k5^i@Z*sd+w?fdq>li#)%x4fjdr z1~b23J-dfTmo~?VifF19laoUE>WqS=1VKpe~x2qq^L-A%V& zGi0oGK}}6GBAn1~h4f(R2@!<~>A(vgr{>-W%h7^|_kU_uoG9d`^D@={UE&ib)o9;VM!Z| zzE!VAGgHZV8psv!)dH+A+4yYhrqH6}$=soPA+njX#_Wdx;rQ`Dtx}&r&=d(Y`XoLKBkHMy^=|2qhC%9^R>>;aA(y%{f zh`rQ{nB&cEZve7=jx@s%Gwyx^R*dP-&RJPTaIw95<-U~A{{(w%AxZ|RDEgxYYn;y7 z$U_e{rdsV`+`wYt>7<_-9uLOo5UoQ~M04#hmV<=}NI$RXP_;;<70%i@Nf3ZSg*?nL z4aMZKZ#)2Je1Esfm-F6+O0HqNYvESX-syJ6;)n$6WbDcU1W~BVZaP?CK(wK@HrcSV zQG7295mmK`)yG^EtNYGHSbq@2}Q9!iBL!RRfJ0m>?#nXO-uN#|6>3m zIPgY)5?2O5ZJ9LrEoYWf>vALbYO|FxR}~jl)DYQ6Q^dqHuN{*YvcCA!>$xXkT8}^7 zILPC-Qp$F%SdHfdC4Tp4R`UOHkkMX4>F2!(0t!eYUe2pPmI+IaK$DN5eCN6FBd=o* z2&1T4#Y*R8n99(>Q+&M`bYxy8i7lKyM91F5T*{J*5VEFAbs(X2Xdo%Y5PyuhYkj?CmdGk!udJdZO=rxAI4q-r%5B_E zvD|c_($YmAn;Zd}!f-`)I-W1a8>BDC2{$Z6Q~cjZg*zk9x(G|7>2t-Bgtm&*j8@cu z!xir{yqQGlZfEwjUY2>Ftk3zLzCj+vyVloKa>cV}%cs!t`hLU`r#u$&Sj#A>zj@@f z4g@!BrZVD&U8Qc7W5t6Lbv`MQq_2PWlTxYHn{p=_0red`4&N8Tt-TQ1tFI?I96~GS zjF!Et&q8q63(V;n5=_9U2kvq^NsU0)?C$nJ=jKkQMJDd8PRvc7bkAT!v};iZuM^y7 zA6!~%Al43x|7AY8s>@0q7TakOTTI`tQPqccet4N>xwP1}8;4k?y0LaefTxUY%q_rl zG%7tHz3Xn}R(k`=m_&x`jEh4l>m}(W>Riw!2vzKMmbs@}ig-mS2OJi)!H=^$Sx*84 z&y>^w?q9teCSb)Bv2oTnf7>+Dqh0N6=3;ga%_FrrQ?cLFE*ZkAI{Dm?VaD&8_mkSq z*c;f3lgf`iLa)=TZQIq0nM5n;l1C#|)I$z29_bsv5enOwh~1bL$X>2{-oZ@>UxvxB zC*Y;0yt1iI=>k;PhY1*4Rw*R*=Ct*4-y>VzVtpJ!qolp6 zS=QNC;OP64EDbZ&K=w=~K(en)$|i4`M8J2Shj>!$C^;+OI@nj!XvcmPMv?3f2r zegCW*Ye^yb*OEMcEqVPnZ|uLpn{=Y@QH!K2<^v3b;xUQktArNdm+7P%0q7FT$Qdu< zRHe<4B5dShL2*+;6{u{84sCBwu6|Q>IcY?~r8WsTdV4$479#q}_)YujF3Ec1U-`W; z(a}y@=S$J~m6$OX^>2&WyyP>cf0Agn0|f>;|Fg=j$O`!1%))=47Tzcq`v0lI?!Uig z@893}-&6be@8|gM=LqNv;Pro4IeYAPxFy_|?YTpH9RGNtXW)@|TSB2B_h+pf1=u&f zN=FN)3TcT%$9y|bWxm8z#t@@oWeu54EMHz?+57EiDUYmh zil0y4$&Vuetkd+$o~W6w6Zwa4|CoYw3;bt7(!byS86NfTr*Ubh{w)mu76yRj{97~p zH}3_7&mum9`*GzyTBMiOKwmFW(5Fq_NE`4Xf!8PcKm8YP-v6WJ;(vX6cwDsjgGH^# zyXiW`JD`P&A3NnX!+!2Bn+dwJn!0jwWOpZ!yvjCMnr?3Z0cX;InMQDg&g1? z^AB2^?VyxDi%m^UjgPN*zB*|QL#b$MqsT3`zi z(*zmo;wN3{btVb#-rcYA7`+`UK2!C;`);v)%@`gjartsbs=6&d(wyG&p&f4K3|W3;vcxw#xKZ5bXUe0f&d>r0SSK$oK1XzfN%mLUwo*)UE~m4nuA zv1nMFMV=X$4tSc|(r|?@t+T`P3#Zs`u7_;zYaA4`V{WP)9}`nTEZE&h@%zIKI+X}c z7lhA) z`y~NO!(e@vui8vCMZ#-;SBT!XSqDS2oVYtIQ|*mD#!MTWQ)!Z>!LPOQ`B#b5l`0}< zsZ|D72*q!ogO7M`j>6TR^*qK{2<~2Yuvbm)up7QXparf`S)FTc~4Mu@U?iKp#VW3qJ`9TGcC7wEmNC z)-Zi6sI;q*L^`1R$(>S}Dpuu5g??=0^_4lbC9Ki9sP%!qXg)4?Bx3O zuiz3e+jBW*R9qJ6cXpH8(R}+FpN1EQO*Q5+`wOfkE?vsZ&KAM!tX?)uNL7Af!n|sZ zu5cYXt?X=(d-sDQDe6h8xBk&z=H})aUh-23DC9p!vwEoSV&dJVZ{LdSYf3RZE?Cc! 
z61ysoj&$)07d`-E>eAr_f$>D$l$RV`Ex6U&)Y$lchX5R=8?`x_2<|Lh>`({Hp7iY- zeyvjzTTJZIZN+0goO{-j+z~tAo>NU~Td z;2QOW9QuF(FDioTp%0ZkIi_vp3G6Z{DGAI>hq@93f(92n6jEeYHTLP@jm|`UTFH4Z z6?+yPfH~xWxvCFbD=jURw#a>xSbYTy|5K536E(b;SCQjRkVmvtDsT zyXAN=%OTJHctb~uT35PuolAc{`0}CPaWJs9=H^#nVM!1a0-YFVpu<@=Pt))(Jy2kg z_ck?E3&Y)6RIDScU5ehfyA%)T^TQRcMn>(;F=C{sbCi#I5x|mWY%i&*sxrPeXd8dZ z`O6|}Oah;;wZuu8H@!aHOF*%uE%Q?YXOmJ?-4U}%PcFjS1Vf{BxYgjdv{YXcU^EM4 z9A+X^s2GaZU~Dy=KNzLOXa;@)7MaA7`zzQYt;3FAKB7MT1_Uwxwgfpat*rIA-U7>l z1mECMf{ePDnB&CPP&u$t1BEu%uV0^Ojj!{YGs`o}0&}A$K4|{eG(d6nM(Z<)u>HE< z3#@TjhKWanRSSPier=u-o|>8h(XtoJY++&Hw+Nx=Gb+-^Fc1KDxNN$b`*0jJ_}d*2 zCw-dmz;k~;`irot+o>xLKBJ3v1LGdnoA~&RX0hnU{Dk!$!$8?js@8_?$yp}2w>XxyPjordc=)CIVMDbp>mxzBRw_`#&t z?q1H>RHnz*GWv{+d3fGE_R1)FHxJ@%Wp+P%KbWL}((n9YWo+(?S^IVOEw7a~e>W#y z2)nTR`smyL`ck?iMa}hC@7Ms(d;1|yybWfE%<`T0verY$T zvkl7|SH3;ur0_2I=fpJV6I52{JlRRxeG2?UZAC@yYd)@7A75)7aB9!)k9xcDS|mro z=Ay@U81AX}NcT)^>+)cUV}G*003?l>SZ8p;;jpRpLU!{@DIWSFk;+|;16B^Ak70sr zl7_#FNW>yrFpMz7|4C5lQ>DbYQCc$J{6L&Y|)^`-m%(^ zH7`JcRpF$4e-E7Ijun}>Decr?@XUdK+>jA>=&RuDa~O6Xu6GihE;>-ZwY3FuK!O5{ z0D`!;w|A_BX{Vu-oY`;vxAMGrqj+na4SeBTwA{|aK%7h`o|(Tkkcq=Q>?5)3c5TLM zMLG(8IZ^hQb*0-^a3CgJ@6hF?AKT_R9f9NB-MU9*@7KF9%2}8lzu{br@i28`d(_a# zXz}}p2Idll%*}t;@B*9_i(=Xo@2j|_=*Q`In4B_%d5>~*WE}*4`O>eZrNvmaYfUg7 zq;Bl?wT0Mz5oidbA7%w~er>~HB$4gb0_kTqCxkf52=Rl#%NRG5UUXT3mA0SVKqY-v zjzIr_2~u_PcwBm6lj$;4zba;0KfJ(@zy0!(JjZ(`zS{D(^oiip z3i^VCuIVqjD^(N)cgie(OUq1v8+EleTtxxD%{3RJcGwgYApPpO$$-o&)ckPD z{!G%FyW+G5onHBt5idB~(EGF%7ez%y9eF#}Rxi#8qdAfGrQclofbPm#0&JTp9At>) zbM4D5MX{qYxL&s(2e&DBU~X@kSV~{^>}cNxn;Yx)L)JP*L?b-p?j*cI`gl>Nl2v3a zOQ-n_6EIt=acSa%?x+_dlmh7JnR8urOWzMU=?L?-3Oxxrn;+|<@)MPu*WVwZja-9& zVMx~L`T>O4*4`cjPyqmr^#Esu2}opTXM;Qja;hA;^q%ddI~gac1W&WJG>7|yskABO-p68{#eZkSnkEs9Dqj#aMj&iEphj&J>B*z&&R zus64Tb26d}W7Z1JNSzcR5KWu1JFKB_>OLVVVVx9TwYd7@=2=Ui0W9HlHz7>`RSrgJBVIjMnYB7w6%QObNi>_Sx>@E$)IAG(9K-C4{! 
zp8}Cw(qqR#HNz*F6Z# zTucuL`7zs>#-2^_d#Z zI>n2<3&G_tR3bdQdE8CnOAgt$OB-P4+6quKz=#k9r`R^pQV*2=Pl9K%VrwcqtHkEX z+Y9&xieELeTLR`z$sRhV25*x;g;&{)ydyY8BlX4ai+r4m#dDgHJ{JacplJH*BNEW< zoJuk1_W6Us=BC_+)Eb?e@SgtIM1}Ms}?0ght^VsaSN3}PTR;Si0G9cq2~`4*X0|!xh*>3MF*>&wP94r z75wP|3`JM);EHEQR-NwX@OtW|B--268K$&~6m?f_+NPG!5CUvTI?WpxC*V74FwMCa zZDXt?uCf**Y&NyleUARJCRCJbRnc?Lps%w6qU_v3-03L`K7XzD-YAa9A>1B5reY@D zS8U_Ik?cR@#8)YN;(nZ9i(|%#SizS5*$%s6-co+8Xii5xX0n=uuWCN?W%6)D;;S zBLTE6LE~WXuu8%7nUj%-sSJVSoJh>Vhs%c6g|_%*_C7=2YHM}srm~sIB%}jZ+j_{JgMK$TwtD4>3F8gf*1=c+4g1U~(-pje{VkusoDZ`E3!l5ss79$f4wO!mLhXBZ z4nhT7o>u^EfR^4rn=CA{6Et6suX~?*=IT?gzIz6EFc3|&B@xZ(>+7o;L&!uY-qkZo zDtXZQn$%;qAWANr0CziMSh(=h(Jgips3)5nE$zbz>4Xav57S)wrFB-@7yS8g2?&d{ zJMnK5WkEN+YqhXu)vdA01$w3)GuqrlJ#;HhBmqGl683 zvO3yjrzAF&$SI#+v6UK%1(4v%awI!N6;l7kzbMA+f-9x+Lj#Gq#JgG5wm;y9kMH&C zK`#S3_|z#s&;W10y=)jT-}j8oTmlGMbK)NY0A_%Qw34og>;ra7dk04B%(3-q={b2) z=F+!BsU%K6wyvtp5oJ|;$8=%8JOH z6FvNag}SJ-KD-S_7=AO{&;%D__TgTL ziy9L`ylFoIEZ^6}*FQSSH(N_>6@MYr2A8VpK72J`aq!){chl3;8vgS^?4_!|uv1nt zSd@lmpqAN=az&)yD?oe`>^P&f)*SiLN>*pNp+)zo7Gv5~_vq{A^{|wL$TKRr%)!0n zu@Tx=E%AKbSrvHz+V*eZOfMFH((Uug<2%mC>1nqW_T|7qVdC) zoy*UINWHexC6G4jLUralF|b_x!Q*XtWd)W+7LdG=*EVYr#rgMcdd7Uc+Dh<@=@Q4J z*%LS~0M60JV|={5V!} zXGAO&W|oQerHcH%yO7-nR36Caq~?f*Z8hGFc@wdhkwLEGR9=$y^V{S~9^9^LMrNQ~ zYiT$&)*a1@Gt0yAC znv!9?{vFBw5{nb75v=27oGQNWcL2WyDtRykMrmgt6b}Qp^34fVPg^2gMd)fStibKj zj)Gm4JS+w>AIXp1qTAV+KrU>|LPi(*Ov1tHhI?;Y{+ zz!o6<>OU>a0aW?K*@GR>8O(!sm6grP{u=yXCxyQ+FRj%4%OW$J3vQ+BbO*u(owHHy z_wy~nL*z1EH24^(<_Um{SyT*t79$yHHojZF@Qqy5J-SoD`*x0k^}1ElxLzUbj8f*s z4RqMTf8Zb6AddcGdU!cyICq&X8iaNx#kR&9+Lt+Vd=&`Vep`d~G@C1ml&UmP{Kw1C z>rvt+sn?Lq#UFCVy=_y<7jO__=S2S_M@Q=-n@xb{Zloh*6p)JmYsN0|(v!s??V#eDSpWx48?DdW}+|sDz zf4cja?Q5UQU6O5c>)wJdgCg_wKNYkZ8TB)8dqFkZ%J0kMf;q09 zHODP(<8}jzX}-Szv)*kGAt*OY^`B5PPQLnVq}sdX?d6d3w@ghjOBLipI$fj7^HKDp zzvw2)JG_mJ6~hui9y5io9%IOorp)D4*Z=SdWQL2L41$zpxA)p7J;UE%X9tKZQ8O_t zAnHoc4NbpdP3Y`I38v*}c!YBYKnd0gfSCfB0YJ?(tw07~Xnf^VxE%+SlvJ^!oX+M4 zk#UDaiEBST#dsdxVe4cFX>AmyToldTVZW#J>CN{agR8pxJf2<~XfJfqDMCCNTsYwF zRw9&!2JJZo*3@Uy*2LBdw@wprJ1AN_be*w65fGT_svcL+tBvExsDP*VW$1^6g;AiI#Bz_ z?}~~s9d9HYp6KGwc-@KC z>w%I;uc|r(G>T5lU`j)xT+QfMvHiaV?#z5Kt*;?29M`y&TnbQM)}467T1Su@t2@KM z0OCgS8R{$8xAmop6 z;fp1t?5r#l$Sh^f9YA!j2PR?yVNEAO5zjE>h~WFBetHWgsF3qd-`cWy9_+OuHR#T2 zW0H)OCn(XkM=90bQ*8=DpX{2K9}h+VGXq34sROHnX^oS*eH(C@er9z6OHkBCzdawc zr^20H(24Ziq48^P-vOInx2ZvM<7}MgXbF8RfAF$3m00Dxp{e3AQXOQB){p)g(@vA=z&yO9Aq{WAR$^uT=+_`D!pkxRQfBsPEZdRl|Nu!1;mX8 z0r6+5!?_9=^32T4zP>aC`9VvvPI|3Zcd4sqpxc*8OFHT*>a%v2g=t>U=*cIuP9F|5 zyx@$o&Z)H0bMy5``qv(uy<~h^K*0Qu&nVn0Eu-`F-kfs|nE7&bek~0IH~?U#h?P{w zsxj963D2k3Zk;*vM~D>x>sakb@)Q-XSFT3fnK^$DJ;pGkqC-6Ad>=-hv8?o@diROS z{~A2bh@ilF@aFM%TI`g|2x<70;y=5Bls8GYz6fELsksl+(((IEl1f0uCX7t;opGWa zIeYa{qoHV~g%ppQ!jE6YVW`}ym?6WRr|nb#N4-}qH}&@8mb+KCU2BVN%9j6uV!%pw zMVPZjCMIK*pj0$YIiybcAK4k`A&5$~r%OgH9kUWhJ0fw;Ox_!mN@CDc?CDqVLpE4y z3;;dgI93H#a`N)PSN-{f7>xi&0miJIA35#~`BZ@Btynyp&-pW5o({gJ}0i_N8eL*wE zD9E$9%2yDIMm;b4&jgcFb&lx0>7n1FwN_nlZFQHCd41!aOx1o0lK`ho9 z?Fqj;jIgV;y*w8%NQPm~;U6p|TT;($1~SJFVB2kD3546WA;%{_*2zwmGpIcZhek;YS_fgdQ2XIJcFa=-VlY?`R9A z)!v{(6VobSOxnd=HatPj&MNorg@eI|g;kA`lxBeRb1?XQ;M%M@j}KJ2iF%OsBxPE! 
zsm|fx@tw6Uo#{6>fY{X9ilE~+Z7bctM#u79jRlJe@a!)F)H)!`l%3m~r`CV{)ye0G zzD6-%?DDOv^hs+Tn&sDrx|(z#3n}z2{Ey;{`K;HcwR0QP)jAzM1)1v1*ZbMmq-%q4 zvXma>!G^}+QMfK;a)*dhdQJMN#mXw#_TxVtaSp^cT~U{sXmaf=vdx;bRwhKee%AlN zG_Bl2fxsBU)Xwq^^JBQM4qJy~>e~3NU~`XJ0{bN|-Dj4?0fe1>wo#tMixD&6^O@uR zoEN~p0>>S!!Lon#HdwE&UbAzQc>R1XHVhs&G$nhL4QMF)Tr{<)>059D8SFaCP+H2v6AdR4awOam!@5gBT}@S*i8ybX)uU zM(A5;sM`O_%T-;G+>VwM7ti*|nl{AM*KJ1f15$B=$m(g{qU_hQlPIzSbF55E=Pjj% z#&r4j)H-2DV7+Uvb<%xXWim+E;TqG;bhz=B-!O|-USx@%guvL>fkJcRPMqW_upHXi z8vliXMX$%EQIJx7pi1|UIbEpj(R5JfpdM;XXmpW1_PNKbb+U`AsO0-EL*<3cV?wgs>p_X;;&a zi}T$l?;(%PY^5)#KPkrddMx2MHIhJsnguC=3Iy4AijGj^QUx&kav*@+3_GrSG>FTh z#bDb3j@@HZOQ6mxF6F?gxY|j^n6I+p@6Z8q&bD+q+_%AN5S*HVyTD#pup&Jix z{LI}_%EDsiMpp%#ougU3flJPv$rze;*FN1VNNy>3p5SYYs}~hY8_{`eF}zS86db{8kG3v8^Ls?Hu(zlJ7ymASXbY;}!Cs&-- zX5~+TweyeHX64Ad02U>>*Lh6Gd&E&&em-t3xO%T0NUcH&(6g)pWN)@H;1k-|#*-@f z)-4_}k*8BcpKpb>7rMuM5%$ECf)W$-zVii-4+#NMP=BGk4-td=Z=_!`efWT>ChJ|r z0#8Ck-sa+RjfAKB^XUmVcO$Rck+HqCvsb^yXQmXV@aj20LWy#A35lgcX-1W<+eZ*U zFW^0A;gvN~-;JJ3MKDBX+c_85)X{FtO>gg#5Rua-=P*%jE8U05+&NHB$23J2Lr3TB zD&0`aGu{AnB*+?KHGChURd0CTtM!{pC3opB4wi_?%O{%irl);$kGfGRwQ0>u6nLEM zX%Q(Vf+xUvehja||C%YC8N9RCLrr`0fQ0HkzYWbo^yD3ZyLe8KkggvU|Rd)uy8`6wBB5G_#@9~rLOB9Ovw}v zQ~V53hn4^8%$f1e31l_w&61hKBF(gPt33OM%R4I~fBuUoT)D#I5}#8VaSfbtn(>7@4O^}kAlT+4^vv322)UY?v6H(e@A3aA*C-&C86~mh6 zTWOT)E~P~cS@O3n_$%5eH~3aWrc@T*hGFi#@g{?xf6N*Sk02UCD_pV~6QxcDM$VC`Xi+K4GDLj8>wuog7PqXi z(A|+_+@VeZkcG@$#TgQ?6AR=gH%%ISqh3cwwzPNC7B{#O;1#b>jR?$|k zlrQWefp-gT{O>@)fd2FMKPd}vTA#UT5wHd3gE1cKgkIdg3<+vx)&t;MY4kbGfW@s< zQV=yeIX>QUtJRpYqv9-St6RealtGeGR=&lZu{r*FMY_?VuQ6FXuR>uFG2HaJgI|Z` zJRxIq3*f}|DI%S5@jtP*DmNIeEv4%}g+QN@nbiUA(b1P){0}rKuTGd&shKEwtO{Ti z&HY+|*1R?mLKS%fl-maiyso^v;*F?UGnGlaKg%%G@Ss$3R-ZHo2Z#SKxa!S-yK8RZ ze*{--*q6OXvcYBMc;0fqpJQ9m zYO1w{7;PxX60rn3nx+BYf1)y+ST>p?nO{c=bar-^`Pd;4jI*%LCe46Ql z7nfqgSP*_mZo)q)J4o%R^rrd$qx$=QD0sW!&F=+kg3&+Oql!sEH{@RM(Sma{O+WAZd3K-<;Y`;m2E zylEiA(FX}L{n{GffqsvX|AycGx^SD1B{sb4<)xMG!WuztK=t*jb9?YoUQOasBm613 z{!Q8F=0LH79mofZUw|?X+K^~7($RW4vaZi9s!y9jE>v6(&3vZ(byI2SZ)ZDkzzDg7 z+{fLgN9XrVTlx~TBmMf~{OXb8_oC?CUFsQ|CZ|uQ06wCODCban_lelIUSI|S$B#d9 zY|s$|lEEjK>xuBlNX~o#jv*p629#+;MZxr0^L&ezLYM;99`Eb`7VNTk9BVw*W{m(d zE3oPZGax9xo%cqi9eC1xsrHiU2Wzr_uiVGWKc5{8zofWB%k_R{Ko<3)d;@BldWGc_ zNu>TE=tIDWZc`#f4$q$r=-n9A!D=l`NKot7aHiII77v?IczALH32WjCZ}on8){Gf8 zp$U8N{6BbUy?blg>a6KlK*j~m=jANN*Br)p7=<8IM1=e5Y;4m&#{y?vtSZjf=p7pc zjlLfiw#t9FJ|FP666Y{|DIMuA=w{)&5)0H~ zPvWZSOUvSw36l9mN3?yV5}>O=MW>YyZ5_qbJa*LIdY@#FzSC8#Ik}2R zZ;$lF^6_Y<73XxP17Gdkk6)ST^F%syXBoHYBOhAwBS13Vm4wSHNou(;-*!y_U$kYj zXm^(#dT43u!X`|I*^k6kPp<>(0E~HnBYn-V@tbij`C6cvv8jU`{&epq zfT{4$<<>F7?inYxgihdf+Wjk$D4zPt;wbYXsMj8u(o(yTIsXHJzVy>49fS1Kqd@K2 z?VkkwdH^L$q?n!b)Zb9i#5oNCTB$A*Zw0 zs=|zw=>xRPuBrCTzV9_yZ9$1pbwMGw^w5z-fH9NAGv0;fLs3<}R>sSSisc#cJ1!Wmm?xx}P*tQol#H5MH z`aNx!$njkSDsy`GMC18`FCv3v*lEf@B`>7DS>5@h+4eV!EEY}NrEl1t?#zl^Mh*fY zfCdO)X*Qq`26XkImDyO3-!gHdE72GNyRVr#hS&7^?#uBcY_(kAlq!`q5!B|AVtBQ@ z<+0c%?cq`fx@bc29ECoS!uG>VPS`zq!whGG_`&}#N^_z^Xd z$*cO2l_qZVSin?hA`CHLOomevD%=5IcLOW3(L{kwdsluV0lsNhJ;=S6f!LUL2aSrL z1w%l>q`mE2L>J*2_l-{&Esau%s+;{Mr|B7@?>zhb!QFK(xItDW)t}}7Wb(z9ivqH? 
zJH1kAp0CmMua@C`P|{gnBVgC12gHbpz_KFG#JAm;A=U3xy)fW1g+Tii$v}1VU%cl> zI7PA>qq`g9*bVlWcl#)q@MgfJp?-WII`Ddafac)#cZz@;B+9O6Wh~AIsuSoNOzD;I z9T5lk#y+wGN|!lKgP}*4O8@MRrpHKa`gNYh!?#HVJjh3MN7Rz(wF2{kzhwhrKvBMW z1bqtBWjlacpc8p$KvM!m?1X?oY)p)z_he&*E00#=={wu5zS<~I*7Xypv;_hk7@&g} zi!!*ytcp2c9x&H)Su;Ru(we2=F8N2EvJljt!$IL{#!C&m<^h<1ReCrOE^_|7prFMW zL4sx`XzfS9;sjlN=rY}%5EkY%IXMYoJqpNlTNm|@d(lZAnnDXXc}9#bxZPTl3`NR;PHSR$vu^PzuD)cgzIrno`}3M|n4}duT3^b8=@@Y$*0*y~ z91(Fs?LdpC6=~mRLLX2wVn*WapO1FxxKHr6rPltm=dNF>@6US{2cXZb8DksFg3Yt{ zh_lJ-8=H`=5$YhFKP5`(Z@v9I|=x`0T_ritc%toqZYEChf ze|i9uCcrQ6YHNGgq5pn6(aAdaWg!`n3-%*VVw62DvW{v literal 0 HcmV?d00001 diff --git a/docs/static/img/sqlite_view_harlequin.png b/docs/static/img/sqlite_view_harlequin.png new file mode 100644 index 0000000000000000000000000000000000000000..3e3293ebf879f3d124303ff6397662a19cd95969 GIT binary patch literal 87546 zcmeFZWn7f&+CGeHEfqyVNu@-jOIlh{KpKXU7Aa}zW>|s+0!j)9C_Quy9Tp%lAl;zS z-Ch54z`fSm`?sI{?ESp^mw9<*8FwYGy!s+7kiXbjjlPl+ ziMS$_i;%LZL5QU2Wd!IraM?xKY(8#1}TJGfRn^lT34>#ma{Z*d|{m0dmlb}CTs!tr-&Qk-Xw zzuUPI_1p1x-mm1bHw`yj{e|Jg(RVit$3l<3!FkK&Abb4XSMq;yxs<$ow5O)_mq&y) z?BS|fixgjNRa(1~o>Z3YzMm$?TSR8P7WvrNyw$$Gg|P0p)~`G<{(R=7cZKzjD$mXP zqiW_}ow@pf!ZJ!yT28l*-sfA2TJ!{iH=h+IRg8&MIH2A_2E9}G>csq|@LOpu1Ok*= zUU~%7$G7<)Q=u$5>P=7z>iKik?KO=gkFJ$z+~X%0aCD&E(6E-AKt8ZBjeiqUwy|3k zWyi=mw(@*&GktW{$izDQl{<1EKIy5F0hwg;R(e8y{;TNr zl@+zrQkLf{SY`zlg37`e{U6LNil^I|%Xn728GM@7YG~6_@S$R~hD37DYPomb^NP#< zN-WWjsuB^toW<3)EdKV!f@j3!0tXZqI}4o$BG>Ic58K@Bkgw%f zW)U(T-SC^t8GPkY<}n(qoPEhGwei-g$zOy^h$Q{9W2l_BC-x$AEqk)WOAX1*nLOz} z*ER`i`7dy)kaNU~^Q{j1xf|M$N$zDE){`mjnR{2$3DN5k`})PzPSM$UlxuOZvIbvw z|BgH;yoO+ZvyzkPh<-Cz`7De)Iqi7QmfpC_Ds^RKZ3%m|gxWnO`qcwXF3FcQKXK2l z!fR+FWHP5J_O-ybY=W3TYU;TSs_WY&hq`3$8|5zawABMcD4mFx_!3I{yE|~%wp*TZ zlOu#@OAjxb!^I+V`WVM=u9;idx=h+Oeil01oBoC($mO?IA5h#}&*jf{T=)@`&#Q^t z|1{h`yl_)&K-A@Cu5EU!L8-V|e};|c6-&vS^)pY6*!1Pb-MJw7f`UW^9e|nnIzrEin+P85(mwZ-8o>SGGl|%HSIi-|0>PK?BVQ`$CdaJzS z^3dnbTWL9>ea~t_cFTi2l??Z)y4%qr-JCQ7y(2^F7S>kA+YOJ_2^};fpM6myuM*W; z3lU)wRPIli8mS0zweI+mmfB<&WoIQfLhxXDBX&MzG$G`T!RDeL(y&Tn=c4_NPDwE4 z4RxkyUus-XiVXf!ZyC-xAtxo=WgYrl^5;t^S8sux$cbM@baD)z&PSwDbLf}cipV9C z>|9>nGFwF^GgZ#b>RUh6RAcSbLn|D5{%fyZtLhqha?o{ zQ^@rdkbK6LFDv2`;FYO6TUogWX?I1*&@(HsB(14?NJc5XhQ)ttl5lsRI4g}MKv`jc z)?L-3(s@r*M_?H_lfZtodTw2!Oi}83dhFC=#VfDoi0^(RbdS>mE5Qy zmqU5yiP=nG_LAeXQS{h#7Ki&P4Y{Whpj=@eVz_Y z!(sjAk47vz%B94t>SP2cdMpXFv(BQLySQ)xw|_eveDZoPqH159$2YoZESZ3RVls*N)CTYBuRJZh}JhU zdWTvGd93f?&N4@^Fn5o*7u}V!U%yEu8n*rIyE*M?F?Y=Ou(sVRoUbWZgp1Ot2Xn1@ zx}DY93*t$?JBrQbpt+b;GBY*vLw#7s90;RWxn0{=@Xk_Y=vZ;`^7G3o)}&SBCWki< zK6OX#Ze=nBKJq9oF1l86czHAF-S2@td##L`?%Ok>8a|}D@|cU!bFIy9^$sTUqoa95 zCg0Hr;vYTW!OppcP7+e`Nm2E(>;1)+h+XY4(;}uay=s?C!w{Ds%L@1FS{KFNu1XNhGWaQeA+O&I8$(;vMuq7K)R4D&vp$y%?NqAx%Myz zF_KpIB4j7%a9~l9KYMoBV>ea8pH`2yIM;SQ=67n!vvgi!R6Sf51xsB;=spLA-x|ipLD@rvjl*+; zb1q&Eon0c5xX8sz=)pKw>Med2hHMM(#dy0S%Z|#H?I{X8$Bya~x%1*~<i3zFIP1ID?PrI}YG>zT@4k+m7k|hjAdW6Az0M+(?S5-Su%gJi z|NZ33x^BE}SIME19`37GueyqUZ;jR0?Ot9j^%ksc>9$}F159ik7VI9g(0XJJ@7@il)nZa_eQ|CM~n#_mZW_ryCx@SoyC zx;^Ww*WDRC>RZj7Zufp2NwR6$anVdQ=c=`oNSVN|5-GOrO?BXs+N*EhJ=hPn{ZcI$82MJs<+>OzuejKz~gBh51lS;@uJK9I%6c6lKJps(@ zxtXE-a=-QGjy=Y93`LeRBdb9@{d~ExyVvYibDn+t_)(UAMBH<^ZbFPHv}18VJG#0p zf4eNO2Qmu&jM7zP8k(+_-C)k_EWCx8e!oD#bNQu*&cd$K($-#<6=cp14y%=7`lmK# zre^f!?~V1*BE{7%9p)ePR9`veiJ9-(ZQOe~YHWv_!_ymMxTuA?;9KbQnCUK!Z zhIsGU9{m9Y*9)@jj{*SaeN%}0ze0KcWzcAZ{PNJ`da2_?)T&5z3a-# z%C4@iYHA&ciHWp`t0xE73-1x%7VQEt(%WA z@{O&n5)Z%mLQ;fC-+&B@>Dk?l(Ts9u#k*}<*w70jzBeS!!i;9V{3FkOy zeE*dk`*4l3DQ;1BoZ-2g@wF4Jgm&jcVuOs2LxP@+}ho> 
znvDJBmtT_owzk}+ADL@-aZ*yozIpSevQiXtaUR*gb?D|#WNT{+YZ>1C?T!_Wqk{uq z3jE~c>>ME%7#OG=FB%;kjWTa~&(`wsqoik}k+k#^xs-y!!mvoR0PH^el2@_e)2Aet z`To;{H0mgINnCStb1p6}N5@ro0QlW)8JUp^SLN~o@7;}wi3v$;Cd2Z<-Ra)FOF~Y5 zxYEQ$E$me4IN37Z6vZD`Y}Lcb#58%*{9J=c0c?jxwvNU-m9eq0s|*YlQ6BE@?w+3V z^-@w&yBqUi!3iP{r)iM!&ap8Y*fic7k6@27v9hu{gz(1d zYGHA4HN+x3LsC*wLPCOE3U=++=p~;;a_BaWuP~KtyJ^-=Zse%5I?qKS4wgI9RX>f4 zNN{p4KzN#&nTd;wV`BJ4ytW?N+vgK~Y-?+)uV^$3D1uIS=uoLTB9@gnh?K2Lt@F-&c(H+Zx1Q>tEoJSV5D*Z+-n6HUz<*FvQDtc59t;XN zvwzPoF7{bTo={U!jT?m19GJB+8Z*TA6;i03`uXZ{BL4{04XNK&1 z_s}4nmR45el$E0plJLZ7N^vv&=*+079%F5~$CBsC$)n=qbA~)uCooo>>E~%_xiCBK zOW$tl=)}Z-h>ec6Fe8-p* zi9{lyR01H4YiMk2Ps8(LBO@h+=tbCde+x)WO+}$lF$t^@8E@a-SWP=v`u>S&6log^ z)o;fMMwdD38*PE%kw~c@?(RQcHJEIKh&sPzu3eUCoEP8N)MpGsbmryw_X+reInc*$GBq0x}K zuoBd-qlC+qS1}g|mx=!P7qWoBWNJ^3g<4x%w|8`)Df=s2bK{-YjjXNT#Kmz?_hjp8 zy;FH3ffN@L)6do{vF=02B+A5oAk|=vc>Y`#B?yr)6g5=oUI?{(eYU5@XTLlmLIZ8t z#lyoBKVUX};C><{$EqjCbz#umQk5lGVtacVQUJDAkcX%Jg+w{S)vIv%VPBuc;i}om zNpn-v@r4Bg6_vLlB?kFxRH<_Q%cmi3)7g`?cr8*s7EbL_3qF=q)j6EnzskhKL`+PK zk+6rIHZ?JUpXJ3Y8P`RlK9DXBRhSzY8ChCp2z3-$s9{KQS3+cTauT;u>a0+E(ORc2 zQc{YDh?LlkR72~~*4Ey!o5rs!>a<)hM3kq996+^skb5!{krDun&8#+BC%gOLNz#Vl zz*9n6%iL@=)(D&aqWpq__OuSj?+Cf^>FMcj-&9y*F{GrVadX)QylMB_^hi(9G1tw70J`NAvd2 zs^CmbLwkd21bY$^6vQtmxVgTLX=o701fynao=#>PfYPOez{`t|E4 z_>=+GC09jbXvy;56EaWCcrJ&s~dj#dHOKTe&ksHI^V(#m*t(Quj zLL{&cc-e8?Z6Y!~+5(x%D(o`%IzFCresk}~C4Nh6-KNg$vYd|Xh|%KF)g2Iap1G@| z1Nkvj#||pq~^yXp>y+LnS6&frX)@%#^J=5%DVc zW~x&?tp&M_prBw1LR(L7@7o7I`6y0{MbnAN$;lr-?nA)Caz;l_kBvD&0h!i4g+)5r zg{fs_?y9P7kYtHTNf1>`UF}}%hOtGm#ZK0imi^VQS8ftWOus=m~p1qcz6mDZ()D! zi%44Z`H7B!94IqvR};tmiAzOE{|P6$-acPt)03?s^;AWe2PtgOMu(Gog_pN5fZzzN z8weB9it}aNH`AFh)@9`8!ge?K`d=U}Lkk1HPqOu+6FB;Q=SB&!uLvikUNmGl2#!Ud zycYQN%bhHG^S(QB$tx)A9qca3UwZGTO&L3XJ(#u=f0w%Ysgfl zR%+JPUVVfi9$mNXp4srmqFfdIx2|O4mu=>I?q_Q2ICupQ=~p1YnCewYsa08e^Nel} zxS%=M6&@4y^|_-hl>k>8d`VS|9UL6YF|0XQs92n-UWqdFY4c*sgT2wntoc$R1CQ)m zKjn7FYR@T43)Kq1H*!IDq#te7l(<>PZ&|MH3VhFMwm#YXuWs9bk2Ytzvx zRLJZI)t*hf72P0e9qAx=hLgYj*4x(y#S}UOfUu;*Pb{~nMLc3-E>Y?Y9exzlmT6&p zp{>~-z{qI2et1OIbJY}h(0*xY8Lo6+{ql-FLC53A=jT9>i=(Gv&lCz*cAE{v$6+W1 z1Ox?Xxp%`XK1uhiXHwg*YC{vM@h2^=_DKVeZkrY4Vvv)4 z#=&JK5vNr`WrNGa1xVzI_UI12#&(7yd6G2yE_;bewsI4WTwGLCR836{(G>v!&1}x` zn&)hqzm?b-)_1o^zI$nIW|koAY&<73Vq!iipVAb0(s=<L=&szx_aHFcQnkYL9rkn)J2rBM)~6kd)opj+oaWNf9dwlEpb_{@Edk48^W;Mxy6)^i99*;QMUwfc)nn9cTsuYvtx>Ydd7Chqf!cw*3of%ckyZS>)|w8vCU`C zrr!QeS3as;Y^~iC$${LNPCJEv>2-9pVw|w^+Eg1LGygKcfxf`YCPG9hSD=)B$8x&!_F8R{8O!=MJ)_7zCA zrtvvWG*4h+2Y~A%3CPOI!XxKwjuYAZ{^==V`YXUwe9Fk^=)OX8F+H8&6$I z-N%wfMz-GMXa(OvOYnGImF5&g)bcKb(3XHhMy3#)L;38m*7d!SBDGmMakBU6XPTO7r`&A+^yv}?|) z__1|H-@J8taeDf}{re`?)?+i467KD~pPrWRZ*h_gx3th5n)hqZuHwqk542TfV2~r< z1QOuy_>ArodU{$#D=@}wZ&VPTKKH=b&W=OtON5*;E$w7&;OP*mJ_FWl!x5Np zw!{zR+K1^dGD=g6bZ2W928eqDxYNir80zkJfwYH|=I7?N=qpGG3~VAd^lD3zAq2!# zrU%RMV|rT1d8X5K*j*+An)r1sEiFKDfDK@@kBvFlK719;YrX+}4^aEr#~-0Ic62y? 
z^gm-)wH_({p*dbmW#G)$Ia9&60o^w`d2U|{-M{(@u{KqM?m z4)ffY0}$7y&n?8yFgB3;n207P$B!wb>hRm|dYqe?3p-8Elg`LK4!hcht@$+k*j%`18zR4Ht-cp6uJe+6Ue@4jcV}pf#CPj4Q8&D(v^@;BI`eLoJQP zY~rMn98b1ZN2;lQ&En*C+_sL9jZsIr(a=^8Gsa%69ewL91dhZC*tF{rUAoi{j20tSn|+I5 z8xZ62fr=C_=2>hzq~kaNB-QFO9^>4koAm6er+l@(<-5;;H6WABm!g>iZJb;4`+eVs(2A0PZG5BDbJlw9J` z=(1P&1{G;($;qE>lc{F`klgOr)4k<2nyv>#?a9THn4wCCamVWW zD2#7_T#96hhgY?^EQ+%(rYmdt+aSLqmlm=4({{zHAWf~}6(J8cIoVEN621hGvQ-}* z?1pL@7ML`2qSUIYs(`)8qCsh7M(Sj9R1dk<%vRyDe7`Y;;?7 z&l#d;qGU!sv-8Xu3~C)DylfhM!j(t;6JI{%3>+Q4J=3j|TK> z;>Qfrq^mq^C7*rvM;JU!KTk=i3+i83gU9*|DLMJ%5zWfxe@98lr$}Wv(n%YT=8w@) z=)AexGgHrrna-X)`ywa^*6AXUI$R|DJYqlFnL0EtaM;k)wCdio6vrYNLQ56R_qh+{@wBJk$t_zk zUM|9&Yc`+XcEd%ZnB6dG+kML|`NPl&H+%1$wdLjIwmm*>?r0v<0HQ1K#nb-2CjQ_c zcy;o$woXiS-)L*Z11YImCS?O+$bZ^je<<5+F@oGALNN)JOBh@x2NY~6Z zC%om03D5L^`BMUYue3wfA=1Z0v$qI+R%91vS~R+|DH%v%DF^!Y^TjuN4DC=WXTwwL z7Wf3c?>tyyTbJHL=o$l+CpcUeq zJl)NMgXC{(YrrG3u5w$3;$r|4|EH(hCAKM9I%VdyT1a*UN*)tTDpH1#$(%F;5AW>R zunW93yBm4ogzbv1wJC-R2>?#Bby-fHv_zL*vq zYq_2l0I>)g+pKEHHULCGwde5g!sM`H9Rxcm3CUm$@^HYe2Gg|bxYNs&;8ozcxlmLz za%c!xS`C3Hwj1GO2z9D(U!4FtCC8vDCM--17~ZHTSCD4xs<)**?OkDhCplX?-H|4H z726+ILct6rp2^P3vEC2{yWvrE@4MT@lO{d$+X-KDNO+K}nKHlap{FTK)-EFX9Ho7u zz7Cc-ia5{kd8|!AAJuZg+5p_pqC&jVg7ceP6*9D?JI#dF zR8IZ!%6lRywg7ct=<%71b7d7KdfWbwFCW((otT}ic+i#@!KPJSR;J;^jtt$`k~+Oo zL3iuPIa=|`#H6D)!$IDZCZ+bJ1NI3}&A{86enSiJ!Kq%P5cjW{EZ3Yc-TF zD?9r%0aYomum}6QQRX!uHHZNpn{Qm-Sg5woLUNYu1a^gSe0dNolMEJ)&dxBOS>Cwu z9$H*WyjbRF<4wO)Cr_Ui5f&yRBdh!TIZFr4dgY2Fs}8i}j#R~(gPmC{`LC_bz{vQ- z-ydMRjG8M<0KfnKJ8)ncnOz_U5Hl)ljYk>!6M+nOmVh8KM>VrcF@~>gLDjWT_{S?A zO)QgyMssj-D#^;8AtW>`Gz(ggIZr{6Ud7Q3q-Kg=dV2cO(!;6*=A&nO&#UpnUG@^- zNBhw_IyyQ$GWNWpBCi>Km;|Un5Jah^0iU_iz?48jLed;7sO(wK5UpTlXD0!(350pm z_zJwxq6c)RsTBFBQ-m~?u;`VQm760-Lsp$iH+w%B0h|7{PM?IXtOYkvp-3qxunHBx zOi#eCOiFL2V4{K4aFTcmOW*2w74CwC#bOxw>dtYHrKIRhGH7c@W44Pz5^|f2Cv)|x zGLyRB^bM?840d+%UA}x-LIT=*o}H(cZhp;uOE^h| zIg(xb3g*Z1%E~3~`?sq+)}coD6G0rYv9rH{;hE@)uy8r7Gfc&8tC2@{3yPrztH#~C zcM%ALwe=jV#oFAlo$;+ggcWDQr=)91)zdJ1E87`$O^AHE~%O?90+FgpFgodNnyMt-iL_ zX1G!a2EtaQqkzI`yi+z2nwV&d_OWq63C2c5kkZgpJ53{~d%liB%fRb*w*=bOSM)L-n+sz1-uuSzTlEhNxB{qz@&?5}=*0`K z4?f3lWN>?Y*i%G8;t=*EkcBF0a0+@T{qswb+K(OMUIrB;`?#&zQqk#bZ)3f>ZzDBnm0!puwlhd6$ z&o)Y0opAB+VtGs(3rw4UyaK8L;=|O)$bNsv*?XqgqNA>>%Xy+Xy254NKZRA zYk+duaWO#9Ij%2xX=($%8pfoHwq)?X?z876>lq-rfg<{Ga#LO>@7{;P>P}6UjfEjf zT3QLSW{Ln|HG9hYh5^O)VJxT;UaQW4e$?L`(HjWud8 z^yA?L!s5kUxBm!&H%vbuB!OHzPJ7Jf-l)wmDN8=P+=@Kly?%WQWn|0rgR3sdgE5gGya+QWZ zZM+8Ol^3F}=DxjibOjuh?U(;|3-6!i4&Z1}%k-z#NG|Q%d@yzTlzJy<44aQQaNHW8 zU!J2u9=#hw;!!`lI!=G;x9{_#bMvFtH>7MJ1BXDbf^K}y)@Ws{0iR!#N00Zh9Iq)s1Xt9W z=T~WI&|7T2zWs3Q1i7174m_8z9K7z}W7}~C?wM+USr^pjKU)o8fE49-DtVDu0S^bK z(wcR=@~TZvP7Wk5J}&Ov{QI>)#w9rSLGzfKn@b*jFXK-HS%xKfi0-#1%L2iJKW2dN_}>IJp2obuec zbDhIOmXL+*H`UeE-P}qKem}m|@N&5XpohMUuYhR()ZZVwB9NfKMkpI}yt{RJ`de?l z@lyh759^xb+7Mj2RON)>G-cZTiiLra#ZvoGY`oLZ*qWPf4tp+w>TU;Mv|So&5;=SB z+}_539rV8M-@bi?!U)R?SHBD;Q6{#6_PpddN&yAVsKTG|=jI*`#kjH2x+Tqf&r6?1 z1UWvVPmP{v9!^yr1=0A#i4(x60jcSpe@*@}P#^%Y3e8(1eLzE$zJ2=z1^4~B7Z)XQ z!6E}%7a3V(csL0qr6*u@2fTAfYyOrU`woAWdzGFT9x+E*u-U-igcC+5>2dAeaPQ;&1%;@ds$Bi0MZs@c_F7yXWP=o`-; z4A;!fQPZ$xn5>3^>ZA#>+R;~K`55W3n-vF#HRwXCe7XJT6C<(tVeh)U02~9Bzg`0? 
zL@oaz=1GB=cfo4#_%i>?L5Dx9Qair6I_KlakC4q~-{~mBO3~UB@CvmuhsQq4i zI9Bf7-gc@)MqH0d_WJC5xzhVhmG>L`^mH58S0Mag_2p&RIJFDLz-__RyX%NgExZO{ z4v39)HrTi3+CN-Jqnh6qd|!Ybbjlo3VqJcpX7kNy!mcElxA6vrH!&wbEWpRy6 zOgaH`FV-CTG={T~QsW&x`g^uy2$rkR=YKs-Fo?F&Iy~5C6ONR-1tswI@o5T*=Jdp0 z-Bkav1UegvntXZrj}TgTabiNixb6kW<)E3EnwtX;>Ht+SMh9$9VAHxFnXaAzP$`H; ztiP|X4B#>RAeZ8>y)q6$J}?r`!T$5&MQ10#^yg`5=0LcB@e3fPBM5tM-@XN6#(Q(f z6>he*%&;*!kYfS@;8M@guNbaD8aCqdgKYuKAi~1^-)K^J zN9%%M5bXqzC%!!v_UO?gc#)XvX8J!z={sya?y{PiW{%?Xqm#2Mz1wVBi!X-4rjnmzR7XaO@Q(~(R zm0U}!2cRMOv?}g-5)!9{!7}KP@)Gp$b>)gQlzij5bKW#0N2~Aqp^-mo?7OPXkI(VS z@ryLQw`Dl~sO}>$-xYGIWOl`jj*XdEc)tn9SafA{z2-L7(+4IrMS+o6QV*ty1~PTl z2p64tKwourjNII+j*MUtaP)#g-YUME1yvhrF~D&r2M1^@VM$5GV6OTgCU;~atZYe; z$n|dj@Zo02Q>^j=qM9V!6A7tv=DJm(d{rMnW8i{e%P$Rrd_x>AHx6(!ghnhTCgu_l zHc>TGGZGPU5#SjD+Lwtb4I;O+qGESv2Z}5|*y>JthBugih49TaqmOvf$62I;i14gQ zhI#-KMD=Qm2v_KxP{#G!Qc^Y})nfeo8o1zI%2Lh-NCATaXd?QKb%TR?yN_PHI7diG z2(Cns73%Bi%FD|kZ-%P92fooTK+EZ`pm+E1H~>DXUzh(UQh3$4LtXOY$B(J0DTwY# zpBe>P=$8;&(*O~`s|Kt~Tv(Vf*o>5G!qjg1QO{It(>hDD?y| zgTP<})D2{pzMh`X!L~Wd38HSYH38fzB?uJ2LGnRz{gFt!rTdN#@5)Xhuh$!Syh<$VYI)Nl{Qq-N}+lFJsXffe~uSe z8^`e9d?2~cm^S6_2OSe3IR9VH>tQLKAX@IXX~6tIJG%BZQ9weXhTQtZdwHdQczbCE z8+nxaB{Kh#kU}F;bPx!m!^1v+e*l2529})q#sJ0;Znq;s3n%q0V}4UegcLi_6R$J&?jj0`upF1b)E0;0@g4f__(osi%@~5ya2Ta z@EO*H64Cz(jDLCd$MMIEjBp4VFdlPohuV5eQj&n`It+i0Aj9f8-Da~(piEO0#q;OdXiE(+go_)s#)*_bH3GX)ZEYtf{+-U$b0Absjl4x`XMW;dBUr|zO`pOe^o1IxJ%WlWzJ@Y&(_lN z5)skMgairmmN#hz4?&KE4hnEfz;4(J_#NOd7YEAWJWO^}R8+7?-AOVb(28LIo60HOJx});+Siza&`(;^cSSv?>?&c1 zB{9>6*NWHXdq!c)>1g?Ypdc)B0vCyQz6r+h<ylr!SMwG07iAw%oER!eS(OQ%!xz#c{qM+Y9x*Gebij*T}*G93UV! zq_cn|&_yM3#6($Wev*YXil74s!F&bj>mc63I=x>zqnDeN!oErMcn*7eH&B?;XW(AnCbq5jqLGsouMMcHkwdvJvT|2NH18eNP z(!c~xRevII-xPxQ4pT7jQ6S!hP>W2=&vTFhj%(=bbb9Z5B2z0D4t&6P30y^Eb91;H z@XV-abZ3e@xrJ^Dtrj~ z3|ZejVujEpB|wQ|uvE^E>=SNh{*g&q$+llzU)KlsEBMrh1_v9{>Y;03odsSy9t~j6 znOK|dSgc%Ck~)3NPP0mnH!+;~sUKkNR?5*lDT&7J=r0GEUgaQneSx{DDUIuBG2Vt; z$hG%$zpJZD2&Er%zY(uh44?y6;Wt{pL?4fh2KEsS2B89f^pJRJ_rv=nXHUkC9$jLo z00oVU@1OLLGy?$$69CTO=prL02Va5z)2BcJORMz*h6TV|T2=;DdN|4f=+Jt&QoCa3 zC5Rye)Rv~E5ny4yaG?MqYI+YekVk%gKmkBZ8Q;GT4V3ge9D?}Y8x+`!%$i@TrYTW_ zy%d@nAUUNNKH1jYA!w0ks~+hPnrtEB^XKiyzPthkZuXfBZaz43Vd9|kmrDWo)eW>W zxbLC1!4Ijs+X^v!EZ48!#U6eFjl-Qi87ydwfWMxke8XG;T#=iH$5}ea zCIH&R;CJ=ETjS#mYiHS$lig}_mJYy0J47LX@+VK8oTUS~&l5_R1coyWgawk-<)jwBsv}2s;{o%M1Sae=qkZr$^2YYL&rS-STG9eZE3giOd29}T{ zAU@v;asFMqcPJxD1bqsY%--1<4vw^&w}A+qF}WWv?99r^nFAxqyLay(R)Kzz*jw^b zUmJAe6HJf0zT7L}A>r*e?G;q=g@ix;=AocKe1dY4Sfr11K!IxF-YH=rOp zJ3E10W~on3!-^|2T}>@5wqIV-0X2w~oj$lVPHi7$q;JjZpOYcD_!L2fgWTXC>Q!J$dtapVGq zgadyw<#ZIn-olo1EwkZxn6h|&Vc`Jub-@0X=H`MAzukzCGo`RV!yCcU z?`~~v?d(WPNnuG7U}Ryjzbxn9grtpy>JHY>e47CQFn>b9(b3g~ROP6510&5$clH2K zJfMf3`1O>i&t7kPJHp5Lpg_<%ahv1EySKOUW*iG%VG4>EkL?c!zdYzH?y%F4_mJ^$R0CoW zkXCz?L5VZ2tbg}z?;9bA7Mb4=^Oa5GvDQx!O zNT#_*@@YNFk8zZ;D!ym(r>ElWw~JzR9DICyt{nS&t9^qS?6yD!8N1aZJ+yN4+*dJz zc=T7^-e7U!DO)%ex1?1|4|qIY{G7f6&VX_3Xyq?&{_)W`WPdyI@#PODSo?3EmUH=! z585&L<3n}W|MmeoFHM~|Q4PFh4=h4Pc0KooTcv8(fYMqm{S8Oz&0inn@hn$FeiI_I zs|-&vDZfd-c8yW#O?umd?ksIbV7Dr{@Nb+?{@O5Bet2|q6jI-JX16Iv;UE>(w&=Y* z9tF8`vf~H%n*0tY3tPpT9O9v#{C4UT!6m-*dy>aEikYMp%Lzlr0RpcIq8q9_$qh%? 
zZ|f}uI9BJf`g&h^A`#n>CtKnegY&On68GAI0}W?Ijz4TDfIO+bl$085N5E&<0!0T7 zg9>Q9AMu|gA2T1!1o2I~q3yu=#z2z?TSLGEv3r*#8iZscndM&q_J3;=I$B?n%E_(j ztG@!r1r-Qv8Z*$>6TDVrp(I5XViF{3;Qrtg8O&K&Pd&^qfLGR}EYTi78iPW@3T!z< z+ONO-^5on_@MycQjLD9tRvkrR#-VOn2fs~!r%Dlw{y=?R6hC$P^z7l%=H}+g$_2^F z^={o;PYE~-tG(cCGBi>cUNT4PL*#P`K9)1HYZ~#xd1~Dhxd=G!JUvMTJPv>+m@|k$ zw;H`Ku{#IoHU=o(;ZL7Fb$4Tv1@@u8#4e$_2pqz2Ay*d{$Ui(n!Vv%!;L@t^-y+zC|`xu16xE-npup1O*z&!)Agd9H@71mOKU`*h%Xa~CuB}&!^%K-Ubkq!>do0uTWbo6B&^YE@Sp%Q05TshYzFkT z2k3r1JO6})^OBW7eZpyK7|~ju1@fivdT?_ z!o&{nD9^k#o?RY@!gwIwfzJSPHZLy^`baXD_j;EWoZ|)W(G|MSa1NVIGwW`WG=8+S zqYV^B;2G24B>RCe_=13DfFsIcz`jt;vStOY+;^;s5t6ook1|1i(!X=(4nz__W-K!S zq$%{}*w|QQK*XSe;&Ovs8^-uqVp;dWV>b7#3W*-)bdVrf?1KjA@!;h)D7KFNC<^R} z_jrVs4;Vn9t;qnb0H=C(7i)S(!cB~g)j%Hr+JM3gj%d{=l4RU`e3le@P?=&Yb#iKo zSxw7GpM)$ns;ik>Cr-(HO9bpnhg6wxJ1`x=ED!H5`3ztvgo~h|z#oacxr!D0U>EYi z3j}7PaUmFSyWuzvoOXs{YsD7>fl&e>>Di@2-Qx3bZe%@j&T+Us$jOQwk{0bhjlX zKf-piYGyrnU8asgFcKKTECi<}(g<6@7z41K$C zIULx$W(qUPcES0h47l;mjnh#UFaVssfr0MXUSKO$5rF|7cCWv-HpS@E6G$wekwI47 zX@_0{8Q=uw01&7ElmVUYEiN4kl%D$!Xt#QLdH^J``j`{&->a<>aPqGj4$sr{NV(^q z&AdoKaThiPD_EA7gTRvsC2adb_Q8i^-#KuS6UdPYr|Aoztpl6Is*ypV z)9hRXm0@#Zg9chU%p92J<|3VW*wS-^c&$y+M3Qi*xf)hrCu)3}b{7h*ZA1CdlU{ah zev?sVqD4lG!ictEFp9YLP-KZuYrYm06-`x$hN|0O`O8V=cVR39`$y*!NlreQu3;u{-?QKV3o0GgN%R7gMkOH*fqH!la%#g%kdF`^0J-m7x|NhMX1WNs<7n6rp`t_?y$L(KmB+J6y9RVlbNADi)oBj=BW;guAMK@F@-`Y1{ zd;DL$@gzrC#dvPTYdCK^8@?W5H|@p;mC^&m#R_(S_q22;Zgl^B;J^48_r8C^$N%)1 z|0NdvH!sF&%;x{(Netr!9HPd!b3gO?8~{|_G)vCDh}zqhiex`MX#v8ggN&cB{8w-M zVw}9?$_ooP^_-&gu&XqqG0j8<(C8jS)4DxrcKD_17Aa!sDWCi58ZS z{KYp5X2ri0U5cL)FV4R?0sfO`{=bite_l+zP7@a&pI`AR&gF2CzqU^^r7pb!w!r6| zG3-B;1QKV0OZDf}{tGuA*;ba!*!r500tXyje`%Dc7y59HtR>_n&Sk5!e^)d9Hca36 zZ#?sV+BpAgA`$tdSh0xs3=Z!2Uy~pzT8%$yuJQgAG$Y=zXAnm!&aaO96Ul$|#{Ylh zfh~Lg19{?`2p`t<5va$!qthb3MguMh#9Rj5eLab1;n#u>98=N5B z!@-u8+FAy18sWYQ4P611=_A1r6JqcigIxI62Yiyayu7?Hqyaxp#Z?Lc$~*&nG4P6$ zGcy^7FpfO1v56aM2ErtosSYM(ElXwuCuaTr^S}QHC-^w000&V}MuB_oQkIpM2YH5d zUja@~Kx6<7WGxOhg&iCDW*Zuz2b=EoSw9Mo+^6KT%6XX-L0qJ`GBzHUq$!{|xGqNdWCp zWdzj9>rC_Oc_rOi8v{RYcO%o~_cejA=|f?0#5DFVDD(f`YtvKe#y`pu(Mmpf^3Htf z%#*R6)3-7c1YniO;5Jy-H2p<*#`z&>k~|F9{{4VF@B~Ywwip+jTj%~+>XIT!H00qQ100vkJva&hABG->znVTyIh7u413bcypYFs)ncS+h& zh1P6cUNAvwfN#Mgf<7&~-(uHGDV8G@h9Kl2(pkN4eSLv2H4kskS0GCMz0mgjx>mJu z3T9m{YNdMLY-Md?DN55K{&UZlFWtcQMF035L>^)lmfU#(&$+Xwr|TnYeKtIwU&Kx) zwFg4CZv6s=6(9#Jv6%>$*(gAFKqCUgbu(xn&-XIIin<1n5)Zkev9U3XsNriL0xvAu zNnHGOX{p_r&!RBG5YI_?^@n6P0%8Z?(e!p?0HcQn@FM^bPo-ASTEk~BfcX&I?Fc5R z`bsY`@GnhA4pNHP+CPq9=vVA(>$y$hx~F^8phVab=6AFzusuGIbL|=*gT?0gW_Xqu z-Pu7t2v8${f?(Qq2Lbhrj*d6w{@U^2Ea=Mm_Gm<+*8X2n%-Y=DlE7NHZu=O6w3`S^s2Cr#!FbaX)cY`ISP(m% zmoPH^00TUG;1y>W^#EsRxY_{;fD-sZ+5+VdG^v_Zg&l`Vj&d<< zXuwuiPGv-FhC;5n}Y}^pn$AeimX_Qu<6$pu%n;N zF$n44i#2c`g;|Yul$7pX!ox$O_uJ21VDVfoc-|7K*HIkMy^PPM7f2vpyD6<&_B_CL zR<42fa|prIXoyVm)$^UHJ->b>(0G2TGFL6C@bfHo-F*cQ4_S16iYj)!DL@{dx=CSs zM5}1cn6@cZVjf;z-%3=3Nj59xur12Xv7EZFi~-bKbNd=BPyXBvxJEz`H0erY@-gep z>;#Pc1dZOFrImy6X=qT;XpNK2ix)4dY-dkRzl@^bI`O;D0reWDHOK=oz(kM8;Ev8M zE|L)taJV0AzyLeDe+j4w8=>IA5?F8@_Iy?yK7f0`cMKl6zD`!{5=k%H5*VJanBE7f zZ3{X|byzLw#>O=49i87_sGn)x zeKC93*l6^Y0H(5F&EKcgW47W$VA?e%_vi;7D%*)|L3&xkPP0(hc{SG&l`NL!Vzu1s z2E1_HIRS^(>Nsw`0)3DBz`wiHllC1z#Jw0cDN+@+mQa0Sz^{8oo10Z^6@wT6(zpeJ zl@l1I6=@Yu0Wk|uU`GJx0Yfm3H3c&?*gThkb*n62sGc_fY#|?lQ&eivzF7mM0^>Ol zUhvpV)WTd7KDR;wUjb}&W5HK(9=tRGC!O9!0H|O>s6ABd4~!{*7vQ+m*3rT4!ovpw z&|=Qt+{m4K)KpX=0Hwod?PEp;c=MD2cqM)b$a-kJ#d#$~#s07}YHU&yS82*ex(3n{ zrx%pO(6X0$iW(A(zklXQ4bgSGZ7H=qr~a0}ORqXl7~5;wN=1ap6m0dPi|G zaCj~37Wohi3`DPEv$6=Ly1#B8MGk8s&|522>nODU8BcJXTLvy8Z2MGvmRk1qJ3tlR 
zdL9SUd=|~ZG1%euKiHm*?wr=)Na0@(>Nn##I*HVW3c%g{M4r7aa{tab9PfEGd;8B( zQT;IfgxzdiB}49pH3xuzM<0VRr_A2p-zJEG;D|gBVqsZVI= zDqvMryVZ8@$fcvRGa?+gxosIg5Et=1hVr#TA|ut};wX3>mGk7Y`|k)kTE3X;D5$J# zh?=kPbfkGA{;dI#Xs~}zU5=@SoG9YOLicl8$-zS0UZn;HodlP;ty%eH8z-)X?$lQV zhKoN_L|W{t>=)NUD3py_9A4P|{E=^@EDb7{;9tLfy^D{})yQj)y5|Jk`daoUUL-$9 zQ43igGcy6EJ%F>*1p$7w-lv0AAa4O*1q20Y{T8rKDa6+>O#x9JFwitFGaxTKfvpKf zoxP}MR#sHh)X7FnTh`MR$w`8|mi@Ny94cdL3>Ii|a?H8pIV#?bM6^d1yX;+WoH?kx zTkfbK?w_6QOe^`uHwqp<^6OexLiFpmTr%VYf~{C!ar-iLb%iD;uWA&Dx^GS;Mcgg= z<86|m@I^{@b5#`QixxboYa?aLP#}Qb5hwwT?o~uJfEz(m>w5+Z?;jkz)_cURK& zSj3J#!G!JrOuRMQP9X{&rb8nmnGLGYdl1tcIaNfl>S$!B4F7S9g5vSpwcH1FaE!vx z*EbAS6gcAr`ulf#)xwMx4w0)Fu(7cLr2+=o#&YkSz?KV~h;E3x@6KZ2UgF|zH;3e1 zqCI;~A|k?pkg)j78O^N;xz6|-d7A7%q4v79u-I!{z_(Vz+Tug7UN;EHV0ZSLx-w7F z@io0H*h)cr58pgkAQT(T5Vdq&9M655m)2)8nR_x{1XLxR7p~^cc{p65f@=Mp3Q&@i zlu#hnfT9@=39a-V7X8}$jl#|KEd?;Vhd;e``7)=10NBw2NA`CT5{^h9+k>n%H7^ed zq5>|lXdzC{Ws(O_n80Zz2y^vQjtfxiK#c%GV!(I|I>=!akVwKR0IAgPaVs!v0eqls z?fjt>!f{^g3b1!YUrsf(M%-=K`6&;)*H~`T#|jD^P>Aeiz?RF##s(}1kp8;Bc-<82 zR^YOjbdp0E5Y(kdK0K-eboeLFo@En!8?Va%VPBKt^Y+@~))pb1%DTjT1{PL%R}>Ya zkMn*~S!ld6yLt*p)dfsVkAYOYX!@g2*$&svW*uB}xJ@jJ)Ot!S=@JdTsbZN>z0l$fZ7g1j}}NjE3vJ=WYv4 z0mOR*VcESd(`G8Hf@cp^drZY(3^UJ2wc>cX%skk3FnM90$+ur$9T{Anob2o`3pB2| zBbVJAvhHXyWZ~x4u2Eny-(F2D_^%PC$tEKcxKGlaK1hm`q~3;sj{)Akg#eo%)DjMwxu1&k*p@|+(?%K;k1AIHj# z+9%YD&02hn;6-0~!}MP*@dMgh|Dn117mMN+l8idK$wT(cX}3wx zm0azJ+qtK3Tosr8l=1tkLG3M+y2L!|RUW`L(5u&7ghwDl-f6yL0-QU_zFlKRVldn~ zmHj`lN%8S}jC-5!(@A{sb*!pyQt&1gVyEW{*h>@Uf2HCV6tIpz?dY1!OiEg9II-vh z?vFsToy5^1FVdpiI^k%mJqm+4Hh;Ud{yaOX#-{Wae|`X6n~LONj=cC#$O zY|yD_PCoh?+sX$8!^5rjutT-Idt@=vao9qWWzOT+5{8Yc3dD?p2-s@Waj&u%*>->E zAipivvaJiZ)WBvkG>QATCNzos8rhbGK?{Jj-iEJquX|5D|M8A*&_)uE>}2+Ed%wa4 zYqXcYo|wpES9?tCs4Gz8A>`N`yt+@*#DmmGCusas0tbuM+_9k^GRCGqcQ2k7vv?Pj z`X=qst$gJgMlJ4C6x;Xh3TSQ0s$5sUc2~G}r9dCK|4LtfQln6Jz|9(w9?wQc!N=~n zCQL!;-rJb&VQ@;&{pam`&u~2qBC(+mJ>5dp)M{x1HnvgvBy4P)kzL8XciAQe%N$+M zEib2qbSo?A8BCJ+Y*zz_?11cO8oOO2pR$<3x$~sP98*Bc$39k#*1YPHz)?_O?CPal+$_8@}44JH#Y9#H-|h-e@pS*O_C zNEmTQTpDoT64qE8wP|6jU~KFywUd<0U-uO09=lB-GoF?5xEUxlU{W1B zZTw8j%zme(B#%?4v6q#o93IxuP;WZecU0SXvE{2(DI7g-vmBxtJx9)yKeu3(RO4_u zrkR21C#{7oEjOZ=hdoXuPU{06#$ z?8c4BDtRianUU$cwX+SKJHaEL3d`D$4{BrHy}NJN)FxU-GEqk);I#HZE(&A*^5@4c z?#z85XO2qwc~I*V7Jp```s!306(whY-=nEpwv;sZbYSB1;v>JkiS`(5U#0_WS5MvVSSseZB>-&8D4vVes54U8!$0_xECLLl*+nf6hFP6*0Vl<&-8_Y=Q z{8{pG-FFfmcw}seL8Hwwzd|fmMtjNghD)>;YAV2wAB3P@k1il`RN_uIxk)G|CcUu7 zJLaRd!Sl26QOWcS1$uUHv=6n%&|RacryJHSoGq4eb6eKory*C*fH)bJhR*Oi8P}|; zMnCE6*MXR1Y`me{f(94rwiu{?Rxs%M=ne_Lay0jS%ah6waer=8>$kd^tqE5%<|S^nNZ$QyC4>;e z452AQB1r)T@KT(%cu*rmp5D%u9Rm>a{xsH?h5=T#bIXlTM-ozoE%3~l2W zUF4<_@rWPG+^vPmX>C;OxSf5UX{UYh@EB!ZxU}#TxfIK}O2Xy}3%~)h~1caU4=U)Qk!OlT_E;}@316-oXCZ;pH z(}KU0ri@h|e)7Vymn0apN57;Jmrt{5mlp48ezhoLZJz1YsazmkqY#eB$HA<))2 z>)J>yypx)quGe^>w^Vm!RR{rX-pd6YQC}whZyIo}lq#*Gc4PweY9#1-yHAj>?{|>%UJnau0@nV!u0Uarrj(*6_ z(;H>t*l<0XX`fT9QZPl-+&tvr9R58#E_;EGFB;TU0-Ah8Vp!t)&M(Ss+6Lu`%;WAJ z9VWS8(Gymq1C*R;9_}YZt&%FPc^YUw=SRy1jrxeHsRx#phM}7)qa<4v+i~fW%TJbP z+ioxq?JuwMJ^O+LlF0!!{QGG4mK3#)J8-xn%SN(qhSA`A9Pv~skd5vTSy{5IkB5XK z{EI0D6keLQWw>gu(O#M1&mSpi(yMadPrM1A1cz*_romz`EUP(eGMfV*C4crwUsQ=k zON&TrQO0=f;U|3&)5aE24~iI8q}x=}9dygVfh5JU7ed7(ht1&Nc%m+QsUJ%vyz;c> zq{=R(9H}Xv*%KzV6%-&#DpGB?%;ykFT^C#PLz4w7t2*G=lU89o^|C-uR6t;MI_$*v zs!o5^Hkh+_l$RH;^l#sob?Cr-d-J(&t#eCGVBbOhjDKckX;gt>-H*UxQC>yfGsiUMF$Q#-0{#qVw>k#{)2JDtG4x&el`#y2dP|)52CF ze{;Pz=E7gPwv0xl290i@Nhdoh*Fo89t{?|7N45oRqveVV-CxTP4GiGzP*Pl0{o;3_ zagEG1#KKsH^5V~ddzx$QdNZKQM5AY-lI!%%yFaLeZsu!GyR00Xka4?K4zNw8y=~9e 
zMyubPdu3X5uPFWFU8Qg?B{ef;UsNulP4g4jcx5QRw;XponT@$;H6pBmwPMrf0@Ch+ z6fQdWslPJCUMXtt3N4V|&=YkOa1xeS9s7`RACYk>PFe{JWWR5ZL0pBw&yx2U#la$) zy#D6SgkyC`1j+s=rWJ(&S(+0`n28*AAL6tqay~1#kQ!6dPvR1o^4f*F9IM1~z2<9v0N0V(YdaO`6%;icVa{SU&7+K?%M?Idvu&e^zoY0`# zal~E0UaKM4wSp-c=HA4i2McIytWEZA37h=>MspR_s9(N9BA*uy2Fh;B1?8=+)j=fd z!IXy?qSUh@##{80CC@E=@oo5*#vk+BnB!9kBj&wO6Vbz;&_}%HeLudp%P)!=!&Sf4 zumLEMl>FRdhMl9W-mF5CLxP3cN>8K1oQT7Qp_NLYBBFO=TO$Trvor$}KzwlG`l1la zK99HlgtuZiK2v3;|?ndH2`s4KX1&B=#IC))ns^N6L z@j$|~%U!L#%vMe9L-Fq$>DTcPkWTh~WI1-~G03qSG;7R`jD?ZpXv}a@bho9k0 zD-<`^X?nx)53BQ?O|Yl_zX&atlmF?8yvP2nqx-+Xr+-tu{|lS{H%|RO@bbNbx`vi@ z^x3dKuNp`Nj2fT_EtziJX2$vPTKsN$&k%G2=gB%!z6L>0q?G*h-E`t_LPX2=<0Y&} z;OH|;iO|r27&fn)RHfG2(Zz??@o{d2rKL3l1nrN1`d#GA-bLm5;f` z6p6!SgmtN}&7RbJmDDy@lXg?7IO7HS`yWt+b<9GoP*)?PBpXrmr>*$jj=q=&FD!;} zer9k{Me?Q3x1c7;2B&I5VhI?p-=a}t#3m;dw1lpnsmk*N{J+jkSx~R-@SV;f6@Xq< zEreGXuWdOqC#(lP75<^1J_OB@kH&Fj`Mt2$XT4t!lp#gr9a_JIrahXz^p?aoz;oUH zHe)heE9caMRWxe3Xzm(Gv6C7|YfC>qHSdRL9LBXT&cJj?AJGwds9^7LG@KE>B*e6)$3E z*DT!0iO^)jIDm7aAo)!$1H=8xL^Y;ses2kM>L~_`jov`*Io*PFwfl7iE%`P*p#C!2 z^4L@KSmxP6<;(H5$l_vl5fv9H+x*yZgKURoaEMhrD*edYF{OcVX|j)1TW>^A-ARVc`+YLY09I z1s}f_>k)h7-&tbMFEO>FWnS3^4NE0VA1diTHVW(_d%IOw_{k&MgOi87VSi<&a%c>F zv}MbeFpbYWi866ojUP!hdPA%yS7A+)BzS|t2bi=!7WM_RGBaW7elk{-FpRD@b{x^Y zCn&Uv8w^Wc|G`yqjRF-Z)y&||F_$;-d>%Q*vg$Q^ySm6267iKG&S8%i$dKl)v2VHs zY@5)EN3&6hy4ad}e6O+_*VAo_VaZ|E8E~BPS<}$KV%U2}Y$m9@1n_P&wDJQgqGN;5d=FE<2|j z%w))uG@rIsS#FP~lb4AE=d_fi9dAUXtKY}A?wTv~2&ey83+ghmdAGcNPsrtPcXrKE zS|viBqA!c9v7scNdZ_4#Nf;v92{XiXCDySA5^?M z-cPT?OzRY_&DacJn0*-as@0106{C=fakax;14ci7foHYbY+%Z6-AmsDVxPE$5r_Ig zFZYOWdvSkg%CCzv(kNQg^0amhOs_YjT%o0GkJ9Jxk~y?pk1lVg%X$AwT&lJ4iM`s1 z_nDv;=fcq7U;xfWrk>9}i4qBu$?nj3Af{(9Dz_@8y?Wucv^@tmca4{r>x_R?otp=_ zVkM0CM+uHZ>eZ zNdtZw(Lj$CkwK5m@ZY{Q&Xij@UiHquyVlR}d*WAen%jmktMd%-SfVkMc|zLL4l0o@JWb@aoCf%iU)HpT$Va<=)2q2 zzZT5&ieJAWn6mt}n;;kv3Z__zVB9i~IM3FYj`)J3IB~!LKeIhQ}^n zwWW19PIM$B&SvN);m|SUw*)8ICJR@3o}ft^sr)& zkG#tClf%P)&!&UAWfJ22vU*|R!sZ?Ya=cycw^SrJGu$xtb=e+64!#6u{VyYh5z=8> zt+sr2LAA*{;C@Dii582e7wMlZtirot=Rw-G`wC@$1$`;Xb5w3k?82hcn~wba6111= zY+yJmIYq%k*gUVGkHce@S0l6w8;W)mc9!qGHZ7%L9U>wlyW1hzP!FLBIhYm4kGO}& zA*8bcBjzc{(-w~O5WRm7(3kHW9hM8-<(h?}*YgMS{@#5HV#wWn0uV8sB{W$CCsu7V?9)VIQvsyL* z2mxS{59*;)6ZpLmBw*OtTE%Gr;^)1;0ru^I##U!7sPK>{=%b`xkd9T8MJ49nQx@<4 zyeO}jAW&hkxb*8;lIMjaHeQktW*S9>hg%$dmBsRY*9t**W8M8B{?ya;t7PiU;oE9+ ztRI9MwlL`r@;oT133XCmd-R&HU!@#c=}O8*iSBostqkOEVl5XjCt6V_!F=ploKoAt zbnWbdEvpeVlWw6UI{5*f9#?|YF$VEFmU}}zLFl6zNT^m%FZPI!tz8L9*(zz7Q~uX> zM|>MfYN5R@1az`i{ncH+8@G3fRPC8`_-Jm3vmn;QOe_as1=V6)>(X09UE^aZsUkjz z{ROi>^l0j~UJY-Ka{5$eXT(+9lV5K{RWmIK#eHPl^>X{f_{AZn?fduRE3~1MRo!{> zDzopnYVTx8_+5CpSr7g%vy?LTn$i{0djJqnEYN`Hxj_z2R@PNeOf}nZC*Pd}w7?rj zR`MY%xSt^UKI+YCwQvZi@GbX1nvMGVYZdIhPd8eO<31J(cK&QVUaZ4k#%rMgPy6D= z??cX>%f-cBSq+u2dG5gMW~9XDn_Ci>s%ZR2ujfM);|WV)mwIdY=I#$ErW%I&(TapL zTMw?0`G&#%{p6bZ@V2v0TkZKBFXIX;H2O`{KF!yfgPjuj&e^^4N ztui!v?wkZ@uGYljZVYn=vNSAEy1f}gPuN9{8*XnctA6;THA|DZW&LhhRl6<dif)5F}E-&*qcU}ex^TE(6T2qOSFB4MJCB8uMs zjskaaKtTOxjFU8u%{IgJ%+0Yj3v!m0QzK=Vo}SBOne&eh%?z)=Sq`OOil_rGlPn1h zc4iRqCQEVDu=+{R93gW%>xWw{Xqm5BDJx(z+)|K$4_IpTOe8GLh9ld_d~Koa=TciD zBV$R;#Q$8t3Z+7?^)}+0$TURCH^V5e*i4}W5enyHlvv}Ac0y+YMmBo(d01x|T@7qr z|6!;G3Y!ZbVpSH5&w8fZ^)Wk)cQR-?55&6uYKM@N{Z0srIUK=?VGj!dk5G84SH~#y z($M;iZ@+kvh}lnL6N8r5?nadu+wS`&jHlfA*eAUvUB07Xy||aFq*U;50u$04pDu~xeR>!~(e!%^^R-+`Ry%{lr^Zf{3n8H;3KTPKT#3=kvl+cXFCV%T~ z**E-8-_wVoQ;n}+%tucKv9z<^ER6PCZT*mR1L(9DdoYcU_JzDl8w^|RuMBLx ze(brBlVWKlP%lLlDWCa*%jUjMIG*mq+~`K^cxw^3F&UYn>>-&9%WTe`M++On&BxKh zmDNAg4#oD?gbOnZY;X>&yUhi!+B4GoNsk*VaOqZ5w#~+yyZ@YKcuV|yC%dvOirDo@ 
zVJe#lc~K=He=o1398rkt?ZR*#{I;GQ&(9*GF1e*$qN>yD$1*8h+NMQydn0b>0nz1^ zI>+9Gl5nhUo`SOiSq_h7{j04j)3jIALUW2jWn_VeZeF}0@1mna#H;c6ko$@B_V(q6 z!%@fQb?}{@_V^H>47M}3J}Hkae$74B6)TbtYAjSAcNR8odNQlN9TnqoPwxG~kHP|G z&h7D8x80nopCd*xEXUF1U!E9WnlF6~FO1BwQM1$C%~0!5LOETO4WlV2e|d+2?wk4I zUi?g=Q_T86#$3QPLe*FimxHNcTO!^^)YZY2%~k&v+<<@iaulz}U~N z<`!1sql8V2A3bwMAMdAEHhF`!(9q5+IB;AWFpme*yp z>}gJ}nB9L}43F-@q#3U1&zoHB5sy&g|D#=D85&VQNS=4!gc#W~oqfnN$JRTL<+?F~NaYah{?hY&# zi3wMI4Q_a4)C<^YkD|0~=~q>y6jPOcWNncbUX&dj(NC#m4Y#z+10N9FPUe_pA0a6@d@33Q7aORq>cTvTG3nv8`rED5OfNk7?vK>rLj417EGBx9c`GN_x#43MRP&O{MzZ=cgwT zNAhaYa=mWTy|Y3AVK>n37qBwyKV`oDYM-K+jBdCbkXMU&WcV>W@bDX<{G7iWUe4n- ziyv+|N2rWdE2~z!m6rPnVY)RRR#mrei{PfOJn9M630z7>Bg{JrMr2TuOcK6bDV(;e zXVX3>@iXUr`ogU>zrnZoGF_CoUKZ<#ixGW(!uTfPZnY<7I-w*##hut>8R-mh!;l{M zbatF4s(-ttfQGU(zhuEova$DUhE%Bx*64j@A`3<|E~;b@v!5eBOIEX)LP~= zHh9=TaqR2GqZp9DvuJ(tmdnskR|i{^Bw?59UI<0S&fZ>C2yv5>%S}zC1>3gcJ@#II zU<9dZYaiN`KIak<5rOzvVd$LT0RbiS(ZKcItd90R#sLpj{=7j5aUAFxRTQYnhOhoBJj4sloCt^kfx?G87i=P!Ac)EbZ$oX z&dgJF78*=V4xN%y!3P?QsbDeFGcY)|!5{=ObWMQF1Gebwz}s?MA#D{Jc~ow8?%W$_ z9v<5iLPFiB#4>QRfEA=)xzS#51cYSA7cWjjUqG&~;RM#^)y|t!!1V%J(dY7VV7C#J z9^MKkkP&K;pnAUwy%yIY4i7;6BX92J1PfX}$pKs$~1O+zpHIYWW@>uLMEurU4-OXtg zq`n@x+3!;_5GlJ}q`tg^eeI|uAAEY=I+vJrSFPO?vBwQB-R1JdtWtm1$UBYdJgu=- z>wgn0o>E%D9!du%-e3#cb6-5URBy^nysP9eUNnp-==J^QGJjzLfLM*lw`{GXTFU}q#x&` z`ltHdiILLc{kDDG;i!v>Hfu$|KWqTslKMBpO;rG zkGYD3#5qsT2JkQj4jDvvfPw;2?Kzp47F%e%)HF5MKTE-L1d1Gp=VN%h1`)&?T;R34 z`>cQOE^O_NPELc6Wx~Pn0t`D@S)ra6P4>BucV8nPLQKNLPXoKaA_sN~@SJdck???@ zKqy|Xv8myyQUV_;U_|TBzJvuzR3f`NLybOfGgOBZI^g5rR z`btZif)Y?`a^|obGz(ElNqUZszwrSH$Ch}que6!uAr5KPRcbC*p zus}BdkU;HgU@th{(vnqN461u@m8$>zxv#%JN;-YBRHl2FE|frHe2slVkX#bcyG*56 z;c?aFh2C|XXE@>1{7dcq($^)_G~6|*Q?z2Vy&KZ+Z&MuRj2y5E5I?_jr#EhQn7yXU z_=I+~%J7HeOXlI1fhG6w)si$+8_2>8VzZmxDMx22DI|&I$0?DCeimiLPkAS0{x#T% zvtk597kSIZyY-TGCgyKvhTf}KFw;SHwV|Qf_$K_PS7O5+RD;E|y|H;smOmK&f<8yk z(DfT<;nKy%PXl2Ji<{}2SC#e669sLiD^~~TZ43>w$~KKsQDjNCN2{v0GU}>kx8p+0 z_JcmC3b>Nk&g%WZ3L6-AwYN86S8@yIkzp0N%awiE2}gv4B*^0fhnai5Bc&MS=xk!C zPMBAc#_(V7q4*Ja|W+;ByB~MF`zv^F>{u_m|U!u(QX{<($p3~ zX!valPlZU`kubXw$bHwdP)n+cP<|ebM&`FI9WKywWb^Ry5{jKQ1*Uw1=Q!MaR@SSH z92efC=HvumCn&C8hg>>7gKs{MgH#zKfvxM~^Vb#=*wzh-c5=A|pN=P)K zQ0x#>)DTFfyR^IvR-vDP_^YiwoS2vhCsg+$f(|dP(VGg)bG^L0++~nvW*~W4SX?C3 zB=Ao58|qmhpM%yG@9I@xPfsl`2N`JT>nB5SnQ!x;pO}`m_Hz>xR~MJx>=11NP$r}X z_=2NOI&NdJ)O1Hj2fUFhxVYM<&v3k`hbt19mzQ_X*8oDo!O#~BwB}-)2-AdxO@pjm z!4M5p5n$o=_4Y?dJOF)7dwcug@a+XlLYYg*Yw)y+3XxMEk^gq!IlG8_#&Q{l11F%h zL^p_*j_$bgAZ(p^Q?r0Zv_7Ot{r$VXgB2+qBBI7mFOT``J0H4+UpRdA5tYTOgthq? zwK(Vy7|^QNW8tdDoGw}&GCVf2Q(U}IkT~I7#Wd2f-=D&4k+%hFLb3xjp!_MwB-0pBPhd$BZOQ_oP#4zT|!9pLmVRFvRt| z+e&CXymrw)hMOBJF-EEBy4khyeEZ|#?W(5)MHk(#)10i)pPY-pY-BCv*gaZx43?c) zP7p@bjW?G06$&=JP79Aj?QS=qf{`}nH)t@s(p9q9O=}w@vM?Wg+%elcn3202{bIrs z?_OAShJ(A`>sQtzX$5i&7j#813a(>}P`XKDr~S&qI!1QpRlzJC2rL95q$5$ zPrs2RXQy!Xjhx-~;Jxu&pif&^_~K)2w--SI1CN`!x;jKVkM z^d2)9f70tintlJ^Alyty8^xZ|g5gYu?Nl8h8L=1a9 zP|vsE%!Xn)e^Lv4&HDTM5w*@|&dH8hEc-iKSy`>ZY^|)kJP=dCe#?&ZQ}0hso-lBr zZRD_oY$%TCzMwyw=|`2Et*tHCASG3drme^&9MMp;>jpL!mi6(u6E%_2u?GbNLQ!dEvpVO?WnFZ}K+LJg{~HT>Zt3VK zUwmZWmy0}?W-`))IWLkjJ@{^rth~jswK=mViiOoQTD?x-SjNpRXn_5(p-T9{MNzaK z106FLFJ^&zRHQ`E=*HFE%CV_r2_23%&9>(Fsf+7Q>vWhz%Fexqq$@XxGafxM4>Qlf zx|iSOl_qPj_K($wlgRxjsk1#;N_kTLP&uG}blE}YGG*R1Wj8v*w4V_?PnQz3CXMU! 
z?)h~&88TAQcQgw)lbkYBPsgxO)nBye-8264=xSqynwfE@7|lk5*%YS90YS~&!1g=q z-h?L`UP`mRBq&C5TKzM1o?le!V-HVj&dkWYryTv)rZ!5Q&0Zum8Ncga7;^af^4(#S z2{Gq)9czeK(x$vLIOKZ~t*q*Km&#F==Yq$+ISox!Ariah?8)g%2wSs*H;Iw^Rp(C= ztKy;}xw-xMCg0Ad>s3Zy>90-Q#y{i}ES4Fjdv(#i{hFcWzJBKnPq%83F?6~zoqCR` z2PVw+9%Y)P=A6(Ts{HIaet?}rSwq9*u7fbB@uGX0Cq6MPvhj@HW+o}9h*SqAj7_|J zGjib4V#&Uvy2ic-4303>Wln0T{E*qP<6%UbL{E$!6jcdFzuKVX50G zFdmC@S0p_qeaDTK2_Z42>k)IOou=No=E+IOE~qzR&TU6KJv4-aOfeg;uR9(VkQ*xM z;=@Mmul~$^$7I1X?l`dS@(UIK`}RnBjxdul7*b)J=bGW~&8$yWG1cpGDk)k|DZpiY zCRN(^jb_%6|7|9e!ayKc|3oRG<9WYN7oIQTEl#{*B5viRX&=?g=e)Os`O)^of(zOe zrkPf=7BfnQx(9*^H?~b5Gk8*z;e^q~*UO95m!5=t>=MRyxpbaf_|xi>qhpL*=d7x_ z<7|M%=ID&vg>g<2$?6RA(t86<+AZDv6=a?#L8@}rXV)ruH}g)qWTR?e>V{g7V$}o Zi`NciguEcFm?r)t>9nl)n^RiO{{#Qeqr(6I literal 0 HcmV?d00001 From 7bf084adc0d3af32539c712c02b562e8c247c856 Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Mon, 27 Jan 2025 20:57:32 +1100 Subject: [PATCH 12/22] fixed format issue --- docs/docs/quickstart.mdx | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/docs/docs/quickstart.mdx b/docs/docs/quickstart.mdx index 173e23835..818594df4 100644 --- a/docs/docs/quickstart.mdx +++ b/docs/docs/quickstart.mdx @@ -17,9 +17,9 @@ Cortex.cpp is in active development. If you have any questions, please reach out ## Local Installation Cortex has a **Local Installer** with all of the required dependencies, so that once you've downloaded it, no internet connection is required during the installation process. - - [Windows](https://app.cortexcpp.com/download/latest/windows-amd64-local) - - [Mac (Universal)](https://app.cortexcpp.com/download/latest/mac-universal-local) - - [Linux](https://app.cortexcpp.com/download/latest/linux-amd64-local) +- [Windows](https://app.cortexcpp.com/download/latest/windows-amd64-local) +- [Mac (Universal)](https://app.cortexcpp.com/download/latest/mac-universal-local) +- [Linux](https://app.cortexcpp.com/download/latest/linux-amd64-local) ## Start a Cortex Server @@ -43,11 +43,11 @@ This command allows users to download a model from these Model Hubs: - [Cortex Built-in Models](https://cortex.so/models) - [Hugging Face](https://huggingface.co) (GGUF): `cortex pull ` -It displays available quantizations, recommends a default and downloads the desired quantization. +It displays available quantizations, recommends a default and downloads the desired quantization. - The following two options will show you all of the available models under those names. Cortex will first search + The following two options will show you all of the available models under those names. Cortex will first search on its own hub for models like `llama3.3`, and in huggingface for hyper specific ones like `bartowski/Meta-Llama-3.1-8B-Instruct-GGU`. ```sh cortex pull llama3.3 @@ -70,8 +70,8 @@ It displays available quantizations, recommends a default and downloads the desi ## Run a Model -This command downloads the default `gguf` model (if not available in your file system) from the [Cortex Hub](https://huggingface.co/cortexso), -starts the model, and chat with the model. +This command downloads the default `gguf` model (if not available in your file system) from the +[Cortex Hub](https://huggingface.co/cortexso), starts the model, and chat with the model. @@ -137,6 +137,7 @@ This command displays the running model and the hardware system status (RAM, Eng ## Stop a Model This command stops the running model. + ```sh @@ -153,6 +154,7 @@ This command stops the running model. 
## Stop a Cortex Server This command stops the Cortex.cpp API server at `localhost:39281` or whichever other port you used to start cortex. + ```sh From 8c7d71e6e3eec78a03e8a8dcd73750108540c90d Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Tue, 28 Jan 2025 11:11:59 +1100 Subject: [PATCH 13/22] added local files to be ignored --- .gitignore | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/.gitignore b/.gitignore index ad579aed8..8f10ea41e 100644 --- a/.gitignore +++ b/.gitignore @@ -21,4 +21,10 @@ platform/command platform/src/infrastructure/commanders/test/test_data **/vcpkg_installed engine/test.db -!docs/yarn.lock \ No newline at end of file +!docs/yarn.lock + +# Local +docs/.yarn/ +docs/.yarnrc.yml +docs/bun.lockb +docs/yarn.lock From e0a3b0e3006113baf63181cdf51e415655537024 Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Tue, 28 Jan 2025 15:17:43 +1100 Subject: [PATCH 14/22] improved wording, tables, and the explanation of how to run models --- docs/docs/capabilities/models/index.mdx | 47 +++++----- docs/docs/capabilities/models/model-yaml.mdx | 95 ++++++++++++-------- docs/docs/capabilities/models/presets.mdx | 7 +- 3 files changed, 86 insertions(+), 63 deletions(-) diff --git a/docs/docs/capabilities/models/index.mdx b/docs/docs/capabilities/models/index.mdx index 2460905de..4dc032575 100644 --- a/docs/docs/capabilities/models/index.mdx +++ b/docs/docs/capabilities/models/index.mdx @@ -4,44 +4,47 @@ description: The Model section overview --- :::warning -🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase. +🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended behavior +of Cortex, which may not yet be fully implemented in the codebase. ::: -Models in cortex.cpp are used for inference purposes (e.g., chat completion, embedding, etc.). We support two types of models: local and remote. +Models in cortex are used for inference purposes (e.g., chat completion, embedding, etc.) after they +have been downloaded locally. Currently, we support different engines including `llama.cpp` with the +GGUF model format, TensorRT-LLM for optimized inference on NVIDIA hardware, and ONNX for edge or +different model deployments. -Local models use a local inference engine to run completely offline on your hardware. Currently, we support llama.cpp with the GGUF model format, and we have plans to support TensorRT-LLM and ONNX engines in the future. +In the future, you will also be able to run remote models (like OpenAI GPT-4 and Claude 3.5 Sonnet) via +Cortex. Support for OpenAI and Anthropic engines is under development and will be available soon. -Remote models (like OpenAI GPT-4 and Claude 3.5 Sonnet) use remote engines. Support for OpenAI and Anthropic engines is under development and will be available in cortex.cpp soon. - -When Cortex.cpp is started, it automatically starts an API server, this is inspired by Docker CLI. This server manages various model endpoints. These endpoints facilitate the following: +When you run `cortex start` in the terminal, cortex automatically starts an API server. (This +functionality was inspired by the Docker CLI.) The cortex server manages various model endpoints which +can facilitate the following: - **Model Operations**: Run and stop models. -- **Model Management**: Manage your local models. 
:::info
The model in the API server is automatically loaded/unloaded by using the [`/chat/completions`](/api-reference#tag/inference/post/v1/chat/completions) endpoint.
:::
+- **Model Management**: Pull and manage your local models.
+

 ## Model Formats
-Cortex.cpp supports three model formats and each model format require specific engine to run:
+
+Cortex supports three model formats, and each model format requires a specific engine to run:
+
 - GGUF - run with `llama-cpp` engine
 - ONNX - run with `onnxruntime` engine
 - TensorRT-LLM - run with `tensorrt-llm` engine

+Within the Python Engine (currently under development), you can run models in other formats.
+
 :::info
 For details on each format, see the [Model Formats](/docs/capabilities/models/model-yaml#model-formats) page.
 :::

-## Built-in Models
-Cortex offers a range of [Built-in models](/models) that include popular open-source options.
+## Cortex Hub Models

-These models are hosted on [Cortex's HuggingFace](https://huggingface.co/cortexso) and are pre-compiled for different engines, enabling each model to have multiple branches in various formats.
+To make it easy to run state-of-the-art open source models, we quantize popular models and upload these
+versions to our own space on Hugging Face at [Cortex's HuggingFace](https://huggingface.co/cortexso).
+These models are ready to be downloaded, and you can check them out at the link above or in our [Models section](/models).

-### Built-in Model Variants
+### Model Variants

-Built-in models are made available across the following variants:
+Built-in models are made available across the following variants:
+
 - **By format**: `gguf`, `onnx`, and `tensorrt-llm`
 - **By Size**: `7b`, `13b`, and more.
-- **By quantizations**: `q4`, `q8`, and more.
-
-## Next steps
-- See Cortex's list of [Built-in Models](/models).
-- Cortex supports multiple model hubs hosting built-in models. See details [here](/docs/capabilities/models/sources).
-- Cortex requires a `model.yaml` file to run a model. Find out more [here](/docs/capabilities/models/model-yaml).
\ No newline at end of file
+- **By quantization method**: `q4`, `q8`, and more.
diff --git a/docs/docs/capabilities/models/model-yaml.mdx b/docs/docs/capabilities/models/model-yaml.mdx
index e761d7da2..778d8b90e 100644
--- a/docs/docs/capabilities/models/model-yaml.mdx
+++ b/docs/docs/capabilities/models/model-yaml.mdx
@@ -7,10 +7,13 @@ import Tabs from "@theme/Tabs";
 import TabItem from "@theme/TabItem";

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex is currently under active development. Our documentation outlines the intended behavior of
+Cortex, which may not yet be fully implemented in the codebase.
 :::

-Cortex.cpp utilizes a `model.yaml` file to specify the configuration for running a model. Models can be downloaded from the Cortex Model Hub or Hugging Face repositories. Once downloaded, the model data is parsed and stored in the `models` folder.
+Cortex uses a `model.yaml` file to specify the configuration desired for each model. Models can be downloaded
+from the Cortex Model Hub or Hugging Face repositories. Once downloaded, the model data is parsed and stored
+in the `models` directory.
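To see one of these files on disk, you can pull a model and open the configuration Cortex writes for it. A sketch, with the caveat that the exact data directory and folder layout vary by platform and install, so the path below is an assumption:

```sh
# Download a model, then inspect the generated model configuration
cortex pull tinyllama
cat ~/cortex/models/tinyllama/model.yml
```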
## Structure of `model.yaml`

@@ -77,18 +80,28 @@ engine: llama-cpp
 ```

-The `model.yaml` is composed of three high-level sections:
+The `model.yaml` is composed of three high-level sections: the model metadata,
+inference parameters, and model load parameters. Each section contains a set of
+parameters that define the model's behavior and configuration, and some parameters
+are optional.
+
+The `model.yaml` contains sensible defaults for each parameter, but there are instances where you
+may need to override these default values to get your model to work as intended. For example,
+if you train or fine-tune a highly bespoke model with a custom template and less common parameters,
+you can specify these in the `model.yaml` file.

 ### Model Metadata
 ```yaml
 model: gemma-2-9b-it-Q8_0
 name: Llama 3.1
 version: 1
 sources:
   - models://huggingface/bartowski/Mixtral-8x22B-v0.1/main/Mixtral-8x22B-v0.1-IQ3_M-00001-of-00005.gguf
   - files://C:/Users/user/Downloads/Mixtral-8x22B-v0.1-IQ3_M-00001-of-00005.gguf
 ```
-Cortex Meta consists of essential metadata that identifies the model within Cortex.cpp. The required parameters include:
+A Cortex model consists of essential metadata that identifies it within the server and the local
+files. The required parameters include:
+
 | **Parameter** | **Description** | **Required** |
 |------------------------|--------------------------------------------------------------------------------------|--------------|
 | `name` | The identifier name of the model, used as the `model_id`. | Yes |
@@ -98,7 +111,7 @@ Cortex Meta consists of essential metadata that identifies the model within Cort

 ### Inference Parameters
 ```yaml
 stop:
   - <|end_of_text|>
   - <|eot_id|>
   - <|eom_id|>
@@ -125,9 +138,10 @@ ignore_eos: false
 n_probs: 0
 n_parallels: 1
 min_keep: 0
 ```

-Inference parameters define how the results will be produced. The required parameters include:
+Inference parameters affect the results of the model's predictions. While not all parameters are
+required, all of the following can be used to tweak the model's output.

 | **Parameter** | **Description** | **Required** |
 |---------------|-----------------|--------------|
@@ -158,38 +172,47 @@ Inference parameters define how the results will be produced. The required param

 ### Model Load Parameters

+The model load parameters give you the options that control how Cortex runs the model and can be crucial
+for the model's performance.
+
 ```yaml
 prompt_template: |+
   <|begin_of_text|><|start_header_id|>system<|end_header_id|>
   {system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>
   {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-ctx_len: 0
-ngl: 33
+ctx_len: 4096
+ngl: 33
 engine: llama-cpp
 ```

+Not all parameters are required.
+
 | **Parameter** | **Description** | **Required** |
 |------------------------|--------------------------------------------------------------------------------------|--------------|
-| `ngl` | Number of model layers will be offload to GPU. | No |
+| `ngl` | Number of model layers to offload to the GPU. | No |
 | `ctx_len` | Context length (maximum number of tokens). | No |
 | `prompt_template` | Template for formatting the prompt, including system messages and instructions. | Yes |
-| `engine` | The engine that run model, default to `llama-cpp` for local model with gguf format. | Yes |
+| `engine` | The engine that runs the model; defaults to `llama-cpp` for local models in GGUF format. | Yes |

-All parameters from the `model.yml` file are used for running the model via the [CLI run command](/docs/cli/run). These parameters also act as defaults when using the [model start API](/api-reference#tag/models/post/v1/models/start) through cortex.cpp.
+These parameters also act as defaults when using the [model start API](/api-reference#tag/models/post/v1/models/start) through
+cortex. If you change these, in particular the `prompt_template` and the `engine` (the others can be changed at runtime),
+make sure to reload the model.

 ## Runtime parameters

-In addition to predefined parameters in `model.yml`, Cortex.cpp supports runtime parameters to override these settings when using the [model start API](/api-reference#tag/models/post/v1/models/start).
+In addition to predefined parameters in `model.yml`, Cortex supports runtime parameters to override these settings
+when using the [model start API](/api-reference#tag/models/post/v1/models/start).

 ### Model start params

-Cortex.cpp supports the following parameters when starting a model via the [model start API](/api-reference#tag/models/post/v1/models/start) for the `llama-cpp engine`:
+Cortex supports the following parameters when starting a model via the [model start API](/api-reference#tag/models/post/v1/models/start)
+for the **llama-cpp engine**:

-```
+```yaml
 cache_enabled: bool
 ngl: int
 n_parallel: int
 cache_type: string
 ctx_len: int

 ## Support for vision models
 mmproj: string
 llama_model_path: string
 model_path: string
 ```

 | **Parameter** | **Description** | **Required** |
 |--------------------|--------------------------------------------------------------------------------------|--------------|
 | `cache_type` | Data type of the KV cache in llama.cpp models. Supported types are `f16`, `q8_0`, and `q4_0`; default is `f16`. | No |
 | `cache_enabled` | Enables caching of conversation history for reuse in subsequent requests. Default is `false`. | No |
 | `mmproj` | Path to the mmproj GGUF model, to support LLaVA models. | No |
 | `llama_model_path` | Path to the LLM GGUF model. | No |

 These parameters will override the `model.yml` parameters when starting a model through the API.

 ### Chat completion API parameters

-The API is accessible at the `/v1/chat/completions` URL and accepts all parameters from the chat completion API as described [API reference](/api-reference#tag/chat/post/v1/chat/completions)
-
-With the `llama-cpp` engine, cortex.cpp accept all parameters from [`model.yml` inference section](#Inference Parameters) and accept all parameters from the chat completion API.
The API is accessible at the `/v1/chat/completions` URL and accepts all parameters from the chat completion
API as described in the [API reference](/api-reference#tag/chat/post/v1/chat/completions).

With the `llama-cpp` engine, cortex will accept all parameters from the [`model.yml` inference section](#Inference Parameters)
and from the chat completion API.
diff --git a/docs/docs/capabilities/models/presets.mdx b/docs/docs/capabilities/models/presets.mdx
index 799cf6cbc..c1cc8eb48 100644
--- a/docs/docs/capabilities/models/presets.mdx
+++ b/docs/docs/capabilities/models/presets.mdx
@@ -4,10 +4,11 @@ description: Model Presets

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended behavior
+of Cortex, which may not yet be fully implemented in the codebase.
 :::

- \ No newline at end of file
+::: -->

From 53c32bf7388986e87af5ac87863c820f61ddefad Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Fri, 31 Jan 2025 14:19:45 +1100
Subject: [PATCH 15/22] improved readability, flow of explanations, and cleaned examples

---
 docs/docs/capabilities/embeddings.md      | 104 ++++++------
 docs/docs/capabilities/hardware/index.mdx |  44 ++---
 docs/docs/capabilities/text-generation.md | 187 +++++++++++++++++++++-
 3 files changed, 256 insertions(+), 79 deletions(-)

diff --git a/docs/docs/capabilities/embeddings.md b/docs/docs/capabilities/embeddings.md
index 44f153556..32a2bd1b4 100644
--- a/docs/docs/capabilities/embeddings.md
+++ b/docs/docs/capabilities/embeddings.md
@@ -1,103 +1,95 @@
 ---
 title: Embeddings
 ---
-:::info
-🚧 Cortex is currently under development, and this page is a stub for future development.
+:::warning
+🚧 Cortex is currently under active development. Our documentation outlines the intended behavior of
+Cortex, which may not yet be fully implemented in the codebase.
 :::

-cortex.cpp now support embeddings endpoint with fully OpenAI compatible.
-
-For embeddings API usage please refer to [API references](/api-reference#tag/chat/POST/v1/embeddings). This tutorial show you how to use embeddings in cortex with openai python SDK.
+Cortex now supports an embeddings endpoint that is fully compatible with OpenAI's.
+This tutorial shows you how to create embeddings in Cortex using the OpenAI Python SDK.

 ## Embeddings with OpenAI compatibility

-### 1. Start server and run model
+Start the server and run the model in detached mode.

-```
-cortex run llama3.1:8b-gguf-q4-km
+```sh
+cortex run -d llama3.1:8b-gguf-q4-km
 ```

-### 2. Create script `embeddings.py` with this content
+Create a directory and a Python environment, and start a Python or IPython shell.

+```sh
+mkdir test-embeddings
+cd test-embeddings
+```
+```sh
+python -m venv .venv
+source .venv/bin/activate
+pip install ipython openai
+```
+```sh
+ipython
 ```

+Import the necessary modules and create a client.
+
+```py
 from datetime import datetime
 from openai import OpenAI
 from pydantic import BaseModel

+```
+```py
 client = OpenAI(
-    base_url=ENDPOINT,
+    base_url="http://localhost:39281/v1",
     api_key="not-needed"
 )
 ```
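Optionally, you can verify that the client reaches the server before creating embeddings. This sketch assumes the server exposes the standard OpenAI-compatible model-listing route:

```py
# Sanity check: print the model ids the server reports
print([m.id for m in client.models.list().data])
```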
### Create embeddings

```py
output_embs = client.embeddings.create(
    input="Roses are red, violets are blue, Cortex is great, and so is Jan too!",
    model="llama3.1:8b-gguf-q4-km",
    # encoding_format="base64"
)
```
```py
print(output_embs)
```
```
CreateEmbeddingResponse(
    data=[
        Embedding(
            embedding=[-0.017303412780165672, -0.014513173140585423, ...],
            index=0,
            object='embedding'
        )
    ],
    model='llama3.1:8b-gguf-q4-km',
    object='list',
    usage=Usage(
        prompt_tokens=22,
        total_tokens=22
    )
)
```

Cortex also supports the same input types as [OpenAI](https://platform.openai.com/docs/api-reference/embeddings/create).

```py
MODEL = "llama3.1:8b-gguf-q4-km"

# input as a single string
response = client.embeddings.create(input="single prompt or article or other", model=MODEL)
```
```py
# input as a list of strings
response = client.embeddings.create(input=["list", "of", "prompts"], model=MODEL)
```
```py
# input as a list of tokens
response = client.embeddings.create(input=[12, 44, 123], model=MODEL)
```
```py
# input as a list of token lists
response = client.embeddings.create(input=[[912, 312, 54], [12, 433, 1241]], model=MODEL)
```
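A common next step is semantic similarity. The sketch below compares two of the returned vectors with cosine similarity; it assumes `numpy` is installed in the same environment:

```py
import numpy as np

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Dot product of the vectors divided by the product of their norms
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

pair = client.embeddings.create(
    input=["The cat sat on the mat", "A feline rested on the rug"],
    model="llama3.1:8b-gguf-q4-km",
)
print(cosine_similarity(pair.data[0].embedding, pair.data[1].embedding))
```

Sentences with similar meaning should score noticeably closer to 1.0 than unrelated ones.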
diff --git a/docs/docs/capabilities/hardware/index.mdx b/docs/docs/capabilities/hardware/index.mdx
index 707c54373..b1c1fe38b 100644
--- a/docs/docs/capabilities/hardware/index.mdx
+++ b/docs/docs/capabilities/hardware/index.mdx
@@ -4,41 +4,45 @@ description: The Hardware Awareness section overview

 :::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+🚧 Cortex is currently under active development. Our documentation outlines the intended behavior
+of Cortex, which may not yet be fully implemented in the codebase.
 :::
-
-# Hardware Awareness
-
+Cortex is designed to be hardware aware, meaning it can detect your hardware configuration and
+automatically set parameters to optimize compatibility and performance, and avoid hardware-related errors.

## Hardware Optimization

Cortex's hardware optimization allows it to do the following:

- **Context Length Optimization** maximizes the context length allowed by your hardware, ensuring that you
can work with larger pieces of text and more complex models without performance degradation.
- **Engine Optimization**: we detect your CPU and GPU, and maintain a list of settings optimized for each
engine-hardware combination, e.g. taking advantage of AVX-2 and AVX-512 instructions on CPUs.

## Hardware Awareness
- Preventing hardware-related errors.
- Error Handling for Insufficient VRAM: When loading a second model, Cortex provides useful error messages if
there is insufficient vRAM. This proactive approach helps prevent out-of-memory errors and guides
users on how to resolve the issue.
- Model Compatibility Detection: Cortex automatically detects your hardware configuration to determine the
compatibility of different models. This ensures that the models you use are optimized for your specific hardware setup.
- This applies both to models on the Hub and to models you already have locally.

## Hardware Management

### Activating Specific GPUs

- Cortex gives you the ability to activate specific GPUs for inference, giving you fine-grained control over
hardware resources. This is especially useful for multi-GPU systems.

You also have the option to deactivate all GPUs, to run inference on only CPU and RAM.

### Hardware Monitoring

- Monitoring System Usage
- Monitor VRAM Usage: Cortex keeps track of vRAM usage to prevent out-of-memory (OOM) errors. It ensures
that vRAM is used efficiently and provides warnings when resources are running low.
- Monitor System Resource Usage: Cortex continuously monitors the usage of system resources, including CPU,
RAM, and GPUs. This helps maintain optimal performance and identify potential bottlenecks.
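From the terminal, the status command from the quickstart gives a quick view of the same information. This is a sketch; the exact columns shown depend on your Cortex version:

```sh
# Show running models and the resource usage Cortex reports
cortex ps
```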
diff --git a/docs/docs/capabilities/text-generation.md b/docs/docs/capabilities/text-generation.md
index 680625667..0001514f4 100644
--- a/docs/docs/capabilities/text-generation.md
+++ b/docs/docs/capabilities/text-generation.md
@@ -2,6 +2,187 @@
 title: Text Generation
 ---

-:::info
-🚧 Cortex is currently under development, and this page is a stub for future development.
-:::
\ No newline at end of file
+
Cortex provides a text generation endpoint that is fully compatible with OpenAI's API.
This section shows you how to generate text using Cortex with the OpenAI Python SDK.

## Text Generation with OpenAI compatibility

Start the server and run the model in detached mode.

```sh
cortex run -d llama3.1:8b-gguf-q4-km
```

Create a directory and a Python environment, and start a Python or IPython shell.

```sh
mkdir test-generation
cd test-generation
```
```sh
python -m venv .venv # or uv venv .venv --python 3.13
source .venv/bin/activate
pip install ipython openai rich # or uv pip install ipython openai rich
```
```sh
ipython # or "uv run ipython"
```

Import the necessary modules and create a client.

```py
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:39281/v1",
    api_key="not-needed"
)
```

### Generate Text

Basic completion:

```py
response = client.chat.completions.create(
    model="llama3.1:8b-gguf-q4-km",
    messages=[
        {"role": "user", "content": "Tell me a short story about a friendly robot."}
    ]
)
print(response.choices[0].message.content)
```
```
Here's a short story about a friendly robot:

**Zeta's Gift**

In a small town surrounded by lush green hills, there lived a robot named Zeta. Zeta was unlike any other robot in the world. While others
were designed for specific tasks like assembly or transportation, Zeta was created with a single purpose: to spread joy and kindness.

Zeta's bright blue body was shaped like a ball, with glowing lines that pulsed with warmth on its surface. Its large, round eyes sparkled
with a warm light, as if reflecting the friendliness within. Zeta loved nothing more than making new friends and surprising them with small
gifts.

One sunny morning, Zeta decided to visit the local bakery owned by Mrs. Emma, who was famous for her delicious pastries. As Zeta entered the
shop, it was greeted by the sweet aroma of freshly baked bread. The robot's advanced sensors detected a young customer, Timmy, sitting at a
corner table, looking sad.

Zeta quickly approached Timmy and offered him a warm smile. "Hello there! I'm Zeta. What seems to be troubling you?" Timmy explained that he
was feeling down because his family couldn't afford his favorite dessert – Mrs. Emma's famous chocolate cake – for his birthday.

Moved by Timmy's story, Zeta asked Mrs. Emma if she could help the young boy celebrate his special day. The baker smiled and handed Zeta a
beautifully decorated cake. As the robot carefully placed the cake on a tray, it sang a gentle melody: "Happy Birthday, Timmy! May your day
be as sweet as this treat!"

Timmy's eyes widened with joy, and he hugged Zeta tightly. Word of Zeta's kindness spread quickly through the town, earning the robot the
nickname "The Friendly Robot." From that day on, whenever anyone in need was spotted, Zeta would appear at their side, bearing gifts and
spreading love.

Zeta continued to surprise the townspeople with its thoughtfulness and warm heart, proving that even a machine could be a source of comfort
and joy.
```

With additional parameters:

```py
response = client.chat.completions.create(
    model="llama3.1:8b-gguf-q4-km",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are the main differences between Python and C++?"}
    ],
    temperature=0.7,
    max_tokens=150,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0
)
```
```
ChatCompletion(
    id='dnMbB12ZR6JdVDw2Spi8',
    choices=[
        Choice(
            finish_reason='stop',
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content="Python and C++ are two popular programming languages with distinct characteristics, use cases, ...",
                refusal=None,
                role='assistant',
                audio=None,
                function_call=None,
                tool_calls=None
            )
        )
    ],
    created=1738236652,
    model='_',
    object='chat.completion',
    service_tier=None,
    system_fingerprint='_',
    usage=CompletionUsage(
        completion_tokens=150,
        prompt_tokens=33,
        total_tokens=183,
        completion_tokens_details=None,
        prompt_tokens_details=None
    )
)
```

Stream the response:

```py
stream = client.chat.completions.create(
    model="llama3.1:8b-gguf-q4-km",
    messages=[
        {"role": "user", "content": "Write a haiku about programming."}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
```
Code flows like a stream
 Errors lurk in every line
Bug hunt, endless quest
```

Multiple messages in a conversation:

```py
messages = [
    {"role": "system", "content": "You are a knowledgeable science teacher."},
    {"role": "user", "content": "What is photosynthesis?"},
    {"role": "assistant", "content": "Photosynthesis is the process by which plants convert sunlight into energy."},
    {"role": "user", "content": "Can you explain it in more detail?"}
]

response = client.chat.completions.create(
    model="llama3.1:8b-gguf-q4-km",
    messages=messages
)
print(response.choices[0].message.content)
```
```
"Photosynthesis is actually one of my favorite topics to teach! It's a crucial process that supports life on Earth, and
I'd be happy to break it down for you.\n\nPhotosynthesis occurs in specialized organelles called chloroplasts, which are present in plant
cells. These tiny factories use energy from the sun to convert carbon dioxide (CO2) and water (H2O) into glucose (a type of sugar) and
oxygen (O2).\n\nHere's a simplified equation:\n\n6 CO2 + 6 H2O + light energy → C6H12O6 (glucose) + 6 O2\n\nIn more detail, the process
involves several steps:\n\n1. **Light absorption**: Light from the sun is absorbed by pigments ..."
```

The API endpoint provided by Cortex supports all standard OpenAI parameters including:
- `temperature`: Controls randomness (0.0 to 2.0)
- `max_tokens`: Limits the length of the response
- `top_p`: Controls diversity via nucleus sampling
- `frequency_penalty`: Reduces repetition of token sequences
- `presence_penalty`: Encourages talking about new topics
- `stop`: Custom stop sequences
- `stream`: Enable/disable streaming responses
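As an illustration, the sketch below combines a few of these parameters to get a short, mostly deterministic completion that halts at a custom marker; the parameter values are illustrative, not recommendations:

```py
response = client.chat.completions.create(
    model="llama3.1:8b-gguf-q4-km",
    messages=[{"role": "user", "content": "List three uses of text embeddings, then write END."}],
    temperature=0.2,   # low randomness
    max_tokens=120,    # cap the response length
    stop=["END"],      # stop generating at the custom marker
)
print(response.choices[0].message.content)
```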
+``` + +The API endpoint provided by Cortex supports all standard OpenAI parameters including: +- `temperature`: Controls randomness (0.0 to 2.0) +- `max_tokens`: Limits the length of the response +- `top_p`: Controls diversity via nucleus sampling +- `frequency_penalty`: Reduces repetition of token sequences +- `presence_penalty`: Encourages talking about new topics +- `stop`: Custom stop sequences +- `stream`: Enable/disable streaming responses From 6db5bd8271768cefbd5c8fd738ecc27e49e4d2e3 Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Fri, 31 Jan 2025 14:22:30 +1100 Subject: [PATCH 16/22] fixed wording, added structured outputs example, cleaned function calling --- docs/docs/guides/function-calling.md | 23 +-- docs/docs/guides/structured-outputs.md | 228 ++++++++++++------------- docs/docs/installation/linux.mdx | 1 - docs/docs/overview.mdx | 39 +++-- 4 files changed, 138 insertions(+), 153 deletions(-) diff --git a/docs/docs/guides/function-calling.md b/docs/docs/guides/function-calling.md index d37911935..8be439c77 100644 --- a/docs/docs/guides/function-calling.md +++ b/docs/docs/guides/function-calling.md @@ -9,20 +9,20 @@ This tutorial, I use the `mistral-nemo:12b-gguf-q4-km` for testing function call ### 1. Start server and run model. -``` -cortex run mistral-nemo:12b-gguf-q4-km +```sh +cortex run -d llama3.1:8b-gguf-q4-km ``` ### 2. Create a python script `function_calling.py` with this content: -``` +```py from datetime import datetime from openai import OpenAI from pydantic import BaseModel -ENDPOINT = "http://localhost:39281/v1" -MODEL = "mistral-nemo:12b-gguf-q4-km" +``` +```py client = OpenAI( - base_url=ENDPOINT, + base_url="http://localhost:39281/v1", api_key="not-needed" ) ``` @@ -31,14 +31,13 @@ This step creates OpenAI client in python ### 3. Start create a chat completion with tool calling -``` +```py tools = [ { "type": "function", "function": { "name": "get_delivery_date", - - "strict": True, + "strict": True, "description": "Get the delivery date for a customer's order. Call this whenever you need to know the delivery date, for example when a customer asks 'Where is my package'", "parameters": { "type": "object", @@ -54,6 +53,8 @@ tools = [ } } ] +``` +```py completion_payload = { "messages": [ {"role": "system", "content": "You are a helpful customer support assistant. Use the supplied tools to assist the user."}, @@ -63,7 +64,7 @@ completion_payload = { response = client.chat.completions.create( top_p=0.9, temperature=0.6, - model=MODEL, + model="llama3.1:8b-gguf-q4-km", messages=completion_payload["messages"], tools=tools, ) @@ -329,7 +330,7 @@ response = client.chat.completions.create( top_p=0.9, temperature=0.6, model=MODEL, - messages= messages, + messages= messages, tools=tools ) print(response) diff --git a/docs/docs/guides/structured-outputs.md b/docs/docs/guides/structured-outputs.md index 1fe3f789b..2f594bb1e 100644 --- a/docs/docs/guides/structured-outputs.md +++ b/docs/docs/guides/structured-outputs.md @@ -1,188 +1,170 @@ --- title: Structured Outputs --- -# Structured Outputs -Structured outputs, or response formats, are a feature designed to generate responses in a defined JSON schema, enabling more predictable and machine-readable outputs. This is essential for applications where data consistency and format adherence are crucial, such as automated data processing, structured data generation, and integrations with other systems. 
+This guide demonstrates methods for getting structured JSON output from locally-hosted language models +like Llama and Mistral. We'll cover techniques for generating predictable data structures using open source LLMs. -In recent developments, systems like OpenAI's models have excelled at producing these structured outputs. However, while open-source models like Llama 3.1 and Mistral Nemo offer powerful capabilities, they currently struggle to produce reliably structured JSON outputs required for advanced use cases. +## Start the model -This guide explores the concept of structured outputs using these models, highlights the challenges faced in achieving consistent output formatting, and provides strategies for improving output accuracy, particularly when using models that don't inherently support this feature as robustly as GPT models. - -By understanding these nuances, users can make informed decisions when choosing models for tasks requiring structured outputs, ensuring that the tools they select align with their project's formatting requirements and expected accuracy. +```sh +cortex run -d llama3.1:8b-gguf-q4-km +``` +``` +llama3.1:8b-gguf-q4-km model started successfully. Use `cortex run llama3.1:8b-gguf-q4-km` for interactive chat shell +``` -The Structured Outputs/Response Format feature in [OpenAI](https://platform.openai.com/docs/guides/structured-outputs) is fundamentally a prompt engineering challenge. While its goal is to use system prompts to generate JSON output matching a specific schema, popular open-source models like Llama 3.1 and Mistral Nemo struggle to consistently generate exact JSON output that matches the requirements. An easy way to directly guild the model to reponse in json format in system message, you just need to pass the pydantic model to `response_format`: +## Basic Example: Calendar Event -``` +```python from pydantic import BaseModel from openai import OpenAI import json -ENDPOINT = "http://localhost:39281/v1" -MODEL = "llama3.1:8b-gguf-q4-km" - +``` +```py client = OpenAI( - base_url=ENDPOINT, + base_url="http://localhost:39281/v1", api_key="not-needed" ) - class CalendarEvent(BaseModel): name: str date: str participants: list[str] - - +``` +```py completion = client.beta.chat.completions.parse( - model=MODEL, + model="llama3.1:8b-gguf-q4-km", messages=[ - {"role": "system", "content": "Extract the event information."}, - {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."}, + {"role": "system", "content": "Extract the event info as JSON"}, + {"role": "user", "content": "Alice and Bob are going to a science fair on Friday"} ], response_format=CalendarEvent, stop=["<|eot_id|>"] ) - -event = completion.choices[0].message.parsed - -print(json.dumps(event.dict(), indent=4)) -``` - -The output of the model like this - +print(json.dumps(completion.choices[0].message.parsed.dict(), indent=2)) ``` +```json { - "name": "science fair", - "date": "Friday", - "participants": [ - "Alice", - "Bob" - ] + "name": "science fair", + "date": "Friday", + "participants": ["Alice", "Bob"] } ``` -With more complex json format, llama3.1 still struggle to response correct answer: - -``` - -from openai import OpenAI -from pydantic import BaseModel -import json -ENDPOINT = "http://localhost:39281/v1" -MODEL = "llama3.1:8b-gguf-q4-km" -client = OpenAI( - base_url=ENDPOINT, - api_key="not-needed" -) - -format = { - "steps": [{ - "explanation": "string", - "output": "string" - } - ], - "final_output": "string" -} - -completion_payload = { - "messages": [ 
- {"role": "system", "content": f"You are a helpful math tutor. Guide the user through the solution step by step. You have to response in this json format {format}\n"}, - {"role": "user", "content": "how can I solve 8x + 7 = -23"} - ] -} +## Complex Example: Math Steps +Let's try something more complex with nested schemas. Here's structured math reasoning: +```py class Step(BaseModel): explanation: str output: str - class MathReasoning(BaseModel): steps: list[Step] final_answer: str - - -response = client.beta.chat.completions.parse( - top_p=0.9, - temperature=0.6, - model=MODEL, - messages=completion_payload["messages"], - stop=["<|eot_id|>"], - response_format=MathReasoning -) - -math_reasoning = response.choices[0].message.parsed -print(json.dumps(math_reasoning.dict(), indent=4)) ``` - -The output of model looks like this - -``` -{ - "steps": [ - { - "explanation": "To isolate the variable x, we need to get rid of the constant term on the left-hand side. We can do this by subtracting 7 from both sides of the equation.", - "output": "8x + 7 - 7 = -23 - 7" - }, - { - "explanation": "Simplifying the left-hand side, we get:", - "output": "8x = -30" - }, +```py +response = client.beta.chat.completions.parse( + model="llama3.1:8b-gguf-q4-km", + messages=[ { - "explanation": "Now, to solve for x, we need to isolate it by dividing both sides of the equation by 8.", - "output": "8x / 8 = -30 / 8" + "role": "system", + "content": "Solve this math problem step by step. Output as JSON." }, { - "explanation": "Simplifying the right-hand side, we get:", - "output": "x = -3.75" + "role": "user", + "content": "how can I solve in a lot of detail, the equation 8x + 7 = -23" } ], - "final_answer": "There is no final answer yet, let's break it down step by step." + response_format=MathReasoning, + stop=["<|eot_id|>"] +) +print(json.dumps(response.choices[0].message.parsed.model_dump(), indent=2)) +``` +```json +{ + "steps": [ + { + "explanation": "The given equation is 8x + 7 = -23. To isolate x, we need to get rid of the constant term (+7) on the left side.", + "output": "" + }, + { + "explanation": "We can subtract 7 from both sides of the equation to get: 8x = -30", + "output": "8x = -30" + }, + { + "explanation": "Now, we need to isolate x. To do this, we'll divide both sides of the equation by 8.", + "output": "" + }, + { + "explanation": "Dividing both sides by 8 gives us: x = -3.75", + "output": "x = -3.75" + }, + { + "explanation": "However, looking back at the original problem, we see that it's asking for the value of x in the equation 8x + 7 = -23.", + "output": "" + }, + { + "explanation": "We can simplify this further by converting the decimal to a fraction.", + "output": "" + }, + { + "explanation": "The decimal -3.75 is equivalent to -15/4. Therefore, x = -15/4", + "output": "x = -15/4" + } + ], + "final_answer": "x = -3" } ``` -Even if the model can generate correct format but the information doesn't 100% accurate, the `final_answer` should be `-3.75` instead of `There is no final answer yet, let's break it down step by step.`. +## Quick JSON Lists -Another usecase for structured output with json response, you can provide the `response_format={"type" : "json_object"}`, the model will be force to generate json output. 
+For straightforward lists, you can use the json_object response format: -``` -json_format = {"song_name":"release date"} +```py completion = client.chat.completions.create( - model=MODEL, + model="llama3.1:8b-gguf-q4-km", messages=[ - {"role": "system", "content": f"You are a helpful assistant, you must reponse with this format: '{json_format}'"}, - {"role": "user", "content": "List 10 songs for me"} + { + "role": "system", + "content": "List songs in {song_name: release_year} format" + }, + { + "role": "user", + "content": "List 10 songs" + } ], response_format={"type": "json_object"}, stop=["<|eot_id|>"] ) - -print(json.dumps(json.loads(completion.choices[0].message.content), indent=4)) +print(json.dumps(json.loads(completion.choices[0].message.content), indent=2)) ``` -The output will looks like this: - -``` +Output: +```json { - "Happy": "2013", - "Uptown Funk": "2014", - "Shut Up and Dance": "2014", - "Can't Stop the Feeling!": "2016", - "We Found Love": "2011", - "All About That Bass": "2014", - "Radioactive": "2012", - "SexyBack": "2006", - "Crazy": "2007", - "Viva la Vida": "2008" + "Hotel California": 1976, + "Stairway to Heaven": 1971, + "Bohemian Rhapsody": 1975, + "Smells Like Teen Spirit": 1991, + "Viva la Vida": 2008, + "Imagine": 1971, + "Hotel Yorba": 2001, + "Mr. Brightside": 2004, + "Sweet Child O Mine": 1987, + "Livin on a Prayer": 1986 } ``` -## Limitations of Open-Source Models for Structured Outputs +## Pro Tips -While the concept of structured outputs is compelling, particularly for applications requiring machine-readable data, it's important to understand that not all models support this capability equally. Open-source models such as Llama 3.1 and Mistral Nemo face notable challenges in generating outputs that adhere strictly to defined JSON schemas. Here are the key limitations: +Open source models have come a long way with structured outputs. A few things to keep in mind: -- Lack of Training Data: These models have not been specifically trained on tasks demanding precise JSON formatting, unlike some proprietary models which have been fine-tuned for such tasks. -- Inconsistency in Output: Due to their training scope, `Llama 3.1` and `Mistral Nemo` often produce outputs that may deviate from the intended schema. This can include additional natural language explanations or incorrectly nested JSON structures. -- Complexity in Parsing: Without consistent JSON formatting, downstream processes that rely on predictable data schemas may encounter errors, leading to challenges in automation and data integration tasks. -- Beta Features: Some features related to structured outputs may still be in beta, requiring usage of specific methods like `client.beta.chat.completions.parse`, which suggests they are not yet fully reliable in all scenarios. +- Be explicit in your prompts about JSON formatting +- Use Pydantic models to enforce schema compliance +- Consider using the stop token to prevent extra output +- Some advanced features are still in beta -Given these constraints, users should consider these limitations when choosing a model for tasks involving structured outputs. Where strict compliance with a JSON schema is critical, alternative models designed for such precision might be a more suitable choice. +With proper prompting and schema validation, you can get reliable structured outputs from your local models. No cloud required! 
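+## Bonus: Validate and Retry
+
+To harden things further, you can validate the model's JSON against your Pydantic schema and feed any
+validation error back for another attempt. This is a minimal sketch, not a Cortex API: the
+`parse_with_retry` helper, the `Song` schema, and the retry count are our own illustration, and the
+model and endpoint simply reuse the ones from the examples above.
+
+```py
+from openai import OpenAI
+from pydantic import BaseModel, ValidationError
+
+client = OpenAI(base_url="http://localhost:39281/v1", api_key="not-needed")
+
+class Song(BaseModel):
+    title: str
+    year: int
+
+def parse_with_retry(prompt: str, schema: type[BaseModel], retries: int = 2):
+    """Request JSON output and re-prompt with the validation error if parsing fails."""
+    messages = [
+        {"role": "system", "content": "Reply with JSON only."},
+        {"role": "user", "content": prompt},
+    ]
+    for _ in range(retries + 1):
+        completion = client.chat.completions.create(
+            model="llama3.1:8b-gguf-q4-km",
+            messages=messages,
+            response_format={"type": "json_object"},
+        )
+        raw = completion.choices[0].message.content
+        try:
+            # Pydantic parses and validates the raw JSON string in one step
+            return schema.model_validate_json(raw)
+        except ValidationError as err:
+            # Show the model its own output plus the error so it can self-correct
+            messages.append({"role": "assistant", "content": raw})
+            messages.append({"role": "user", "content": f"That JSON was invalid: {err}. Try again."})
+    raise ValueError("Model did not produce valid JSON after retries")
+
+song = parse_with_retry('Name one classic rock song as {"title": ..., "year": ...}', Song)
+print(song)
+```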
diff --git a/docs/docs/installation/linux.mdx b/docs/docs/installation/linux.mdx index cf7dc354d..8c4afc076 100644 --- a/docs/docs/installation/linux.mdx +++ b/docs/docs/installation/linux.mdx @@ -36,7 +36,6 @@ This instruction is for stable releases. For beta and nightly releases, please r - Local installer for Debian-based distros ```bash - # Local installer curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s -- --deb_local ``` diff --git a/docs/docs/overview.mdx b/docs/docs/overview.mdx index c1386d2a8..fd181d618 100644 --- a/docs/docs/overview.mdx +++ b/docs/docs/overview.mdx @@ -19,7 +19,7 @@ or [Discord](https://discord.com/invite/FTk2MvZwJH) ![Cortex Cover Image](/img/social-card.jpg) -Cortex is a Local AI API Platform that is used to run and customize LLMs. +Cortex is a Local AI API Platform that is used to run and customize LLMs. Key Features: - Straightforward CLI (inspired by Ollama) @@ -36,12 +36,12 @@ Cortex's roadmap includes implementing full compatibility with OpenAI API's and ## Inference Backends - Default: [llama.cpp](https://github.com/ggerganov/llama.cpp): cross-platform, supports most laptops, desktops and OSes - Future: [ONNX Runtime](https://github.com/microsoft/onnxruntime): supports Windows Copilot+ PCs & NPUs and traditional machine learning models -- Future: [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM): supports a variety of model architectures on Nvidia GPUs +- Future: [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM): supports a variety of model architectures on Nvidia GPUs If GPU hardware is available, Cortex is GPU accelerated by default. ## Models -Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access. +Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access. - [Hugging Face](https://huggingface.co) - [Cortex Built-in Models](https://cortex.so/models) @@ -51,7 +51,7 @@ Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexib ### Cortex Built-in Models & Quantizations | Model /Engine | llama.cpp | Command | | -------------- | --------------------- | ----------------------------- | -| phi-3.5 | ✅ | `cortex run phi3.5` | +| phi-4 | ✅ | `cortex run phi-4` | | llama3.2 | ✅ | `cortex run llama3.2` | | llama3.1 | ✅ | `cortex run llama3.1` | | codestral | ✅ | `cortex run codestral` | @@ -66,22 +66,25 @@ View all [Cortex Built-in Models](https://cortex.so/models). Cortex supports multiple quantizations for each model. ```sh -❯ cortex-nightly pull llama3.2 +cortex pull phi-4 +``` +``` Downloaded models: - llama3.2:3b-gguf-q2-k + bartowski:phi-4-GGUF:phi-4-Q3_K_S.gguf Available to download: - 1. llama3.2:3b-gguf-q3-kl - 2. llama3.2:3b-gguf-q3-km - 3. llama3.2:3b-gguf-q3-ks - 4. llama3.2:3b-gguf-q4-km (default) - 5. llama3.2:3b-gguf-q4-ks - 6. llama3.2:3b-gguf-q5-km - 7. llama3.2:3b-gguf-q5-ks - 8. llama3.2:3b-gguf-q6-k - 9. llama3.2:3b-gguf-q8-0 - -Select a model (1-9): + 1. phi-4:14.7b-gguf-q2-k + 2. phi-4:14.7b-gguf-q3-kl + 3. phi-4:14.7b-gguf-q3-km + 4. phi-4:14.7b-gguf-q3-ks + 5. phi-4:14.7b-gguf-q4-km (default) + 6. phi-4:14.7b-gguf-q4-ks + 7. phi-4:14.7b-gguf-q5-km + 8. phi-4:14.7b-gguf-q5-ks + 9. phi-4:14.7b-gguf-q6-k + 10. 
phi-4:14.7b-gguf-q8-0 + +Select a model (1-10): ``` @@ -130,4 +133,4 @@ Select a model (1-9): | openhermes-2.5 | 7b-tensorrt-llm-linux-ada | 7B | `cortex run openhermes-2.5:7b-tensorrt-llm-linux-ada`| - */} \ No newline at end of file + */} From 4c73da0af0fbb56d74ea27c25900d64ca4d5bcdb Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Fri, 31 Jan 2025 14:25:35 +1100 Subject: [PATCH 17/22] polished wording and removed tensorrt-llm --- docs/docs/engines/index.mdx | 112 ++++++++++++++---------------------- 1 file changed, 43 insertions(+), 69 deletions(-) diff --git a/docs/docs/engines/index.mdx b/docs/docs/engines/index.mdx index 4043de20d..1610da351 100644 --- a/docs/docs/engines/index.mdx +++ b/docs/docs/engines/index.mdx @@ -6,62 +6,54 @@ title: Engines import DocCardList from "@theme/DocCardList"; :::warning -🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase. +🚧 Cortex is currently under active development. Our documentation outlines the intended behavior of +Cortex, which may not yet be fully implemented in the codebase. ::: -# Engines - -Engines in Cortex serve as execution drivers for machine learning models, providing the runtime environment necessary for model operations. Each engine is specifically designed to optimize the performance and ensure compatibility with its corresponding model types. +Engines in Cortex serve as execution drivers for machine learning models, providing the runtime +and environment necessary for model operations. Each engine is optimized for hardware +performance and ensures compatibility with its corresponding model types. ## Supported Engines -Cortex currently supports three industry-standard engines: +Cortex currently supports two engines: -| Engine | Source | Description | -| -------------------------------------------------------- | --------- | -------------------------------------------------------------------------------------- | -| [llama.cpp](https://github.com/ggerganov/llama.cpp) | ggerganov | Inference of Meta's LLaMA model (and others) in pure C/C++ | -| [ONNX Runtime](https://github.com/microsoft/onnxruntime) | Microsoft | ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator | -| [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) | NVIDIA | GPU-optimized inference engine for large language models | +| Engine | Source | Description | +| -------------------------------------------------------- | --------- | -----------------------------------------------------------------------| +| [llama.cpp](https://github.com/ggerganov/llama.cpp) | ggerganov | Inference of models in GGUF format, written in pure C/C++ | +| [ONNX Runtime](https://github.com/microsoft/onnxruntime) | Microsoft | Cross-platform, high performance ML inference and training accelerator | -> **Note:** Cortex also supports users in building their own engines. +> **Note:** Cortex also supports building and adding your own custom engines. ## Features -- **Engine Retrieval**: Install engines with a single click. +- **Engine Retrieval**: Install the engines above or your own custom one with a single command. - **Engine Management**: Easily manage engines by type, variant, and version. -- **User-Friendly Interface**: Access models via Command Line Interface (CLI) or HTTP API. -- **Engine Selection**: Select the appropriate engines to run your models. 
-
-## Usage
-
-Cortex offers comprehensive support for multiple engine types, including [llama.cpp](https://github.com/ggerganov/llama.cpp), [ONNX Runtime](https://github.com/microsoft/onnxruntime), and [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM). These engines are utilized to load their corresponding model types. The platform provides a flexible management system for different engine variants and versions, enabling developers and users to easily rollback changes or compare performance metrics across different engine versions.
+- **User-Friendly Interface**: Manage your server, engines, and models via Cortex's CLI or HTTP API.
+- **Engine Selection**: Depending on the model and its format, you can use different engines for the same model.

### Installing an engine

-Cortex makes it extremely easy to install an engine. For example, to run a `GGUF` model, you will need the `llama-cpp` engine. To install it, simply enter `cortex engines install llama-cpp` into your terminal and wait for the process to complete. Cortex will automatically pull the latest stable version suitable for your PC's specifications.
-
-#### CLI
To install an engine using the CLI, use the following command:

```sh
cortex engines install llama-cpp
+```
+```
Validating download items, please wait..
Start downloading..
llama-cpp 100%[==================================================] [00m:00s] 1.24 MB/1.24 MB
Engine llama-cpp downloaded successfully!
```

-#### HTTP API
-
To install an engine using the HTTP API, use the following command:

```sh
-curl --location --request POST 'http://127.0.0.1:39281/engines/install/llama-cpp'
+curl http://127.0.0.1:39281/v1/engines/llama-cpp/install \
+  --request POST \
+  --header 'Content-Type: application/json'
```
-
-Example response:
-
```json
{
  "message": "Engine llama-cpp starts installing!"
@@ -70,7 +62,7 @@ Example response:

### Listing engines

-Cortex allowing clients to easily list current engines and their statuses. Each engine type can have different variants and versions, which are crucial for debugging and performance optimization. Different variants cater to specific hardware configurations, such as CUDA for NVIDIA GPUs and Vulkan for AMD GPUs on Windows, or AVX512 support for CPUs.
+Cortex allows you to list current engines and their statuses. Each engine type can have different variants and versions, which are crucial for debugging and performance optimization. Different variants cater to specific hardware configurations, such as CUDA for NVIDIA GPUs and Vulkan for AMD GPUs on Windows, or AVX512 support for CPUs.
#### CLI @@ -78,6 +70,8 @@ You can list the available engines using the following command: ```sh cortex engines list +``` +``` +---+--------------+-------------------+---------+-----------+--------------+ | # | Name | Supported Formats | Version | Variant | Status | +---+--------------+-------------------+---------+-----------+--------------+ @@ -85,8 +79,6 @@ cortex engines list +---+--------------+-------------------+---------+-----------+--------------+ | 2 | llama-cpp | GGUF | 0.1.37 | mac-arm64 | Ready | +---+--------------+-------------------+---------+-----------+--------------+ -| 3 | tensorrt-llm | TensorRT Engines | | | Incompatible | -+---+--------------+-------------------+---------+-----------+--------------+ ``` #### HTTP API @@ -94,11 +86,8 @@ cortex engines list You can also retrieve the list of engines via the HTTP API: ```sh -curl --location 'http://127.0.0.1:39281/v1/engines' +curl http://127.0.0.1:39281/v1/engines ``` - -Example response: - ```json { "data": [ @@ -119,15 +108,6 @@ Example response: "status": "Ready", "variant": "mac-arm64", "version": "0.1.37" - }, - { - "description": "This extension enables chat completion API calls using the TensorrtLLM engine", - "format": "TensorRT Engines", - "name": "tensorrt-llm", - "productName": "tensorrt-llm", - "status": "Incompatible", - "variant": "", - "version": "" } ], "object": "list", @@ -137,7 +117,7 @@ Example response: ### Getting detail information of an engine -Cortex allows users to retrieve detailed information about a specific engine. This includes supported formats, versions, variants, and status. This feature helps users understand the capabilities and compatibility of the engine they are working with. +Cortex allows users to retrieve detailed information about a specific engine. This includes supported formats, versions, variants, and status. This information helps users understand the capabilities and compatibility of their engines. #### CLI @@ -145,14 +125,15 @@ To retrieve detailed information about an engine using the CLI, use the followin ```sh cortex engines get llama-cpp -+-----------+-------------------+---------+-----------+--------+ -| Name | Supported Formats | Version | Variant | Status | -+-----------+-------------------+---------+-----------+--------+ -| llama-cpp | GGUF | 0.1.37 | mac-arm64 | Ready | -+-----------+-------------------+---------+-----------+--------+ +``` +``` ++---+-----------+---------+----------------------------+-----------+ +| # | Name | Version | Variant | Status | ++---+-----------+---------+----------------------------+-----------+ +| 1 | llama-cpp | v0.1.49 | linux-amd64-avx2-cuda-12-0 | Installed | ++---+-----------+---------+----------------------------+-----------+ ``` -This command will display information such as the engine's name, supported formats, version, variant, and status. #### HTTP API @@ -161,40 +142,33 @@ To retrieve detailed information about an engine using the HTTP API, send a GET ```sh curl --location 'http://127.0.0.1:39281/engines/llama-cpp' ``` - -This request will return a JSON response containing detailed information about the engine, including its description, format, name, product name, status, variant, and version. 
-Example response:
-
```json
-{
-  "description": "This extension enables chat completion API calls using the LlamaCPP engine",
-  "format": "GGUF",
-  "name": "llama-cpp",
-  "productName": "llama-cpp",
-  "status": "Not Installed",
-  "variant": "",
-  "version": ""
-}
+[
+  {
+    "engine": "llama-cpp",
+    "name": "linux-amd64-avx2-cuda-12-0",
+    "version": "v0.1.49"
+  }
+]
```

### Uninstalling an engine

-Cortex provides an easy way to uninstall an engine. This is useful when users want to uninstall the current version and then install the latest stable version of a particular engine.
+Cortex provides an easy way to uninstall an engine, which is useful when you want to keep only the
+latest version of an engine installed.

#### CLI

-To uninstall an engine, use the following CLI command:
-
```sh
cortex engines uninstall llama-cpp
```

#### HTTP API

-To uninstall an engine using the HTTP API, send a DELETE request to the appropriate endpoint.
-
```sh
-curl --location --request DELETE 'http://127.0.0.1:39281/engines/llama-cpp'
+curl http://127.0.0.1:39281/v1/engines/llama-cpp/install \
+  --request DELETE \
+  --header 'Content-Type: application/json'
```

Example response:

From e95b6c75f9750e54a64a8f899e04ef610158f512 Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Fri, 31 Jan 2025 14:27:00 +1100
Subject: [PATCH 18/22] polished wording and removed tensorrt-llm mentions

---
 docs/docs/engines/index.mdx    |   5 --
 docs/docs/engines/llamacpp.mdx | 117 +++++++++++++++++----------------
 2 files changed, 62 insertions(+), 60 deletions(-)

diff --git a/docs/docs/engines/index.mdx b/docs/docs/engines/index.mdx
index 1610da351..7eb8fcab3 100644
--- a/docs/docs/engines/index.mdx
+++ b/docs/docs/engines/index.mdx
@@ -5,11 +5,6 @@ title: Engines

import DocCardList from "@theme/DocCardList";

-:::warning
-🚧 Cortex is currently under active development. Our documentation outlines the intended behavior of
-Cortex, which may not yet be fully implemented in the codebase.
-:::
-
Engines in Cortex serve as execution drivers for machine learning models, providing the runtime
and environment necessary for model operations. Each engine is optimized for hardware
performance and ensures compatibility with its corresponding model types.
diff --git a/docs/docs/engines/llamacpp.mdx b/docs/docs/engines/llamacpp.mdx
index 2ace67944..dc4948c2f 100644
--- a/docs/docs/engines/llamacpp.mdx
+++ b/docs/docs/engines/llamacpp.mdx
@@ -3,52 +3,49 @@ title: Llama.cpp
description: GGUF Model Format.
---

-:::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
-:::
+Cortex leverages `llama.cpp` as its default engine for GGUF models. The example model configuration shown
+below illustrates how to configure a GGUF model (in this case DeepSeek's 8B model) with both required and
+optional parameters. The configuration includes metadata, inference parameters, and model loading settings
+that control everything from basic model identification to advanced generation behavior. Cortex can automatically
+generate GGUF models from HuggingFace repositories when a model.yaml file isn't available.

-Cortex uses `llama.cpp` as the default engine by default the `GGUF` format is supported by Cortex.
-
-:::info
-Cortex automatically generates any `GGUF` model from the HuggingFace repo that does not have the `model.yaml` file.
-::: - -## [`model.yaml`](/docs/capabilities/models/model-yaml) Sample ```yaml -## BEGIN GENERAL GGUF METADATA -id: Mistral-Nemo-Instruct-2407 # Model ID unique between models (author / quantization) -model: mistral-nemo # Model ID which is used for request construct - should be unique between models (author / quantization) -name: Mistral-Nemo-Instruct-2407 # metadata.general.name -version: 2 # metadata.version -files: # can be universal protocol (models://) OR absolute local file path (file://) OR https remote URL (https://) - - /home/thuan/cortex/models/mistral-nemo-q8/Mistral-Nemo-Instruct-2407.Q6_K.gguf +# BEGIN GENERAL GGUF METADATA +id: deepseek-r1-distill-llama-8b # Model ID unique between models (author / quantization) +model: deepseek-r1-distill-llama-8b:8b-gguf-q2-k # Model ID which is used for request construct - should be unique between models (author / quantization) +name: deepseek-r1-distill-llama-8b # metadata.general.name +version: 1 +files: # Can be relative OR absolute local file path + - models/cortex.so/deepseek-r1-distill-llama-8b/8b-gguf-q2-k/model.gguf # END GENERAL GGUF METADATA # BEGIN INFERENCE PARAMETERS # BEGIN REQUIRED stop: # tokenizer.ggml.eos_token_id - - + - <|im_end|> + - <|end▁of▁sentence|> # END REQUIRED # BEGIN OPTIONAL +size: 3179134413 stream: true # Default true? -top_p: 0.949999988 # Ranges: 0 to 1 -temperature: 0.699999988 # Ranges: 0 to 1 +top_p: 0.9 # Ranges: 0 to 1 +temperature: 0.7 # Ranges: 0 to 1 frequency_penalty: 0 # Ranges: 0 to 1 presence_penalty: 0 # Ranges: 0 to 1 -max_tokens: 1024000 # Should be default to context length +max_tokens: 4096 # Should be default to context length seed: -1 dynatemp_range: 0 dynatemp_exponent: 1 top_k: 40 -min_p: 0.0500000007 +min_p: 0.05 tfs_z: 1 typ_p: 1 repeat_last_n: 64 repeat_penalty: 1 mirostat: false mirostat_tau: 5 -mirostat_eta: 0.100000001 +mirostat_eta: 0.1 penalize_nl: false ignore_eos: false n_probs: 0 @@ -58,49 +55,59 @@ min_keep: 0 # BEGIN MODEL LOAD PARAMETERS # BEGIN REQUIRED -engine: cortex.llamacpp # engine to run model -prompt_template: "[INST] <>\n{system_message}\n<>\n{prompt}[/INST]" +engine: llama-cpp # engine to run model +prompt_template: <|start_of_text|>{system_message}<|User|>{prompt}<|Assistant|> # END REQUIRED # BEGIN OPTIONAL -ctx_len: 1024000 # llama.context_length | 0 or undefined = loaded from model -ngl: 41 # Undefined = loaded from model +ctx_len: 4096 # llama.context_length | 0 or undefined = loaded from model +n_parallel: 1 +ngl: 34 # Undefined = loaded from model # END OPTIONAL # END MODEL LOAD PARAMETERS - ``` + ## Model Parameters + | **Parameter** | **Description** | **Required** | |------------------------|--------------------------------------------------------------------------------------|--------------| -| `top_p` | The cumulative probability threshold for token sampling. | No | -| `temperature` | Controls the randomness of predictions by scaling logits before applying softmax. | No | -| `frequency_penalty` | Penalizes new tokens based on their existing frequency in the sequence so far. | No | -| `presence_penalty` | Penalizes new tokens based on whether they appear in the sequence so far. | No | -| `max_tokens` | Maximum number of tokens in the output. | No | -| `stream` | Enables or disables streaming mode for the output (true or false). | No | -| `ngl` | Number of attention heads. | No | -| `ctx_len` | Context length (maximum number of tokens). | No | -| `prompt_template` | Template for formatting the prompt, including system messages and instructions. 
| Yes | -| `stop` | Specifies the stopping condition for the model, which can be a word, a letter, or a specific text. | Yes | -| `seed` | Random seed value used to initialize the generation process. | No | -| `dynatemp_range` | Dynamic temperature range used to adjust randomness during generation. | No | -| `dynatemp_exponent` | Exponent used to adjust the effect of dynamic temperature. | No | -| `top_k` | Limits the number of highest probability tokens to consider during sampling. | No | -| `min_p` | Minimum cumulative probability for nucleus sampling. | No | -| `tfs_z` | Top-p frequency selection parameter. | No | -| `typ_p` | Typical sampling probability threshold. | No | -| `repeat_last_n` | Number of tokens to consider for the repetition penalty. | No | -| `repeat_penalty` | Penalty applied to repeated tokens to reduce their likelihood of being selected again. | No | -| `mirostat` | Enables or disables the use of Mirostat algorithm for dynamic temperature adjustment. | No | -| `mirostat_tau` | Target surprise value for Mirostat algorithm. | No | -| `mirostat_eta` | Learning rate for Mirostat algorithm. | No | -| `penalize_nl` | Whether newline characters should be penalized during sampling. | No | -| `ignore_eos` | If true, ignores the end of sequence token, allowing generation to continue indefinitely. | No | -| `n_probs` | Number of top token probabilities to return in the output. | No | -| `min_keep` | Minimum number of tokens to keep during top-k sampling. | No | +| `id` | Unique model identifier including author and quantization | Yes | +| `model` | Model ID used for request construction | Yes | +| `name` | General name metadata for the model | Yes | +| `version` | Model version number | Yes | +| `files` | Path to model GGUF file (relative or absolute) | Yes | +| `stop` | Array of stopping sequences for generation | Yes | +| `engine` | Model execution engine (llama-cpp) | Yes | +| `prompt_template` | Template for formatting the prompt with system message and user input | Yes | +| `size` | Model file size in bytes | No | +| `stream` | Enable streaming output (default: true) | No | +| `top_p` | Nucleus sampling probability threshold (0-1) | No | +| `temperature` | Output randomness control (0-1) | No | +| `frequency_penalty` | Penalty for frequent token usage (0-1) | No | +| `presence_penalty` | Penalty for token presence (0-1) | No | +| `max_tokens` | Maximum output length | No | +| `seed` | Random seed for reproducibility | No | +| `dynatemp_range` | Dynamic temperature range | No | +| `dynatemp_exponent` | Dynamic temperature exponent | No | +| `top_k` | Top-k sampling parameter | No | +| `min_p` | Minimum probability threshold | No | +| `tfs_z` | Tail-free sampling parameter | No | +| `typ_p` | Typical sampling parameter | No | +| `repeat_last_n` | Repetition penalty window | No | +| `repeat_penalty` | Penalty for repeated tokens | No | +| `mirostat` | Enable Mirostat sampling | No | +| `mirostat_tau` | Mirostat target entropy | No | +| `mirostat_eta` | Mirostat learning rate | No | +| `penalize_nl` | Apply penalty to newlines | No | +| `ignore_eos` | Ignore end-of-sequence token | No | +| `n_probs` | Number of probability outputs | No | +| `min_keep` | Minimum tokens to retain | No | +| `ctx_len` | Context window size | No | +| `n_parallel` | Number of parallel instances | No | +| `ngl` | Number of GPU layers | No | \ No newline at end of file +::: --> From a38fc818a5ae1cd8856c97b3e370e8bb535ffed0 Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Fri, 31 Jan 2025 14:44:11 
+1100 Subject: [PATCH 19/22] added -d for detached mode --- docs/docs/cli/run.mdx | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/docs/cli/run.mdx b/docs/docs/cli/run.mdx index 57c8358a2..73144de36 100644 --- a/docs/docs/cli/run.mdx +++ b/docs/docs/cli/run.mdx @@ -9,25 +9,25 @@ import TabItem from "@theme/TabItem"; # `cortex run` -This CLI command is a shortcut to run models easily. It executes this sequence of commands: -1. [`cortex pull`](/docs/cli/models/): This command pulls the specified model if the model is not yet downloaded, or finds a local model. -2. [`cortex engines install`](/docs/cli/engines/): This command installs the specified engines if not yet downloaded. -3. [`cortex models start`](/docs/cli/models/): This command starts the specified model, making it active and ready for interactions. +The lazy dev's way to run models. Does three things: +1. [`cortex pull`](/docs/cli/models/): Grabs the model if you don't have it +2. [`cortex engines install`](/docs/cli/engines/): Sets up engines if missing +3. [`cortex models start`](/docs/cli/models/): Fires up the model ## Usage :::info -You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`. +Need the gory details? Use `--verbose` flag like this: `cortex --verbose [subcommand]` ::: ```sh - cortex [options] + cortex run [options] ``` ```sh - cortex.exe [options] + cortex.exe run [options] ``` @@ -40,4 +40,4 @@ You can use the `--verbose` flag to display more detailed output of the internal | `--gpus` | List of GPUs to use. | No | - | `[0,1]` | | `--ctx_len` | Maximum context length for inference. | No | `min(8192, max_model_context_length)` | `1024` | | `-h`, `--help` | Display help information for the command. | No | - | `-h` | - +| `-d`, `--detached` | Load the model without starting an interactive chat | No | - | `-d` | From 77a6294a887219240c3f072ba092b47c26595bdf Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Fri, 31 Jan 2025 14:45:47 +1100 Subject: [PATCH 20/22] polished barely anything --- docs/docs/cli/start.mdx | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/docs/docs/cli/start.mdx b/docs/docs/cli/start.mdx index 703e5f535..08bb4a23c 100644 --- a/docs/docs/cli/start.mdx +++ b/docs/docs/cli/start.mdx @@ -9,7 +9,7 @@ import TabItem from "@theme/TabItem"; # `cortex start` This command starts the Cortex API server processes. -If the server is not yet running, the server will automatically be started when running other Cortex commands. +If the server is not yet running, the server will automatically start when running other Cortex commands. ## Usage :::info @@ -36,6 +36,3 @@ You can use the `--verbose` flag to display more detailed output of the internal | `-h`, `--help` | Display help information for the command. | No | - | `-h` | | `-p`, `--port ` | Port to serve the application. 
| No | - | `-p 39281` | | `--loglevel ` | Setup loglevel for cortex server, in the priority of `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE` | No | - | `--loglevel INFO` will display ERROR, WARN and INFO logs| - - - From ab8834733f972d49609d17b75df8020ed05d083b Mon Sep 17 00:00:00 2001 From: Ramon Perez Date: Fri, 31 Jan 2025 14:50:57 +1100 Subject: [PATCH 21/22] added tutorial --- docs/docs/assistants/index.md | 179 +++++++++++++++++++++++++++++++++- 1 file changed, 178 insertions(+), 1 deletion(-) diff --git a/docs/docs/assistants/index.md b/docs/docs/assistants/index.md index d38b33e52..887752052 100644 --- a/docs/docs/assistants/index.md +++ b/docs/docs/assistants/index.md @@ -1,3 +1,180 @@ --- title: Assistants ---- \ No newline at end of file +--- + +# Building Local AI Assistants + +While Cortex doesn't yet support the full OpenAI Assistants API, we can build assistant-like functionality +using the chat completions API. Here's how to create persistent, specialized assistants locally. + +## Get Started + +First, fire up our model: + +```sh +cortex run -d llama3.1:8b-gguf-q4-km +``` + +Set up your Python environment: + +```bash +mkdir assistant-test +cd assistant-test +python -m venv .venv +source .venv/bin/activate +pip install openai +``` + +## Creating an Assistant + +Here's how to create an assistant-like experience using chat completions: + +```python +from openai import OpenAI +from typing import List, Dict + +class LocalAssistant: + def __init__(self, name: str, instructions: str): + self.client = OpenAI( + base_url="http://localhost:39281/v1", + api_key="not-needed" + ) + self.name = name + self.instructions = instructions + self.conversation_history: List[Dict] = [] + + def add_message(self, content: str, role: str = "user") -> str: + # Add message to history + self.conversation_history.append({"role": role, "content": content}) + + # Prepare messages with system instructions and history + messages = [ + {"role": "system", "content": self.instructions}, + *self.conversation_history + ] + + # Get response + response = self.client.chat.completions.create( + model="llama3.1:8b-gguf-q4-km", + messages=messages + ) + + # Add assistant's response to history + assistant_message = response.choices[0].message.content + self.conversation_history.append({"role": "assistant", "content": assistant_message}) + + return assistant_message + +# Create a coding assistant +coding_assistant = LocalAssistant( + name="Code Buddy", + instructions="""You are a helpful coding assistant who: + - Explains concepts with practical examples + - Provides working code snippets + - Points out potential pitfalls + - Keeps responses concise but informative""" +) + +# Ask a question +response = coding_assistant.add_message("Can you explain Python list comprehensions with examples?") +print(response) + +# Follow-up question (with conversation history maintained) +response = coding_assistant.add_message("Can you show a more complex example with filtering?") +print(response) +``` + +## Specialized Assistants + +You can create different types of assistants by changing the instructions: + +```python +# Math tutor assistant +math_tutor = LocalAssistant( + name="Math Buddy", + instructions="""You are a patient math tutor who: + - Breaks down problems step by step + - Uses clear explanations + - Provides practice problems + - Encourages understanding over memorization""" +) + +# Writing assistant +writing_assistant = LocalAssistant( + name="Writing Buddy", + instructions="""You are a writing assistant who: + - Helps 
improve clarity and structure + - Suggests better word choices + - Maintains the author's voice + - Explains the reasoning behind suggestions""" +) +``` + +## Working with Context + +Here's how to create an assistant that can work with context: + +```python +class ContextAwareAssistant(LocalAssistant): + def __init__(self, name: str, instructions: str, context: str): + super().__init__(name, instructions) + self.context = context + + def add_message(self, content: str, role: str = "user") -> str: + # Include context in the system message + messages = [ + {"role": "system", "content": f"{self.instructions}\n\nContext:\n{self.context}"}, + *self.conversation_history, + {"role": role, "content": content} + ] + + response = self.client.chat.completions.create( + model="llama3.1:8b-gguf-q4-km", + messages=messages + ) + + assistant_message = response.choices[0].message.content + self.conversation_history.append({"role": role, "content": content}) + self.conversation_history.append({"role": "assistant", "content": assistant_message}) + + return assistant_message + +# Example usage with code review context +code_context = """ +def calculate_average(numbers): + total = 0 + for num in numbers: + total += num + return total / len(numbers) +""" + +code_reviewer = ContextAwareAssistant( + name="Code Reviewer", + instructions="You are a helpful code reviewer. Suggest improvements while being constructive.", + context=code_context +) + +response = code_reviewer.add_message("Can you review this code and suggest improvements?") +print(response) +``` + +## Pro Tips + +- Keep the conversation history focused - clear it when starting a new topic +- Use specific instructions to get better responses +- Consider using temperature and max_tokens parameters for different use cases +- Remember that responses are stateless - maintain context yourself + +## Memory Management + +For longer conversations, you might want to limit the history: + +```python +def trim_conversation_history(self, max_messages: int = 10): + if len(self.conversation_history) > max_messages: + # Keep system message and last N messages + self.conversation_history = self.conversation_history[-max_messages:] +``` + +That's it! While we don't have the full Assistants API yet, we can still create powerful assistant-like +experiences using the chat completions API. The best part? It's all running locally on your machine. 
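+
+As an addendum to the memory management snippet above, here is one way the trimming could be wired in.
+This is a sketch reusing the `LocalAssistant` class from this guide; the `TrimmedAssistant` name and the
+10-message cap are arbitrary choices, not part of any Cortex API:
+
+```python
+class TrimmedAssistant(LocalAssistant):
+    """LocalAssistant variant that caps how much history it replays each turn."""
+
+    def __init__(self, name: str, instructions: str, max_messages: int = 10):
+        super().__init__(name, instructions)
+        self.max_messages = max_messages
+
+    def add_message(self, content: str, role: str = "user") -> str:
+        reply = super().add_message(content, role)
+        # The system prompt is rebuilt on every request, so only the stored
+        # user/assistant turns need trimming here.
+        if len(self.conversation_history) > self.max_messages:
+            self.conversation_history = self.conversation_history[-self.max_messages:]
+        return reply
+```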
From f3024c3f12198d59ed5153eab190dfadd9c132bd Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Fri, 31 Jan 2025 23:26:36 +1100
Subject: [PATCH 22/22] removed tensorrt-llm and made general improvements to
 all docs

---
 docs/docs/architecture/cortexrc.mdx          |   1 -
 docs/docs/architecture/data-folder.mdx       |  17 +-
 docs/docs/basic-usage/index.mdx              |   5 +-
 docs/docs/capabilities/models/index.mdx      |  11 +-
 docs/docs/capabilities/models/model-yaml.mdx |   5 -
 docs/docs/cli/config.mdx                     |  45 ++-
 docs/docs/cli/engines/index.mdx              |  27 +-
 docs/docs/cli/models/index.mdx               | 108 ++++++--
 docs/docs/cli/ps.mdx                         |  20 +-
 docs/docs/cli/pull.mdx                       |  12 +-
 docs/docs/cli/stop.mdx                       |   6 +-
 docs/docs/cli/update.mdx                     |   6 +-
 docs/docs/engines/onnx.mdx                   |  11 +-
 docs/docs/engines/python-engine.mdx          | 275 +++++++------------
 docs/docs/engines/tensorrt-llm.mdx           |  37 ++-
 docs/docs/overview.mdx                       |  51 +---
 docs/docs/quickstart.mdx                     |  22 --
 17 files changed, 314 insertions(+), 345 deletions(-)

diff --git a/docs/docs/architecture/cortexrc.mdx b/docs/docs/architecture/cortexrc.mdx
index 881627c2a..d32039bca 100644
--- a/docs/docs/architecture/cortexrc.mdx
+++ b/docs/docs/architecture/cortexrc.mdx
@@ -34,7 +34,6 @@ You can configure the following parameters in the `.cortexrc` file:
| `apiServerPort` | Port number for the Cortex.cpp API server. | `39281` |
| `logFolderPath` | Path to the folder where logs are located | User's home folder. |
| `logLlamaCppPath` | The llama-cpp engine log file path. | `./logs/cortex.log` |
-| `logTensorrtLLMPath` | The tensorrt-llm engine log file path. | `./logs/cortex.log` |
| `logOnnxPath` | The onnxruntime engine log file path. | `./logs/cortex.log` |
| `maxLogLines` | The maximum log lines that write to file. | `100000` |
| `checkedForUpdateAt` | The last time for checking updates. | `0` |
diff --git a/docs/docs/architecture/data-folder.mdx b/docs/docs/architecture/data-folder.mdx
index b8dca1b29..9a78b57de 100644
--- a/docs/docs/architecture/data-folder.mdx
+++ b/docs/docs/architecture/data-folder.mdx
@@ -51,18 +51,11 @@ it typically follows the structure below:
├── cortex.db
├── engines/
│   ├── cortex.llamacpp/
-│   │   ├── deps/
-│   │   │   ├── libcublasLt.so.12
-│   │   │   └── libcudart.so.12
-│   │   └── linux-amd64-avx2-cuda-12-0/
-│   │   └── ...
-│   └── cortex.tensorrt-llm/
-│   ├── deps/
-│   │   └── ...
-│   └── linux-cuda-12-4/
-│   └── v0.0.9/
-│   ├── ...
-│   └── libtensorrt_llm.so
+│   ├── deps/
+│   │   ├── libcublasLt.so.12
+│   │   └── libcudart.so.12
+│   └── linux-amd64-avx2-cuda-12-0/
+│   └── ...
├── files
├── logs/
│   ├── cortex-cli.log
diff --git a/docs/docs/basic-usage/index.mdx b/docs/docs/basic-usage/index.mdx
index 7e7106aa6..837d78733 100644
--- a/docs/docs/basic-usage/index.mdx
+++ b/docs/docs/basic-usage/index.mdx
@@ -36,7 +36,7 @@ curl --request DELETE \
## Engines

Cortex currently supports a general Python Engine for highly customised deployments and
-3 specialized ones for different multi-modal foundation models: llama.cpp, ONNXRuntime and TensorRT-LLM.
+2 specialized ones for different multi-modal foundation models: llama.cpp and ONNXRuntime.

By default, Cortex installs `llama.cpp` as its main engine as it can be used in most laptops,
desktop environments and operating systems.
@@ -58,8 +58,7 @@ curl --request GET \
      "name": "linux-amd64-avx2-cuda-12-0",
      "version": "v0.1.49"
    }
-  ],
-  "tensorrt-llm": []
+  ]
}
```

diff --git a/docs/docs/capabilities/models/index.mdx b/docs/docs/capabilities/models/index.mdx
index 4dc032575..beda81e69 100644
--- a/docs/docs/capabilities/models/index.mdx
+++ b/docs/docs/capabilities/models/index.mdx
@@ -3,15 +3,9 @@ title: Model Overview
description: The Model section overview
---

-:::warning
-🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended behavior
-of Cortex, which may not yet be fully implemented in the codebase.
-:::
-
Models in Cortex are used for inference purposes (e.g., chat completion, embedding, etc.) after they
have been downloaded locally. Currently, we support different engines including `llama.cpp` with the
-GGUF model format, TensorRT-LLM for optimized inference on NVIDIA hardware, and ONNX for edge or
-different model deployments.
+GGUF model format, and ONNX for edge or different model deployments.

In the future, you will also be able to run remote models (like OpenAI GPT-4 and Claude 3.5 Sonnet)
via Cortex. Support for OpenAI and Anthropic engines is under development and will be available soon.
@@ -27,7 +21,6 @@ can facilitate the following:
-Cortex supports three model formats and each model format require specific engine to run:
+Cortex supports two model formats, and each format requires a specific engine to run:
- GGUF - run with `llama-cpp` engine
- ONNX - run with `onnxruntime` engine
-- TensorRT-LLM - run with `tensorrt-llm` engine

<!--
Within the Python Engine (currently under development), you can run models in other formats

@@ -45,6 +38,6 @@ These models are ready to be downloaded and you can check them out at the link a

Built-in models are made available across the following variants:

-- **By format**: `gguf`, `onnx`, and `tensorrt-llm`
+- **By format**: `gguf` and `onnx`
- **By Size**: `7b`, `13b`, and more.
- **By quantization method**: `q4`, `q8`, and more.
diff --git a/docs/docs/capabilities/models/model-yaml.mdx b/docs/docs/capabilities/models/model-yaml.mdx
index 778d8b90e..ebfd2ec6a 100644
--- a/docs/docs/capabilities/models/model-yaml.mdx
+++ b/docs/docs/capabilities/models/model-yaml.mdx
@@ -6,11 +6,6 @@ description: The model.yaml
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

-:::warning
-🚧 Cortex is currently under active development. Our documentation outlines the intended behavior of
-Cortex, which may not yet be fully implemented in the codebase.
-:::
-
Cortex uses a `model.yaml` file to specify the configuration desired for each model. Models can be
downloaded from the Cortex Model Hub or Hugging Face repositories. Once downloaded, the model data is
parsed and stored in the `models` directory.
diff --git a/docs/docs/cli/config.mdx b/docs/docs/cli/config.mdx
index 471a7a04a..b377c42a6 100644
--- a/docs/docs/cli/config.mdx
+++ b/docs/docs/cli/config.mdx
@@ -9,6 +9,11 @@ import TabItem from "@theme/TabItem";

# `cortex config`

+:::warning
+At the moment, the `cortex config` command only supports a few configurations. More
+configurations will be added soon.
+:::
+
This command allows you to update server configurations such as CORS and Allowed Headers.

## Usage
@@ -65,14 +70,34 @@ This command returns all server configurations.
For example, it returns the following: ``` -+-------------------------------------------------------------------------------------+ -| Config name | Value | -+-------------------------------------------------------------------------------------+ -| allowed_origins | http://localhost:39281 | -+-------------------------------------------------------------------------------------+ -| allowed_origins | http://127.0.0.1:39281/ | -+-------------------------------------------------------------------------------------+ -| cors | true | -+-------------------------------------------------------------------------------------+ ++-----------------------+-------------------------------------+ +| Config name | Value | ++-----------------------+-------------------------------------+ +| allowed_origins | http://localhost:39281 | ++-----------------------+-------------------------------------+ +| allowed_origins | http://127.0.0.1:39281 | ++-----------------------+-------------------------------------+ +| allowed_origins | http://0.0.0.0:39281 | ++-----------------------+-------------------------------------+ +| cors | true | ++-----------------------+-------------------------------------+ +| huggingface_token | | ++-----------------------+-------------------------------------+ +| no_proxy | example.com,::1,localhost,127.0.0.1 | ++-----------------------+-------------------------------------+ +| proxy_password | | ++-----------------------+-------------------------------------+ +| proxy_url | | ++-----------------------+-------------------------------------+ +| proxy_username | | ++-----------------------+-------------------------------------+ +| verify_host_ssl | true | ++-----------------------+-------------------------------------+ +| verify_peer_ssl | true | ++-----------------------+-------------------------------------+ +| verify_proxy_host_ssl | true | ++-----------------------+-------------------------------------+ +| verify_proxy_ssl | true | ++-----------------------+-------------------------------------+ -``` \ No newline at end of file +``` diff --git a/docs/docs/cli/engines/index.mdx b/docs/docs/cli/engines/index.mdx index 2712e0af5..0ebcb9461 100644 --- a/docs/docs/cli/engines/index.mdx +++ b/docs/docs/cli/engines/index.mdx @@ -9,8 +9,8 @@ import TabItem from "@theme/TabItem"; This command allows you to manage various engines available within Cortex. - **Usage**: + ```sh @@ -24,7 +24,6 @@ This command allows you to manage various engines available within Cortex. - **Options**: | Option | Description | Required | Default value | Example | @@ -32,18 +31,18 @@ This command allows you to manage various engines available within Cortex. | `-h`, `--help` | Display help information for the command. | No | - | `-h` | {/* | `-vk`, `--vulkan` | Install Vulkan engine. | No | `false` | `-vk` | */} ---- -# Subcommands: + ## `cortex engines list` + :::info This CLI command calls the following API endpoint: - [List Engines](/api-reference#tag/engines/get/v1/engines) ::: -This command lists all the Cortex's engines. - +This command lists all the Cortex's engines. **Usage**: + ```sh @@ -58,6 +57,7 @@ This command lists all the Cortex's engines. 
For example, it returns the following: + ``` +---+--------------+-------------------+---------+----------------------------+---------------+ | # | Name | Supported Formats | Version | Variant | Status | @@ -66,18 +66,19 @@ For example, it returns the following: +---+--------------+-------------------+---------+----------------------------+---------------+ | 2 | llama-cpp | GGUF | 0.1.34 | linux-amd64-avx2-cuda-12-0 | Ready | +---+--------------+-------------------+---------+----------------------------+---------------+ -| 3 | tensorrt-llm | TensorRT Engines | | | Not Installed | -+---+--------------+-------------------+---------+----------------------------+---------------+ ``` ## `cortex engines get` + :::info This CLI command calls the following API endpoint: - [Get Engine](/api-reference#tag/engines/get/v1/engines/{name}) ::: + This command returns an engine detail defined by an engine `engine_name`. **Usage**: + ```sh @@ -92,6 +93,7 @@ This command returns an engine detail defined by an engine `engine_name`. For example, it returns the following: + ``` +-----------+-------------------+---------+-----------+--------+ | Name | Supported Formats | Version | Variant | Status | @@ -99,11 +101,11 @@ For example, it returns the following: | llama-cpp | GGUF | 0.1.37 | mac-arm64 | Ready | +-----------+-------------------+---------+-----------+--------+ ``` + :::info To get an engine name, run the [`engines list`](/docs/cli/engines/list) command. ::: - **Options**: | Option | Description | Required | Default value | Example | @@ -114,16 +116,18 @@ To get an engine name, run the [`engines list`](/docs/cli/engines/list) command. ## `cortex engines install` + :::info This CLI command calls the following API endpoint: - [Init Engine](/api-reference#tag/engines/post/v1/engines/{name}/init) ::: + This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines: - `llama-cpp` - `onnxruntime` -- `tensorrt-llm` **Usage**: + ```sh @@ -133,7 +137,6 @@ This command downloads the required dependencies and installs the engine within ```sh cortex.exe engines install [options] - ``` @@ -150,6 +153,7 @@ This command downloads the required dependencies and installs the engine within This command uninstalls the engine within Cortex. **Usage**: + ```sh @@ -164,6 +168,7 @@ This command uninstalls the engine within Cortex. For Example: + ```bash ## Llama.cpp engine cortex engines uninstall llama-cpp diff --git a/docs/docs/cli/models/index.mdx b/docs/docs/cli/models/index.mdx index dff452788..6c40ee55e 100644 --- a/docs/docs/cli/models/index.mdx +++ b/docs/docs/cli/models/index.mdx @@ -14,6 +14,7 @@ This command allows you to start, stop, and manage various local or remote model :::info You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`. ::: + ```sh @@ -23,7 +24,6 @@ You can use the `--verbose` flag to display more detailed output of the internal ```sh cortex.exe models [options] - ``` @@ -38,15 +38,16 @@ You can use the `--verbose` flag to display more detailed output of the internal # Subcommands: ## `cortex models get` + :::info This CLI command calls the following API endpoint: - [Get Model](/api-reference#tag/models/get/v1/models/{id}) ::: -This command returns a model detail defined by a `model_id`. - +This command returns a model detail defined by a `model_id`. 
**Usage**: + ```sh @@ -56,16 +57,67 @@ This command returns a model detail defined by a `model_id`. ```sh cortex.exe models get - ``` For example, it returns the following: -```yaml +```json { - "ai_template":"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n","created":9223372036854775888,"ctx_len":4096,"dynatemp_exponent":1.0,"dynatemp_range":0.0,"engine":"llama-cpp","files":["models/cortex.so/llama3.2/3b-gguf-q4-km/model.gguf"],"frequency_penalty":0.0,"gpu_arch":"","id":"Llama-3.2-3B-Instruct","ignore_eos":false,"max_tokens":4096,"min_keep":0,"min_p":0.05000000074505806,"mirostat":false,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"Llama-3.2-3B-Instruct","n_parallel":1,"n_probs":0,"name":"llama3.2:3b-gguf-q4-km","ngl":29,"object":"model","os":"","owned_by":"","penalize_nl":false,"precision":"","presence_penalty":0.0,"prompt_template":"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n","quantization_method":"","repeat_last_n":64,"repeat_penalty":1.0,"result":"OK","seed":-1,"stop":["<|eot_id|>"],"stream":true,"system_template":"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n","temperature":0.69999998807907104,"text_model":false,"tfs_z":1.0,"top_k":40,"top_p":0.89999997615814209,"typ_p":1.0,"user_template":"<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n","version":"2" + "ai_template" : "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n", + "created" : 127638593791813, + "ctx_len" : 8192, + "dynatemp_exponent" : 1.0, + "dynatemp_range" : 0.0, + "engine" : "llama-cpp", + "files" : + [ + "models/cortex.so/llama3.1/8b-gguf-q4-km/model.gguf" + ], + "frequency_penalty" : 0.0, + "gpu_arch" : "", + "id" : "llama3.1:8b-gguf-q4-km", + "ignore_eos" : false, + "max_tokens" : 8192, + "min_keep" : 0, + "min_p" : 0.050000000000000003, + "mirostat" : false, + "mirostat_eta" : 0.10000000000000001, + "mirostat_tau" : 5.0, + "model" : "llama3.1:8b-gguf-q4-km", + "n_parallel" : 1, + "n_probs" : 0, + "name" : "llama3.1:8b-gguf-q4-km", + "ngl" : 33, + "object" : "", + "os" : "", + "owned_by" : "", + "penalize_nl" : false, + "precision" : "", + "presence_penalty" : 0.0, + "prompt_template" : "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n", + "quantization_method" : "", + "repeat_last_n" : 64, + "repeat_penalty" : 1.0, + "seed" : -1, + "size" : 4920739981, + "stop" : + [ + "<|end_of_text|>", + "<|eot_id|>", + "<|eom_id|>" + ], + "stream" : true, + "system_template" : "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n", + "temperature" : 0.59999999999999998, + "text_model" : false, + "tfs_z" : 1.0, + "top_k" : 40, + "top_p" : 0.90000000000000002, + "typ_p" : 1.0, + "user_template" : "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n", + "version" : "1" } ``` :::info @@ -89,6 +141,7 @@ This command lists all the downloaded local and remote models. **Usage**: + ```sh @@ -102,8 +155,9 @@ This command lists all the downloaded local and remote models. 
-For example, it returns the following:w
-```bash
+For example, it returns the following:
+
+```
 +---------+---------------------------------------------------------------------------+
 | (Index) | ID |
 +---------+---------------------------------------------------------------------------+
 | 3 | TheBloke:Mistral-7B-Instruct-v0.1-GGUF:mistral-7b-instruct-v0.1.Q2_K.gguf |
 +---------+---------------------------------------------------------------------------+
-
 ```
 
 **Options**:
@@ -126,16 +179,18 @@ For example, it returns the following:w
 | `--cpu_mode` | Display CPU mode. | No | - | `--cpu_mode` |
 | `--gpu_mode` | Display GPU mode. | No | - | `--gpu_mode` |
 
+
 ## `cortex models start`
+
 :::info
 This CLI command calls the following API endpoint:
 - [Start Model](/api-reference#tag/models/post/v1/models/{modelId}/start)
 :::
-This command starts a model defined by a `model_id`.
-
+This command starts a model specified by a `model_id`. 
 **Usage**:
+
   ```sh
 
@@ -145,12 +200,10 @@ This command starts a model defined by a `model_id`.
 
   ```sh
   cortex.exe models start [options]
-
   ```
 
-
 :::info
 This command uses a `model_id` from the model that you have downloaded or available in your file system.
 :::
@@ -165,15 +218,16 @@ This command uses a `model_id` from the model that you have downloaded or availa
 | `-h`, `--help` | Display help information for the command. | No | - | `-h` |
 
 ## `cortex models stop`
+
 :::info
 This CLI command calls the following API endpoint:
 - [Stop Model](/api-reference#tag/models/post/v1/models/{modelId}/stop)
 :::
-This command stops a model defined by a `model_id`.
-
+This command stops a model specified by a `model_id`. 
 **Usage**:
+
   ```sh
 
@@ -183,7 +237,6 @@ This command stops a model defined by a `model_id`.
 
   ```sh
   cortex.exe models stop
-
   ```
 
@@ -191,6 +244,7 @@ This command stops a model defined by a `model_id`.
 :::info
 This command uses a `model_id` from the model that you have started before.
 :::
+
 **Options**:
 
 | Option | Description | Required | Default value | Example |
@@ -199,15 +253,16 @@ This command uses a `model_id` from the model that you have started before.
 | `-h`, `--help` | Display help information for the command. | No | - | `-h` |
 
 ## `cortex models delete`
+
 :::info
 This CLI command calls the following API endpoint:
 - [Delete Model](/api-reference#tag/models/delete/v1/models/{id})
 :::
-This command deletes a local model defined by a `model_id`.
-
+This command deletes a local model specified by a `model_id`. 
 **Usage**:
+
   ```sh
 
@@ -217,7 +272,6 @@ This command deletes a local model defined by a `model_id`.
 
   ```sh
   cortex.exe models delete
-
   ```
 
@@ -227,20 +281,23 @@ This command uses a `model_id` from the model that you have downloaded or availa
 :::
 
 **Options**:
+
 | Option | Description | Required | Default value | Example |
 |---------------------------|-----------------------------------------------------------------------------|----------|----------------------|------------------------|
 | `model_id` | The identifier of the model you want to delete. | Yes | - | `mistral` |
 | `-h`, `--help` | Display help for command. | No | - | `-h` |
 
 ## `cortex models update`
+
 :::info
 This CLI command calls the following API endpoint:
 - [Update Model](/api-reference#tag/models/patch/v1/models/{modelId)
 :::
-This command updates the `model.yaml` file of a local model.
+This command updates the `model.yaml` file of a local model. 
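+Under the hood this issues a `PATCH` request against the model. A minimal sketch, assuming the default server address and assuming the request-body keys mirror the `model.yml` fields (worth verifying against the API reference):
+
+```sh
+# Hypothetical body: update a single inference parameter on a local model
+curl -X PATCH http://127.0.0.1:39281/v1/models/llama3.1:8b-gguf-q4-km \
+--header 'Content-Type: application/json' \
+--data '{"temperature": 0.7}'
+```
+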
**Usage**:
+
   ```sh
 
@@ -250,13 +307,13 @@ This command updates the `model.yaml` file of a local model.
 
   ```sh
   cortex.exe models update [options]
-
   ```
 
 
 **Options**:
+
 | Option | Description | Required | Default value | Example |
 |---------------------------|-----------------------------------------------------------------------------|----------|----------------------|------------------------|
 | `-h`, `--help` | Display help for command. | No | - | `-h` |
@@ -306,14 +363,16 @@ This command updates the `model.yaml` file of a local model.
 | `--n_probs` | Number of probability outputs to return. | No | - | `--n_probs 5` |
 
 ## `cortex models import`
-This command imports the local model using the model's `gguf` file.
+This command imports a local model using the model's `gguf` file. 
 **Usage**:
+
 :::info
 This CLI command calls the following API endpoint:
 - [Import Model](/api-reference#tag/models/post/v1/models/import)
 :::
+
   ```sh
 
@@ -323,15 +382,14 @@ This CLI command calls the following API endpoint:
 
   ```sh
   cortex.exe models import --model_id <model_id> --model_path <model_path>
-
   ```
 
-
 **Options**:
+
 | Option | Description | Required | Default value | Example |
 |---------------------------|-----------------------------------------------------------------------------|----------|----------------------|------------------------|
 | `-h`, `--help` | Display help for command. | No | - | `-h` |
 | `--model_id` | The identifier of the model. | Yes | - | `mistral` |
-| `--model_path` | The path of the model source file. | Yes | - | `/path/to/your/model.gguf` |
\ No newline at end of file
+| `--model_path` | The path of the model source file. | Yes | - | `/path/to/your/model.gguf` |
diff --git a/docs/docs/cli/ps.mdx b/docs/docs/cli/ps.mdx
index a70a9501c..5b531165b 100644
--- a/docs/docs/cli/ps.mdx
+++ b/docs/docs/cli/ps.mdx
@@ -12,6 +12,7 @@ import TabItem from "@theme/TabItem";
 This command shows the running model and its status (Engine, RAM, VRAM, and Uptime).
 
 ## Usage
+
   ```sh
 
@@ -27,8 +28,7 @@ This command shows the running model and its status (Engine, RAM, VRAM, and Upti
 
 For example, it returns the following table:
 
-```bash
-> cortex ps
+```
 +------------------------+-----------+-----------+-----------+-------------------------------+
 | Model | Engine | RAM | VRAM | Uptime |
 +------------------------+-----------+-----------+-----------+-------------------------------+
@@ -45,4 +45,18 @@ For example, it returns the following table:
 :::info
 You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`. 
-::: \ No newline at end of file +::: + +```sh +cortex --verbose ps +``` +``` +20250131 12:03:52.995079 UTC 472664 INFO Gpu Driver Version: 565.77 - system_info_utils.cc:20 +20250131 12:03:52.995393 UTC 472664 INFO CUDA Version: 12.7 - system_info_utils.cc:31 ++------------------------+-----------+--------+---------+---------------------------------+ +| Model | Engine | RAM | VRAM | Up time | ++------------------------+-----------+--------+---------+---------------------------------+ +| llama3.1:8b-gguf-q4-km | llama-cpp | 0.00 B | 4.58 GB | 9 hours, 40 minutes, 34 seconds | ++------------------------+-----------+--------+---------+---------------------------------+ +20250131 12:03:53.012323 UTC 472670 INFO Will not check for new update, return the cache latest: v1.0.8 - cortex_upd_cmd.cc:149 +``` diff --git a/docs/docs/cli/pull.mdx b/docs/docs/cli/pull.mdx index 028962896..62103aa7d 100644 --- a/docs/docs/cli/pull.mdx +++ b/docs/docs/cli/pull.mdx @@ -12,7 +12,7 @@ import TabItem from "@theme/TabItem"; This CLI command calls the following API endpoint: - [Download Model](/api-reference#tag/pulling-models/post/v1/models/pull) ::: -This command displays downloaded models, or displays models available for downloading. +This command displays downloaded models, or displays models available for downloading. There are 3 ways to download models: - From Cortex's [Built-in models](/models): `cortex pull ` @@ -33,19 +33,21 @@ You can use the `--verbose` flag to display more detailed output of the internal ```sh - cortex pull [options] + cortex pull [options] ``` ```sh - cortex.exe pull [options] + cortex.exe pull [options] ``` For example, this returns the following: ```bash -> cortex pull llama3.2 +cortex pull llama3.2 +``` +``` Downloaded models: llama3.2:3b-gguf-q4-km @@ -68,4 +70,4 @@ Select a model (1-9): | Option | Description | Required | Default value | Example | | -------------- | ------------------------------------------------- | -------- | ------------- | ----------- | | `model_id` | The identifier of the model you want to download. | Yes | - | `mistral` | -| `-h`, `--help` | Display help information for the command. | No | - | `-h` | \ No newline at end of file +| `-h`, `--help` | Display help information for the command. | No | - | `-h` | diff --git a/docs/docs/cli/stop.mdx b/docs/docs/cli/stop.mdx index 0b8625f9e..7422037d1 100644 --- a/docs/docs/cli/stop.mdx +++ b/docs/docs/cli/stop.mdx @@ -8,16 +8,20 @@ import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; # `cortex stop` + :::info This CLI command calls the following API endpoint: - [Stop Cortex](/api-reference#tag/system/delete/v1/system) ::: + This command stops the API server. ## Usage + :::info You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`. ::: + ```sh @@ -36,4 +40,4 @@ You can use the `--verbose` flag to display more detailed output of the internal | Option | Description | Required | Default value | Example | |-------------------|-------------------------------------------------------|----------|---------------|-------------| -| `-h`, `--help` | Display help information for the command. | No | - | `-h` | \ No newline at end of file +| `-h`, `--help` | Display help information for the command. 
| No | - | `-h` | diff --git a/docs/docs/cli/update.mdx b/docs/docs/cli/update.mdx index 0f06f8476..3cc40ba20 100644 --- a/docs/docs/cli/update.mdx +++ b/docs/docs/cli/update.mdx @@ -16,10 +16,11 @@ This command updates Cortex.cpp to the provided version or the latest version. :::info You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`. ::: + ```sh - cortex update [options] + sudo cortex update [options] ``` @@ -39,6 +40,3 @@ By default, if no version is specified, Cortex.cpp will be updated to the latest |----------------------------|-------------------------------------------|----------|---------------|------------------------| | `-h`, `--help` | Display help information for the command. | No | - | `-h` | | `-v` | Specify the version of the Cortex. | No | - | `-v1.0.1`| - - - diff --git a/docs/docs/engines/onnx.mdx b/docs/docs/engines/onnx.mdx index 370aa1e53..9414ef537 100644 --- a/docs/docs/engines/onnx.mdx +++ b/docs/docs/engines/onnx.mdx @@ -5,7 +5,7 @@ unlisted: true --- :::warning -🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase. +🚧 Cortex is currently under active development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase. ::: Cortex uses `onnxruntime-genai` with DirectML to provide GPU acceleration for AMD, Intel, NVIDIA, and Qualcomm GPUs. @@ -14,11 +14,14 @@ Cortex uses `onnxruntime-genai` with DirectML to provide GPU acceleration for AM ```bash ## Initialize the ONNX engine cortex engines onnx init +``` ## Run an ONNX model +```sh cortex run openhermes-2.5:7b-onnx ``` -## [`model.yaml`](/docs/capabilities/models/model-yaml) Sample + +## `model.yaml` Sample ```yaml name: openhermes-2.5 model: openhermes @@ -33,7 +36,7 @@ top_p: 1.0 temperature: 1.0 frequency_penalty: 0 presence_penalty: 0 -max_tokens: 2048 +max_tokens: 2048 stream: true # true | false ``` @@ -58,4 +61,4 @@ stream: true # true | false You can download a `ONNX` model from the following: - [Cortex Model Repos](/docs/capabilities/models/sources/cortex-hub) - [HuggingFace Model Repos](/docs/capabilities/models/sources/hugging-face) -::: --> \ No newline at end of file +::: --> diff --git a/docs/docs/engines/python-engine.mdx b/docs/docs/engines/python-engine.mdx index 4fe2a5c0b..64996406d 100644 --- a/docs/docs/engines/python-engine.mdx +++ b/docs/docs/engines/python-engine.mdx @@ -1,36 +1,36 @@ --- title: Python Engine -description: Interface for running Python process through cortex +description: Interface for running Python processes through Cortex --- :::warning -🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase. +🚧 Cortex.cpp is currently under active development. Our documentation outlines the intended +behavior of Cortex, which may not yet be fully implemented in the codebase. ::: -# Guild to Python Engine -## Introduction -To run python program, we need python environment and python intepreter to running the different process from the main cortex process. All requests to The python program will be routed through cortex with Http API protocol. -The python-engine acts like a process manager, mange all python processes. 
-Each python program will be treated as a model and has it own model.yml template
+The Python Engine manages Python processes that run models via Cortex. Each Python program is treated as
+a model with its own `model.yml` configuration template. All requests are routed through Cortex using HTTP.
 
-## Python engine cpp implementation
-The python-engine implemented the [EngineI Interface ](/docs/engines/engine-extension) with the following map:
-- LoadModel: Load the python program and start the python process
-- UnloadModel: Stop the python process
-- GetModelStatus: Send health check requests to the python processes
-- GetModels: Get running python program
+## Python Engine Implementation
 
-Beside the EngineI interface, the python-engine also implemented the HandleInference and HandleRouteRequest method:
-- HandleInference: Send inference request to the python process
-- HandleRouteRequest: route any types of requests to the python process
+The Python Engine implements the C++ [EngineI interface](/docs/engines/engine-extension) and exposes these core methods:
 
-Python engine is a built in engine of cortex-cpp, so that it will automatically loaded when load model, users don't need to download engine or load/unload engine like working with llama-cpp engine.
+- `LoadModel`: Starts Python process and loads model
+- `UnloadModel`: Stops process and unloads model
+- `GetModelStatus`: Health check for running processes
+- `GetModels`: Lists active Python models
 
-## Python program implementation
+Additional methods:
+- `HandleInference`: Routes inference requests to Python process
+- `HandleRouteRequest`: Routes arbitrary requests to Python process
 
-Each python program will be treated as python model. Each python model has it own `model.yml` template:
-```yaml
+The Python Engine is built into Cortex.cpp and loads automatically when needed.
+
+## Model Configuration
+
+Each Python model requires a `model.yml` configuration file:
 
+```yaml
 id: ichigo-0.5:fp16-linux-amd64
 model: ichigo-0.5:fp16-linux-amd64
 name: Ichigo Wrapper
@@ -57,7 +57,6 @@ extra_params:
   whisper_port: 3348
 ```
 
-
 | **Parameter** | **Description** | **Required** |
 |-----------------|-----------------------------------------------------------------------------------------------------------|--------------|
 | `id` | Unique identifier for the model, typically includes version and platform information. | Yes |
@@ -65,126 +64,103 @@ extra_params:
 | `name` | The human-readable name for the model, used as the `model_id`. | Yes |
 | `version` | The specific version number of the model. | Yes |
 | `port` | The network port on which the Python program will listen for requests. | Yes |
-| `script` | Path to the main Python script to be executed by the engine. This is relative path to the model folder | Yes |
+| `script` | Path to the main Python script to be executed by the engine. This is a relative path to the model folder | Yes | 
 | `log_path` | File location where logs will be stored for the Python program's execution. log_path is relative path of cortex data folder | No |
 | `log_level` | The level of logging detail (e.g., INFO, DEBUG). | No |
 | `command` | The command used to launch the Python program, typically starting with 'python'. | Yes |
 | `files` | For python models, the files is the path to folder contains all python scripts, model binary and environment to run the program | No |
 | `depends` | Dependencies required by the model, specified by their identifiers. 
The dependencies are other models | No | | `engine` | Specifies the engine to use, which in this context is 'python-engine'. | Yes | -| `extra_params` | Additional parameters that may be required by the model, often including device IDs and network ports of dependencies models. This extra_params will be passed when running python script | No | +| `extra_params` | Additional parameters passed to the Python script at runtime | No | + +## Example: Ichigo Python Model -## Ichigo python with cortex +[Ichigo python](https://github.com/janhq/ichigo) is a built-in Cortex model for chat with audio support. + +### Required Models + +Ichigo requires these models: -[Ichigo python](https://github.com/janhq/ichigo) is built in model in cortex that support chat with audio. -### Downloads models -Ichigo python requires 4 models to run: - ichigo-0.5 - whispervq - ichigo-0.4 -- fish-speech (this model is required if user want to use text to speech mode) +- fish-speech (optional, for text-to-speech) + +Download models for your platform (example for Linux AMD64): -Firstly, you need to download these models, remember to chose the correct version base on your device and operating system. -for example for linux amd64: ```sh -> curl --location '127.0.0.1:39281/v1/models/pull' \ +curl --location '127.0.0.1:39281/v1/models/pull' \ --header 'Content-Type: application/json' \ --data '{"model":"ichigo-0.5:fp16-linux-amd64"}' -> curl --location '127.0.0.1:39281/v1/models/pull' \ +curl --location '127.0.0.1:39281/v1/models/pull' \ --header 'Content-Type: application/json' \ --data '{"model":"ichigo-0.4:8b-gguf-q4-km"}' -> curl --location '127.0.0.1:39281/v1/models/pull' \ +curl --location '127.0.0.1:39281/v1/models/pull' \ --header 'Content-Type: application/json' \ --data '{"model":"whispervq:fp16-linux-amd64"}' -> curl --location '127.0.0.1:39281/v1/models/pull' \ +curl --location '127.0.0.1:39281/v1/models/pull' \ --header 'Content-Type: application/json' \ --data '{"model":"fish-speech:fp16-linux-amd64"}' ``` -### Start model -Each python model will run it owns server with a port defined in `model.yml`, you can update `model.yml` to change the port. 
-for the ichigo-0.5 model, it has `extra_params` that need to be defined correctly:
-extra_params:
-  device_id: 0
-  fish_speech_port: 22312
-  ichigo_model: ichigo-0.4:8b-gguf-q4-km
-  ichigo_port: 39281
-  whisper_port: 3348
+### Model Management
 
-To start model just need to send API:
+Start model:
 ```sh
->  curl --location '127.0.0.1:39281/v1/models/start' \
+curl --location '127.0.0.1:39281/v1/models/start' \
 --header 'Content-Type: application/json' \
---data '{
-    "model":"ichigo-0.5:fp16-linux-amd64"
-}'
-
+--data '{"model":"ichigo-0.5:fp16-linux-amd64"}'
 ```
 
-Then the model will start all dependencies model of ichigo
-
-### Check Status
-
-You can check the status of the model by sending API:
+Check status:
+```sh
+curl --location '127.0.0.1:39281/v1/models/status/fish-speech:fp16-linux-amd64'
 ```
-
-curl --location '127.0.0.1:39281/v1/models/status/fish-speech:fp16-linux-amd64'
+
+Stop model:
+```sh
+curl --location '127.0.0.1:39281/v1/models/stop' \
+--header 'Content-Type: application/json' \
+--data '{"model":"ichigo-0.5:fp16-linux-amd64"}'
 ```
 
 ### Inference
 
-You can send inference request to the model by sending API:
+Example inference request:
 ```sh
->  curl --location '127.0.0.1:39281/v1/inference' \
+curl --location '127.0.0.1:39281/v1/inference' \
 --header 'Content-Type: application/json' \
 --data '{
     "model":"ichigo-0.5:fp16-linux-amd64",
     "engine":"python-engine",
    "body":{
-        "messages": [
-        {
-            "role":"system",
-"content":"you are helpful assistant, you must answer questions short and concil!"
+      "messages": [{
+        "role":"system",
+        "content":"You are a helpful assistant. Keep your answers short and concise!"
+      }],
+      "input_audio": {
+        "data": "base64_encoded_audio_data",
+        "format": "wav"
+      },
+      "model": "ichigo-0.4:8b-gguf-q4-km",
+      "stream": true,
+      "temperature": 0.7,
+      "top_p": 0.9,
+      "max_tokens": 2048,
+      "presence_penalty": 0,
+      "frequency_penalty": 0,
+      "stop": ["<|eot_id|>"],
+      "output_audio": true
+    }
-    ],
-    "input_audio": {
-        "data": "base64_encoded_audio_data",
-        "format": "wav"
-    },
-    "model": "ichigo-0.4:8b-gguf-q4km",
-    "stream": true,
-    "temperature": 0.7,
-    "top_p": 0.9,
-    "max_tokens": 2048,
-    "presence_penalty": 0,
-    "frequency_penalty": 0,
-    "stop": [
-        "<|eot_id|>"
-    ],
-    "output_audio": true
-}}'
-
-```
-
-### Stop Model
-
-You can stop the model by sending API:
-```sh
->  curl --location '127.0.0.1:39281/v1/models/stop' \
---header 'Content-Type: application/json' \
---data '{
-    "model":"ichigo-0.5:fp16-linux-amd64"
 }'
 ```
 
-Cortex also stop all dependencies of this model.
-
-### Route requests
-
-Beside from that, cortex also support route any kind of request to python program through the route request endpoint.
+### Route Requests
 
+Generic request routing example:
 ```sh
@@ -195,44 +171,37 @@ curl --location '127.0.0.1:39281/v1/route/request' \
 --header 'Content-Type: application/json' \
 --data '{
     "model":"whispervq:fp16-linux-amd64",
     "path":"/inference",
     "engine":"python-engine",
     "method":"post",
     "transform_response":"{ {%- set first = true -%} {%- for key, value in input_request -%} {%- if key == \"tokens\" -%} {%- if not first -%},{%- endif -%} \"{{ key }}\": {{ tojson(value) }} {%- set first = false -%} {%- endif -%} {%- endfor -%} }",
     "body": {
-        "data": "base64 data",
-        "format": "wav"
-}
-}
-'
-
+      "data": "base64 data",
+      "format": "wav"
+  }
+}'
 ```
 
-## Add new python model
-### Python model implementation
+## Adding New Python Models
+
+### Implementation Requirements
 
-The implementation of a python program can follow this [implementation](https://github.com/janhq/ichigo/pull/154). 
-The python server should expose at least 2 endpoint: -- /health : for checking status of server. -- /inference : for inferencing purpose. +### Implementation Requirements -Exemple of the main entrypoint `src/app.py`: +Python models must expose at least two endpoints: +- `/health`: Server status check +- `/inference`: Model inference -``` +Example server implementation: + +```python import argparse import os import sys from pathlib import Path - from contextlib import asynccontextmanager - from typing import AsyncGenerator, List - import uvicorn from dotenv import load_dotenv from fastapi import APIRouter, FastAPI - from common.utility.logger_utility import LoggerUtility from services.audio.audio_controller import AudioController from services.audio.implementation.audio_service import AudioService from services.health.health_controller import HealthController - def create_app() -> FastAPI: routes: List[APIRouter] = [ HealthController(), @@ -243,75 +212,35 @@ def create_app() -> FastAPI: app.include_router(route) return app - def parse_argument(): parser = argparse.ArgumentParser(description="Ichigo-wrapper Application") - parser.add_argument('--log_path', type=str, - default='Ichigo-wrapper.log', help='The log file path') - parser.add_argument('--log_level', type=str, default='INFO', - choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'TRACE'], help='The log level') - parser.add_argument('--port', type=int, default=22310, - help='The port to run the Ichigo-wrapper app on') - parser.add_argument('--device_id', type=str, default="0", - help='The port to run the Ichigo-wrapper app on') - parser.add_argument('--package_dir', type=str, default="", - help='The package-dir to be extended to sys.path') - parser.add_argument('--whisper_port', type=int, default=3348, - help='The port of whisper vq model') - parser.add_argument('--ichigo_port', type=int, default=39281, - help='The port of ichigo model') - parser.add_argument('--fish_speech_port', type=int, default=22312, - help='The port of fish speech model') - parser.add_argument('--ichigo_model', type=str, default="ichigo:8b-gguf-q4-km", - help='The ichigo model name') - args = parser.parse_args() - return args - + parser.add_argument('--log_path', type=str, default='Ichigo-wrapper.log', help='The log file path') + parser.add_argument('--log_level', type=str, default='INFO', choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'TRACE']) + parser.add_argument('--port', type=int, default=22310) + parser.add_argument('--device_id', type=str, default="0") + parser.add_argument('--package_dir', type=str, default="") + parser.add_argument('--whisper_port', type=int, default=3348) + parser.add_argument('--ichigo_port', type=int, default=39281) + parser.add_argument('--fish_speech_port', type=int, default=22312) + parser.add_argument('--ichigo_model', type=str, default="ichigo:8b-gguf-q4-km") + return parser.parse_args() if __name__ == "__main__": args = parse_argument() LoggerUtility.init_logger(__name__, args.log_level, args.log_path) - - env_path = Path(os.path.dirname(os.path.realpath(__file__)) - ) / "variables" / ".env" - AudioService.initialize( - args.whisper_port, args.ichigo_port, args.fish_speech_port, args.ichigo_model) + env_path = Path(os.path.dirname(os.path.realpath(__file__))) / "variables" / ".env" + AudioService.initialize(args.whisper_port, args.ichigo_port, args.fish_speech_port, args.ichigo_model) load_dotenv(dotenv_path=env_path) - app: FastAPI = create_app() + app = create_app() print("Server is running at: 0.0.0.0:", args.port) 
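+    # Note: Cortex launches this script itself and supplies --port, --log_path
+    # and --log_level on the command line; keys under `extra_params` in
+    # model.yml are forwarded as extra flags, overriding the defaults above.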
uvicorn.run(app=app, host="0.0.0.0", port=args.port) - ``` +### Deployment + +1. Create model files following the example above +2. Add required `requirements.txt` and `requirements.cuda.txt` files +3. Trigger the [Python Script Package CI](https://github.com/janhq/cortex.cpp/actions/workflows/python-script-package.yml) +4. Trigger the [Python Venv Package CI](https://github.com/janhq/cortex.cpp/actions/workflows/python-venv-package.yml) -The parse_argument must include parameters to integrate with cortex: -- port -- log_path -- log_level - -The python server can also have extra parameters and need to be defined in `extra_params` part of `model.yml` -When starting server, the parameters will be override by the parameters in `model.yml` - -When finished python code, you need to trigger this [CI](https://github.com/janhq/cortex.cpp/actions/workflows/python-script-package.yml) -so that the latest code will be pushed to cortexso huggingface. After pushed to HF, user can download and use it. -The CI will clone and checkout approriate branch of your repo and navigate to the correct folder base on input parameters.The CI needs 5 parameters: -- Path to model directory in github repo: the path to folder contains all model scripts for running python program -- name of repo to be checked out: name of github repo -- branch to be checked out: name of branch to be checked out -- name of huggingface repo to be pushed: name of huggingface repo to be pushed (e.g. cortexso/ichigo-0.5) -- prefix of hf branch: The prefix of branch name (e.g `fp16`) - -### Python venv package -For packaging python venv, you need to prepare a `requirements.txt` and a `requirements.cuda.txt` file in the root of your project. -The `requirements.txt` file should contain all the dependencies for your project, and the `requirements.cuda.txt` file should contain all the dependencies that require CUDA. -The `requirements.txt` will be used to build venv for MacOS. The `requirements.cuda.txt` will be used to build venv for Linux and Windows. - -After finished you need to trigger this [CI](https://github.com/janhq/cortex.cpp/actions/workflows/python-venv-package.yml). - After the CI is finished, the venv for 4 os will be build and pushed to HuggingFace and it can be downloaded and used by users. - The CI will clone and checkout approriate branch of your repo and navigate to the correct folder contains `requirements.txt` base on input parameters.The CI needs 5 parameters: -- Path to model directory in github repo: the path to folder contains all model scripts for running python program -- name of repo to be checked out: name of github repo -- name of model to be release: name of the model that we are building venv for (e.g whispervq) -- branch to be checked out: name of branch to be checked out -- name of huggingface repo to be pushed: name of huggingface repo to be pushed (e.g. cortexso/ichigo-0.5) -- prefix of hf branch: The prefix of branch name (e.g `fp16`) \ No newline at end of file +The CIs will build and publish your model to Hugging Face where it can then be downloaded and used. diff --git a/docs/docs/engines/tensorrt-llm.mdx b/docs/docs/engines/tensorrt-llm.mdx index 94a3d3875..f3dfd6aff 100644 --- a/docs/docs/engines/tensorrt-llm.mdx +++ b/docs/docs/engines/tensorrt-llm.mdx @@ -5,19 +5,42 @@ unlisted: true --- :::warning -🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase. +🚧 Cortex is currently under development. 
Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
 :::
 
 Cortex uses the `tensorrt-llm` inference library for NVIDIA GPUs acceleration.
 
-## Run Model
+## Download the Engine
+
 ```bash
-## Initialize the TensorRT-LLM engine
-cortex engines tensorrt-llm init
+cortex engines install tensorrt-llm
+```
+```
+tensorrt-llm 100%[========================] 00m:00s 1.09 GB/1.09 GB
+cuda 100%[========================] 00m:00s 346.61 MB/346.61 MB
+Engine tensorrt-llm downloaded successfully!
+```
 
-## Run a TensorRT-LLM model
-cortex run openhermes-2.5:7b-tensorrt-llm
+## Load the TensorRT-LLM Engine
+
+```bash
+cortex engines load tensorrt-llm
 ```
+
+To run a model using the `tensorrt-llm` engine, you will have to specify a variant built for your operating system and GPU architecture, for example:
+```bash
+cortex run mistral:7b-tensorrt-llm-linux-ada
+```
+```
+Start downloading..
+config.json 100%[========================] 00m:00s 5.92 KB/5.92 KB
+model.yml 100%[========================] 00m:00s 445.00 B/445.00 B
+rank0.engine 89%[=====================> ] 01m:13s 3.49 GB/3.88 GB
+tokenizer.model 100%[========================] 00m:00s 573.64 KB/573.64 KB
+Model mistral:7b-tensorrt-llm-linux-ada downloaded successfully!
+```
+
+
 ## [`model.yaml`](/docs/capabilities/models/model-yaml) Sample
 ```yaml
 name: Openhermes-2.5 7b Linux Ada
@@ -69,4 +92,4 @@ You can download a `TensorRT-LLM` model from the following:
 - [Cortex Model Repos](/docs/capabilities/models/sources/cortex-hub)
 - [HuggingFace Model Repos](/docs/capabilities/models/sources/hugging-face)
 - Nvidia Catalog (Coming Soon!)
-::: -->
\ No newline at end of file
+::: -->
diff --git a/docs/docs/overview.mdx b/docs/docs/overview.mdx
index fd181d618..0dcabe41f 100644
--- a/docs/docs/overview.mdx
+++ b/docs/docs/overview.mdx
@@ -26,7 +26,7 @@ Key Features:
 - Full C++ implementation, packageable into Desktop and Mobile apps
 - Pull from Huggingface, or Cortex Built-in Model Library
 - Models stored in universal file formats (vs blobs)
-- Swappable Inference Backends (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp), future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx), [`TensorRT-LLM`](https://github.com/janhq/cortex.tensorrt-llm))
+- Swappable Inference Backends (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp); future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx))
 - Cortex can be deployed as a standalone API server, or integrated into apps like [Jan.ai](https://jan.ai/)
 - Automatic API docs for your server
 
@@ -36,7 +36,6 @@ Cortex's roadmap includes implementing full compatibility with OpenAI API's and
 ## Inference Backends
 - Default: [llama.cpp](https://github.com/ggerganov/llama.cpp): cross-platform, supports most laptops, desktops and OSes
 - Future: [ONNX Runtime](https://github.com/microsoft/onnxruntime): supports Windows Copilot+ PCs & NPUs and traditional machine learning models
-- Future: [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM): supports a variety of model architectures on Nvidia GPUs
 
 If GPU hardware is available, Cortex is GPU accelerated by default. 
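+One quick way to confirm that a GPU-accelerated build is in use is to check the `Variant` column of the engine listing; a CUDA variant such as `linux-amd64-avx2-cuda-12-0` indicates GPU offload. A sketch, assuming Cortex is already installed:
+
+```sh
+# The Variant column shows the active build, e.g. linux-amd64-avx2-cuda-12-0
+cortex engines list
+```
+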
@@ -86,51 +85,3 @@ Available to download: Select a model (1-10): ``` - - -{/* - - -| Model ID | Variant (Branch) | Model size | CLI command | -|------------------|------------------|-------------------|------------------------------------| -| codestral | 22b-gguf | 22B | `cortex run codestral:22b-gguf` | -| command-r | 35b-gguf | 35B | `cortex run command-r:35b-gguf` | -| gemma | 7b-gguf | 7B | `cortex run gemma:7b-gguf` | -| llama3 | gguf | 8B | `cortex run llama3:gguf` | -| llama3.1 | gguf | 8B | `cortex run llama3.1:gguf` | -| mistral | 7b-gguf | 7B | `cortex run mistral:7b-gguf` | -| mixtral | 7x8b-gguf | 46.7B | `cortex run mixtral:7x8b-gguf` | -| openhermes-2.5 | 7b-gguf | 7B | `cortex run openhermes-2.5:7b-gguf`| -| phi3 | medium-gguf | 14B - 4k ctx len | `cortex run phi3:medium-gguf` | -| phi3 | mini-gguf | 3.82B - 4k ctx len| `cortex run phi3:mini-gguf` | -| qwen2 | 7b-gguf | 7B | `cortex run qwen2:7b-gguf` | -| tinyllama | 1b-gguf | 1.1B | `cortex run tinyllama:1b-gguf` | - - -| Model ID | Variant (Branch) | Model size | CLI command | -|------------------|------------------|-------------------|------------------------------------| -| gemma | 7b-onnx | 7B | `cortex run gemma:7b-onnx` | -| llama3 | onnx | 8B | `cortex run llama3:onnx` | -| mistral | 7b-onnx | 7B | `cortex run mistral:7b-onnx` | -| openhermes-2.5 | 7b-onnx | 7B | `cortex run openhermes-2.5:7b-onnx`| -| phi3 | mini-onnx | 3.82B - 4k ctx len| `cortex run phi3:mini-onnx` | -| phi3 | medium-onnx | 14B - 4k ctx len | `cortex run phi3:medium-onnx` | - - - -| Model ID | Variant (Branch) | Model size | CLI command | -|------------------|-------------------------------|-------------------|------------------------------------| -| llama3 | 8b-tensorrt-llm-windows-ampere | 8B | `cortex run llama3:8b-tensorrt-llm-windows-ampere` | -| llama3 | 8b-tensorrt-llm-linux-ampere | 8B | `cortex run llama3:8b-tensorrt-llm-linux-ampere` | -| llama3 | 8b-tensorrt-llm-linux-ada | 8B | `cortex run llama3:8b-tensorrt-llm-linux-ada`| -| llama3 | 8b-tensorrt-llm-windows-ada | 8B | `cortex run llama3:8b-tensorrt-llm-windows-ada` | -| mistral | 7b-tensorrt-llm-linux-ampere | 7B | `cortex run mistral:7b-tensorrt-llm-linux-ampere`| -| mistral | 7b-tensorrt-llm-windows-ampere | 7B | `cortex run mistral:7b-tensorrt-llm-windows-ampere` | -| mistral | 7b-tensorrt-llm-linux-ada | 7B | `cortex run mistral:7b-tensorrt-llm-linux-ada`| -| mistral | 7b-tensorrt-llm-windows-ada | 7B | `cortex run mistral:7b-tensorrt-llm-windows-ada` | -| openhermes-2.5 | 7b-tensorrt-llm-windows-ampere | 7B | `cortex run openhermes-2.5:7b-tensorrt-llm-windows-ampere`| -| openhermes-2.5 | 7b-tensorrt-llm-windows-ada | 7B | `cortex run openhermes-2.5:7b-tensorrt-llm-windows-ada`| -| openhermes-2.5 | 7b-tensorrt-llm-linux-ada | 7B | `cortex run openhermes-2.5:7b-tensorrt-llm-linux-ada`| - - - */} diff --git a/docs/docs/quickstart.mdx b/docs/docs/quickstart.mdx index 818594df4..c8e325a44 100644 --- a/docs/docs/quickstart.mdx +++ b/docs/docs/quickstart.mdx @@ -168,28 +168,6 @@ This command stops the Cortex.cpp API server at `localhost:39281` or whichever o - - ## What's Next? Now that Cortex is set up, you can continue on to any of the following sections: