This repository was archived by the owner on Jul 4, 2025. It is now read-only.
Merged
72 changes: 0 additions & 72 deletions .github/workflows/python-script-package.yml

This file was deleted.

275 changes: 0 additions & 275 deletions .github/workflows/python-venv-package.yml

This file was deleted.

1 change: 0 additions & 1 deletion docs/docs/architecture.mdx
@@ -144,4 +144,3 @@ The sequence diagram above outlines the interactions between various components
Our development roadmap outlines key features and epics we will focus on in the upcoming releases. These enhancements aim to improve functionality, increase efficiency, and expand Cortex's capabilities.

 - **RAG**: Improve response quality and contextual relevance in our AI models.
-- **Cortex Python Runtime**: Provide a scalable Python execution environment for Cortex.
3 changes: 1 addition & 2 deletions docs/docs/basic-usage/index.mdx
@@ -35,8 +35,7 @@ curl --request DELETE \
```

## Engines
-Cortex currently supports a general Python Engine for highly customised deployments and
-2 specialized ones for different multi-modal foundation models: llama.cpp and ONNXRuntime.
+Cortex currently supports 2 specialized ones for different multi-modal foundation models: llama.cpp and ONNXRuntime.

By default, Cortex installs `llama.cpp` as its main engine, as it can be used in most laptop
and desktop environments and operating systems.
2 changes: 0 additions & 2 deletions docs/docs/capabilities/models/index.mdx
@@ -22,8 +22,6 @@ Cortex supports three model formats and each model format require specific engine
- GGUF - run with `llama-cpp` engine
- ONNX - run with `onnxruntime` engine

-Within the Python Engine (currently under development), you can run models in other formats

:::info
For details on each format, see the [Model Formats](/docs/capabilities/models/model-yaml#model-formats) page.
:::