
Description
Cortex version
v172
Describe the Bug
`cortex run` reports "Model failed to get model status" with a 409 status code, but the model actually starts successfully and chat works, both from the CLI and over the API (tested with Postman; see the sketch below). This is confusing: users may think something went wrong when it hasn't. Louis appears to have hit the same 409 error in #1475.
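For reference, a minimal sketch of the API-side verification (assuming Cortex's OpenAI-compatible `/v1/chat/completions` endpoint on the host/port from the log below; the exact request shape is an assumption, not taken from the report):

```python
# Sketch: confirm chat still works over the API even though `cortex run`
# printed a 409. Endpoint path and payload shape are assumptions based on
# Cortex's OpenAI-compatible API.
import requests

resp = requests.post(
    "http://127.0.0.1:39281/v1/chat/completions",
    json={
        "model": "tinyllama:1b-gguf",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
print(resp.status_code)  # 200 here, despite the 409 reported by the CLI
print(resp.json()["choices"][0]["message"]["content"])
```

The full `cortex run` output: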
```
> cortex run model
Starting server ...
Host: 127.0.0.1 Port: 39281
Server started
API Documentation available at: http://127.0.0.1:39281
Error: Model failed to get model status with status code: 409
tinyllama:1b-gguf model started successfully. Use `cortex-nightly chat tinyllama:1b-gguf` for interactive chat shell
```
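A guess at the cause (an assumption; I have not traced the code): the CLI may treat any non-200 response from the model-status endpoint as a failure, while the server answers 409 Conflict when the model is already started. A sketch of that logic with a possible fix, using hypothetical names rather than Cortex's actual code:

```python
# Hypothetical reconstruction of the CLI's status check; function and
# variable names are illustrative only.
def model_started(status_code: int) -> bool:
    if status_code == 200:
        return True
    # Suspected bug: 409 Conflict ("model already started") falls through
    # to the error branch, producing the spurious message in the log above.
    # A possible fix is to treat it as success:
    if status_code == 409:
        return True
    print(f"Error: Model failed to get model status with status code: {status_code}")
    return False
```

Either way, the error should not be printed when the very next line reports that the model started successfully.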
Steps to Reproduce
No response