Some common problems and solutions.
If the server disconnects without any error, the problem is most likely with the llama.cpp binaries.
The solution is to recompile them:
```bash
catai cpp
```
You can change the download location by setting the `CATAI_DIR` environment variable.
More environment variable options are documented in the configuration guide.
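For example, in a POSIX shell you can set the variable before starting CatAI; the directory below is only an illustration, not a required path:

```bash
# Tell CatAI to keep its downloads in a custom directory (example path).
export CATAI_DIR="$HOME/my-catai-models"
```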
If you have a GPU that supports CUDA but the server doesn't recognize it, try installing the CUDA toolkit and then rebuilding the binaries with CUDA support:
```bash
catai cpp --cuda
```
If the build still fails, check the CUDA troubleshooting guide.
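A quick sanity check before rebuilding is to confirm that the CUDA toolkit is actually visible to your shell; `nvcc` ships with the toolkit:

```bash
# If this command is not found, the CUDA toolkit is missing or not on your PATH.
nvcc --version
```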
If you have an unsupported processor, rebuilding the binaries may also help:
```bash
catai cpp
```