Description
Problem
The current handling essentially brute-forces the base code from llama.cpp; it should be handled more elegantly.
task_id = llama.request_completion(data, false, false, -1);
The call above must be made inside Drogon's stream mechanism, through the stream lambda, not outside of it as it is right now (see the sketch below).
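
A minimal sketch of the intended shape, assuming the llama_server_context API from the llama.cpp server example (request_completion, next_result, task_result) as used in this repository, and Drogon's HttpResponse::newStreamResponse. The handler name, the parseRequest-style setup, and exact signatures are illustrative assumptions, not the actual code:

```cpp
#include <algorithm>
#include <cstring>
#include <functional>
#include <drogon/HttpResponse.h>
// Assumed: llama_server_context / task_result come from the llama.cpp server
// code vendored by this repo; include its header as appropriate.

void handleCompletion(llama_server_context &llama,          // must outlive the response
                      const nlohmann::json &data,           // request payload, built elsewhere
                      std::function<void(const drogon::HttpResponsePtr &)> &&callback) {
  // The completion task is started lazily inside the stream lambda, so the
  // request_completion call lives inside Drogon's stream mechanism rather
  // than before the response is created.
  auto streamCallback = [&llama, data, taskId = -1](
                            char *buffer, std::size_t bufferSize) mutable -> std::size_t {
    if (taskId < 0) {
      // First invocation: submit the request from inside the lambda.
      taskId = llama.request_completion(data, false, false, -1);
    }
    task_result result = llama.next_result(taskId);  // assumed to block until output is ready
    if (result.stop || result.error) {
      return 0;  // returning 0 tells Drogon the stream is finished
    }
    const std::string chunk = result.result_json.dump() + "\n";
    // For brevity this sketch truncates chunks larger than the buffer.
    const std::size_t n = std::min(chunk.size(), bufferSize);
    std::memcpy(buffer, chunk.data(), n);
    return n;
  };

  auto resp = drogon::HttpResponse::newStreamResponse(
      streamCallback, /*attachmentFileName=*/"", drogon::CT_APPLICATION_JSON);
  callback(resp);
}
```

With this shape, the task id is owned by the stream callback itself, so the completion's lifetime is tied to the stream instead of being set up ad hoc outside of it.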