Describe the feature you'd like
To be able to run this whole system locally, so we can use local models like Wizard-Vicuna and not have to share our data with OpenAI or other sites or clouds.
Maybe one option that would avoid a full local LLM implementation is to have it communicate with Oobabooga through its API. I'm not sure, though, but I suspect it's similar to talking to ChatGPT (see the sketch below).
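As a rough illustration of what I mean, here is a minimal sketch assuming the Oobabooga text-generation-webui is running locally with its OpenAI-compatible API extension enabled; the endpoint URL, port, and model name are assumptions about a typical local setup, not something this project supports today:

```python
# Hypothetical sketch: point the OpenAI Python client at a local
# text-generation-webui (Oobabooga) instance started with its
# OpenAI-compatible API enabled. URL/port/model name are assumed defaults.
import openai

openai.api_base = "http://localhost:5000/v1"  # assumed local Oobabooga endpoint
openai.api_key = "sk-local-dummy"             # placeholder; a local server ignores it

response = openai.ChatCompletion.create(
    model="Wizard-Vicuna",  # whatever local model is loaded in the webui
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response["choices"][0]["message"]["content"])
```

If the local server really is OpenAI-compatible like this, supporting it might mostly be a matter of making the API base URL configurable instead of hardcoding OpenAI's.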
Will this be implemented at some point?
Thanks!