An RL approach to enable cost-effective, intelligent interactions between a local agent and a remote LLM
Updated Aug 22, 2024 · Python
Concepts and examples on using and training LLMs
A full pipeline to fine-tune the Vicuna LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Vicuna architecture. Basically ChatGPT, but with Vicuna.
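The LoRA part of such a pipeline rests on one idea: instead of updating a full weight matrix, train two small low-rank factors and add their product to the frozen weight. The numpy sketch below is a toy illustration of that idea only, with made-up dimensions; it is not code from the repository above, and the scaling factor follows the common alpha/r convention.

```python
import numpy as np

# Toy illustration of LoRA's low-rank update (hypothetical dimensions,
# not the repo's actual training code).
# Instead of training a full d x d weight delta, LoRA trains two small
# matrices A (r x d) and B (d x r) with rank r << d, so the effective
# weight becomes W + (alpha / r) * B @ A.
d, r, alpha = 512, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init -> delta starts at 0

def lora_forward(x):
    # Base path plus the low-rank adapter path.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
full_params = d * d      # parameters in a full-rank weight delta
lora_params = 2 * d * r  # parameters LoRA actually trains
print(lora_params / full_params)  # 0.03125 -> ~3% of a full update
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen base model, which is what makes fine-tuning stable; this is why LoRA-style pipelines can run on consumer hardware, since only the small A and B factors need gradients and optimizer state.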
KATI-LLAMA is an AI desktop chat application built on large language models, with voice support and visual emotion feedback from the AI. Development is heading in the direction of J.A.R.V.I.S. or HAL 9000: an application that is uncomplicated to set up and costs nothing. Just download, launch, and use.
[Work In Progress] Server/Cloud-ready FastChat Docker images.
This is the repo for Vicuna Chemical Expert, a model that helps answer chemistry questions.
FastChat / Integrate LangChain / Create a Private Knowledge Base
Node-RED Flow (and web page example) for the Vicuna AI model
A speech-to-speech talking bot (in development)
Vicuna 7B is a large language model that runs in the browser and exposes programmatic access with minimal configuration.