Leveraging LLMs to build Conversational UIs
Updated
Jun 19, 2024 - TypeScript
A Dockerized Streamlit app that uses a RAG LLM with a FAISS index to answer questions about uploaded Markdown files, deployed on Google Cloud Platform.
RAG-LLM enables interactive question answering by combining a RAG architecture with Large Language Models (LLMs) over a custom dataset of Medium articles.
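The retrieval step these RAG projects describe can be sketched as follows. This is a minimal illustration that substitutes a toy bag-of-words similarity for the real embedding model and FAISS index the repos use; the chunk texts and query are made up for the example.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call a
    # sentence-embedding model and store the vectors in a FAISS index.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Example chunks, as if split from uploaded Markdown files.
chunks = [
    "FAISS builds an index over dense vectors for fast similarity search.",
    "Streamlit lets you build data apps in pure Python.",
    "Docker packages an app and its dependencies into one image.",
]

question = "How does FAISS search vectors?"
context = retrieve(question, chunks)

# The retrieved chunks are prepended to the question to form the LLM prompt.
prompt = "Answer using the context below.\n\n" + "\n".join(context) + f"\n\nQ: {question}"
```

In the actual apps, `embed` would be a neural embedding model and `retrieve` a FAISS nearest-neighbor query, but the flow (embed, rank, stuff the top chunks into the prompt) is the same.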
Efficient Serving of Large-scale Vector Search with Sharded Indexes
A repo for my MS Project titled "Fake-news detection".