Use LLMs for building real-world apps
Updated Mar 10, 2024 - HTML
Medical RAG QA App using Meditron 7B LLM, Qdrant Vector Database, and PubMedBERT Embedding Model.
Bedrock Knowledge Base and Agents for Retrieval Augmented Generation (RAG)
Source code for the Gilded Age Gourmet, a cooking chat app based on the Boston Cooking-School Cook Book.
'Talk to your slide deck' (Multimodal RAG) using foundation models (FMs) hosted on Amazon Bedrock and Amazon SageMaker
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li, Ye Yuan, and Zehua Zhang
A RAG implementation using an open-source stack: BioMistral 7B as the LLM, PubMedBERT as the embedding model, Qdrant as a self-hosted vector DB, and LangChain and llama.cpp as orchestration frameworks.
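The retrieve-then-generate loop that stacks like this implement can be sketched in plain Python. This is a minimal, self-contained toy: the bag-of-words `embed` function is a hypothetical stand-in for a real embedding model such as PubMedBERT, and the assembled prompt is what would be sent to the LLM (e.g. BioMistral 7B) rather than a call to it.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "Qdrant stores embedding vectors for similarity search.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
]
prompt = build_prompt("What is aspirin used for?", docs)
print(prompt)
```

In a real deployment the vector comparison happens inside Qdrant rather than in Python, and the orchestration framework wires the retriever and the model call together, but the data flow is the same: embed the query, fetch the nearest documents, and ground the prompt in them.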
A proof-of-concept for a RAG to query the scikit-learn documentation
Implementing Retrieval-Augmented Generation (RAG) with a constructed knowledge graph
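One way graph-backed retrieval can work (a minimal sketch with a made-up triple store, not this repo's actual code): entities mentioned in the query are matched against graph nodes, and the facts attached to those nodes become the context handed to the LLM, replacing the vector-similarity lookup of a standard RAG pipeline.

```python
# Hypothetical triple store: {subject: [(relation, object), ...]}.
GRAPH = {
    "aspirin": [("treats", "fever"), ("class", "NSAID")],
    "ibuprofen": [("class", "NSAID"), ("treats", "inflammation")],
}

def graph_retrieve(query):
    """Collect facts for every graph entity the query mentions."""
    words = set(query.lower().split())
    facts = []
    for entity, edges in GRAPH.items():
        if entity in words:
            facts += [f"{entity} {rel} {obj}" for rel, obj in edges]
    return facts

def build_graph_prompt(query):
    """Ground the prompt in graph facts instead of retrieved passages."""
    context = "\n".join(graph_retrieve(query))
    return f"Facts:\n{context}\n\nQuestion: {query}"

print(build_graph_prompt("what does aspirin treat"))
```

Real implementations typically use a proper entity linker and a graph database query (e.g. over multi-hop paths) rather than exact word matching, but the shape is the same: match, traverse, verbalize, prompt.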
Convert HTML to Markdown with Elixir
Ever thought of talking to your email inbox like talking to a real human? 😲 You can do it completely on-device, with no privacy issues. Built with Chroma (in Docker), Mistral-7B-Instruct, and Ollama.
List of experiments on the Gen AI ecosystem
The objective of this project is to create a chatbot that answers users' health questions. It is a RAG implementation using an open-source stack.
This Python application builds a simple document assistant using Streamlit, Pinecone as the vector store, and an OpenAI language model to generate responses to user queries.
Just training on LangChain to improve RAG skills