Django LLM Portal
Updated May 27, 2024 - HTML
AI-based search engine done right
This is a RAG implementation built on an open-source stack: BioMistral 7B as the language model, PubMedBERT as the embedding model, Qdrant as a self-hosted vector DB, and LangChain & llama.cpp as orchestration frameworks.
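The retrieve-then-generate pattern behind RAG apps like this one can be sketched with a toy in-memory retriever. This is a minimal illustration, not the project's actual code: the bag-of-words "embedding", the sample documents, and the function names are all assumptions; a production stack would swap in a real embedding model (e.g. PubMedBERT) and a vector DB (e.g. Qdrant).

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": raw token counts. Real systems use a
    # dense-vector model such as PubMedBERT instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative corpus standing in for a real document store.
docs = [
    "Qdrant is a vector database for similarity search.",
    "BioMistral 7B is a biomedical language model.",
]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    scored = sorted(docs, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(query):
    # Augment the prompt with retrieved context; the LLM call that would
    # consume this prompt is omitted here.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is a vector database?"))
```

The orchestration frameworks mentioned above (LangChain, llama.cpp) essentially automate these same steps: embed the query, fetch the nearest documents, and stuff them into the prompt before generation.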
Retrieval-Augmented Generation using Azure OpenAI
Locust Early Warning System (LOEWS) project implementation (local)
Advancing the next generation of Retrieval Augmented Generation (RAG): A dynamic exploration of RAG technology's evolving landscape. This repository is the go-to resource for state-of-the-art developments, conceptual advancements, and the future trajectory of AI-driven information retrieval and generation.
'Talk to your slide deck' (Multimodal RAG) using foundation models (FMs) hosted on Amazon Bedrock and Amazon SageMaker
An Art Deco bot that uses RAG, with benchmarks of RAG vs. plain LLMs on QA accuracy and response time
This Python application builds a simple document assistant using Streamlit, Pinecone as the vector store, and an OpenAI language model for generating responses to user queries.
A super-simple RAG-backed HTML-to-web-code generator
Development and evaluation of a Retrieval-Augmented Generation (RAG) system based on Cleantech Media Articles
A proof-of-concept for a RAG to query the scikit-learn documentation
A list of experiments across the Gen AI ecosystem
Legal Assistant is an innovative application that leverages RAG (Retrieval-Augmented Generation) technology to deliver personalized legal advice and guidance based on Moroccan law.
Source code for the Gilded Age Gourmet, a cooking chat app based on the Boston Cooking-School Cook Book.