A simple Large Language Model (LLM) chatbot project, where users can upload PDF files to receive tailored responses generated directly from the document contents.
The ultimate teaching planning tool for generating guided questions and answers for your students
RAG-LLM enables interactive question answering by applying a RAG architecture and Large Language Models (LLMs) to a custom dataset of Medium articles.
A container GitHub Action that reviews a pull request using the Groq Inference API
A simple FastAPI chatbot that uses LlamaIndex and LlamaParse to read custom PDF data.
An LLM-powered assistant (multimodal RAG) that can read directories, including images, to gather information about companies. You can use HuggingFace or Ollama models, as well as Claude, OpenAI, and Google LLMs. Inspired by IBM Discovery.
Pairing a Flask API with a Streamlit chatbot, this project provides a dynamic interface for database interaction and AI-driven user engagement, featuring secure, Docker-deployed components and GPT-3.5 integration.
Ask Wikipedia Pages ChatBot: a Streamlit app using llama-index, llama3, and HuggingFace embeddings
Generate documentation using Hugging Face embeddings and local LLMs
RAG-based chatbot system using open-source models and context-aware chunking strategy
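"Context-aware chunking" generally means splitting documents along natural boundaries rather than at fixed character offsets, so each retrieved chunk stays coherent. A minimal sketch, assuming paragraph breaks as the context signal (the function name, `max_chars`, and `overlap` are illustrative, not taken from any repository above):

```python
def chunk_by_paragraph(text: str, max_chars: int = 500, overlap: int = 1) -> list[str]:
    """Split on blank lines, then pack paragraphs into chunks of at most
    max_chars, carrying `overlap` trailing paragraphs into the next chunk
    so retrieval keeps surrounding context."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for p in paragraphs:
        if current and size + len(p) > max_chars:
            chunks.append("\n\n".join(current))
            # carry the last `overlap` paragraph(s) forward as shared context
            current = current[-overlap:] if overlap else []
            size = sum(len(x) for x in current)
        current.append(p)
        size += len(p)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Because of the overlap, consecutive chunks share a paragraph, which helps the retriever surface answers that straddle a boundary; the trade-off is that overlapping chunks can exceed `max_chars` slightly.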
A self-deployable LLM application built on our IC framework (a RAG method)
Use RAG with Langchain to chat with your data and display the retrieved source(s)
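The retrieval step behind "chat with your data and display the sources" can be sketched without any framework: score each document against the query and return the top matches with their source labels so the UI can cite them. A minimal sketch using plain term-frequency vectors and cosine similarity in place of learned embeddings (all names and parameters here are illustrative, not LangChain's API):

```python
import math
from collections import Counter

def tf_vector(text: str) -> Counter:
    # Bag-of-words term frequencies; a real pipeline would use embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[tuple[str, float]]:
    """Return the top-k (source, score) pairs so the chat interface can
    display the retrieved source(s) alongside the generated answer."""
    qv = tf_vector(query)
    scored = [(src, cosine(qv, tf_vector(text))) for src, text in docs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```

Returning the source identifiers with their scores, rather than just the concatenated context, is what makes it possible to show the user where each answer came from.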
An LLM/RAG-enabled bookshelf
Development and deployment of a question-answering LLM using Llama 2 (7B parameters) and RAG with LangChain
Unleash the power of LLMs over your data 🦙 ~ Exploring Different QA Engines
Chatting with PDF documents using large language models (GPT)