A Framework of Small-scale Large Multimodal Models
Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations.
A simple Python script for running LLMs on Intel's Neural Processing Units (NPUs)
This application allows users to upload PDF files, process them, and ask questions about the content using a locally hosted language model. The system uses Retrieval-Augmented Generation (RAG) to provide accurate answers based on the uploaded PDFs.
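A minimal sketch of the RAG flow this kind of app implies, assuming pypdf for text extraction, sentence-transformers for retrieval, and a TinyLlama chat model via transformers; the model name, chunk size, and prompt are illustrative, not taken from the project:

```python
# Minimal RAG sketch: index PDF chunks, retrieve by cosine similarity,
# and answer with a locally hosted chat model.
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# 1. Extract and chunk the PDF text (fixed-size chunks for simplicity).
reader = PdfReader("uploaded.pdf")
text = " ".join(page.extract_text() or "" for page in reader.pages)
chunks = [text[i:i + 800] for i in range(0, len(text), 800)]

# 2. Embed the chunks once; embed each question at query time.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

# 3. Local generator (TinyLlama is an illustrative choice here).
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

def answer(question: str, top_k: int = 3) -> str:
    q_vec = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, chunk_vecs, top_k=top_k)[0]
    context = "\n".join(chunks[h["corpus_id"]] for h in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generator(prompt, max_new_tokens=200, return_full_text=False)[0]["generated_text"]

print(answer("What is the main conclusion of the document?"))
```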
The simplest, most minimal code to run an LLM chatbot from the Hugging Face Hub with OpenVINO.
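Roughly what such a minimal script looks like when built on optimum-intel's OpenVINO integration; the model id and generation settings are illustrative assumptions:

```python
# Minimal OpenVINO chatbot: export a Hugging Face causal LM to OpenVINO IR
# on the fly and run a simple REPL loop on CPU.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # converts to OpenVINO IR

while True:
    user = input("You: ")
    if not user:
        break
    # The chat template keeps the prompt format the model was trained with.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": user}], tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=256)
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    print("Bot:", tokenizer.decode(new_tokens, skip_special_tokens=True))
```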
An offline AI-powered chatbot built using Streamlit and TinyLlama. It responds to your messages in real-time without needing internet access. Great for experimenting with lightweight local language models.
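A condensed sketch of such an app, assuming Streamlit's chat elements and the transformers pipeline API; the session-state handling and generation settings are illustrative:

```python
# streamlit_app.py -- run with: streamlit run streamlit_app.py
import streamlit as st
from transformers import pipeline

st.title("Offline TinyLlama Chat")

@st.cache_resource  # load the model once per server process
def load_model():
    return pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

generator = load_model()

if "history" not in st.session_state:
    st.session_state.history = []  # list of {"role", "content"} dicts

# Replay the conversation so far.
for msg in st.session_state.history:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.history.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = generator(prompt, max_new_tokens=200, return_full_text=False)[0]["generated_text"]
    st.session_state.history.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```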
MindEase is a mental health assistant that combines IoT hardware with AI to provide emotional support. It uses an ESP32 for audio input/output and integrates with AI models and cloud services for natural language understanding and response generation.
A real-time offline voice-to-voice AI assistant built for Raspberry Pi
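A hedged sketch of the offline voice loop such an assistant implies: Vosk for speech-to-text, a small local LLM for the reply, and pyttsx3 for text-to-speech. The model path, audio settings, and choice of TinyLlama via transformers are illustrative assumptions; on a Pi, a quantized model served by llama.cpp or similar would often replace the transformers pipeline.

```python
# Offline voice loop: microphone -> Vosk STT -> local LLM -> pyttsx3 TTS.
import json
import pyaudio
import pyttsx3
from vosk import Model, KaldiRecognizer
from transformers import pipeline

stt = KaldiRecognizer(Model("vosk-model-small-en-us-0.15"), 16000)  # path to a downloaded Vosk model
llm = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tts = pyttsx3.init()

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=16000,
                    input=True, frames_per_buffer=4000)

while True:
    data = stream.read(4000, exception_on_overflow=False)
    if stt.AcceptWaveform(data):                      # True once an utterance is complete
        text = json.loads(stt.Result()).get("text", "")
        if not text:
            continue
        reply = llm(text, max_new_tokens=100, return_full_text=False)[0]["generated_text"]
        tts.say(reply)
        tts.runAndWait()
```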
A chat application with a web interface built with Streamlit and a backend built with FastAPI. It uses the TinyLlama LLM as the chat assistant.
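A rough sketch of that split, assuming a FastAPI endpoint that wraps a transformers pipeline and a Streamlit page that calls it over HTTP; the route name, port, and model are illustrative:

```python
# backend.py -- run with: uvicorn backend:app
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    reply = generator(req.message, max_new_tokens=200, return_full_text=False)[0]["generated_text"]
    return {"reply": reply}

# frontend.py -- run with: streamlit run frontend.py
# import requests, streamlit as st
# if prompt := st.chat_input("Message"):
#     st.write(requests.post("http://localhost:8000/chat", json={"message": prompt}).json()["reply"])
```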
Fine-tuning the TinyLlama model to mimic my professor's writing style using LLaMA-Factory. The project covers data collection, preprocessing, preparation, fine-tuning, and evaluation.
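LLaMA-Factory drives this kind of run from a config file; as a rough Python equivalent (not LLaMA-Factory itself), here is a hedged LoRA fine-tuning sketch using peft and transformers directly. The dataset file, text field, and hyperparameters are placeholders:

```python
# LoRA fine-tuning sketch with peft + transformers (a stand-in for the config-driven LLaMA-Factory run).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach small trainable LoRA adapters to the attention projections.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Hypothetical dataset of writing samples with a single "text" column.
dataset = load_dataset("json", data_files="writing_samples.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tinyllama-style-lora", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tinyllama-style-lora")  # saves only the LoRA adapter weights
```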
The LLM FineTuning and Evaluation project 🚀 enhances FLAN-T5 models for tasks like summarizing Spanish news articles 🇪🇸📰. It features detailed notebooks 📚 on fine-tuning and evaluating models to optimize performance for specific applications. 🔍✨
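A small hedged sketch of the evaluation side: generate summaries with a FLAN-T5 checkpoint and score them against references with ROUGE via the evaluate library. The model size, prompt prefix, and example texts are illustrative:

```python
# Summarize with FLAN-T5 and score against references using ROUGE.
import evaluate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "google/flan-t5-base"  # a fine-tuned checkpoint would go here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
rouge = evaluate.load("rouge")

articles = ["El gobierno anunció hoy nuevas medidas económicas ..."]  # illustrative
references = ["El gobierno presentó nuevas medidas económicas."]       # illustrative

predictions = []
for article in articles:
    inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
    summary_ids = model.generate(**inputs, max_new_tokens=64)
    predictions.append(tokenizer.decode(summary_ids[0], skip_special_tokens=True))

# ROUGE-1/2/L scores between generated and reference summaries.
print(rouge.compute(predictions=predictions, references=references))
```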
Electronic Health Management Application (Mobile + Web)
An integrated AI suite combining intelligent PDF analysis, automated research capabilities, and multi-agent academic paper generation, powered by both cloud and local language models to streamline research and document processing workflows.
This project was undertaken as part of the Intel Unnati Industrial Training program for 2024. Its primary objective aligns with Problem Statement PS-04: Introduction to GenAI LLM Inference on CPUs and subsequent LLM Model Finetuning for the development of a Custom Chatbot.
Repository for the course project of the Applied ML course.