A generalized information-seeking agent system with Large Language Models (LLMs).
Updated Jun 19, 2024 - Python
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
MVP of an idea that uses multiple local LLMs to simulate and play D&D
A local chatbot for managing docs
Reads your local files and answers your queries
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!
A minimal CLI tool to locally summarize any text using LLM!
A basic workflow in which CrewAI agents work with sales transactions to draw business insights and marketing recommendations. The agents handle everything from the execution plan to the business-insights report. It works with a local LLM via Ollama (I'm using llama3:8b, but you can easily change it).
This repository contains a Python-based tool for summarizing web content using the Ollama API. It scrapes articles from URLs, cleans and processes the HTML content, and generates summaries with a pre-trained language model. The repository also includes a Rich-based logging utility for improved console output.
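A minimal sketch of the clean-then-summarize flow such a tool follows, assuming an Ollama server running on its default `localhost:11434` port and its standard `/api/generate` endpoint; the model name, prompt wording, and regex-based HTML cleaning here are illustrative placeholders, not the repository's actual code.

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def clean_html(html: str) -> str:
    """Strip script/style blocks and tags, then collapse whitespace."""
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()


def build_payload(article_text: str, model: str = "llama3") -> dict:
    """Build a non-streaming Ollama generate request for summarization."""
    return {
        "model": model,
        "prompt": f"Summarize the following article in three sentences:\n\n{article_text}",
        "stream": False,  # return one JSON object instead of a token stream
    }


def summarize(html: str, model: str = "llama3") -> str:
    """Send the cleaned article to a local Ollama server and return the summary."""
    payload = build_payload(clean_html(html), model)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the real tool the scraping and cleaning steps are more robust than this regex sketch; the key point is that a single POST to the local `/api/generate` endpoint with `"stream": False` returns the full summary in the `response` field.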