Use your locally running AI models to assist you in your web browsing
A generalized information-seeking agent system with Large Language Models (LLMs).
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
MVP of an idea that uses multiple local LLMs to simulate and play D&D
Read your local files and answer your queries
Chat with your PDF using your local LLM via the Ollama client.
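A minimal sketch of how such a PDF chat might work, not this repo's actual code: it assumes pypdf for text extraction and the official ollama Python client, and the model name is just an example.

```python
from pypdf import PdfReader
import ollama

def ask_pdf(path: str, question: str, model: str = "llama3:8b") -> str:
    # Concatenate the text of every page into one context string.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    # Ask the locally served model to answer strictly from the document.
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Document:\n{text}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

print(ask_pdf("report.pdf", "What is the main conclusion?"))
```

Stuffing the whole PDF into the prompt only works within the model's context window; larger documents would need chunking or retrieval.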
Local AI search assistant, as a web app or CLI, for Ollama and llama.cpp. Lightweight and easy to run, providing a Perplexity-like experience.
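A rough sketch of the Perplexity-like loop under stated assumptions: search snippets come from a retriever you supply (get_snippets below is a hypothetical placeholder, not part of this project), and the model sits behind llama.cpp's OpenAI-compatible llama-server on its default port 8080.

```python
import requests

def get_snippets(query: str) -> list[str]:
    # Hypothetical placeholder: plug in any web-search API or local index here.
    raise NotImplementedError

def answer(query: str) -> str:
    # Number the retrieved snippets so the model can cite them inline.
    context = "\n".join(f"[{i+1}] {s}" for i, s in enumerate(get_snippets(query)))
    r = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local",  # llama-server ignores/accepts a placeholder name
            "messages": [
                {"role": "system",
                 "content": "Answer from the numbered sources and cite them like [1]."},
                {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {query}"},
            ],
        },
        timeout=120,
    )
    return r.json()["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, the same code points at Ollama by swapping in its server URL.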
A basic workflow in which CrewAI agents work with sales transactions to draw business insights and marketing recommendations. The agents handle everything from the execution plan to the business insights report. It works with a local LLM via Ollama (I'm using llama3:8B, but you can easily change it).
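A minimal sketch of wiring CrewAI to a local Ollama model, assuming a recent CrewAI release that ships the LLM wrapper; the agent and tasks below are illustrative placeholders, not this workflow's actual crew.

```python
from crewai import Agent, Task, Crew, LLM

# Point CrewAI at the local Ollama server; swap llama3:8b for any pulled model.
llm = LLM(model="ollama/llama3:8b", base_url="http://localhost:11434")

analyst = Agent(
    role="Sales analyst",
    goal="Turn raw sales transactions into business insights",
    backstory="You analyze transaction data for actionable patterns.",
    llm=llm,
)

plan = Task(
    description="Draft an execution plan for analyzing the sales data.",
    expected_output="A short, numbered analysis plan.",
    agent=analyst,
)
report = Task(
    description="Follow the plan and write a business insights report "
                "with marketing recommendations.",
    expected_output="A concise insights report.",
    agent=analyst,
)

# Tasks run in order, so the report task sees the plan task's output.
crew = Crew(agents=[analyst], tasks=[plan, report])
print(crew.kickoff())
```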
A minimal CLI tool to locally summarize any text using an LLM!
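One way such a tool might look, sketched under the assumption that Ollama serves the model; the flag names and prompt are illustrative, not this tool's actual interface.

```python
import argparse
import sys
import ollama

def main() -> None:
    parser = argparse.ArgumentParser(description="Summarize text from stdin or a file.")
    parser.add_argument("file", nargs="?", help="path to a text file; reads stdin if omitted")
    parser.add_argument("--model", default="llama3:8b")
    args = parser.parse_args()

    # Read from the given file, or from stdin so the tool composes with pipes.
    text = open(args.file).read() if args.file else sys.stdin.read()
    response = ollama.chat(
        model=args.model,
        messages=[{"role": "user", "content": f"Summarize concisely:\n\n{text}"}],
    )
    print(response["message"]["content"])

if __name__ == "__main__":
    main()
```

Usage would then be either `python summarize.py notes.txt` or `cat notes.txt | python summarize.py --model llama3:8b`.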
Run GGUF-format LLMs in the latest version of TextGen-webui
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!