Chat with local LLM about your PDF and text documents, privacy ensured [llamaindex and llama3] (Python; updated Jun 2, 2024)
This repo demonstrates AI capabilities on top of Spring Boot.
C program for interacting with Ollama server from a Linux terminal
Frontend for the Ollama LLM, built with React.js and Flux architecture.
macOS app for interacting with local LLMs, including Ollama. Focused on superprompting and accessing local data.
📜 A quest will be assigned to you by an LLM.
A DBMS project with Streamlit Frontend for stock management simulation with Backtesting.
Ollama Chat is a GUI for Ollama designed for macOS.
Desktop UI for Ollama made with PyQT
A client library that makes it easy to connect microcontrollers running MicroPython to the Ollama server
OllamaChat: A user-friendly GUI for interacting with the llama2 and llama2-uncensored AI models. Host them locally with Python and KivyMD. Requires Ollama for Windows. For more, visit Ollama on GitHub.
Language Server Protocol for accessing Large Language Models
ollama plugin for asdf version manager
llamachan is a project that realises the "dead internet" idea as an imageboard.
CrewAI Local LLM is a GitHub repository for a locally hosted large language model (LLM) designed to enable private, offline AI model usage and experimentation.
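The clients listed above all talk to the Ollama server over its HTTP API. As a point of reference, here is a minimal sketch of such a client in Python, assuming a local Ollama server on its default port 11434 and a pulled llama3 model; it uses only the standard library, and the function names are illustrative, not from any of the repositories above.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server (assumed setup).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama HTTP API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local Ollama server and return the model's reply."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be a single call such as `generate("Summarize this document.")`; the same POST-to-`/api/generate` pattern underlies the GUI, terminal, and microcontroller clients in this list.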