llama
Here are 30 public repositories matching this topic...
LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed!
Updated Jul 27, 2023 - Rust
Believes in AI democratization. LLaMA for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; works locally on your laptop CPU. Supports LLaMA/Alpaca/GPT4All/Vicuna/RWKV models.
Updated Aug 3, 2023 - Rust
AIonic: A unified, user-friendly Rust library for seamless integration with various public Large Language Model APIs, such as OpenAI or Bard.
Updated Aug 10, 2023 - Rust
Generic god (an Ollama Discord bot), but in Rust.
Updated Feb 26, 2024 - Rust
Ask LLaMA about the image in your clipboard.
Updated Mar 9, 2024 - Rust
Like grep, but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
Updated Mar 13, 2024 - Rust
A terminal-style user interface for chatting with AI characters, using LLaMA models for locally processed AI.
Updated Mar 21, 2024 - Rust
A lightweight Rust application to test interaction with large language models. Currently supports running GGUF-quantized models with hardware acceleration.
Updated Apr 19, 2024 - Rust
Production-ready LLM agent SDK for every developer.
Updated Apr 19, 2024 - Rust
🦙 A Kubernetes operator for Ollama.
Updated Apr 20, 2024 - Rust