The AI-native database built for LLM applications, providing incredibly fast full-text and vector search
Updated May 29, 2024 - C++
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan
Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
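The description above claims that tensor parallelism divides both the compute and the RAM footprint across devices. A minimal sketch of the idea, with "devices" simulated as plain Python lists and all names illustrative (this is not the distributed-llama implementation): a linear layer's weight matrix is split row-wise into shards, each shard computes its slice of the output independently, and the concatenated slices equal the unsharded result.

```python
# Sketch of tensor parallelism for a single linear layer (illustrative only).
# Each "device" holds only its shard of the weights, so RAM is divided.

def matvec(w, x):
    # w: list of rows, x: input vector; returns w @ x
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def split_rows(w, parts):
    # Split the output dimension across `parts` shards.
    step = len(w) // parts
    return [w[i * step:(i + 1) * step] for i in range(parts)]

W = [[1, 2], [3, 4], [5, 6], [7, 8]]  # 4x2 weight matrix
x = [1, 1]

shards = split_rows(W, 2)                   # each "device" gets 2 rows
partials = [matvec(s, x) for s in shards]   # computed independently, in parallel
y = [v for p in partials for v in p]        # concatenate shard outputs

assert y == matvec(W, x)  # identical to the unsharded layer
```

In a real deployment each shard lives on a separate machine and only the small partial outputs cross the network, which is what lets weak devices jointly serve a model none of them could hold alone.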
Epsilla is a high-performance Vector Database Management System. Try out hosted Epsilla at https://cloud.epsilla.com/
CUDA implementation of Extended Long Short-Term Memory (xLSTM) with C++ and PyTorch ports