The open-source serverless GPU container runtime.
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
A holistic way of understanding how LLaMA and its components run in practice, with code and detailed documentation.
A diverse, simple, and secure one-stop LLMOps platform
An AI-assisted kubectl helper
AWS Go SDK examples for Amazon Bedrock
Inference Llama 2 in one file of pure Go
Implement RAG (using LangChain and PostgreSQL) for Go applications to improve the accuracy and relevance of LLM outputs
Go framework for language model-powered applications with composability and chaining. Inspired by LangChain.
This repository is a work in progress (WIP).
Vectoria is an embedded vector database.
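The RAG and embedded-vector-database entries above both rest on the same retrieval step: embed the query, score it against stored document vectors, and hand the best match to the LLM as context. A minimal pure-Go sketch of that step, with hand-made toy vectors in place of a real embedding model (all names here are illustrative, not taken from any of the listed projects):

```go
package main

import (
	"fmt"
	"math"
)

// Doc pairs a text chunk with its embedding vector. In a real setup the
// vectors come from an embedding model; here they are hand-made toys.
type Doc struct {
	Text string
	Vec  []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// nearest returns the stored document most similar to the query vector —
// the retrieval result that would be prepended to an LLM prompt.
func nearest(docs []Doc, query []float64) Doc {
	best, bestScore := docs[0], -1.0
	for _, d := range docs {
		if s := cosine(d.Vec, query); s > bestScore {
			best, bestScore = d, s
		}
	}
	return best
}

func main() {
	docs := []Doc{
		{"Go compiles to a single static binary.", []float64{0.9, 0.1, 0.0}},
		{"PostgreSQL can store embedding vectors.", []float64{0.1, 0.9, 0.2}},
	}
	query := []float64{0.2, 0.8, 0.1} // toy vector for a Postgres-related question
	fmt.Println(nearest(docs, query).Text)
}
```

Production systems replace the linear scan with an index (and the toy vectors with model-generated embeddings), but the scoring math is the same.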
A declarative DSL (domain-specific language) for IDD (Inference-Driven Development) and testing on any codebase in any programming language
Go package and example utilities for using Ollama / LLMs
A prototype of turn-based story-generating software that uses a GPT model, DALL-E, and Wikidata auto-searching to spin remarkably accurate narratives.
Onefile can both serialize and deserialize code, enabling the conversion of project files into a single text file and vice versa for seamless integration with LLM queries.
A smart dialogue-based task manager capable of managing your schedule through conversation
Interact with your shell in natural language and goracle will output the corresponding command