The following video demonstrates the below steps:
MaKLlama
- 2 followers
- China

Repositories (all forks):
- containerd (forked from containerd/containerd): An open and reliable container runtime. Go.
- ollama (forked from ollama/ollama): Get up and running with Llama 3, Mistral, Gemma, and other large language models. Go.
- vllm (forked from vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs.
- fastfetch (forked from gpustack/fastfetch): Like neofetch, but much faster because written mostly in C.
- k8sgpt-operator (forked from k8sgpt-ai/k8sgpt-operator): Automatic SRE Superpowers within your Kubernetes cluster.
- llama-box (forked from gpustack/llama-box): LLM inference server implementation based on llama.cpp.
- ollama-registry-pull-through-proxy (forked from simonfrey/ollama-registry-pull-through-proxy): A proxy that allows you to host ollama images in your local environment.