
EmbeddedLLM

EmbeddedLLM is the creator of JamAI Base, a platform that orchestrates AI with spreadsheet-like simplicity.

Pinned

  1. JamAIBase Public

    The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real time. Work together seamlessly to build and iterate on AI applications.

    Python · 820 stars · 25 forks

  2. vllm Public

    Forked from vllm-project/vllm

    vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 88 stars · 5 forks

  3. embeddedllm Public

    EmbeddedLLM: API server for Embedded Device Deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU.

    Python 33 1

Repositories

Showing 10 of 48 repositories
  • vllm-rocmfork Public Forked from ROCm/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 0 stars · Apache-2.0 · 6,233 forks · 0 issues · 0 PRs · Updated Mar 9, 2025
  • vllm Public Forked from vllm-project/vllm

    vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 88 stars · Apache-2.0 · 6,233 forks · 1 issue · 0 PRs · Updated Mar 8, 2025
  • JamAIBase Public

    The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real time. Work together seamlessly to build and iterate on AI applications.

    Python · 820 stars · Apache-2.0 · 25 forks · 1 issue · 0 PRs · Updated Mar 5, 2025
  • aiter Public Forked from ROCm/aiter

    AI Tensor Engine for ROCm

    Cuda · 0 stars · MIT · 12 forks · 0 issues · 0 PRs · Updated Feb 28, 2025
  • Python · 0 stars · Apache-2.0 · 1 fork · 0 issues · 0 PRs · Updated Feb 24, 2025
  • lmcache-vllm Public Forked from LMCache/lmcache-vllm

    The driver for LMCache core to run in vLLM

    Python · 0 stars · Apache-2.0 · 19 forks · 0 issues · 0 PRs · Updated Jan 24, 2025
  • LMCache Public Forked from LMCache/LMCache

    ROCm support for Ultra-Fast and Cheaper Long-Context LLM Inference

    Python · 0 stars · Apache-2.0 · 57 forks · 0 issues · 0 PRs · Updated Jan 24, 2025
  • Python · 0 stars · 7 forks · 0 issues · 0 PRs · Updated Jan 23, 2025
  • Python · 0 stars · Apache-2.0 · 89 forks · 0 issues · 0 PRs · Updated Jan 22, 2025
  • kvpress Public Forked from NVIDIA/kvpress

    LLM KV cache compression made easy

    Python · 0 stars · Apache-2.0 · 28 forks · 0 issues · 0 PRs · Updated Jan 21, 2025
