
vLLM-HUST

Upstream-compatible vLLM fork organization for domestic hardware enablement and AGI4S serving


An upstream-compatible vLLM fork organization, friendly to domestic (Chinese) accelerators, focused on Ascend enablement, AGI4S serving, benchmark-driven validation, and a practical multi-repository developer experience. Around the inference runtime it builds a complete engineering chain: Ascend enablement, a development workspace, Benchmark, Website, and AI application integration.


What We Build

vLLM-HUST builds on the upstream vLLM ecosystem and focuses on the following areas:

  • staying compatible with, and sustainably synchronized to, upstream vLLM / vLLM Ascend
  • supporting inference and deployment on Ascend and other domestic hardware
  • strengthening AGI4S scenarios, including long context, tool calling, structured output, and serving stability
  • providing a complete set of companion repositories, from the development workspace to the Website, Benchmark, and Workstation

In practice, the organization concentrates on four goals:

  • keep vllm-hust mergeable with upstream vllm whenever possible
  • isolate hardware-specific logic in plugins, managers, and deployment tooling
  • validate runtime behavior with real benchmarks, smoke tests, and website-facing artifacts
  • connect low-level serving infrastructure to end-user and research-facing products

Repository Relationships

```mermaid
flowchart TD
    A[vllm-hust\ncore runtime and OpenAI-compatible serving]
    B[vllm-ascend-hust\nAscend hardware plugin]
    C[ascend-runtime-manager\nAscend runtime diagnostics and repair]
    D[vllm-hust-dev-hub\nmulti-repo development workspace and quickstart]
    E[vllm-hust-benchmark\nbenchmark orchestration and result export]
    F[vllm-hust-website\nofficial website and leaderboard]
    G[vllm-hust-workstation\nlocal / self-hosted web workbench]
    H[vllm-hust-docs\noperations manual and sync records]
    I[EvoScientist\nresearch-agent application layer]

    B --> A
    C --> A
    D --> A
    D --> B
    D --> C
    E --> A
    E --> F
    G --> A
    H --> A
    H --> B
    I --> A
```

Repository Map At A Glance

| Repository | Primary role | Depends on / connects to |
| --- | --- | --- |
| vllm-hust | core inference runtime and serving fork | upstream vllm, benchmark, workstation, plugin |
| vllm-ascend-hust | Ascend hardware plugin | vllm-hust, upstream vllm-ascend |
| ascend-runtime-manager | runtime repair and deployment tooling | vllm-hust, vllm-ascend-hust |
| vllm-hust-dev-hub | multi-repo workspace and bootstrap | all local sibling repos |
| vllm-hust-benchmark | benchmark orchestration and export | vllm-hust, vllm-hust-website |
| vllm-hust-website | landing page and leaderboard snapshots | benchmark exports, workstation embeds |
| vllm-hust-workstation | user-facing web console | vllm-hust, EvoScientist |
| vllm-hust-docs | operations, sync notes, internal docs | runtime and plugin repos |
| EvoScientist | higher-level research agent product | vllm-hust APIs and tools |

Core Runtime

  • vllm-hust: the main runtime fork based on upstream vLLM and the core repository of the organization, responsible for the inference engine, OpenAI-compatible serving, the CLI, and the primary CI.

  • vllm-ascend-hust: the Ascend plugin and localized distribution repository for vllm-hust. It follows the upstream hardware-plugin pattern and keeps hardware-specific logic isolated in the plugin layer as much as possible.

  • ascend-runtime-manager: a standalone Ascend runtime repair and diagnostics tool, responsible for environment probing, containerized deployment, dependency repair, and Python-stack alignment.
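Because vllm-hust keeps the upstream OpenAI-compatible serving surface, any OpenAI-style client can talk to it. A standard-library-only sketch; the host, port, and model name are placeholders, not values shipped by the project:

```python
import json
import urllib.request


def build_chat_payload(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style /v1/chat/completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")


def chat_completion(base_url: str, model: str, prompt: str) -> dict:
    """POST a chat completion to an OpenAI-compatible server, such
    as the one vllm-hust exposes."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=build_chat_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example against a locally running server (not executed here):
# chat_completion("http://localhost:8000", "example-model", "Hello")
```

The same request shape works from higher-level clients (the official `openai` SDK, curl, or the workstation UI), which is what makes the serving surface a stable integration point for the other repositories.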

Engineering and Developer Experience

  • vllm-hust-dev-hub: the multi-repo development entry point, providing a VS Code workspace, quickstart, clone scripts, and tooling for self-hosted CI.

  • vllm-hust-docs: the organization-level documentation repository, holding deployment manuals, compatibility notes, upstream sync records, and team operating guides.
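The multi-repo bootstrap that vllm-hust-dev-hub provides can be sketched as a small clone helper. The repository names come from the map above; the script itself is illustrative and may differ from the actual dev-hub scripts:

```python
import subprocess
from pathlib import Path

# Sibling repositories named in the repository map above.
REPOS = [
    "vllm-hust",
    "vllm-ascend-hust",
    "ascend-runtime-manager",
    "vllm-hust-benchmark",
    "vllm-hust-website",
    "vllm-hust-workstation",
    "vllm-hust-docs",
]


def clone_siblings(org_url: str, workspace: Path) -> None:
    """Clone every sibling repository into one workspace directory,
    skipping repositories that already exist locally so the script
    is safe to re-run."""
    workspace.mkdir(parents=True, exist_ok=True)
    for name in REPOS:
        dest = workspace / name
        if dest.exists():
            continue  # already cloned; leave the local checkout alone
        subprocess.run(
            ["git", "clone", f"{org_url}/{name}.git", str(dest)],
            check=True,
        )
```

Keeping all repos as siblings of one workspace directory is what lets the VS Code workspace file and CI tooling reference them with relative paths.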

Validation, Showcase, and Application Layer

  • vllm-hust-benchmark: the stable wrapper layer around vllm-hust benchmarking, responsible for scenario orchestration, result export, and the handoff to the Website.

  • vllm-hust-website: the official website, Leaderboard, and demo entry point, presenting the organization overview, release information, and Benchmark result snapshots.

  • vllm-hust-workstation: the end-user-facing web workstation, providing a unified inference entry point, a visual console, and EvoScientist embedding.

  • EvoScientist: an agent application for research workflows that can use vllm-hust as its underlying inference and tool-calling backend.
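The benchmark-to-website handoff described above can be sketched as a small result exporter. The field names (`p50_ms`, `mean_ms`) and snapshot shape are illustrative assumptions, not the actual vllm-hust-benchmark schema:

```python
import json
import statistics
from dataclasses import dataclass


@dataclass
class RunResult:
    """Raw per-scenario measurements produced by a benchmark run."""
    scenario: str
    latencies_ms: list


def export_snapshot(results: list, path: str) -> list:
    """Summarize per-scenario latencies into a JSON snapshot that a
    website leaderboard could consume."""
    snapshot = [
        {
            "scenario": r.scenario,
            "p50_ms": statistics.median(r.latencies_ms),
            "mean_ms": statistics.fmean(r.latencies_ms),
            "samples": len(r.latencies_ms),
        }
        for r in results
    ]
    with open(path, "w") as f:
        json.dump(snapshot, f, indent=2)
    return snapshot
```

Exporting a static JSON artifact, rather than querying the benchmark harness live, is what allows the website to display leaderboard snapshots without a runtime dependency on the serving stack.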

Recommended Reading Order

If you are new to the vLLM-HUST organization, the following order works well:

  1. Start with vllm-hust to understand the core runtime and serving surface.
  2. If you care about Ascend or other domestic hardware, move to vllm-ascend-hust and ascend-runtime-manager.
  3. If you are setting up a local development environment, use vllm-hust-dev-hub directly.
  4. If you need performance validation or result publication, read vllm-hust-benchmark and vllm-hust-website.
  5. If you care about the end-user experience or higher-level applications, finish with vllm-hust-workstation and EvoScientist.

Relationship With Upstream

vLLM-HUST is not a new inference stack built from scratch; it is an engineering-focused enhancement layer around upstream projects:

  • upstream runtime reference: vllm-project/vllm
  • upstream Ascend plugin reference: vllm-project/vllm-ascend
  • related comparison and ecosystem reference: sgl-project/sglang

By default, repositories in this organization prioritize staying maintainable, synchronizable, and verifiable over diverging from upstream without bounds.

Contributing

Contributions are welcome via issues, pull requests, and benchmark / deployment feedback.

Community Defaults

This organization also uses this repository for shared community health files:

  • default issue templates
  • default pull request template
  • shared security policy
  • shared code of conduct

If a specific repository does not override those files, GitHub will fall back to the defaults provided here.
