flashinfer-ai/flashinfer-bench

Building the Virtuous Cycle for AI-driven LLM Systems

Get Started | Documentation | Blogpost

FlashInfer-Bench is a benchmark suite and production workflow for building a virtuous cycle of self-improving AI systems.

It is part of a broader initiative in which AI improves AI systems: AI agents and engineers collaborate to optimize the very kernels that power large language models.

Installation

Install FlashInfer-Bench with pip:

```shell
pip install flashinfer-bench
```

Import FlashInfer-Bench:

```python
import flashinfer_bench as fib

print(fib.__version__)
```

Get Started

This guide shows how to use the FlashInfer-Bench Python module with the FlashInfer-Trace dataset.

FlashInfer Trace Dataset

We provide an official dataset, FlashInfer-Trace, containing kernels and workloads captured from real-world AI system deployments. FlashInfer-Bench uses this dataset to measure and compare kernel performance. The dataset follows the FlashInfer Trace Schema.

The official dataset is on HuggingFace: https://huggingface.co/datasets/flashinfer-ai/flashinfer-trace

Collaborators

Our collaborators include: (collaborator logos shown on the repository page)
