autra tech
- Beijing
Pinned
- Megatron-LM (Python; forked from NVIDIA/Megatron-LM)
  Ongoing research training transformer models at scale.
- DeepSpeed (Python; forked from microsoft/DeepSpeed)
  DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- how-to-optim-algorithm-in-cuda (CUDA; forked from BBuf/how-to-optim-algorithm-in-cuda)
  How to optimize some algorithms in CUDA.
- TensorRT-LLM (C++; forked from NVIDIA/TensorRT-LLM)
  TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficie…
- flash-attention (Python; forked from Dao-AILab/flash-attention)
  Fast and memory-efficient exact attention.
- vllm (Python; forked from vllm-project/vllm)
  A high-throughput and memory-efficient inference and serving engine for LLMs.