🌻 Deep Learning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
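A minimal sketch of the Lightning workflow this refers to, assuming the `lightning>=2.0` package layout: the model code stays the same and only the `Trainer` arguments change to scale from one GPU to many (the module and toy dataset below are placeholders, not from the repo).

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import lightning as L


class LitRegressor(L.LightningModule):
    """Toy model; only training_step and configure_optimizers are required."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(16, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    data = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=32)
    # Scaling is a Trainer setting, not a model change:
    # devices=1 on a laptop, devices=8 with strategy="ddp" on a multi-GPU node.
    trainer = L.Trainer(max_epochs=1, accelerator="auto", devices=1)
    trainer.fit(LitRegressor(), data)
```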
Coding assistance for JupyterLab (code navigation + hover suggestions + linters + autocompletion + rename) using Language Server Protocol
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
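The core of MAE is per-sample random masking of a high fraction of patches before the encoder; below is a minimal sketch of that step in plain PyTorch (tensor shapes and the 75% ratio follow the paper, the function itself is illustrative rather than the repo's code).

```python
import torch


def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch embeddings per sample, as in MAE.

    patches: (batch, num_patches, dim) tensor of patch embeddings.
    Returns the visible patches, a binary mask (1 = masked) for the
    reconstruction loss, and the indices needed to restore patch order.
    """
    n, num_patches, dim = patches.shape
    len_keep = int(num_patches * (1 - mask_ratio))

    noise = torch.rand(n, num_patches, device=patches.device)  # per-patch scores
    ids_shuffle = torch.argsort(noise, dim=1)                   # random permutation
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :len_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    mask = torch.ones(n, num_patches, device=patches.device)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)                   # back to original order
    return visible, mask, ids_restore
```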
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
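A hedged sketch of querying a model already deployed on Triton via its Python HTTP client; the server address, model name, and tensor names are placeholders that depend on your deployment.

```python
import numpy as np
import tritonclient.http as httpclient

# Assumes a Triton server on localhost:8000 serving a model named "resnet"
# with an FP32 input tensor "input" and an output tensor "output".
client = httpclient.InferenceServerClient(url="localhost:8000")

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input", list(x.shape), "FP32")
inp.set_data_from_numpy(x)

result = client.infer(model_name="resnet", inputs=[inp])
print(result.as_numpy("output").shape)
```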
ncnn is a high-performance neural network inference framework optimized for the mobile platform
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
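For reference, a minimal ONNX Runtime inference sketch in Python; the model file name and input shape are placeholders.

```python
import numpy as np
import onnxruntime as ort

# Load an exported model and run one forward pass on CPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})  # None = return all outputs
print([o.shape for o in outputs])
```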
Publication-ready NN-architecture schematics.
A performant and modular runtime for TensorFlow
Turn any PDF or image document into structured data for your AI. A powerful, lightweight OCR toolkit that bridges the gap between images/PDFs and LLMs. Supports 100+ languages.
Implementations of various lightweight networks in PyTorch, such as MobileNetV2, MobileNeXt, GhostNet, ParNet, MobileViT, AdderNet, ShuffleNetV1-V2, LCNet, ConvNeXt, etc. ⭐⭐⭐⭐⭐
min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch
Make bilingual EPUB books using AI translation
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training