Profile PyTorch models for FLOPs and parameters, helping to evaluate computational efficiency and memory usage.
A small OpenCL benchmark program to measure peak GPU/CPU performance.
🛠 Toolbox to extend PyTorch functionalities
MethodsCmp: A Simple Toolkit for Counting the FLOPs/MACs, Parameters and FPS of PyTorch-based Methods
AI and Memory Wall
A toolkit for scaling law research ⚖
Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.)
A simple program to calculate and visualize the FLOPs and Parameters of PyTorch models, with a handy CLI and easy-to-use Python API.
PyTorch module FLOPs counter
FLOPs calculator using tf.profiler for neural network architectures written in TensorFlow 2.2+ (tf.keras)
Dynamic Frame Interpolation in Wavelet Domain (TIP 2023)
Easily benchmark PyTorch model FLOPs, latency, throughput, allocated gpu memory and energy consumption
[A Swiss-Army-knife tool] Analyze models with minimal code, covering ImageNet Val accuracy, FLOPs, Params, Throughput, CAM, and more
Benchmark PyTorch models
Utilities for benchmarking deep learning models (number of parameters, FLOPs, and inference latency)
FLOPs and other statistics counter for tf.keras neural networks
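Most of the counters listed above derive FLOPs from per-layer multiply-accumulate (MAC) formulas. As a minimal dependency-free sketch of that arithmetic (the layer shapes below are illustrative, not taken from any specific tool):

```python
# Hand-computed MAC/FLOP counts for two common layer types -- the same
# per-layer formulas most FLOPs counters apply. Pure Python, no torch needed.

def linear_macs(in_features, out_features):
    # Each output neuron performs in_features multiply-accumulates.
    return in_features * out_features

def conv2d_macs(c_in, c_out, kernel, h_out, w_out):
    # Each output element needs c_in * kernel * kernel MACs;
    # there are c_out * h_out * w_out output elements.
    return c_out * h_out * w_out * c_in * kernel * kernel

# Example: a ResNet-style stem conv on a 224x224 RGB input:
# Conv2d(3, 64, kernel_size=7, stride=2, padding=3) -> 112x112 feature map.
macs = conv2d_macs(3, 64, 7, 112, 112)
flops = 2 * macs  # convention: count the multiply and the add separately
print(macs, flops)
```

Note that tools differ on conventions: some report MACs, others report FLOPs as 2x MACs, and some ignore biases and elementwise ops, so numbers from different counters are not directly comparable.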