PyTorch original implementation of Cross-lingual Language Model Pretraining.
Training Using Multiple GPUs
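As a rough illustration of this topic (not code from any repository listed here), a minimal single-node multi-GPU training loop in PyTorch can be sketched with nn.DataParallel; the model, batch shapes, and hyperparameters below are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # placeholder model
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # split each batch across all visible GPUs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 128, device=device)         # dummy inputs
    y = torch.randint(0, 10, (64,), device=device)  # dummy labels
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```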
AI core library
Very minimal PyTorch boilerplate with wandb logging and multi-GPU support
Distributed_compy is a distributed computing library that offers multi-threading, heterogeneous (CPU + multi-GPU), and multi-node support
Lightweight Keras model for sketch image classification using the Quick, Draw! dataset
Asynchronous learning example running on localhost
Recommendation Engine powered by Matrix Factorization.
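For context (a generic sketch, not that engine's actual code), matrix factorization learns a user embedding and an item embedding whose dot product approximates the observed rating; all sizes and data below are dummies:

```python
import torch
import torch.nn as nn

class MatrixFactorization(nn.Module):
    """Plain MF: predicted rating = dot(user_vector, item_vector)."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        return (self.user(users) * self.item(items)).sum(dim=-1)

model = MatrixFactorization(n_users=1000, n_items=500)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
users = torch.randint(0, 1000, (256,))   # dummy (user, item, rating) triples
items = torch.randint(0, 500, (256,))
ratings = torch.rand(256) * 5
loss = nn.functional.mse_loss(model(users, items), ratings)
loss.backward()
optimizer.step()
```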
Custom Iterable Dataset Class for Large-Scale Data Loading
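As a hedged sketch of the general pattern (not the class from that repository), a PyTorch IterableDataset can shard a stream across DataLoader workers via get_worker_info(); RangeStream here is a made-up stand-in for a large data source:

```python
import torch
from torch.utils.data import IterableDataset, DataLoader, get_worker_info

class RangeStream(IterableDataset):
    """Streams integers in [start, end); a stand-in for a large file or shard list."""
    def __init__(self, start, end):
        self.start, self.end = start, end

    def __iter__(self):
        info = get_worker_info()
        if info is None:  # single-process data loading
            lo, hi = self.start, self.end
        else:             # split the range across DataLoader workers
            per_worker = (self.end - self.start) // info.num_workers
            lo = self.start + info.id * per_worker
            hi = self.end if info.id == info.num_workers - 1 else lo + per_worker
        for i in range(lo, hi):
            yield torch.tensor(i)

if __name__ == "__main__":
    loader = DataLoader(RangeStream(0, 1000), batch_size=32, num_workers=2)
    for batch in loader:
        pass  # a training step would consume `batch` here
```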
multi_gpu_infer: multi-GPU inference via multiprocessing or subprocess
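A generic sketch of the multiprocessing route (the repository's real interface may differ): one process per GPU, each scoring its own slice of the inputs. The model and data are placeholders, and at least one CUDA device is assumed:

```python
import torch
import torch.multiprocessing as mp

def worker(rank, chunks, results):
    # One process per GPU: put the model on cuda:<rank> and score this rank's chunk.
    device = torch.device(f"cuda:{rank}")
    model = torch.nn.Linear(128, 10).to(device).eval()  # placeholder model
    with torch.no_grad():
        results[rank] = [model(x.to(device)).cpu() for x in chunks[rank]]

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    assert n_gpus >= 1, "this sketch assumes at least one CUDA device"
    data = [torch.randn(8, 128) for _ in range(64)]    # dummy input batches
    chunks = [data[i::n_gpus] for i in range(n_gpus)]  # round-robin split across GPUs
    with mp.Manager() as manager:
        results = manager.dict()
        mp.spawn(worker, args=(chunks, results), nprocs=n_gpus, join=True)
        outputs = [t for r in range(n_gpus) for t in results[r]]  # reassemble
```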
CRNN (Convolutional Recurrent Neural Network), with optional STN (Spatial Transformer Network), in TensorFlow, with multi-GPU support.
TOmographic MOdel-BAsed Reconstruction (ToMoBAR) software
Gradually-Warmup Learning Rate Scheduler for PyTorch
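The linked project ships its own scheduler class; purely as an illustration of the idea, a linear warmup can also be approximated with the stock torch.optim.lr_scheduler.LambdaLR (warmup_steps is an arbitrary choice here):

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

warmup_steps = 500  # arbitrary illustrative value
def warmup(step):
    # Scale the base LR linearly from ~0 up to 1x over the first warmup_steps.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup)

for step in range(1000):
    optimizer.step()   # loss.backward() would precede this in real training
    scheduler.step()
```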
MobileNet built with TensorFlow