ALBERT model Pretraining and Fine Tuning using TF2.0
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡
Pretrained-model loading and invocation based on TensorFlow 1.x, with support for single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision. Flexible training, validation, and prediction.
Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload
Provides code to serialize the different models involved in Stable Diffusion as SavedModels and to compile them with XLA.
TensorFlow 2 training code with JIT compilation on multiple GPUs.
Deep learning inference performance analysis.
Fast and easy distributed model training examples.
A Multivariate Gaussian Bayes classifier written using JAX
Optimal choice for 🛰 classification problem.
A practical method for training energy-based language models.
Versatile Data Ingestion Pipelines for Jax
Classification of multilingual dataset trained only on English training data using pre-trained models. Model is trained on TPUs using PyTorch and torch_xla library.
Access the Xspec models and corresponding JAX/XLA ops.
Modern graph-mode TensorFlow implementation of the Super-Resolution GAN (SRGAN).
As the quality of large language models increases, so do our expectations of what they can do. Since the release of OpenAI's GPT-2, text generation capabilities have received growing attention, and for good reason: these models can be used for summarization, translation, and even on-the-fly learning in some language tasks.
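Several of the repositories above rely on XLA JIT compilation in TensorFlow 2. A minimal sketch of how that typically looks (the function and tensor shapes here are illustrative, not taken from any of the listed projects): passing `jit_compile=True` to `tf.function` asks TensorFlow to lower the traced computation to XLA, which can fuse operations into a single compiled kernel.

```python
import tensorflow as tf  # assumes TensorFlow 2.x with XLA support

# jit_compile=True requests XLA compilation: the matmul, bias add,
# and ReLU below can be fused into one kernel on first trace.
@tf.function(jit_compile=True)
def dense_step(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.ones((4, 8))
w = tf.ones((8, 2))
b = tf.zeros((2,))
y = dense_step(x, w, b)
print(tuple(y.shape))  # (4, 2); each entry is relu(8.0) = 8.0
```

The same idea appears in JAX as `jax.jit`, which compiles traced functions through XLA by default.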