Accelerate Model Training with PyTorch 2.X


This is the code repository for Accelerate Model Training with PyTorch 2.X, published by Packt.

Build more accurate models by boosting the model training process

What is this book about?

This book will help you use a set of optimization techniques and strategies to speed up the training process of ML models. You’ll learn how to identify performance bottlenecks, decide on the most suitable approach, and implement the correct solution.

This book covers the following exciting features:

  • Compile the model to train it faster (see the brief sketch after this list, which pairs compilation with mixed precision)
  • Use specialized libraries to optimize the training on the CPU
  • Build a data pipeline to boost GPU execution
  • Simplify the model through pruning and compression techniques
  • Adopt automatic mixed precision without penalizing the model's accuracy
  • Distribute the training step across multiple machines and devices
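
As a taste of two of these techniques, here is a minimal, hedged sketch of a single training step that combines torch.compile with automatic mixed precision. The toy model, tensor shapes, and hyperparameters are illustrative assumptions, not the book's own examples:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical toy model; the book's chapters use their own models and datasets
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
model = torch.compile(model)  # PyTorch 2.x: compile the model for faster training steps

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    # Mixed precision: the forward pass runs in lower precision where it is safe to do so
    loss = criterion(model(inputs), targets)
scaler.scale(loss).backward()  # scale the loss to avoid float16 gradient underflow
scaler.step(optimizer)
scaler.update()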

If you feel this book is for you, get your copy today! https://www.packtpub.com/

Instructions and Navigation

All of the code is organized into folders. For example, Chapter06.

The code will look like the following:

config_list = [{
    'op_types': ['Linear'],
    'exclude_op_names': ['layer4'],
    'sparse_ratio': 0.3
}]
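
The config above describes a pruning setup: target Linear layers, exclude the modules named layer4, and prune 30% of the weights. As a rough illustration of the same idea using only PyTorch's built-in pruning utilities (not necessarily the library the book uses), a hedged sketch might look like this, with resnet18 as an assumed placeholder model:

import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

# Hypothetical model; the 'layer4' exclusion mirrors ResNet-style module names
model = resnet18()

for name, module in model.named_modules():
    # Prune 30% of the weights in every Linear layer, skipping anything under layer4
    if isinstance(module, nn.Linear) and not name.startswith("layer4"):
        prune.l1_unstructured(module, name="weight", amount=0.3)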

Following is what you need for this book: This book is for intermediate-level data scientists who want to learn how to leverage PyTorch to speed up the training of their machine learning models by employing a set of optimization strategies and techniques. To make the most of this book, familiarity with the basic concepts of machine learning, PyTorch, and Python is essential. However, prior knowledge of distributed computing, accelerators, or multicore processors is not required.

With the following software and hardware list you can run all the code files present in the book (Chapters 1-11).

Software and Hardware List

Chapter | Software required | OS required
1-11    | PyTorch 2.X       | Windows, Linux, or macOS

To enhance your experience, we recommend executing the code on a system equipped with an NVIDIA graphics card with CUDA support, and making sure that you run it in a suitable environment with all the necessary libraries and modules installed.
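
For example, a quick way to confirm that your environment has a suitable PyTorch build and that PyTorch can see a CUDA-capable GPU is a small check like this:

import torch

print(torch.__version__)          # should report a 2.x release
print(torch.cuda.is_available())  # True means a CUDA-capable GPU and driver are visible to PyTorch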

Related products

Get to Know the Author

Dr. Maicon Melo Alves is a senior system analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has become interested in understanding how HPC systems are used to power Artificial Intelligence applications. To deepen this understanding, in 2021 he completed an MBA in Data Science at Pontificia Universidade Catolica of Rio de Janeiro (PUC-RIO). He has over 25 years of experience in IT infrastructure and has worked with HPC systems at Petrobras, the Brazilian state energy company, since 2006. He obtained his D.Sc. degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has authored three books as well as papers in international journals in the HPC area.
