Intel® Extension for PyTorch*

💻Examples   |   📖CPU Documentation   |   📖GPU Documentation

Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.

Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode. However, compared to eager mode, graph mode in PyTorch* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Both the PyTorch* TorchScript and TorchDynamo graph modes are supported. With TorchScript, we recommend torch.jit.trace() as the preferred option, as it generally supports a wider range of workloads than torch.jit.script(); a minimal tracing sketch follows below.
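
As a minimal sketch of the TorchScript path: the toy torch.nn.Sequential model and the input shape below are placeholders chosen only to keep the example self-contained and runnable.

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Toy model used purely for illustration
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
model = ipex.optimize(model)

# torch.jit.trace records the executed operators into a TorchScript graph,
# which the extension can then optimize further (e.g. operator fusion).
with torch.no_grad():
    traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
    traced = torch.jit.freeze(traced)
    output = traced(torch.randn(1, 3, 224, 224))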

The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, you can enable it dynamically by importing intel_extension_for_pytorch.
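
As a minimal sketch of the Python workflow (the model and optimizer below are placeholders): importing the module enables the extension, and ipex.optimize() returns the optimized model, plus the optimizer when one is passed for training.

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # importing the module enables the extension

# Placeholder model and optimizer for illustration
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# For training, pass the optimizer as well; both are returned optimized.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

# For inference, call ipex.optimize(model) on a model in eval() mode instead.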

Large Language Models (LLMs) Optimization

Generative AI (GenAI) workloads and models have gained widespread attention and popularity, with Large Language Models (LLMs) emerging as the dominant models driving these GenAI applications. Starting with release 2.1.0, Intel® Extension for PyTorch* introduces optimizations for specific LLM models. Check LLM optimizations CPU and LLM optimizations GPU for details.
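
As a hedged sketch only: recent releases expose a dedicated LLM frontend (ipex.llm.optimize), but the exact entry point has varied across versions, so check the LLM documentation for your installed release. The Hugging Face model id below is purely illustrative.

import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model id; substitute the LLM you actually want to run.
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

# LLM-specific optimizations in recent releases; see the LLM docs for your version.
model = ipex.llm.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(generated[0], skip_special_tokens=True))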

Installation

CPU version

Use one of the following commands to install the CPU version of Intel® Extension for PyTorch*.

python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
python -m pip install intel-extension-for-pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
# Users in the People's Republic of China (PRC) can use the CN mirror instead:
python -m pip install intel-extension-for-pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/

Note: Intel® Extension for PyTorch* has PyTorch version requirement. Intel® Extension for PyTorch* v2.1.100+cpu requires PyTorch*/libtorch v2.1.* to be installed.
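
A quick sanity check after installation prints the installed PyTorch* and extension versions to confirm that both packages import correctly:

python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__)"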

For more installation methods and installation guidance for previous versions, refer to Installation.

GPU version

Use the command below to install Intel® Extension for PyTorch* for GPU:

python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

Note: Intel® Extension for PyTorch* v2.1.10+xpu requires PyTorch*/libtorch v2.1.* (patches needed) to be installed.
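
A similar sanity check for the GPU build (the torch.xpu queries below assume the xpu wheels installed above) prints the versions and lists the detected XPU devices:

python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())]"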

For more installation methods and installation guidance for previous versions, refer to Installation.

Getting Started

The following resources will help you get started with the Intel® Extension for PyTorch*:

Intel® AI Reference Models

Use cases that have already been optimized by Intel engineers are available at Intel® AI Reference Models. A broad set of PyTorch use cases for benchmarking is also available on the GitHub page. You can get performance benefits out of the box simply by running the scripts in the Intel® AI Reference Models.

Support

The team tracks bugs and enhancement requests using GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

License

Apache License, Version 2.0, as found in the LICENSE file.

Security

See Intel's Security Center for information on how to report a potential security issue or vulnerability.

See also: Security Policy
