🤗 Optimum Intel: Accelerate inference with Intel optimization tools
Intel® Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application.
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools.
A simple project that combines the power of assembly language with the power of C.
The Remote Provisioning Service (RPS) is a Node.js-based microservice that works with the Remote Provisioning Client (RPC) to activate Intel® AMT platforms using a pre-defined profile.
The Management Presence Server (MPS) is a cloud-agnostic microservice that enables platforms featuring Intel® AMT to be managed over the internet.
A lightweight Armoury Crate alternative for ASUS laptops and the ROG Ally. A control tool for the ROG Zephyrus G14, G15, G16, M16, Flow X13, Flow X16, TUF, Strix, Scar, and other models.
This repository contains Dockerfiles, scripts, YAML files, Helm charts, and other assets used to scale out AI containers with versions of TensorFlow and PyTorch that have been optimized for Intel platforms. Scaling is done with Python, Docker, Kubernetes, Kubeflow, cnvrg.io, Helm, and other container orchestration frameworks for use in the cloud and on-premises.
A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms.