# Windows Machine Learning
The Windows ML API is a Windows Runtime Component and is suitable for high-performance, low-latency applications such as frameworks, games, and other real-time applications as well as applications built with high-level languages.
This repo contains Windows Machine Learning samples and tools that demonstrate how to build machine learning powered scenarios into Windows applications.
- Getting Started with Windows ML
- Model Samples
- Advanced Scenario Samples
- Developer Tools
- External Links
For additional information on Windows ML, including step-by-step tutorials and how-to guides, please visit the Windows ML documentation.
## Getting Started with Windows ML
Windows ML offers machine learning inferencing via the inbox Windows SDK as well as a redistributable NuGet package. The table below highlights the availability, distribution, language support, servicing, and forward compatibility aspects of the In-Box and NuGet package for Windows ML.
| | In-Box | NuGet Package |
|---|---|---|
| Availability | Windows 10 - Build 17763 (RS5) or newer. For more detailed information about version support, check out our docs. | Windows 8.1 or newer. NOTE: Some APIs (i.e. VideoFrame) are not available on older OSes. |
| Windows SDK | Windows SDK - Build 17763 (RS5) or newer | Windows SDK - Build 17763 (RS5) or newer |
| Distribution | Built into Windows | Package and distribute as part of your application |
| Servicing | Microsoft-driven (customers benefit automatically) | Developer-driven |
| Forward compatibility | Automatically rolls forward with new features | Developer needs to update the package manually |
Learn more here.
## Model Samples
In this section you will find model samples for a variety of scenarios across the different Windows ML API offerings.
### Image Classification
A subdomain of computer vision in which an algorithm looks at an image and assigns it a tag from a collection of predefined tags or categories that it has been trained on.
### Style Transfer
A computer vision technique that recomposes the content of one image in the style of another.
## Advanced Scenario Samples
These advanced samples show how to use various binding and evaluation features in Windows ML:
Custom Tensorization: A Windows Console Application (C++/WinRT) that shows how to do custom tensorization.
Custom Operator (CPU): A desktop app that defines multiple custom CPU operators, including a debug operator that we invite you to integrate into your own workflow.
Adapter Selection: A desktop app that demonstrates how to choose a specific device adapter for running your model.
Plane Identifier: A UWP app and a WPF app packaged with the Desktop Bridge, sharing the same model trained using the Azure Custom Vision service. For step-by-step instructions for this sample, please see the blog post Upgrade your WinML application to the latest bits.
Custom Vision and Windows ML: This tutorial shows how to train a neural network model to classify images of food using the Azure Custom Vision service, export the model to ONNX format, and deploy the model in a Windows Machine Learning application running locally on a Windows device.
ML.NET and Windows ML: This tutorial shows you how to train a neural network model to classify images of food using ML.NET Model Builder, export the model to ONNX format, and deploy the model in a Windows Machine Learning application running locally on a Windows device.
PyTorch Data Analysis: This tutorial shows how to solve a classification task with a neural network using the PyTorch library, export the model to ONNX format, and deploy the model in a Windows Machine Learning application that can run on any Windows device.
PyTorch Image Classification: This tutorial shows how to train an image classification neural network model using PyTorch, export the model to the ONNX format, and deploy it in a Windows Machine Learning application running locally on your Windows device.
YoloV4 Object Detection: This tutorial shows how to build a UWP C# app that uses the YOLOv4 model to detect objects in video streams.
## Developer Tools
Windows ML provides inferencing capabilities powered by the ONNX Runtime engine. As such, any model run by Windows ML must be in the ONNX model format; models built and trained in source frameworks like TensorFlow or PyTorch must first be converted. Check out the documentation for how to convert a model to ONNX:
- WinMLTools: a Python tool for converting models from different machine learning toolkits into ONNX for use with Windows ML.
Models may need further optimization after conversion to support advanced features like batching and quantization. Check out the following tools for optimizing your model:
WinML Dashboard (Preview): a GUI-based tool for viewing, editing, converting, and validating machine learning models for the Windows ML inference engine. This tool can also be used to enable free dimensions on models that were built with fixed dimensions. Download Preview Version
Graph Optimizations: Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.
Graph Quantization: Quantization in ONNX Runtime refers to 8-bit linear quantization of an ONNX model.
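To illustrate the idea behind 8-bit linear quantization, here is a toy pure-Python sketch of the underlying math (not ONNX Runtime's actual implementation, which operates on graph nodes and tensors):

```python
# Toy sketch of asymmetric 8-bit linear quantization: each float x is mapped
# to an integer level q = round(x / scale) + zero_point, with q in [0, 255].

def quantize_linear(values, num_bits=8):
    """Quantize a list of floats to unsigned integer levels."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)       # range must include zero
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against an all-zero range
    zero_point = round(qmin - lo / scale)     # integer level that represents 0.0
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in values]
    return q, scale, zero_point

def dequantize_linear(q, scale, zero_point):
    """Recover approximate floats: x is roughly (q - zero_point) * scale."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale, zp = quantize_linear(weights)
approx = dequantize_linear(q, scale, zp)
```

Each dequantized value differs from the original by at most one quantization step (`scale`), which is the trade-off quantization makes for a 4x smaller weight footprint and faster integer arithmetic.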
WinMLRunner: a command-line tool that can run .onnx or .pb models where the input and output variables are tensors or images. It is a very handy tool to quickly validate an ONNX model. It will attempt to load, bind, and evaluate a model and print out helpful messages. It also captures performance measurements.
WinML Code Generator (mlgen): a Visual Studio extension that helps you get started with the WinML APIs in UWP apps by generating template code when you add a trained ONNX file to the UWP project. From the template code you can load a model, create a session, bind inputs, and evaluate using the generated wrapper code. See the docs for more info.
WinML Samples Gallery: explore a variety of ML integration scenarios and models.
## External Links
- For issues, file a bug on GitHub Issues.
- Ask questions on Stack Overflow.
- Vote for popular feature requests on Windows Developer Feedback, or add your own.
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator.
- ONNX: Open Neural Network Exchange Project.
We're always looking for your help to fix bugs and improve the samples. Create a pull request, and we'll be happy to take a look.