
Object Detection

ML.NET version: v1.0.0
API type: Dynamic API
Status: Up-to-date
App type: Console app
Data type: Image files
Scenario: Object Detection
ML task: Deep Learning
Algorithms: Tiny Yolo2 ONNX model


Object detection is one of the classical problems in computer vision: Recognize what the objects are inside a given image and also where they are in the image. For these cases, you can either use pre-trained models or train your own model to classify images specific to your custom domain.


The dataset contains images, which are located in the assets folder. These images were taken from the Wikimedia Commons site; refer to it for the image URLs and their licenses.

Pre-trained model

There are multiple pre-trained models for identifying multiple objects in images. Here we use the pre-trained model Tiny Yolo2 in ONNX format. This model is a real-time neural network for object detection that detects 20 different classes. It is made up of 9 convolutional layers and 6 max-pooling layers, and is a smaller version of the more complex full YOLOv2 network.

The Open Neural Network Exchange (ONNX) is an open format to represent deep learning models. With ONNX, developers can move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners.

The model is downloaded from the ONNX Model Zoo, which is a collection of pre-trained, state-of-the-art models in the ONNX format.

The Tiny YOLO2 model was trained on the Pascal VOC dataset. Below are the model's prerequisites.

Model input and output


Input: an image of shape (3x416x416)

Output: a (1x125x13x13) array

Pre-processing steps

Resize the input image to a (3x416x416) array of type float32.
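To make the channel-first (CHW) layout concrete, here is a minimal, language-neutral sketch in Python (the function and variable names are illustrative, not part of the sample):

```python
# Illustrate the channel-first (CHW) float32 layout the model expects:
# 3 channels x 416 rows x 416 columns, pixel values as floats.

WIDTH, HEIGHT, CHANNELS = 416, 416, 3

def to_chw(pixels):
    """Convert an HxW grid of (r, g, b) tuples into a flat
    channel-first float list of length 3*416*416."""
    chw = []
    for c in range((CHANNELS)):               # R plane, then G, then B
        for y in range(HEIGHT):
            for x in range(WIDTH):
                chw.append(float(pixels[y][x][c]))
    return chw

# A dummy all-gray image:
image = [[(128, 128, 128)] * WIDTH for _ in range(HEIGHT)]
tensor = to_chw(image)
print(len(tensor))  # 3 * 416 * 416 = 519168
```

In the actual sample this conversion is performed by the ResizeImages and ExtractPixels transforms shown in the pipeline below.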

Post-processing steps

The output is a (125x13x13) tensor where 13x13 is the number of grid cells that the image gets divided into. Each grid cell corresponds to 125 channels, made up of the 5 bounding boxes predicted by the grid cell and the 25 data elements that describe each bounding box (5x25=125). For more information on how to derive the final bounding boxes and their corresponding confidence scores, refer to this post.
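The arithmetic behind that layout can be sketched as follows (Python for brevity; the helper names are hypothetical and only illustrate how a flat CHW output is indexed):

```python
# The raw output is a flat array of 125 * 13 * 13 = 21125 floats.
# Channel layout per grid cell: 5 boxes x 25 values each
# (x, y, w, h, objectness, then 20 class scores).

GRID = 13
BOXES_PER_CELL = 5
VALUES_PER_BOX = 25                            # 4 coords + 1 objectness + 20 classes
CHANNELS = BOXES_PER_CELL * VALUES_PER_BOX     # 125

def output_index(channel, row, col):
    """Index into the flat CHW output: channel-major, then row, then column."""
    return channel * GRID * GRID + row * GRID + col

output = [0.0] * (CHANNELS * GRID * GRID)
print(len(output))                             # 21125

# Example: objectness of box 2 in grid cell (row=6, col=6)
box, cell_row, cell_col = 2, 6, 6
objectness_channel = box * VALUES_PER_BOX + 4  # 5th value of that box
idx = output_index(objectness_channel, cell_row, cell_col)
```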


The console application project ObjectDetection can be used to identify objects in the sample images based on the Tiny Yolo2 ONNX model.

Again, note that this sample only uses/consumes a pre-trained ONNX model with the ML.NET API. Therefore, it does not train any ML.NET model. Currently, ML.NET supports only scoring/detecting with existing ONNX trained models.

You need to follow the next steps in order to execute the object detection test:

  1. Set VS default startup project: Set ObjectDetection as the startup project in Visual Studio.
  2. Run the console app: Press F5 in Visual Studio. At the end of the execution, the output will be similar to this screenshot.

Code Walkthrough

There is a single project in the solution named ObjectDetection, which is responsible for loading the model in Tiny Yolo2 ONNX format and then detecting objects in the images.

ML.NET: Model Scoring

Define the schema of the data in a class type and reference that type while loading the data using TextLoader. Here the class type is ImageNetData.

public class ImageNetData
{
    // Map the TSV columns to fields for the TextLoader
    [LoadColumn(0)]
    public string ImagePath;

    [LoadColumn(1)]
    public string Label;

    public static IEnumerable<ImageNetData> ReadFromCsv(string file, string folder)
    {
        return File.ReadAllLines(file)
            .Select(x => x.Split('\t'))
            .Select(x => new ImageNetData { ImagePath = Path.Combine(folder, x[0]), Label = x[1] });
    }
}

The first step is to load the data using TextLoader

var data = mlContext.Data.LoadFromTextFile<ImageNetData>(imagesLocation, hasHeader: false);

The image file used to load images has two columns: the first one is defined as ImagePath and the second one is the Label corresponding to the image.

It is important to highlight that the label in the ImageNetData class is not really used when scoring with the Tiny Yolo2 Onnx model. It is only used to print the labels on the console.

dog2.jpg	dog2
Intersection-Counts.jpg	intersection
ManyPets.jpg	ManyPets

As you can observe, the file does not have a header row.
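The ReadFromCsv logic above can be mirrored in a few lines of Python to show how the tab-separated file maps to (image path, label) pairs (the function and variable names here are illustrative):

```python
# Mirror of ImageNetData.ReadFromCsv: split each tab-separated line
# into (image path, label), prefixing the path with the image folder.
import os

def read_image_list(lines, folder):
    rows = []
    for line in lines:
        path, label = line.rstrip("\n").split("\t")
        rows.append((os.path.join(folder, path), label))
    return rows

sample = ["dog2.jpg\tdog2", "ManyPets.jpg\tManyPets"]
print(read_image_list(sample, "assets/images"))
```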

The second step is to define the estimator pipeline. Usually, when dealing with deep neural networks, you must adapt the images to the format expected by the network. This is the reason images are resized and then transformed (mainly, pixel values are normalized across all R,G,B channels).

 var pipeline = mlContext.Transforms.LoadImages(outputColumnName: "image", imageFolder: imagesFolder, inputColumnName: nameof(ImageNetData.ImagePath))
                            .Append(mlContext.Transforms.ResizeImages(outputColumnName: "image", imageWidth: ImageNetSettings.imageWidth, imageHeight: ImageNetSettings.imageHeight, inputColumnName: "image"))
                            .Append(mlContext.Transforms.ExtractPixels(outputColumnName: "image"))
                            .Append(mlContext.Transforms.ApplyOnnxModel(modelFile: modelLocation, outputColumnNames: new[] { TinyYoloModelSettings.ModelOutput }, inputColumnNames: new[] { TinyYoloModelSettings.ModelInput }));

You also need to inspect the neural network to find the names of the input/output nodes. To inspect the model, you can use tools like Netron, which is automatically installed with Visual Studio Tools for AI. These names are used later in the definition of the estimator pipeline: in the case of the Tiny Yolo2 model, the input tensor is named 'image' and the output is named 'grid'.

Define the input and output parameters of the Tiny Yolo2 Onnx Model.

    public struct TinyYoloModelSettings
    {
        // To check the Tiny Yolo2 model's input and output parameter names,
        // you can use tools like Netron,
        // which is installed by Visual Studio Tools for AI.

        // Input tensor name
        public const string ModelInput = "image";

        // Output tensor name
        public const string ModelOutput = "grid";
    }

(Screenshot: inspecting the neural network with Netron.)

Finally, we extract the prediction engine after fitting the estimator pipeline. The prediction engine receives as a parameter an object of type ImageNetData (containing 2 properties: ImagePath and Label), and then returns an object of type ImageNetPrediction.

  var model = pipeline.Fit(data);
  var predictionEngine = mlContext.Model.CreatePredictionEngine<ImageNetData, ImageNetPrediction>(model);

When obtaining the prediction, we get a float array of size 21125 in the property PredictedLabels. This is the output of the model, i.e. 125x13x13 as discussed earlier. This output is interpreted by the YoloMlParser class, which returns a number of bounding boxes for each image. These boxes are then filtered so that we retrieve only the 5 bounding boxes with the best confidence (how certain we are that a box contains an object) for each image. On the console we display the label value of each bounding box.
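That final filtering step can be sketched as follows (a simplified Python illustration; the sample's actual parser also decodes box coordinates and applies the confidence math described in the post-processing section):

```python
# Sketch of the final filtering step: keep only the most confident
# bounding boxes for an image. The box format (label, confidence)
# is a simplification for illustration.

def top_boxes(boxes, count=5):
    """boxes: list of (label, confidence). Return the `count` most confident."""
    return sorted(boxes, key=lambda b: b[1], reverse=True)[:count]

detected = [("dog", 0.91), ("car", 0.40), ("person", 0.73),
            ("cat", 0.15), ("bicycle", 0.66), ("chair", 0.58)]
for label, confidence in top_boxes(detected):
    print(f"{label}: {confidence:.2f}")
```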

Note: The Tiny Yolo2 model is not as accurate as the full YOLOv2 model. As this is a sample program, we use the tiny version of the Yolo model, i.e. Tiny Yolo2.
