This competition was hosted by WWW 2015 / BIG 2015 and the following Microsoft groups: Microsoft Malware Protection Center, Microsoft Azure Machine Learning and Microsoft Talent Management.



1.1. What is Malware?

The term malware is a contraction of "malicious software". Put simply, malware is any piece of software that was written with the intent of doing harm to data, devices, or people.


1.2. Problem Statement

In the past few years, the malware industry has grown very rapidly. Syndicates invest heavily in technologies to evade traditional protection, forcing anti-malware groups and communities to build more robust software to detect and terminate these attacks. A major part of protecting a computer system from a malware attack is identifying whether a given file or piece of software is malware.

1.3 Source/Useful Links

Microsoft has been very active in building anti-malware products over the years, and it runs its anti-malware utilities on over 150 million computers around the world. This generates tens of millions of daily data points to be analyzed as potential malware. In order to analyze and classify such large amounts of data effectively, we need to be able to group the files and identify their respective families.

This dataset, provided by Microsoft, contains 9 classes of malware.


1.4. Real-world/Business objectives and constraints.

Minimize multi-class error.

Multi-class probability estimates.

Malware detection should not take hours or block the user's computer; it should finish in a few seconds to a minute.

2. Machine Learning Problem

2.1. Data

2.1.1. Data Overview

Source :

For every malware sample, we have two files:

.asm file (read more:
.bytes file (the raw data containing the hexadecimal representation of the file's binary content, without the PE header)

The full training dataset consists of about 200 GB of data, of which 50 GB is .bytes files and 150 GB is .asm files:

That is a lot of data for a single box/computer.

There are 10,868 .bytes files and 10,868 .asm files, for a total of 21,736 files.

There are 9 types of malware (9 classes) in the given data.

Types of Malware:

1. Ramnit
2. Lollipop
3. Kelihos_ver3
4. Vundo
5. Simda
6. Tracur
7. Kelihos_ver1
8. Obfuscator.ACY
9. Gatak

2.2. Mapping the real-world problem to an ML problem

2.2.1. Type of Machine Learning Problem

There are nine different classes of malware into which we need to classify a given data point => multi-class classification problem.

2.2.2. Performance Metric



Multi class log-loss

Confusion matrix
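Both metrics can be computed with scikit-learn. A minimal sketch on a hypothetical mini-example (3 of the 9 classes, made-up probabilities, just for illustration):

```python
import numpy as np
from sklearn.metrics import log_loss, confusion_matrix

# Hypothetical example: 4 samples, 3 classes (the real problem has 9)
y_true = [0, 1, 2, 1]
# Predicted probability per class; each row sums to 1
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.6, 0.1],
])

# Multi-class log-loss penalizes over-confident wrong probabilities
loss = log_loss(y_true, y_prob, labels=[0, 1, 2])
# Confusion matrix uses the argmax (hard) predictions
cm = confusion_matrix(y_true, np.argmax(y_prob, axis=1))
print(round(loss, 4))  # 0.3618
print(cm)
```

Because log-loss is the competition metric, the model must output well-calibrated probabilities, not just class labels.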

2.2.3. Machine Learning Objectives and Constraints

Objective: Predict the probability of each data-point belonging to each of the nine classes.


  • Class probabilities are needed.
  • Penalize the errors in class probabilities => Metric is log-loss.
  • Some latency constraints.

2.3. Train and Test Dataset

Split the dataset randomly into three parts: train, cross-validation, and test, with 64%, 16%, and 20% of the data respectively.
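One way to get this 64/16/20 split is two successive calls to scikit-learn's train_test_split (the arrays below are stand-ins for the real feature matrix and labels):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data: 500 samples, 9 malware classes
X = np.arange(1000).reshape(500, 2)
y = np.random.randint(0, 9, size=500)

# First hold out 20% as the test set,
# then take 20% of the remainder (16% overall) as cross-validation
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
X_train, X_cv, y_train, y_cv = train_test_split(X_rest, y_rest, test_size=0.20, random_state=42)

print(len(X_train), len(X_cv), len(X_test))  # 320 80 100
```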

2.4. Useful blogs, videos and reference papers

First place solution in Kaggle competition:

Solution steps:

Step 0: Set up my GCP environment. I tried my hand at both GCP and Colab for this problem.

Step 1: I took the byte unigram feature first, because it seems to be a very good feature for this problem.
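A byte unigram feature is simply the count of each hex byte value in a .bytes file. A minimal sketch, using a made-up two-line snippet in the dataset's .bytes format (address column first, then hex bytes; "??" marks unreadable bytes and is skipped here):

```python
from collections import Counter

# Hypothetical snippet of a .bytes file
sample = """00401000 56 8D 44 24 08 50 8B F1 E8 1C 1B 00 00 C7 06 08
00401010 BB 42 00 8B C6 5E C2 04 00 CC CC CC CC CC CC CC"""

def byte_unigrams(text):
    counts = Counter()
    for line in text.splitlines():
        tokens = line.split()[1:]  # drop the leading address column
        counts.update(t for t in tokens if t != "??")
    return counts

counts = byte_unigrams(sample)
print(counts["CC"])  # 7
```

In the real pipeline each file yields one 256-dimensional count vector (257 if "??" is kept as its own token).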

Step 2: Then I constructed ASM image features and took the top 500 of them into my dataset.
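One common way to build ASM "image" features, and the approach I assume here, is to read the raw bytes of the .asm file and treat the first N byte values as grayscale pixel intensities. The file name and pixel count below are illustrative:

```python
import numpy as np

def asm_image_features(path, n_pixels=500):
    # Read the raw bytes of the .asm file and treat the first n_pixels
    # byte values as grayscale pixel intensities (the "asm image" features)
    with open(path, "rb") as f:
        data = f.read(n_pixels)
    pixels = np.frombuffer(data, dtype=np.uint8).astype(np.float64)
    # Pad with zeros if the file is shorter than n_pixels
    return np.pad(pixels, (0, n_pixels - len(pixels)))

# Demo on a tiny hypothetical file
with open("demo.asm", "w") as f:
    f.write(".text:00401000 push esi\n")

feats = asm_image_features("demo.asm")
print(feats.shape, feats[0])  # (500,) and 46.0 (the byte value of '.')
```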

Step 3: Then I took all the asm output features, which are unigrams over the ASM files.

Step 4: As per the winner's solution video, opcode features are worthwhile. I referred to the Dchad notebook, extracted opcode bigram and trigram features, and took only the top 1000 features into my final dataset.

Step 5: Next, I extracted entropy features.
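Entropy features are typically the Shannon entropy of byte values (often computed over a sliding window per file section); packed or encrypted regions approach 8 bits per byte, while plain code and text score much lower. A minimal whole-buffer sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Entropy in bits per byte: 0.0 for a constant buffer,
    # 8.0 for a uniform distribution over all 256 byte values
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"AAAA"))                      # 0.0
print(round(shannon_entropy(bytes(range(256))), 2))  # 8.0
```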

Step 6: Finally, I took the top 1000 byte bigram features to improve performance.

Step 7: I trained my model with hyperparameter tuning using XGBoost, searching over: 'learning_rate': [0.01, 0.03, 0.05, 0.1, 0.15, 0.2], 'n_estimators': [100, 200, 500, 1000, 2000], 'max_depth': [3, 5, 10], 'colsample_bytree': [0.1, 0.3, 0.5, 1], 'subsample': [0.1, 0.3, 0.5, 1]
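For scale, this grid has 1,440 parameter combinations, which is why randomized search (e.g. RandomizedSearchCV over xgboost.XGBClassifier, sampling a few dozen combinations) is the practical choice here. A quick check with scikit-learn's ParameterGrid:

```python
from sklearn.model_selection import ParameterGrid

# The hyperparameter grid from the step above
param_grid = {
    "learning_rate": [0.01, 0.03, 0.05, 0.1, 0.15, 0.2],
    "n_estimators": [100, 200, 500, 1000, 2000],
    "max_depth": [3, 5, 10],
    "colsample_bytree": [0.1, 0.3, 0.5, 1],
    "subsample": [0.1, 0.3, 0.5, 1],
}

# 6 * 5 * 3 * 4 * 4 = 1440 combinations to search
print(len(ParameterGrid(param_grid)))  # 1440
```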

Step 8: After finding the best parameters, I trained my model, evaluated it on the test data, and got the following results. [image]

Step 9: After getting a loss of 0.011, I trained the XGBoost model using only the most important features, selected via Random Forest feature_importance, and got a test loss below 0.01. [image]
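The feature-selection step can be sketched as below; the synthetic data and the choice of keeping 20 features stand in for the real feature matrix and cutoff:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in data: 300 samples, 50 features, 9 classes
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=9, random_state=42)

# Fit a Random Forest purely to rank features by importance
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
top_idx = np.argsort(rf.feature_importances_)[::-1][:20]

# Keep only the top features; the final XGBoost model would be retrained on X_top
X_top = X[:, top_idx]
print(X_top.shape)  # (300, 20)
```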
