Using sklearn #3
As I look through the code with fresh eyes, I think I can work it out. The thing that was throwing me yesterday was the feature hashing/importing steps.
You're probably on the right track, but I'll just stress that you can use this code to vectorize the features only. At that point, you'll have a large feature space that you can read in and hand to whatever model you're interested in, including all those in scikit-learn.
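To illustrate the hand-off described above, here is a minimal sketch of fitting a scikit-learn model on vectorized features. The `X_train`/`y_train` arrays here are synthetic stand-ins for the arrays the vectorization step produces (in the real workflow you would load them from disk, e.g. via the repo's vectorized-features helper); the shapes and the `-1` filtering pattern are the assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for the vectorized EMBER features; in practice
# these come from the vectorization step (2,351 columns per sample).
rng = np.random.default_rng(0)
X_train = rng.random((200, 2351))
y_train = rng.integers(0, 2, size=200)

# Keep only labeled rows before supervised training (EMBER marks
# unlabeled samples with -1); here all rows are labeled, so the mask
# is a no-op, but the pattern is the same on the real data.
labeled = y_train != -1
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train[labeled], y_train[labeled])
score = clf.score(X_train[labeled], y_train[labeled])
```

Any other scikit-learn estimator slots into the same two lines (`fit`/`score`) once the feature matrix is in hand.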
To confirm: if I just want to vectorize and go from there, I should follow the "Import usage" steps up to: I might even want to take over earlier; can't wait to dig in!
That's right. If you complete those steps, then the
(Temp re-open) I imagine some data pre-processing is required to select only certain features, or to remove the "-1" labeled rows from the datasets for a purely supervised approach. I can remove them at the dataframe step, but the
> remove the "-1" labeled rows

Check out the `train_ember` function for how I filter out the -1 labeled rows for the ember benchmark model training: https://github.com/endgameinc/ember/blob/master/ember/__init__.py#L146-L160

> select certain features

If you want to select only certain columns from `X_train` or `X_test`, you can use numpy indexing to achieve this. If you want to select rows from the feature matrix based on the metadata dataframe, I would suggest doing something like:
Good luck!
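The code that originally accompanied the suggestions above didn't survive the thread, so here is a hedged sketch of the three patterns it described: masking out the -1 rows, numpy column indexing, and selecting rows via the metadata dataframe. The arrays, column indices, and dataframe columns are all hypothetical stand-ins.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the EMBER arrays and metadata dataframe.
X_train = np.arange(20, dtype=np.float64).reshape(5, 4)
y_train = np.array([0, -1, 1, 1, -1])
metadata = pd.DataFrame({"label": y_train})

# 1) Remove the -1 (unlabeled) rows, as train_ember does:
rows = y_train != -1
X_labeled, y_labeled = X_train[rows], y_train[rows]

# 2) Select certain feature columns with numpy indexing
#    (column indices here are arbitrary, for illustration):
X_subset = X_labeled[:, [0, 2]]

# 3) Select rows from the feature matrix based on the metadata
#    dataframe, by converting a dataframe filter into row positions:
idx = metadata.index[metadata["label"] != -1].to_numpy()
X_from_meta = X_train[idx]

print(X_labeled.shape, X_subset.shape, X_from_meta.shape)
```

Approaches 1 and 3 give the same rows here; 3 is useful when the filter criterion lives in the metadata (e.g. date or family) rather than in `y_train` itself.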
What is not clear is the mapping from the header columns to the numpy array needed to slice out a particular feature. For instance, if I only want to look at the imports info, how can I determine which array indices that corresponds to in `X_train`? After all, there are 2,351 feature columns in `X_train`/`X_test`. The FeatureHasher also makes it very difficult to characterize feature importance.
Since the hashing trick is used to convert, e.g., a ragged count of imports into a fixed-length vector, you'd only be able to back out "these columns are imports", but you have a many-to-one problem of many imports mapping to any one column. (One column corresponds to many imported names.) Hashing trick: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html

For any one section, you can figure out where the feature type begins by noting the order of features:

```python
features = [
    ByteHistogram(), ByteEntropyHistogram(), StringExtractor(), GeneralFileInfo(),
    HeaderFileInfo(), SectionInfo(), ImportsInfo(), ExportsInfo()
]
```

and noting that every FeatureType has a `dim`, so, e.g., `imports_offset = sum([fe.dim for fe in features[:6]])` and is
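Spelling out the offset arithmetic above: since each feature type occupies a contiguous block of columns, cumulative sums of the `dim` values give a column slice per feature type. The sketch below hard-codes the per-type dimensions rather than importing the EMBER classes, so it stands alone; verify these numbers against the `features.py` of the version you are using (they are taken from the original feature set, and they sum to the 2,351 columns mentioned above).

```python
# Per-feature-type dimensions, in the order the features list declares
# them. These values are assumptions to check against your features.py.
dims = {
    "ByteHistogram": 256,
    "ByteEntropyHistogram": 256,
    "StringExtractor": 104,
    "GeneralFileInfo": 10,
    "HeaderFileInfo": 62,
    "SectionInfo": 255,
    "ImportsInfo": 1280,
    "ExportsInfo": 128,
}

# Each feature type occupies columns [start, start + dim) of X_train.
offset = 0
slices = {}
for name, dim in dims.items():
    slices[name] = slice(offset, offset + dim)
    offset += dim

print(slices["ImportsInfo"])  # the columns holding the hashed imports
print(offset)                 # total dimensionality
```

With that in hand, `X_train[:, slices["ImportsInfo"]]` pulls out just the imports block for a model trained on imports alone.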
I have it working now. I dug into the code and figured out where each of those fixed-length feature blocks sits, so I can compare the various categories against each other. It's working well!
First off, thank you for this awesome dataset! I completely agree that this level of control over both benign and malware sets of this size has been a shortfall, based on my research. As a relatively new ML practitioner, I would like to use the dataset with more traditional sklearn modules instead of the provided ember ones. Apologies if this isn't a great place to ask, but what steps can I take to prep the dataset so I can then take over and apply, say, a basic LogisticRegression model to the import calls? Thanks!