This repository is a model zoo for PyTorch, TensorFlow, Keras, Gluon, LightGBM, scikit-learn, and other frameworks, with a lightweight functional interface that wraps access to recent, state-of-the-art deep learning and machine learning models and hyper-parameter search. It works across platforms and follows the logic of scikit-learn: fit, predict, transform, metrics, save, load, etc. More than 60 recent models (2018 onward) are available in these domains:
- Time Series,
- Text classification,
- Vision,
- Image generation, text generation,
- Gradient Boosting, Automatic Machine Learning tuning,
- Hyper-parameter search.
Main characteristics:
- Functional interface: reduces boilerplate code; well suited to scientific computing.
- JSON-based input: reduces boilerplate code; easy experiment management.
- Focus on moving research/script code to production/batch.
We are looking for contributors!
The goal is a simple framework for both machine learning and deep learning models, without boilerplate code.
A collection of models, a model zoo in PyTorch, TensorFlow, and Keras, allows richer model re-use, batching, and benchmarking. A unique and simple interface, zero boilerplate code, and recent state-of-the-art models/frameworks are the main strengths of mlmodels. Several domains are covered, such as computer vision, NLP, time series prediction, and tabular data classification.
Here you can find the usage guide.
If you want to contribute, see the contribution guide.
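All wrapped models share the same sklearn-like workflow: load a wrapper module, build a `Model` from parameter dictionaries, then `fit`, `predict`, `save`, `load`. The minimal sketch below uses the TensorFlow LSTM wrapper from the examples later in this README; the parameter values are illustrative only:

```python
# A minimal sketch of the shared fit/predict/save/load workflow,
# using the TensorFlow LSTM wrapper from the examples below.
from mlmodels.models import module_load

model_pars   = {"num_layers": 1, "size": 6, "size_layer": 128,
                "output_size": 6, "timestep": 4}            # illustrative values
data_pars    = {"data_path": "/folder/myfile.csv", "data_type": "pandas"}
compute_pars = {"learning_rate": 0.001}
out_pars     = {"path": "ztest_1lstm/", "model_path": "ztest_1lstm/model/"}

module = module_load(model_uri="model_tf.1_lstm.py")        # load the wrapper module
model  = module.Model(model_pars=model_pars, data_pars=data_pars,
                      compute_pars=compute_pars)            # create the model instance
model, sess = module.fit(model, data_pars=data_pars,
                         compute_pars=compute_pars, out_pars=out_pars)
ypred  = module.predict(model, sess, data_pars, compute_pars, out_pars)
module.save(model, save_pars={"path": out_pars["model_path"]})   # persist the fitted model
model2 = module.load(load_pars={"path": out_pars["model_path"]})
```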
Time Series:
- Montreal AI Nbeats, 2019: advanced interpretable time series neural network. [Link]
- Amazon DeepAR, 2019: multivariate time series neural network. [Link]
- Facebook Prophet, 2017: time series prediction. [Link]
- ARMDN, 2019: Associative and Recurrent Mixture Density Networks for advanced multivariate time series prediction. [Link]
- LSTM neural network prediction: Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction. [Link]
NLP:
- Sentence Transformers, 2019: embedding of full sentences using BERT. [Link]
- Transformers Classifier: using Transformer for text classification. [Link]
- TextCNN Pytorch, 2016: text CNN classifier. [Link]
- TextCNN Keras, 2016: text CNN classifier. [Link]
- Bidirectional LSTM with Conditional Random Field for Named Entity Recognition. [Link]
- DRMM: Deep Relevance Matching Model for Ad-hoc Retrieval. [Link]
- DRMMTKS: Deep Top-K Relevance Matching Model for Ad-hoc Retrieval. [Link]
- ARC-I: Convolutional Neural Network Architectures for Matching Natural Language Sentences. [Link]
- ARC-II: Convolutional Neural Network Architectures for Matching Natural Language Sentences. [Link]
- DSSM: Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. [Link]
- CDSSM: Learning Semantic Representations Using Convolutional Neural Networks for Web Search. [Link]
- MatchLSTM: Machine Comprehension Using Match-LSTM and Answer Pointer. [Link]
- DUET: Learning to Match Using Local and Distributed Representations of Text for Web Search. [Link]
- KNRM: End-to-End Neural Ad-hoc Ranking with Kernel Pooling. [Link]
- ConvKNRM: Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. [Link]
- ESIM: Enhanced LSTM for Natural Language Inference. [Link]
- BiMPM: Bilateral Multi-Perspective Matching for Natural Language Sentences. [Link]
- MatchPyramid: Text Matching as Image Recognition. [Link]
- Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN. [Link]
- aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model. [Link]
- MV-LSTM: [Link]
- DIIN: Natural Language Inference over Interaction Space. [Link]
- HBMP: Sentence Embeddings in NLI with Iterative Refinement Encoders. [Link]
TABULAR:
All scikit-learn models are supported, including:
linear_model.ElasticNet
linear_model.ElasticNetCV
linear_model.Lars
linear_model.LarsCV
linear_model.Lasso
linear_model.LassoCV
linear_model.LassoLars
linear_model.LassoLarsCV
linear_model.LassoLarsIC
linear_model.OrthogonalMatchingPursuit
linear_model.OrthogonalMatchingPursuitCV
svm.LinearSVC
svm.LinearSVR
svm.NuSVC
svm.NuSVR
svm.OneClassSVM
svm.SVC
svm.SVR
svm.l1_min_c
neighbors.KNeighborsClassifier
neighbors.KNeighborsRegressor
neighbors.KNeighborsTransformer
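Any of the estimators above can be used through the generic scikit-learn wrapper by passing the class name in `model_pars`, as the RandomForest example later in this README does. A hedged sketch, assuming `model_sklearn.sklearn.py` resolves `model_name` to the corresponding scikit-learn class and forwards the remaining keys (here `C`, chosen for illustration) as hyper-parameters:

```python
# Sketch: wrapping a scikit-learn estimator by class name.
# Assumes model_sklearn.sklearn.py resolves "model_name" to the sklearn class
# and forwards the other keys as hyper-parameters, as in the
# RandomForestClassifier example further down.
import mlmodels
from mlmodels.models import module_load

module = module_load(model_uri="model_sklearn.sklearn.py")

model_pars   = {"model_name": "LinearSVC", "C": 1.0}   # any class from the list above
data_pars    = {"mode": "test", "path": "../mlmodels/dataset", "data_type": "pandas"}
compute_pars = {"return_pred_not": False}
out_pars     = {"path": "../ztest"}

model = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)
```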
Binary neural prediction from tabular data:
- A Convolutional Click Prediction Model. [Link]
- Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction. [Link]
- Product-based Neural Networks for User Response Prediction. [Link]
- Wide & Deep Learning for Recommender Systems. [Link]
- DeepFM: A Factorization-Machine based Neural Network for CTR Prediction. [Link]
- Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction. [Link]
- Deep & Cross Network for Ad Click Predictions. [Link]
- Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks. [Link]
- Neural Factorization Machines for Sparse Predictive Analytics. [Link]
- xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems. [Link]
- AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks. [Link]
- Deep Interest Network for Click-Through Rate Prediction. [Link]
- Deep Interest Evolution Network for Click-Through Rate Prediction. [Link]
- Operation-aware Neural Networks for User Response Prediction. [Link]
- Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction. [Link]
- Deep Session Interest Network for Click-Through Rate Prediction. [Link]
- FiBiNET: Combining Feature Importance and Bilinear Feature Interaction for Click-Through Rate Prediction. [Link]
VISION:
Vision models (pre-trained):
- alexnet: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. [Link]
- densenet121: Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space. [Link]
- densenet169: Classification of TrashNet Dataset Based on Deep Learning Models. [Link]
- densenet201: Utilization of DenseNet201 for diagnosis of breast abnormality. [Link]
- densenet161: Automated classification of histopathology images using transfer learning. [Link]
- inception_v3: Menfish Classification Based on Inception_V3 Convolutional Neural Network. [Link]
- resnet18: Leveraging the VTA-TVM Hardware-Software Stack for FPGA Acceleration of 8-bit ResNet-18 Inference. [Link]
- resnet34: Automated Pavement Crack Segmentation Using Fully Convolutional U-Net with a Pretrained ResNet-34 Encoder. [Link]
- resnet50: Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes. [Link]
- resnet101: Classification of Cervical MR Images using ResNet101. [Link]
- resnet152: Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network. [Link]
- resnext50_32x4d: Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs using Deep Convolutional Neural Networks. [Link]
- resnext101_32x8d: Deep Learning Based Plant Part Detection in Greenhouse Settings. [Link]
- wide_resnet50_2: Identification of Tree Species by Trunk Images Using Deep Machine Learning. [Link]
- wide_resnet101_2: Identification of Tree Species by Trunk Images Using Deep Machine Learning. [Link]
- squeezenet1_0: Classification of Ice Crystal Habits Observed From Airborne Cloud Particle Imager by Deep Transfer Learning. [Link]
- squeezenet1_1: Benchmarking parts based face processing in-the-wild for gender recognition and head pose estimation. [Link]
- vgg11: TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation. [Link]
- vgg13: Convolutional Neural Network for Raindrop Detection. [Link]
- vgg16: Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 Encoder. [Link]
- vgg19: A New Transfer Learning Based on VGG-19 Network for Fault Diagnosis. [Link]
- vgg11_bn: Shifted Spatial-Spectral Convolution for Deep Neural Networks. [Link]
- vgg13_bn: DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation. [Link]
- vgg16_bn: Partial Convolution based Padding. [Link]
- vgg19_bn: NeurIPS 2019 Disentanglement Challenge: Improved Disentanglement through Learned Aggregation of Convolutional Feature Maps. [Link]
- googlenet: On the Performance of GoogLeNet and AlexNet Applied to Sketches. [Link]
- shufflenet_v2_x0_5: Exemplar Normalization for Learning Deep Representation. [Link]
- shufflenet_v2_x1_0: Tree Species Identification by Trunk Images Using Deep Machine Learning. [Link]
- mobilenet_v2: MobileNetV2: Inverted Residuals and Linear Bottlenecks. [Link]
More resources are available in the model list here.
Dev documentation: link
Starting contributing: link
Colab creation: link
Model benchmarking: link
Adding new models: link
Core compute: link
User documentation: link
Colab: link
Installation Guide:
Copy templates, datasets, and examples to your working folder:
`ml_models --init /yourworkingFolder/`
Command-line entry points:
`ml_optim`
`ml_models`
Testing: debugging process Read-more
Tutorial: Code design and testing Read-more
Tests: GitHub Actions to add Read-more
Tutorial: New contributors Read-more
Tutorial: Usage of the dataloader Read-more
Tutorial: Use Colab for code development Read-more
Tutorial: Do a PR or add a model in mlmodels Read-more
Tutorial: Using the online editor for mlmodels Read-more
## Example Notebooks

### LSTM example in TensorFlow (Example notebook)
```python
# import library
import mlmodels

#### Define model and data definitions
model_uri = "model_tf.1_lstm.py"

ncol_input, ncol_output = 6, 6   # set to the number of input/output columns of your data
model_pars = {"num_layers": 1,
              "size": ncol_input, "size_layer": 128, "output_size": ncol_output, "timestep": 4,
             }
data_pars    = {"data_path": "/folder/myfile.csv", "data_type": "pandas"}
compute_pars = {"learning_rate": 0.001}
out_pars     = {"path": "ztest_1lstm/", "model_path": "ztest_1lstm/model/"}
save_pars    = {"path": "ztest_1lstm/model/"}
load_pars    = {"path": "ztest_1lstm/model/"}

#### Load Parameters and Train
from mlmodels.models import module_load

module = module_load(model_uri=model_uri)                                                     # Load file definition
model  = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)  # Create Model instance
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)

#### Inference
metrics_val = module.fit_metrics(model, sess, data_pars, compute_pars, out_pars)  # get stats
ypred = module.predict(model, sess, data_pars, compute_pars, out_pars)            # predict pipeline
```
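The fitted model can then be persisted and restored with the `save_pars`/`load_pars` dictionaries defined above, using the same `module.save`/`module.load` calls as the ARMDN example at the end of this README:

```python
#### Save / Load (same pattern as the ARMDN example below)
module.save(model, save_pars=save_pars)    # persist the fitted model to ztest_1lstm/model/
model2 = module.load(load_pars=load_pars)  # restore it later
```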
### AutoML example in Gluon (Example notebook)
```python
# import library
import mlmodels
import autogluon as ag

#### Define model and data definitions
model_uri = "model_gluon.gluon_automl.py"
data_pars = {"train": True, "uri_type": "amazon_aws", "dt_name": "Inc"}

model_pars = {"model_type": "tabular",
              "learning_rate": ag.space.Real(1e-4, 1e-2, default=5e-4, log=True),
              "activation": ag.space.Categorical(*tuple(["relu", "softrelu", "tanh"])),
              "layers": ag.space.Categorical(
                  *tuple([[100], [1000], [200, 100], [300, 200, 100]])),
              "dropout_prob": ag.space.Real(0.0, 0.5, default=0.1),
              "num_boost_round": 10,
              "num_leaves": ag.space.Int(lower=26, upper=30, default=28),  # default must lie within [lower, upper]
             }

compute_pars = {
    "hp_tune": True,
    "num_epochs": 10,
    "time_limits": 120,
    "num_trials": 5,
    "search_strategy": "skopt",
}

out_pars = {
    "out_path": "dataset/",
}

#### Load Parameters and Train
from mlmodels.models import module_load

module = module_load(model_uri=model_uri)                                # Load file definition
model  = module.Model(model_pars=model_pars, compute_pars=compute_pars)  # Create Model instance
model, sess = module.fit(model, data_pars=data_pars, model_pars=model_pars,
                         compute_pars=compute_pars, out_pars=out_pars)

#### Inference
ypred = module.predict(model, data_pars, compute_pars, out_pars)  # predict pipeline
```
### RandomForest example in Scikit-learn (Example notebook)
```python
# import library
import mlmodels

model_uri  = "model_sklearn.sklearn.py"
model_pars = {"model_name": "RandomForestClassifier", "max_depth": 4, "random_state": 0}
data_pars  = {"mode": "test", "path": "../mlmodels/dataset", "data_type": "pandas"}
compute_pars = {"return_pred_not": False}
out_pars   = {"path": "../ztest"}

from mlmodels.models import module_load

module = module_load(model_uri=model_uri)                                                     # Load file definition
model  = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)  # Create Model instance
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # fit the model

ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # predict pipeline
```
---
### TextCNN example in Keras ([Example notebook](example/textcnn.ipynb))
```python
# import library
import mlmodels

#### Define model and data definitions
model_uri = "model_keras.textcnn.py"

data_pars  = {"path": "../mlmodels/dataset/text/imdb.csv", "train": 1,
              "maxlen": 400, "max_features": 10}
model_pars = {"maxlen": 400, "max_features": 10, "embedding_dims": 50}
compute_pars = {"engine": "adam", "loss": "binary_crossentropy", "metrics": ["accuracy"],
                "batch_size": 32, "epochs": 1, "return_pred_not": False}
out_pars = {"path": "ztest/model_keras/textcnn/"}

#### Load Parameters and Train
from mlmodels.models import module_load

module = module_load(model_uri=model_uri)                                                     # Load file definition
model  = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)  # Create Model instance
module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model

#### Inference
data_pars["train"] = 0
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)
```
### Using a JSON config file for input (Example notebook, JSON file)
```python
# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri = "model_tf.1_lstm.py"
module = module_load(model_uri=model_uri)   # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    "choice": "json",
    "config_mode": "test",
    "data_path": "../mlmodels/example/1_lstm.json",
})

#### Load parameters and train
model = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)  # Create Model instance
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # fit the model

#### Check inference
ypred = module.predict(model, sess=sess, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # predict pipeline
```
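The JSON file holds the same four parameter groups as the inline examples, keyed by `config_mode`. The exact contents of `1_lstm.json` live in the repository; the sketch below is only an illustration of that structure, reusing the values from the inline LSTM example above:

```python
# Illustrative structure of a config JSON (a sketch, not the actual file contents).
# Each config_mode ("test", "prod", ...) maps to the four parameter groups.
config_sketch = {
    "test": {
        "model_pars":   {"num_layers": 1, "size": 6, "size_layer": 128,
                         "output_size": 6, "timestep": 4},
        "data_pars":    {"data_path": "/folder/myfile.csv", "data_type": "pandas"},
        "compute_pars": {"learning_rate": 0.001},
        "out_pars":     {"path": "ztest_1lstm/", "model_path": "ztest_1lstm/model/"},
    }
}
```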
### Using Scikit-learn's SVM for the Titanic problem from a JSON file (Example notebook, JSON file)
```python
# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri = "model_sklearn.sklearn.py"
module = module_load(model_uri=model_uri)   # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    "choice": "json",
    "config_mode": "test",
    "data_path": "../mlmodels/example/sklearn_titanic_svm.json",
})

#### Load Parameters and Train
model = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)  # Create Model instance
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # fit the model

#### Inference
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # predict pipeline
ypred

#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv("../mlmodels/dataset/tabular/titanic_train_preprocessed.csv")["Survived"].values
roc_auc_score(y, ypred)
```
### Using Scikit-learn's Random Forest for the Titanic problem from a JSON file (Example notebook, JSON file)
```python
# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri = "model_sklearn.sklearn.py"
module = module_load(model_uri=model_uri)   # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    "choice": "json",
    "config_mode": "test",
    "data_path": "../mlmodels/example/sklearn_titanic_randomForest.json",
})

#### Load Parameters and Train
model = module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)  # Create Model instance
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # fit the model

#### Inference
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # predict pipeline
ypred

#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv("../mlmodels/dataset/tabular/titanic_train_preprocessed.csv")["Survived"].values
roc_auc_score(y, ypred)
```
### Using AutoGluon for the Titanic problem from a JSON file (Example notebook, JSON file)
```python
# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri = "model_gluon.gluon_automl.py"
module = module_load(model_uri=model_uri)   # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(
    choice="json",
    config_mode="test",
    data_path="../mlmodels/example/gluon_automl.json",
)

#### Load Parameters and Train
model = module.Model(model_pars=model_pars, compute_pars=compute_pars)  # Create Model instance
model = module.fit(model, model_pars=model_pars, data_pars=data_pars,
                   compute_pars=compute_pars, out_pars=out_pars)        # fit the model
model.model.fit_summary()

#### Check inference
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # predict pipeline

#### Check metrics
model.model.model_performance

import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv("../mlmodels/dataset/tabular/titanic_train_preprocessed.csv")["Survived"].values
roc_auc_score(y, ypred)
```
### Using hyper-parameter search (Optuna) for the Titanic problem from a JSON file (Example notebook, JSON file)
```python
# import library
from mlmodels.models import module_load
from mlmodels.optim import optim
from mlmodels.util import params_json_load, path_norm   # path_norm is used below

#### Load model and data definitions from json
### hypermodel_pars, model_pars, ....
model_uri   = "model_sklearn.sklearn.py"
config_path = path_norm("example/hyper_titanic_randomForest.json")
config_mode = "test"   ### test/prod

#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode=config_mode)
print(hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)

module = module_load(model_uri=model_uri)

model_pars_update = optim(
    model_uri       = model_uri,
    hypermodel_pars = hypermodel_pars,
    model_pars      = model_pars,
    data_pars       = data_pars,
    compute_pars    = compute_pars,
    out_pars        = out_pars,
)

#### Load Parameters and Train
model = module.Model(model_pars=model_pars_update, data_pars=data_pars, compute_pars=compute_pars)
model, sess = module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)

#### Check inference
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # predict pipeline
ypred

#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv(path_norm("dataset/tabular/titanic_train_preprocessed.csv"))
y = y["Survived"].values
roc_auc_score(y, ypred)
```
### Using LightGBM for the Titanic problem from a JSON file (Example notebook, JSON file)
```python
# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm
from jsoncomment import JsonComment ; json = JsonComment()

#### Load model and data definitions from json
# Model definition
model_uri = "model_sklearn.model_lightgbm.py"
module    = module_load(model_uri=model_uri)

# Path to JSON
data_path = "../dataset/json/lightgbm_titanic.json"

# Model Parameters: inject model_pars, data_pars, compute_pars, out_pars into globals
pars = json.load(open(data_path, mode="r"))
for key, pdict in pars.items():
    globals()[key] = path_norm_dict(pdict)   # normalize paths

#### Load Parameters and Train
model = module.Model(model_pars, data_pars, compute_pars)              # create model instance
model, session = module.fit(model, data_pars, compute_pars, out_pars)  # fit model

#### Check inference
ypred = module.predict(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)  # get predictions
ypred

#### Check metrics
metrics_val = module.fit_metrics(model, data_pars, compute_pars, out_pars)
metrics_val
```
### Using Vision CNN ResNet18 for the MNIST dataset (Example notebook, JSON file)
```python
# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm, params_json_load
from jsoncomment import JsonComment ; json = JsonComment()

#### Model URI and Config JSON
model_uri   = "model_tch.torchhub.py"
config_path = path_norm("model_tch/torchhub_cnn.json")
config_mode = "test"   ### test/prod

#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode=config_mode)
print(hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)

#### Setup Model
module = module_load(model_uri)
model  = module.Model(model_pars, data_pars, compute_pars)

#### Fit
model, session = module.fit(model, data_pars, compute_pars, out_pars)       #### fit model
metrics_val = module.fit_metrics(model, data_pars, compute_pars, out_pars)  #### check fit metrics
print(metrics_val)

#### Inference
ypred = module.predict(model, session, data_pars, compute_pars, out_pars)
print(ypred)
```
### Using ARMDN for Time Series (Example notebook, JSON file)
```python
# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm, params_json_load
from jsoncomment import JsonComment ; json = JsonComment()

#### Model URI and Config JSON
model_uri   = "model_keras.ardmn.py"
config_path = path_norm("model_keras/ardmn.json")
config_mode = "test"   ### test/prod

#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode=config_mode)
print(hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)

#### Setup Model
module = module_load(model_uri)
model  = module.Model(model_pars, data_pars, compute_pars)

#### Fit
model, session = module.fit(model, data_pars, compute_pars, out_pars)       #### fit model
metrics_val = module.fit_metrics(model, data_pars, compute_pars, out_pars)  #### check fit metrics
print(metrics_val)

#### Inference
ypred = module.predict(model, session, data_pars, compute_pars, out_pars)
print(ypred)

#### Save/Load
module.save(model, save_pars={"path": out_pars["path"] + "/model/"})
model2 = module.load(load_pars={"path": out_pars["path"] + "/model/"})
```