Isotree Isolation forest generating large model pickle file #48

Closed
jimmychordia opened this issue May 11, 2023 · 3 comments

Comments

@jimmychordia

jimmychordia commented May 11, 2023

I am building an anomaly detection model using isotree. If I dump the model with joblib without any compression, the resulting pickle file is about 65GB. Loading that file back into a Python object for real-time scoring of new data requires around 256GB of RAM. Is there a better way to do this, or any tips on reducing the model size without impacting the accuracy of the model?

@david-cortes
Owner

Thanks for the bug report. A few questions:

  • Is this happening with the latest version of isotree? Currently, this is version 0.5.17-5.
  • Does it also happen if you use the method export_model instead of pickle? (a short comparison sketch follows this list)
  • Are you passing non-default parameters to pickle? (e.g. some specific protocol, compression, etc.)
  • What kind of hyperparameters are you using?
  • Are you using a regular HDD/SSD, or are you trying to write to e.g. a networked drive or some other special storage medium?
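
For reference, a quick way to check the second and third questions together might look like the sketch below. It assumes an already-fitted model named iso_model, and the file names are just examples; export_model's exact arguments can vary between isotree versions, so check the docs for the version you have installed.

    import os
    import joblib

    # Plain joblib dump without compression (as described in the issue)
    joblib.dump(iso_model, "model.pkl")

    # isotree's own serializer, as suggested in the second question above
    iso_model.export_model("model.isotree")

    # Compare the resulting sizes on disk
    for f in ("model.pkl", "model.isotree"):
        print(f, round(os.path.getsize(f) / 1e9, 2), "GB")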

@jimmychordia
Author

jimmychordia commented May 12, 2023

Below are my answers under each question:

  1. Is this happening with the latest version of isotree? Currently, this is version 0.5.17-5.
    Using 0.5.17.

  2. Does it also happen if you use the method export_model instead of pickle?
    Yes, with both methods the exported file is 65GB.

  3. Are you passing non-default parameters to pickle? (e.g. some specific protocol, compression, etc.)
    joblib.dump(iso_model, 'model.pkl'.format(prod))
    joblib.dump(iso_model, 'model_compresses.pkl'.format(prod), compress=9)

    With export_model the file is the same size.

  4. What kind of hyperparameters are you using?
    iso_model = IsolationForest(ntrees=20, nthreads=1, categ_split_type="single_categ",
                                scoring_metric="density", ndim=1,
                                missing_action="impute", random_state=1)
    iso_model.fit_transform(data_sub.drop(drop, axis=1))

  5. Are you using a regular HDD/SSD, or are you trying to write to e.g. a networked drive or some other special storage medium?
    I am running the code on a SageMaker Studio Notebook and using S3 for storage.

@david-cortes
Owner

Thanks for the information. So it seems there's no bug here: if you call fit_transform, then no row sub-sampling is applied to the trees, and if you use the default value for max_depth, that in turn is determined from the full number of rows. If the number of rows is very large, the depth of each tree will also be very large, which leads to very heavy models.

Additionally, when the model is used as a missing-value imputer (which is what happens when you call fit_transform), the trees it builds for imputation are expected to be much heavier than the trees used only for producing anomaly scores.

As for what you could do: if the number of rows is very large, you should call fit and perhaps manually set the number of rows per tree (parameter sample_size). The models should then become much smaller. Also, if you don't plan to use the model as a missing value imputer, you shouldn't call fit_transform, nor pass build_imputer.
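
To make that concrete, a minimal sketch of the suggested workflow might look like the following. It is an illustration under assumptions, not a drop-in fix: sample_size=10_000 and compress=3 are arbitrary placeholder values, and data_sub / drop are taken from the snippet quoted earlier in the thread.

    import joblib
    from isotree import IsolationForest

    # Fit for anomaly scoring only: explicit per-tree row sub-sampling,
    # and no fit_transform / build_imputer, so no imputer trees are built.
    iso_model = IsolationForest(
        ntrees=20,
        ndim=1,
        sample_size=10_000,           # placeholder; tune to your data
        categ_split_type="single_categ",
        scoring_metric="density",
        missing_action="impute",
        nthreads=1,
        random_state=1,
    )

    X = data_sub.drop(drop, axis=1)   # same inputs as in the earlier snippet
    iso_model.fit(X)                  # fit, not fit_transform

    scores = iso_model.predict(X)     # anomaly scores for the fitted data

    # The resulting object should serialize to a much smaller file.
    joblib.dump(iso_model, "model_small.pkl", compress=3)

With sample_size set explicitly, the default max_depth is derived from the per-tree sample rather than from the full dataset, which is what keeps the individual trees, and hence the serialized model, small.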
