Hi, all
I followed the code in test_trees.py. However, I got an error with a loaded pyspark model, while the freshly trained pyspark model works fine. The error is:
AssertionError: The background dataset you provided does not cover all the leaves in the model, so TreeExplainer cannot run with the feature_perturbation="tree_path_dependent" option! Try providing a larger background dataset, no background dataset, or using feature_perturbation="interventional".
It seems fully_defined_weighting turns out to be False after loading the saved model, whereas it is True when the model has just been trained. Something seems to be wrong in the SingleTree class, but I couldn't pin it down (I suspect node_sample_weight is the key).
In [84]: explainer_train.model.fully_defined_weighting
Out[84]: True
In [85]: explainer_model.model.fully_defined_weighting
Out[85]: False
    # ensure that the passed background dataset lands in every leaf
    if np.min(self.trees[i].node_sample_weight) <= 0:
        self.fully_defined_weighting = False
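The check quoted above can be reproduced in isolation. The sketch below uses two hypothetical node_sample_weight arrays (the values are made up for illustration): one with strictly positive per-node sample counts, as a freshly trained tree would carry, and one where a weight has come back as zero after a save/load round-trip, which is the assumption behind this report.

    import numpy as np

    # Hypothetical per-node sample weights for a small tree.
    trained_weights = np.array([100.0, 60.0, 40.0, 25.0, 15.0])
    # Same tree after a save/load round-trip, with one weight lost to zero
    # (assumption, mimicking the symptom described above).
    loaded_weights = np.array([100.0, 60.0, 0.0, 25.0, 15.0])

    def fully_defined(node_sample_weight):
        # Mirrors the SHAP condition quoted above: any non-positive weight
        # means some node is not covered, so the tree_path_dependent
        # weighting is not fully defined.
        return not (np.min(node_sample_weight) <= 0)

    print(fully_defined(trained_weights))  # True
    print(fully_defined(loaded_weights))   # False

If the loaded model really does drop or zero out node_sample_weight, this single zero is enough to flip fully_defined_weighting to False and trigger the AssertionError.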
The code I used is below: