Handle floating point values more accurately #277
```diff
@@ -1,5 +1,3 @@
-import numpy as np
-
 from m2cgen import ast
 from m2cgen.assemblers import utils
 from m2cgen.assemblers.base import ModelAssembler
@@ -49,11 +47,5 @@ def _assemble_leaf(self, node_id):

     def _assemble_cond(self, node_id):
         feature_idx = self._tree.feature[node_id]
-        threshold = self._tree.threshold[node_id]
-
-        # sklearn's trees internally work with float32 numbers, so in order
-        # to have consistent results across all supported languages, we convert
-        # all thresholds into float32.
-        threshold_num_val = ast.NumVal(threshold, dtype=np.float32)
-
+        threshold_num_val = ast.NumVal(self._tree.threshold[node_id])
         return utils.lte(ast.FeatureRef(feature_idx), threshold_num_val)
```

Review comment: Refer to #190 (review). Now the threshold matches its original type in scikit-learn.
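A minimal sketch (not part of the PR) of why dropping the `dtype=np.float32` cast matters: scikit-learn stores tree thresholds as float64, so narrowing them to float32 and back changes the value that ends up in the generated code.

```python
import numpy as np

# A float64 threshold, as sklearn's tree_.threshold array stores it.
threshold = 0.1234567890123456

# Round-tripping through float32 (what the old dtype=np.float32 cast did)
# loses precision beyond ~7 significant digits.
narrowed = float(np.float32(threshold))

print(threshold)  # the original float64 value
print(narrowed)   # the value after a float32 round-trip
assert narrowed != threshold
```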
```diff
@@ -1,5 +1,7 @@
 import re

+import numpy as np
+
 from collections import namedtuple
 from functools import lru_cache
 from math import ceil, log
@@ -22,3 +24,7 @@ def _get_handler_name(expr_tpe):

 def _normalize_expr_name(name):
     return re.sub("(?!^)([A-Z]+)", r"_\1", name).lower()
+
+
+def format_float(value):
+    return np.format_float_positional(value, unique=True, trim="0")
```
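A short illustration (an assumption-free restatement of the helper added above) of what `np.format_float_positional` does here: `unique=True` emits the shortest decimal string that round-trips to the same value in the input's own dtype, so a float32 prints without the noise digits its float64 widening would show, and `trim="0"` keeps a single zero after the decimal point.

```python
import numpy as np

def format_float(value):
    # unique=True: shortest string that round-trips in the value's own dtype;
    # trim="0": strip trailing zeros but keep one digit after the point.
    return np.format_float_positional(value, unique=True, trim="0")

print(format_float(np.float32(0.1)))         # "0.1" — shortest repr for float32
print(format_float(1.0))                     # "1.0"
print(format_float(float(np.float32(0.1))))  # "0.10000000149011612" — the widened float64
```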
Review comment: Maybe …

Review comment:
https://github.com/dmlc/xgboost/blob/1d22a9be1cdeb53dfa9322c92541bc50e82f3c43/src/tree/tree_model.cc#L316
https://github.com/dmlc/xgboost/blob/1d22a9be1cdeb53dfa9322c92541bc50e82f3c43/include/xgboost/tree_model.h#L152-L155
https://github.com/dmlc/xgboost/blob/1d22a9be1cdeb53dfa9322c92541bc50e82f3c43/include/xgboost/base.h#L110-L111
Review comment: Interestingly, `weight` and `bias` are also `float` internally:
https://github.com/dmlc/xgboost/blob/1d22a9be1cdeb53dfa9322c92541bc50e82f3c43/src/gbm/gblinear_model.h#L81-L82
https://github.com/dmlc/xgboost/blob/1d22a9be1cdeb53dfa9322c92541bc50e82f3c43/src/gbm/gblinear_model.h#L90-L91
But on the Python side they are loaded into a `double` numpy array:
https://github.com/dmlc/xgboost/blob/12110c900eff0aaa06045ecf717e6c5a36a164d5/python-package/xgboost/sklearn.py#L717-L718
https://github.com/dmlc/xgboost/blob/12110c900eff0aaa06045ecf717e6c5a36a164d5/python-package/xgboost/sklearn.py#L748
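A hypothetical illustration (not code from XGBoost or this PR) of the `float` → `double` widening the comment above describes: a value stored as a C `float` (float32) inside the booster, once copied into a float64 numpy array on the Python side, carries float32 noise digits rather than the original decimal.

```python
import numpy as np

# A weight stored internally as C float (float32), e.g. 0.1.
weight_f32 = np.float32(0.1)

# Python-side load into a float64 array, as xgboost's sklearn wrapper does.
weights = np.asarray([weight_f32], dtype=np.float64)

print(weights[0])  # 0.10000000149011612, not 0.1
assert weights[0] == float(np.float32(0.1))
```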