
[MRG+1] Convert y in GradientBoosting to float64 instead of float32 #13524

Merged
merged 6 commits into scikit-learn:master from adrinjalali:gbc/float32 on Apr 9, 2019

Conversation

@adrinjalali (Member) commented Mar 26, 2019

Fixes #9098

If y is not already float64, each of the decision trees converts it to a double internally, so we do the conversion once up front instead.
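
For context, a minimal NumPy sketch of the idea (the values and variable names are made up for illustration): casting y to float64 once in fit means the per-tree input validation no longer has to copy it.

    import numpy as np

    # Hypothetical toy target: if y arrives as float32, each DecisionTreeRegressor
    # would otherwise cast it to a 64-bit float during its own validation.
    y = np.asarray([0.0, 1.0, 1.0, 0.0], dtype=np.float32)

    # Converting once up front avoids repeating that copy for every tree.
    y = np.ascontiguousarray(y, dtype=np.float64)
    print(y.dtype)  # float64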

@adrinjalali adrinjalali changed the title from "fix y validation" to "Convert y in GradientBoosting to float64 instead of float32" Mar 26, 2019
@@ -1432,7 +1432,7 @@ def fit(self, X, y, sample_weight=None, monitor=None):
         self._clear_state()

         # Check input
-        X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
+        X = check_array(X, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
Member
It's worth commenting why check_X_y is not used.
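
A hedged sketch of the pattern being discussed (the toy X and y below are illustrative, not the estimator's actual code): check_X_y forces X and y to share one dtype, so validating them separately lets X stay float32 while y is cast to float64 for the trees.

    import numpy as np
    from scipy import sparse
    from sklearn.utils.validation import check_array, column_or_1d

    # Toy inputs, purely for illustration.
    X = sparse.random(6, 3, density=0.5, format='csr', random_state=0)
    y = np.array([0, 1, 1, 0, 1, 0])

    # check_X_y would apply a single dtype to both arrays; splitting the
    # validation keeps X as float32 (DTYPE) while y becomes float64 (DOUBLE).
    X = check_array(X, accept_sparse=['csr', 'csc', 'coo'], dtype=np.float32)
    y = column_or_1d(y, warn=True).astype(np.float64, copy=False)
    print(X.dtype, y.dtype)  # float32 float64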

@adrinjalali adrinjalali added this to the 0.21 milestone Mar 27, 2019
@adrinjalali adrinjalali changed the title from "Convert y in GradientBoosting to float64 instead of float32" to "[MRG+1] Convert y in GradientBoosting to float64 instead of float32" Mar 27, 2019
@@ -44,7 +44,7 @@
 from time import time
 from ..model_selection import train_test_split
 from ..tree.tree import DecisionTreeRegressor
-from ..tree._tree import DTYPE
+from ..tree._tree import DTYPE, DOUBLE
Member

It is interesting how this is our naming convention for the dtypes of X and y. In the future we may consider X_DTYPE and Y_DTYPE.
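
For reference, a small check of what those aliases resolve to (assuming the module-level names in sklearn.tree._tree are unchanged in this version):

    import numpy as np
    from sklearn.tree._tree import DTYPE, DOUBLE

    # DTYPE is the dtype used for feature matrices X, DOUBLE the one for targets y.
    print(DTYPE is np.float32, DOUBLE is np.float64)  # expected: True True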

adrinjalali (Member Author)

Yeah I know. You're the second +1 here now @thomasjpfan, wanna push the green button? ;)

@jnothman jnothman merged commit eda99f3 into scikit-learn:master Apr 9, 2019
@jnothman (Member) commented Apr 9, 2019

Ta @adrinjalali

@adrinjalali adrinjalali deleted the gbc/float32 branch April 25, 2019 11:25
jeremiedbb pushed a commit to jeremiedbb/scikit-learn that referenced this pull request Apr 25, 2019
xhluca pushed a commit to xhluca/scikit-learn that referenced this pull request Apr 28, 2019
koenvandevelde pushed a commit to koenvandevelde/scikit-learn that referenced this pull request Jul 12, 2019