Incremental Loads #56

Closed
rkirana opened this issue Aug 30, 2014 · 8 comments

@rkirana

rkirana commented Aug 30, 2014

XGBoost is a great package. Thanks for writing this.

When the dataset is very big and does not fit in memory, Vowpal Wabbit has a nice way of building the model incrementally: it loads chunks into memory, builds a model, and updates the model with the new data as it is loaded. It would be great to have this feature in xgboost.

@tqchen
Member

tqchen commented Aug 30, 2014

Hi, the tree construction algorithm works differently from SGD: SGD can be incremental, while tree construction is batch in nature, since you want to see all the data before you decide which split is best. So it is not as straightforward as what VW does.

@pommedeterresautee
Member

In the link you put in your README, they say they are working on an out-of-core algorithm.

Gradient Boosted Trees in GLC is a fork of XGBoost, an open source C++ implementation of GBM which is 20 times faster than scikit-learn. Tianqi Chen, the author of XGBoost, is currently interning at GraphLab, helping us improve the toolkit for better scalability. In the coming release, GBM will support out-of-core computation so the data does not have to fit in memory.

I don't know if you are still working with them, but will this algorithm be implemented in XGBoost? It would be awesome to add a "challenger" to Vowpal! And my big datasets would be so happy (I can hear them asking for it).

Kind regards,
Michaël

@tqchen
Member

tqchen commented Dec 26, 2014

Doing out-of-core trees would not be as straightforward, and would possibly involve approximation and a speed slow-down. My personal next goal for xgb is a distributed version for even larger scale. I will come back to out-of-core after that.

In the meantime, you can also try the GLC version, which supports out-of-core computation, backed by GLC's SFrame.

@pommedeterresautee
Member

@rkirana the package FeatureHashing (https://github.com/wush978/FeatureHashing) can do the feature hashing for you (like Vowpal). For a dataset with many features, it would be easy to load a sample of the original dataset (say the first 10%), hash it to get a lower number of features, then do the same for the next 10% of the dataset, hash it, merge the new matrix with the previous hashed matrix, and so on.

That way, you would reduce the size of the dataset and be able to load into memory one that would not fit otherwise.
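
A minimal Python sketch of this chunked hashing idea, using scikit-learn's FeatureHasher as an analogue of the R FeatureHashing package mentioned above; the file name, chunk size, and "label" column are illustrative assumptions, not from the thread:

```python
import pandas as pd
from scipy.sparse import vstack
from sklearn.feature_extraction import FeatureHasher

# Fixed-width hashed feature space: every chunk maps into the same columns.
hasher = FeatureHasher(n_features=2**18, input_type="dict")
hashed_chunks, labels = [], []

for chunk in pd.read_csv("big_dataset.csv", chunksize=100_000):  # hypothetical file
    labels.append(chunk["label"])  # hypothetical target column
    rows = chunk.drop(columns=["label"]).astype(str).to_dict(orient="records")
    # Hash each chunk of raw rows into a sparse matrix with a fixed number of columns.
    hashed_chunks.append(hasher.transform(rows))

# Stack the hashed chunks: all observations kept, but far fewer (fixed) columns.
X = vstack(hashed_chunks)
y = pd.concat(labels)
```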

@tqchen
Member

tqchen commented Mar 24, 2015

This is not normally the common scenario. Usually for trees there won't be as many features. And as I said, it is not as straightforward as SGD to do the incremental thing.


@pommedeterresautee
Member

What I present in my comment is just an idea to incrementally build a hashed dataset, to avoid having the full dataset in memory (all columns for all observations; after hashing you get fewer columns but keep all observations), while the learning part would stay classical (and not incremental).

However, I was also thinking of going deeper: learning on part of the dataset, then continuing the learning on the second part, and so on (as xgboost supports continuing a previous training), as in the sketch below. I understand from your comment that it won't work as expected because of the way gradient boosting works.

I imagine that the trees built during the second pass will get a lower weight than the trees learned during the analysis of the first part, because of the gradient descent mechanism, making it difficult to learn new patterns that appear in the second (and following) parts of the dataset but not in the first part.
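
A minimal sketch of this continued-training idea with the xgboost Python API, passing the previous booster back in through the `xgb_model` argument of `xgb.train`; the file names and parameters are illustrative assumptions:

```python
import xgboost as xgb

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}

booster = None  # no model yet for the first chunk
for part in ["part1.libsvm", "part2.libsvm", "part3.libsvm"]:  # hypothetical files
    dtrain = xgb.DMatrix(part)  # newer xgboost versions may need "...?format=libsvm"
    # Each call adds num_boost_round new trees on top of the existing booster;
    # those new trees only ever see the current chunk, which is the accuracy
    # concern raised in this thread.
    booster = xgb.train(params, dtrain, num_boost_round=50, xgb_model=booster)
```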

I just looked at the papers available on Google Scholar, and very few are about incremental boosting.

Is there a paper of interest you are aware of, available on the Internet, about incremental gradient boosting?

Kind regards,
Michaël

@tqchen
Member

tqchen commented Mar 25, 2015

Boosting is incremental in nature. However, there can be accuracy issues when not using the entire data for each tree. There will also be data structure issues. So yes, it is possible, but not trivial.

Tianqi

@tqchen
Member

tqchen commented Apr 19, 2015

The discussion of the external memory version has been moved to #244.
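
For reference, a minimal sketch of the external-memory mode that later grew out of #244: appending "#" plus a cache file name to the data path asks xgboost to build an on-disk cache and stream batches from it instead of holding everything in memory. The paths here are hypothetical and the exact URI syntax depends on the xgboost version:

```python
import xgboost as xgb

# "train.libsvm" is a hypothetical libsvm-format file on disk; "#dtrain.cache"
# requests an external-memory cache rather than a fully in-memory DMatrix.
dtrain = xgb.DMatrix("train.libsvm#dtrain.cache")
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=100)
```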
