What are the ways of treating missing values in XGBoost? #21

Closed

naggar1 opened this issue Aug 12, 2014 · 20 comments

naggar1 commented Aug 12, 2014

Generally, does the model performance get better with that?

tqchen commented Aug 12, 2014

XGBoost naturally accepts a sparse feature format: you can directly feed the data in as a sparse matrix that contains only the non-missing values.

That is, features that are not present in the sparse feature matrix are treated as 'missing'. XGBoost will handle it internally and you do not need to do anything about it.
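
A minimal sketch of both ways of presenting this in the Python package (toy data and parameter choices are hypothetical):

```python
import numpy as np
import scipy.sparse as sp
import xgboost as xgb

# Hypothetical toy data: NaN marks a missing value.
X = np.array([[1.0, np.nan, 3.0],
              [np.nan, 2.0, 6.0],
              [4.0, 5.0, np.nan]])
y = np.array([1, 0, 1])

# Option 1: dense matrix, telling DMatrix which value means "missing".
dtrain = xgb.DMatrix(X, label=y, missing=np.nan)

# Option 2: sparse CSR matrix -- entries that are not stored are
# treated as missing, exactly as described above.
X_sparse = sp.csr_matrix(
    (np.array([1.0, 3.0, 2.0, 6.0, 4.0, 5.0]),  # stored values
     np.array([0, 2, 1, 2, 0, 1]),              # column indices
     np.array([0, 2, 4, 6])),                   # row pointers
    shape=(3, 3))
dtrain_sparse = xgb.DMatrix(X_sparse, label=y)

booster = xgb.train({"objective": "binary:logistic"}, dtrain,
                    num_boost_round=10)
```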

tqchen commented Aug 12, 2014

Internally, XGBoost will automatically learn the best direction to go when a value is missing. Equivalently, this can be viewed as automatically learning the best imputation value for missing values based on the reduction in training loss.
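
A rough sketch of that idea (hypothetical function names, not the actual XGBoost implementation): for each candidate split, route the rows whose split feature is missing to each child in turn, and keep whichever direction gives the larger gain.

```python
def split_gain(g_l, h_l, g_r, h_r, lam=1.0):
    # Standard gradient-boosting split score from the summed gradients (g)
    # and hessians (h) of the rows on each side, up to constant terms.
    score = lambda g, h: g * g / (h + lam)
    return 0.5 * (score(g_l, h_l) + score(g_r, h_r)
                  - score(g_l + g_r, h_l + h_r))

def choose_default_direction(g_l, h_l, g_r, h_r, g_miss, h_miss):
    # Try sending all missing-value rows left, then right;
    # keep whichever direction yields the larger gain.
    gain_left = split_gain(g_l + g_miss, h_l + h_miss, g_r, h_r)
    gain_right = split_gain(g_l, h_l, g_r + g_miss, h_r + h_miss)
    if gain_left >= gain_right:
        return "left", gain_left
    return "right", gain_right
```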

tqchen closed this as completed Aug 12, 2014

tqchen commented Aug 12, 2014

I haven't done a formal comparison with other methods, but I think it should be comparable, and it also gives a computational benefit when your feature matrix is sparse.

rkirana commented Aug 29, 2014

Well, if values are not provided, they are taken as missing. So are all 0 values also treated as missing?

Example: a column has 25 values; 15 are 1, 5 are missing/NA, and 5 are 0.
Are the 5 + 5 = 10 treated as missing?

tqchen commented Aug 29, 2014

It will depend on how you present the data. If you put the data in LIBSVM format and explicitly list the zero-valued features there, they will not be treated as missing.
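
For example, in a toy two-row LIBSVM file (hypothetical data), feature 2 in the first row is written out as 0.0 and is seen as a zero, while feature 2 in the second row is simply omitted and is treated as missing:

```
1 1:5.0 2:0.0 3:7.0
0 1:4.5 3:6.1
```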

rkirana commented Aug 30, 2014

It may be extremely difficult to list the 0 features in the case of sparse data. So should we avoid xgboost in cases where there are missing data and many 0 features?

maxliu commented Aug 30, 2014

I just gave the code a quick glance (it is beautiful, by the way). The way you treat the missing values is very interesting: it depends on what makes the tree better. Does this method/algorithm have a name?

tqchen commented Aug 30, 2014

Normally, it is fine to treat missing and zero both as zero :)

tqchen commented Aug 30, 2014

I invented the protocol and tricks myself; maybe you can just call it xgboost. The general algorithm, however, fits into the framework of gradient boosting.

maxliu commented Aug 30, 2014

I am not surprised by the speed of xgboost, but the score is better than sklearn's GBR. The missing-value trick might be one of the reasons.

Have you published any paper on the boosting algorithm you used for xgboost? Unlike random forests, I could not find much code for boosting with a parallel algorithm; I may need to improve my Google skills, though.

tqchen commented Aug 30, 2014

I haven't yet published any paper describing xgboost.

For parallel boosted-tree code, the only one I am aware of so far is http://machinelearning.wustl.edu/pmwiki.php/Main/Pgbrt. You can try it out and compare it with xgb if you are interested.

Acriche commented Jul 6, 2015

A follow-up question:

While I understand how XGBoost handles missing values within discrete variables, I'm not sure how it handles continuous (numeric) variables.
Can you please explain?

tqchen commented Jul 7, 2015

For continuous features, a missing (default) direction is learned for missing-value data to go into, so when the value of the specific feature is missing, the data point goes in the default direction.
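
That learned default direction is visible in the text dump of a trained tree. A minimal sketch with hypothetical toy data (the exact split and node ids will vary):

```python
import numpy as np
import xgboost as xgb

X = np.array([[1.0], [2.0], [np.nan], [4.0], [5.0]])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
dtrain = xgb.DMatrix(X, label=y, missing=np.nan)
booster = xgb.train({"max_depth": 2}, dtrain, num_boost_round=1)

# Each split line contains "missing=<node id>": the child that rows
# with a missing value for that feature are routed to.
print(booster.get_dump()[0])
```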

Acriche commented Jul 7, 2015

Thanks Tianqi.
And what about missing continuous features in generalized linear models?

akshenndra commented

Hi Tianqi,
I am looking for an algorithm which does no imputation of the missing values internally and yet works. How does xgboost work internally to handle missing values (can you drop in some basic idea)?

tqchen commented Apr 29, 2016

See https://arxiv.org/abs/1603.02754, sec. 3.4.

akshenndra commented

Does XGBoost also work in the presence of categorical features? Do we need to preprocess them (binarization, etc.)? For example, my dataset has a feature called city with the values "Milan", "Rome", "Venice". Can I present them to xgboost without any preprocessing at all?
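
For context, xgboost expects numeric feature values, so categorical columns are typically encoded first; a minimal one-hot sketch (the city column and labels are hypothetical):

```python
import pandas as pd
import xgboost as xgb

# Hypothetical dataset with a categorical "city" feature.
df = pd.DataFrame({"city": ["Milan", "Rome", "Venice", "Rome"],
                   "label": [1, 0, 1, 0]})

# One-hot encode: city -> city_Milan, city_Rome, city_Venice (0/1 columns).
X = pd.get_dummies(df[["city"]]).astype(float)
dtrain = xgb.DMatrix(X.values, label=df["label"].values)
```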

johnsonr05 commented

Tianqi,

I have a question about the xgb.importance function. When I run this and look at the Real Cover, it seems as though if there is any missing data in a feature, the Real Cover is NA. Is there any way to deal with this issue to get some co-occurrence count for each split?

Rex

nyutal commented Jan 19, 2017

Hi Tianqi,
My processing pipeline includes normalizing the features before learning. Also, I have a lot of indicator features which are missing, rather than zero, for a negative indication.
As a result, my normalized indicators are 0 for indicated values and missing for non-indicated values.
Will xgboost handle such behavior properly? (Changing non-existent features to 0 will cause problems...)

Thanks,
Nadav

acc-to-learn commented Nov 22, 2017

@tqchen
You wrote:

Internally, XGBoost will automatically learn the best direction to go when a value is missing. Equivalently, this can be viewed as automatically learning the best imputation value for missing values based on the reduction in training loss.

What about the case when the training set has no missing values, but the test set does?

lock bot locked as resolved and limited conversation to collaborators Oct 25, 2018