Tests for shapley.R and explanation.R #128
Conversation
Smooth!
Could you add a script, i.e. …
Sure. The …
Ah, ok. So if we wanted to update the results at some point, we would just delete the file that is currently there, right? And then the tests would write a new file?
Exactly. If there is no file, the test will pass but give a warning. I will make a script creating the model, though.
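The behaviour described above is a snapshot-testing pattern: compare the current output to a stored reference file, and regenerate the file (with a warning rather than a failure) when it is missing. The actual shapr tests are written in R with testthat; the following is only a language-agnostic sketch of that pattern in Python, and the helper name `check_against_snapshot` is hypothetical.

```python
import json
import os
import warnings


def check_against_snapshot(result, snapshot_path):
    """Compare `result` to a stored reference file.

    If the reference file is missing, write a new one and warn instead of
    failing -- so deleting the stored file makes the next run regenerate it.
    (Hypothetical helper illustrating the pattern discussed above.)
    """
    if not os.path.exists(snapshot_path):
        with open(snapshot_path, "w") as f:
            json.dump(result, f, sort_keys=True)
        warnings.warn(f"No reference found; wrote new snapshot to {snapshot_path}")
        return True
    with open(snapshot_path) as f:
        reference = json.load(f)
    return result == reference
```

Running the check twice gives a warning plus a pass on the first run (the file is created), then a plain comparison on every later run.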
Renames to ensure the shapley.R test runs before explanation.R, since the latter depends on the object stored by the former.
Question: Then I deleted …
If not, the xgboost version might be it.
@martinju Do you want me to take a look? |
Ok, this has to be a versioning issue. I upgraded to the CRAN version of xgboost (0.90.2), and now the tests fail to predict with xgboost (i.e. a completely different issue).
Sure! |
And, yes, feel free to try to debug on your end if that doesn't do the trick. |
Woohoo, test passed! |
Both tests rely on an xgboost model (not fitted in the test, to ensure consistency across different xgboost versions) and a small subset of the Boston data.
The tests check whether the complete output from shapr and explain is the same as before.
For the explain function, the 18 different examples from test-refactor.R are used. All examples are tested in the same call, but the error messages specify where the error is for easy debugging.
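Testing all examples in one call while keeping failures attributable can be done by collecting the names of the examples whose output changed, then asserting on that list. The shapr tests do this in R with testthat; the sketch below is only an illustration of the idea in Python, and `find_changed_examples` is a hypothetical helper name.

```python
def find_changed_examples(results, references):
    """Return the names of examples whose output differs from the stored
    reference, so a single assertion can report exactly which case broke.
    (Hypothetical helper illustrating the pattern described above.)
    """
    return sorted(name for name, out in results.items()
                  if out != references.get(name))


# Usage: one assertion covers every example, but the failure message
# names the offending cases, e.g.
#   changed = find_changed_examples(results, references)
#   assert not changed, f"explain output changed for: {changed}"
```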