revamp gradients implementations #34

Merged · 5 commits · Sep 27, 2016

Conversation

maximerischard
Contributor

Revamp gradient implementations to avoid unnecessary memory allocations, fix some gradients along the way, add some tests, and make things generally faster.

I'm dealing with some relatively large datasets (N > 10000 or so), and very quickly started running out of RAM when optimizing hyperparameters. I've modified the way kernel gradients are computed to reduce the number of memory allocations required.

Along the way I found a few gradients that were wrong (I've added tests so they should all be correct now), and generally rewrote a lot of functions to make them faster.

Incidentally, the new gradient functions make it easy to get the derivative of the kernel with respect to a single parameter (with the grad_slice function), which should make #33 easier to achieve.
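To illustrate the pattern described above, here is a minimal, hypothetical sketch (not the actual package code, and using current Julia syntax): gradients for all kernel parameters are written into a single preallocated array, and the derivative with respect to one parameter is exposed as an allocation-free view. The kernel type, field names, and function bodies below are illustrative; only the names `grad_stack!` and `grad_slice` come from the pull request itself.

```julia
# Toy isotropic squared-exponential kernel with parameters (log ℓ, log σ).
# This is a stand-in for the package's kernel types, for illustration only.
struct ToySEIso
    ll::Float64   # log length scale
    lσ::Float64   # log signal standard deviation
end

# Covariance between two points at squared distance r2.
cov_ij(k::ToySEIso, r2::Float64) = exp(2k.lσ) * exp(-r2 / (2 * exp(2k.ll)))

# Fill stack[:, :, p] with ∂K/∂θ_p for every parameter θ_p, in place,
# so no matrices are allocated per optimizer iteration.
function grad_stack!(stack::AbstractArray{Float64,3}, k::ToySEIso, X::Matrix{Float64})
    d, n = size(X)
    ℓ2 = exp(2k.ll)
    for j in 1:n, i in 1:n
        r2 = 0.0
        for a in 1:d
            r2 += (X[a, i] - X[a, j])^2
        end
        kij = cov_ij(k, r2)
        stack[i, j, 1] = kij * r2 / ℓ2   # ∂K/∂(log ℓ)
        stack[i, j, 2] = 2kij            # ∂K/∂(log σ)
    end
    return stack
end

# Derivative with respect to a single parameter, as a view (no copy),
# in the spirit of the grad_slice function mentioned above.
grad_slice(stack::AbstractArray{Float64,3}, p::Int) = view(stack, :, :, p)
```

The point of the design is that `stack` is allocated once, before optimization starts, and then reused on every gradient evaluation; `grad_slice` hands a single-parameter derivative to callers without copying.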

@fairbrot
Member

This is a very impressive set of changes! Thanks a lot. I would like to study and thoroughly test these before integrating them into master, and incrementing the version. I'm a little busy in the next week or so, but I hope this could be done (at the latest) by the end of the month.

At the moment I'm having some issues using this with julia v0.4. It would be good if we could maintain compatibility with this version, so if you are to fix this in the meantime, that would be a massive help.

@maximerischard
Contributor Author

I think this now passes all tests in julia 0.4 and julia 0.5, but it does require the master version of Compat, which hasn't yet been tagged. I've updated REQUIRE to ask for Compat 0.9.2, which doesn't exist yet, but hopefully will very soon.

@maximerischard
Contributor Author

A new Compat version has been tagged, so I'm hoping Travis will now pass. Could you rerun the build just to check?

@fairbrot
Member

It seems to be passing now. Thanks!


```julia
function grad_stack!(stack::AbstractArray, rq::RQIso, X::Matrix{Float64}, data::IsotropicData)
    nobsv = size(X, 2)
function addcov!(s::AbstractMatrix{Float64}, rq::RQIso, X::Matrix{Float64}, data::IsotropicData)
```
Member
This method is the same as the one defined in stationary.jl. We'll delete this one unless there is a reason to keep it.

@fairbrot fairbrot merged commit bdab4dd into STOR-i:master Sep 27, 2016
@fairbrot
Member

Thanks for the pull request. Lots of great changes and speed improvements.
