
CUDA matrix factorization #63

Merged

merged 2 commits into master from cuda_als Nov 15, 2017

Conversation

@benfred
Owner

benfred commented Nov 5, 2017

Add CUDA kernels and code to train the implicit ALS model on the GPU.
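(Background, for readers new to the method: "implicit ALS" here is the Hu/Koren/Volinsky alternating least squares model for implicit feedback, which alternates closed-form least-squares solves; each user factor is updated as

x_u = (Y^T C^u Y + \lambda I)^{-1} Y^T C^u p(u)

where C^u is the diagonal confidence matrix for user u and p(u) is the binarized preference vector, with a symmetric update for the item factors. Every user's and item's solve is independent of the others, which is what makes the training loop a natural fit for GPU parallelism.)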

@benfred benfred force-pushed the cuda_als branch from b4422f6 to a46665e Nov 10, 2017

@benfred benfred changed the title from "First draft CUDA matrix factorization" to "CUDA matrix factorization" Nov 10, 2017


@benfred benfred force-pushed the cuda_als branch from a46665e to 3280cc9 Nov 11, 2017

@benfred benfred force-pushed the cuda_als branch from 9daa2ad to 3d31f3a Nov 15, 2017

@benfred benfred merged commit 0fcf147 into master Nov 15, 2017

4 checks passed:

continuous-integration/appveyor/branch: AppVeyor build succeeded
continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed
continuous-integration/travis-ci/push: The Travis CI build passed

@benfred benfred deleted the cuda_als branch Nov 15, 2017

@maciejkula


maciejkula commented Nov 15, 2017

Let me just say, this is pretty great! I've been meaning to get into CUDA programming for a while, and now I have a good example to motivate me.

@benfred


Owner

benfred commented Nov 15, 2017

Thanks! It was a fun thing to learn.

I actually experimented with BPR for this at first (using plain SGD only, no item biases, etc.), and came up with something like:

__global__ void bpr_update_kernel(int samples, int * userids, int * likedids, int * dislikedids,
                                  int item_count, int user_count, int factors,
                                  float * X, float * Y, float * item_bias,
                                  float learning_rate, float reg) {
    // one block per sample (grid-stride loop), one thread per latent factor;
    // item_bias is unused in this draft (no item biases yet)
    extern __shared__ float shared_memory[];
    float * temp = &shared_memory[0];

    for (int i = blockIdx.x; i < samples; i += gridDim.x) {
        // pointers to this thread's component of each factor vector
        float * user = &X[userids[i] * factors + threadIdx.x],
              * liked = &Y[likedids[i] * factors + threadIdx.x],
              * disliked = &Y[dislikedids[i] * factors + threadIdx.x];

        float user_val = *user, liked_val = *liked, disliked_val = *disliked;

        // z = sigmoid(-x_uij) where x_uij = x_u . (y_i - y_j);
        // dot() is a block-wide dot product over the factors threads (sketched below)
        temp[threadIdx.x] = liked_val - disliked_val;
        float z = 1.0f / (1.0f + expf(dot(user, temp)));

        // sgd step: gradient of ln sigmoid(x_uij) minus L2 regularization
        // (hogwild-style: concurrent blocks may race on shared users/items)
        *user     += learning_rate * ( z * (liked_val - disliked_val) - reg * user_val);
        *liked    += learning_rate * ( z * user_val - reg * liked_val);
        *disliked += learning_rate * (-z * user_val - reg * disliked_val);
    }
}
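(dot isn't shown above; it's a block-wide dot-product reduction across the factors threads. Something along these lines would work, assuming blockDim.x == factors, a power-of-two factor count, and the kernel launched with 2 * factors * sizeof(float) bytes of dynamic shared memory so the second half can serve as reduction scratch:

__inline__ __device__ float dot(const float * a, const float * b) {
    // hypothetical helper: thread t contributes a[0] * b[t], where a points at
    // this thread's own component and b is the shared-memory buffer
    extern __shared__ float shared_memory[];
    float * scratch = &shared_memory[blockDim.x];  // second half of the shared buffer

    scratch[threadIdx.x] = a[0] * b[threadIdx.x];
    __syncthreads();

    // tree reduction down to scratch[0] (assumes blockDim.x is a power of two)
    for (int offset = blockDim.x / 2; offset > 0; offset /= 2) {
        if (threadIdx.x < offset) {
            scratch[threadIdx.x] += scratch[threadIdx.x + offset];
        }
        __syncthreads();
    }

    float result = scratch[0];
    __syncthreads();  // don't let the next call clobber scratch[0] before everyone reads it
    return result;
}

With that, z = sigma(-x_uij) = 1 / (1 + exp(x_u . (y_i - y_j))), which is exactly the coefficient in the BPR gradient of ln sigma(x_uij), so the three update lines are the usual BPR SGD step with L2 regularization.)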

But since BPR can already be done efficiently on the GPU with your spotlight package (and realistically I would need to switch from SGD to Adam or something to optimize this), I didn't bother finishing =)

@benfred benfred restored the cuda_als branch Nov 15, 2017

@benfred benfred deleted the cuda_als branch Feb 8, 2018
