
CUDA matrix factorization #63

Merged: 2 commits from the cuda_als branch, Nov 15, 2017
Conversation

@benfred (Owner) commented Nov 5, 2017

Add CUDA kernels and code to train the implicit ALS model on the GPU.
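For context on the ALS side: each training iteration solves the normal equations (YtY + Yt(Cu - I)Y + reg*I) xu = Yt Cu pu for every user, and the Gram matrix YtY = Yt * Y is shared across all of them, so it can be precomputed once per iteration on the GPU. Below is a minimal sketch of that precomputation with cuBLAS; the names compute_YtY, d_Y, and d_YtY are hypothetical, not identifiers from this PR.

#include <cublas_v2.h>

// Hypothetical sketch: compute YtY = Yt * Y once per ALS iteration, since the
// per-user normal equations all reuse it.
void compute_YtY(cublasHandle_t handle,
                 const float * d_Y,  // item factors on the device, row-major item_count x factors
                 float * d_YtY,      // output Gram matrix, factors x factors
                 int item_count, int factors) {
    // cuBLAS is column-major, so the row-major Y buffer reads as the
    // factors x item_count matrix Yt; Yt * transpose(Yt) is then Yt * Y.
    float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T,
                factors, factors, item_count,
                &alpha,
                d_Y, factors,
                d_Y, factors,
                &beta,
                d_YtY, factors);
}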

@benfred changed the title from First draft CUDA matrix factorization to CUDA matrix factorization on Nov 10, 2017
@maciejkula commented

Let me just say, this is pretty great! I've been meaning to get into CUDA programming for a while, and now I have a good example to motivate me.

@benfred (Owner, Author) commented Nov 15, 2017

Thanks! It was a fun thing to learn.

I actually experimented with BPR for this initially (using plain SGD only, no item biases etc.), and came up with something like:

__global__ void bpr_update_kernel(int samples, int * userids, int * likedids, int * dislikedids,
                                  int item_count, int user_count, int factors,
                                  float * X, float * Y, float * item_bias,
                                  float learning_rate, float reg) {
    // item_bias is unused in this draft (no bias terms yet)
    extern __shared__ float shared_memory[];
    float * temp = &shared_memory[0];

    // one block per sample, one thread per factor
    for (int i = blockIdx.x; i < samples; i += gridDim.x) {
        float * user = &X[userids[i] * factors + threadIdx.x],
              * liked = &Y[likedids[i] * factors + threadIdx.x],
              * disliked = &Y[dislikedids[i] * factors + threadIdx.x];

        float user_val = *user, liked_val = *liked, disliked_val = *disliked;

        // z = sigmoid(-x_uij), where x_uij = user . (liked - disliked)
        // and dot() is a block-level dot product helper
        temp[threadIdx.x] = liked_val - disliked_val;
        float z = 1.0f / (1.0f + expf(dot(user, temp)));

        // SGD step on the BPR objective, with L2 regularization
        *user     += learning_rate * ( z * (liked_val - disliked_val) - reg * user_val);
        *liked    += learning_rate * ( z * user_val - reg * liked_val);
        *disliked += learning_rate * (-z * user_val - reg * disliked_val);
    }
}

But since BPR can already be done efficiently on the GPU with your spotlight package (and realistically I would need to switch from SGD to Adam or something to optimize this), I didn't bother finishing =)
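For reference, the kernel above assumes a block-level dot product helper named dot that isn't shown. Below is a minimal sketch of what such a helper might look like, written to match the call site (user is already offset by threadIdx.x, while temp is passed as the base of the shared array); MAX_FACTORS and the tree-reduction strategy are assumptions for illustration, not code from this PR.

#define MAX_FACTORS 128  // hypothetical upper bound on the factor count

__inline__ __device__ float dot(const float * a, const float * b) {
    // Matches the call site above: 'a' already points at this thread's element
    // (user was offset by threadIdx.x), while 'b' is the base of the shared
    // temp array, so it is indexed by threadIdx.x here.
    static __shared__ float partials[MAX_FACTORS];
    partials[threadIdx.x] = a[0] * b[threadIdx.x];
    __syncthreads();

    // tree reduction across the block; assumes blockDim.x (= factors) is a
    // power of two and at most MAX_FACTORS
    for (int offset = blockDim.x / 2; offset > 0; offset >>= 1) {
        if (threadIdx.x < offset) {
            partials[threadIdx.x] += partials[threadIdx.x + offset];
        }
        __syncthreads();
    }

    float result = partials[0];
    __syncthreads();  // let every thread read before a later call overwrites
    return result;
}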

@benfred restored the cuda_als branch on November 15, 2017 at 23:32
@benfred deleted the cuda_als branch on February 8, 2018 at 01:28