
user and item biases in WRMF and explicit feedback #44

Closed
dselivanov opened this issue Nov 29, 2020 · 30 comments

@dselivanov (Owner)

As of commit 35247b4:

without biases

library(rsparse)
library(lgr)
lg = get_logger('rsparse')
lg$set_threshold('debug')
data('movielens100k')
options("rsparse_omp_threads" = 1)

train = movielens100k

set.seed(1)
model = WRMF$new(rank = 10,  lambda = 1, feedback  = 'explicit', solver = 'cholesky', with_bias = FALSE)
user_emb = model$fit_transform(train, n_iter = 10, convergence_tol = -1)

INFO [23:09:40.158] starting factorization with 1 threads
INFO [23:09:40.268] iter 1 loss = 4.4257
INFO [23:09:40.302] iter 2 loss = 1.2200
INFO [23:09:40.332] iter 3 loss = 0.8617
INFO [23:09:40.361] iter 4 loss = 0.7752
INFO [23:09:40.391] iter 5 loss = 0.7398
INFO [23:09:40.420] iter 6 loss = 0.7191
INFO [23:09:40.456] iter 7 loss = 0.7046
INFO [23:09:40.488] iter 8 loss = 0.6935
INFO [23:09:40.522] iter 9 loss = 0.6845
INFO [23:09:40.555] iter 10 loss = 0.6769

with biases

set.seed(1)
model = WRMF$new(rank = 10,  lambda = 1, feedback  = 'explicit', solver = 'cholesky', with_bias = TRUE)
user_emb = model$fit_transform(train, n_iter = 10, convergence_tol = -1)

INFO [23:10:06.605] starting factorization with 1 threads
INFO [23:10:06.637] iter 1 loss = 0.8411
INFO [23:10:06.671] iter 2 loss = 0.6251
INFO [23:10:06.704] iter 3 loss = 0.5950
INFO [23:10:06.736] iter 4 loss = 0.5820
INFO [23:10:06.769] iter 5 loss = 0.5751
INFO [23:10:06.805] iter 6 loss = 0.5712
INFO [23:10:06.840] iter 7 loss = 0.5688
INFO [23:10:06.875] iter 8 loss = 0.5673
INFO [23:10:06.916] iter 9 loss = 0.5663
INFO [23:10:06.951] iter 10 loss = 0.5657

cc @david-cortes
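
For context, the objective being minimized here for explicit feedback is, roughly (the exact treatment of the regularization on the bias terms gets settled further down in this thread):

\min_{X, Y, b} \sum_{(u,i) \in \mathrm{observed}} \left( r_{ui} - b_u - b_i - x_u^\top y_i \right)^2 + \lambda \left( \lVert X \rVert_F^2 + \lVert Y \rVert_F^2 + \lVert b_{\mathrm{user}} \rVert^2 + \lVert b_{\mathrm{item}} \rVert^2 \right)

with the b_u and b_i terms dropped when with_bias = FALSE; a global mean term enters later in the thread via with_global_bias.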

dselivanov changed the title "user and item biases in ALS" to "user and item biases in WRMF" on Nov 29, 2020
@david-cortes (Contributor)

With biases plus mean centering:

DEBUG [22:10:21.514] initializing biases 
INFO  [22:10:21.548] starting factorization with 1 threads 
INFO  [22:10:21.582] iter 1 loss = 0.8305 
INFO  [22:10:21.607] iter 2 loss = 0.6170 
INFO  [22:10:21.631] iter 3 loss = 0.5822 
INFO  [22:10:21.655] iter 4 loss = 0.5662 
INFO  [22:10:21.680] iter 5 loss = 0.5568 
INFO  [22:10:21.706] iter 6 loss = 0.5507 
INFO  [22:10:21.730] iter 7 loss = 0.5464 
INFO  [22:10:21.755] iter 8 loss = 0.5431 
INFO  [22:10:21.781] iter 9 loss = 0.5405 
INFO  [22:10:21.806] iter 10 loss = 0.5383

With rnorm init + biases + mean centering:

DEBUG [22:12:08.799] initializing biases 
INFO  [22:12:08.843] starting factorization with 1 threads 
INFO  [22:12:08.883] iter 1 loss = 0.7696 
INFO  [22:12:08.912] iter 2 loss = 0.6211 
INFO  [22:12:08.938] iter 3 loss = 0.5836 
INFO  [22:12:08.963] iter 4 loss = 0.5660 
INFO  [22:12:08.989] iter 5 loss = 0.5557 
INFO  [22:12:09.015] iter 6 loss = 0.5489 
INFO  [22:12:09.041] iter 7 loss = 0.5442 
INFO  [22:12:09.067] iter 8 loss = 0.5408 
INFO  [22:12:09.093] iter 9 loss = 0.5382 
INFO  [22:12:09.120] iter 10 loss = 0.5362

Although the loss function is currently computing the regularization term incorrectly, so these aren't final numbers yet.

@dselivanov (Owner, Author)

@david-cortes

Although the loss function is currently computing the regularization term incorrectly, so these aren't final numbers yet.

Could you please elaborate more on that?

@david-cortes (Contributor)

It had a bug where it was taking the squared sum of X twice instead of that of Y. Also, I think it's including the row of all ones in the calculation.

@dselivanov (Owner, Author) commented Nov 30, 2020

It had a bug where it was taking the squared sum of X twice instead of that of Y

OK, this is fixed now.

Also, I think it's including the row of all ones in the calculation.

It doesn't seem so, since arma::span(1, X.n_rows - 1) skips the first and last rows (ones and biases).
Ah, you are right, it needs to be X.n_rows - 2, since the end of an arma::span is inclusive.

@dselivanov (Owner, Author)

# no bias
INFO  [14:55:08.288] starting factorization with 1 threads 
INFO  [14:55:08.394] iter 1 loss = 6.0649 
INFO  [14:55:08.424] iter 2 loss = 0.8184 
INFO  [14:55:08.450] iter 3 loss = 0.7426 
INFO  [14:55:08.480] iter 4 loss = 0.7154 
INFO  [14:55:08.511] iter 5 loss = 0.6984 
INFO  [14:55:08.546] iter 6 loss = 0.6861 
INFO  [14:55:08.581] iter 7 loss = 0.6767 
INFO  [14:55:08.618] iter 8 loss = 0.6691 
INFO  [14:55:08.650] iter 9 loss = 0.6629 
INFO  [14:55:08.691] iter 10 loss = 0.6577 

# user + item bias
INFO  [14:55:18.805] starting factorization with 1 threads 
INFO  [14:55:18.838] iter 1 loss = 0.7335 
INFO  [14:55:18.873] iter 2 loss = 0.5918 
INFO  [14:55:18.907] iter 3 loss = 0.5624 
INFO  [14:55:18.943] iter 4 loss = 0.5496 
INFO  [14:55:18.982] iter 5 loss = 0.5427 
INFO  [14:55:19.022] iter 6 loss = 0.5384 
INFO  [14:55:19.064] iter 7 loss = 0.5355 
INFO  [14:55:19.101] iter 8 loss = 0.5338 
INFO  [14:55:19.148] iter 9 loss = 0.5328 
INFO  [14:55:19.184] iter 10 loss = 0.5323 

# user + item bias + better init
DEBUG [15:21:49.763] initializing biases 
INFO  [15:21:49.767] starting factorization with 1 threads 
INFO  [15:21:49.804] iter 1 loss = 0.7281 
INFO  [15:21:49.842] iter 2 loss = 0.5933 
INFO  [15:21:49.880] iter 3 loss = 0.5619 
INFO  [15:21:49.920] iter 4 loss = 0.5484 
INFO  [15:21:49.961] iter 5 loss = 0.5413 
INFO  [15:21:50.000] iter 6 loss = 0.5370 
INFO  [15:21:50.034] iter 7 loss = 0.5341 
INFO  [15:21:50.074] iter 8 loss = 0.5321 
INFO  [15:21:50.116] iter 9 loss = 0.5308 
INFO  [15:21:50.148] iter 10 loss = 0.5298


# user + item bias + better init + global
DEBUG [15:00:05.874] initializing biases 
INFO  [15:00:05.962] starting factorization with 1 threads 
INFO  [15:26:17.413] iter 1 loss = 0.7213 
INFO  [15:26:17.429] iter 2 loss = 0.5798 
INFO  [15:26:17.440] iter 3 loss = 0.5461 
INFO  [15:26:17.451] iter 4 loss = 0.5317 
INFO  [15:26:17.464] iter 5 loss = 0.5241 
INFO  [15:26:17.471] iter 6 loss = 0.5194 
INFO  [15:26:17.481] iter 7 loss = 0.5164 
INFO  [15:26:17.493] iter 8 loss = 0.5144 
INFO  [15:26:17.500] iter 9 loss = 0.5130 
INFO  [15:26:17.507] iter 10 loss = 0.5121

@david-cortes (Contributor) commented Nov 30, 2020

I think the more correct way of adding the regularization to the loss when using biases would be to exclude only the row that has the ones while still including the row that has the biases, since the regularization is also applied to the user/item biases.
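
A minimal R sketch of that convention, assuming (as in the test code further down in this thread) that the factor matrices are stored as (rank + 2) x n with one constant row of ones and one row of biases; the row positions below are purely illustrative:

# Illustrative only: X is (rank + 2) x n_users, Y is (rank + 2) x n_items.
# One row of each matrix holds constant ones, another holds the biases.
# The ones rows are excluded from the penalty; the bias rows are kept in it.
reg_penalty = function(X, Y, lambda, ones_row_x = 1L, ones_row_y = nrow(Y)) {
  lambda * (sum(X[-ones_row_x, ]^2) + sum(Y[-ones_row_y, ]^2))
}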

@david-cortes (Contributor)

I'm pretty sure there is still some unintended data copying going on with the current approach. I tried timing it with the ML10M data and got these results:

  • With biases:
Unit: seconds
                                                                                                                                                                                                                                                             expr
 {     m = rsparse::WRMF$new(rank = 40, lambda = 0.05, dynamic_lambda = TRUE,          feedback = "explicit", with_global_bias = TRUE, with_user_item_bias = TRUE,          solver = "conjugate_gradient")     A = m$fit_transform(ML10M, convergence_tol = -1) }
      min       lq     mean   median       uq      max neval
 16.36217 16.36217 16.36217 16.36217 16.36217 16.36217     1
  • Without biases:
Unit: seconds
                                                                                                                                                                                                                                                              expr
 {     m = rsparse::WRMF$new(rank = 40, lambda = 0.05, dynamic_lambda = TRUE,          feedback = "explicit", with_global_bias = TRUE, with_user_item_bias = FALSE,          solver = "conjugate_gradient")     A = m$fit_transform(ML10M, convergence_tol = -1) }
      min       lq     mean   median       uq      max neval
 10.97332 10.97332 10.97332 10.97332 10.97332 10.97332     1

Adding the biases made it take 49% longer to fit the model.

For comparison purposes, these are the same times using the package cmfrec, which has a less efficient approach for the biases (uses rank+1 matrices and copies/replaces bias/constant component at each iteration):

  • With biases:
Unit: seconds
                                                                                                                                                   expr
 {     m = CMF(X = ML10M, k = 40, lambda = 0.05, scale_lam = TRUE,          use_cg = TRUE, finalize_chol = FALSE, precompute_for_predictions = FALSE) }
     min      lq    mean  median      uq     max neval
 8.07948 8.07948 8.07948 8.07948 8.07948 8.07948     1
  • Without biases:
Unit: seconds
                                                                                                                                                                                                  expr
 {     m = CMF(X = ML10M, k = 40, lambda = 0.05, scale_lam = TRUE,          use_cg = TRUE, finalize_chol = FALSE, precompute_for_predictions = FALSE,          user_bias = FALSE, item_bias = FALSE) }
      min       lq     mean   median       uq      max neval
 6.385816 6.385816 6.385816 6.385816 6.385816 6.385816     1

It took only 26% longer with the biases added.
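
For readability, the squashed expr columns above correspond to calls along these lines (a sketch, assuming ML10M is the MovieLens 10M rating matrix already loaded as a sparse matrix; toggle with_user_item_bias for the two rsparse timings):

library(microbenchmark)
library(rsparse)

# ML10M: MovieLens 10M user-item rating matrix, assumed to be pre-loaded as a sparse matrix
microbenchmark({
  m = rsparse::WRMF$new(rank = 40, lambda = 0.05, dynamic_lambda = TRUE,
                        feedback = "explicit", with_global_bias = TRUE,
                        with_user_item_bias = TRUE,   # FALSE for the "without biases" timing
                        solver = "conjugate_gradient")
  A = m$fit_transform(ML10M, convergence_tol = -1)
}, times = 1L)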

@dselivanov (Owner, Author)

@david-cortes should be fixed now, can you try? I'm surprised cmfrec is almost 2x faster! I would expect arma to be translated into efficient BLAS calls.

@david-cortes (Contributor)

Now it's producing a segmentation fault.

@david-cortes (Contributor)

Actually, the segmentation fault was not from 3731ca0, but from the commits that follow. However, this commit actually makes it slower:

Unit: seconds
                                                                                                                                                                                                                                                             expr
 {     m = rsparse::WRMF$new(rank = 40, lambda = 0.05, dynamic_lambda = TRUE,          feedback = "explicit", with_global_bias = TRUE, with_user_item_bias = TRUE,          solver = "conjugate_gradient")     A = m$fit_transform(ML10M, convergence_tol = -1) }
      min       lq     mean   median       uq      max neval
 21.44936 21.44936 21.44936 21.44936 21.44936 21.44936     1

@david-cortes (Contributor)

I ran it again after a "clean and rebuild" from the current commit on the master branch; now it didn't segfault anymore and took the same time as before with the biases (~16s). Perhaps I had some issue with the package calling the wrong shared-object functions.

@david-cortes (Contributor)

Another interesting thing, however: using float precision makes it take less than half the time when not using biases (~4.7s vs ~10.7s). But when using biases it now returns NaNs with float.

@dselivanov (Owner, Author)

I tested it with the following code:

library(Matrix)
set.seed(1)
m = rsparsematrix(100000, 10000, 0.01)
m@x = sample(5, size = length(m@x), replace = T)
rank = 8
n_user = nrow(m)
n_item = ncol(m)
user_factors = matrix(rnorm(n_user * rank, 0, 0.01), nrow = rank, ncol = n_user)
item_factors = matrix(rnorm(n_item * rank, 0, 0.01), nrow = rank, ncol = n_item)

library(rsparse)
system.time({
  res = rsparse:::als_explicit_double(
    m_csc_r = m,
    X = user_factors,
    Y = item_factors,
    cnt_X = numeric(ncol(user_factors)),
    lambda = 0,
    dynamic_lambda = FALSE,
    n_threads = 1,
    solver = 0L,
    cg_steps = 1L,
    with_biases = FALSE,
    is_x_bias_last_row = TRUE)
})

user system elapsed
0.482 0.003 0.485

rank = 10
n_user = nrow(m)
n_item = ncol(m)
user_factors = matrix(rnorm(n_user * rank, 0, 0.01), nrow = rank, ncol = n_user)
user_factors[1, ] = rep(1.0, n_user)

item_factors = matrix(rnorm(n_item * rank, 0, 0.01), nrow = rank, ncol = n_item)
item_factors[rank, ] = rep(1.0, n_item)

system.time({
  res = rsparse:::als_explicit_double(
    m_csc_r = m,
    X = user_factors,
    Y = item_factors,
    lambda = 0,
    cnt_X = numeric(ncol(user_factors)),
    dynamic_lambda = 0,
    n_threads = 1,
    solver = 0L,
    cg_steps = 1L,
    with_biases = T,
    is_x_bias_last_row = TRUE)
})

user system elapsed
0.624 0.006 0.629

The latter used to take ~0.9-1s.

@dselivanov (Owner, Author)

@david-cortes check 7fcb1d3 - I've removed the subview.

Another interesting thing, however: using float precision makes it take less than half the time when not using biases (~4.7s vs ~10.7s). But when using biases it now returns NaNs with float.

I will take a look

@dselivanov (Owner, Author)

Another interesting thing, however: using float precision makes it take less than half the time when not using biases (~4.7s vs ~10.7s).

This seems related to which LAPACK the system uses. If you only have the LAPACK shipped with R (which only works with double precision), then the float package provides its own single-precision LAPACK, which seems more than twice as fast as R's reference LAPACK.
If you have a high-performance system-wide BLAS and LAPACK, then the float package will detect and use them, and rsparse and arma will also link to the system-wide BLAS and LAPACK.

But when using biases it now returns NaNs with float.

I haven't noticed that. A reproducible example would help.
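
If it helps narrow things down, a quick way to check from within R which BLAS/LAPACK the session is actually using (a complementary check, not specific to rsparse; on recent R versions the library paths are reported directly):

# sessionInfo() prints, among other things, the BLAS and LAPACK shared
# libraries the R session is linked against (R >= 3.4)
sessionInfo()

# LAPACK version as reported by R itself
La_version()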

@david-cortes (Contributor)

I'm using OpenBLAS. I tried ldd on the generated .so and it doesn't look like it's linking to anything from float:

ldd rsparse.so
        linux-vdso.so.1 (0x00007ffc1bdf0000)
        liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 (0x00007fbafff81000)
        libblas.so.3 => /lib/x86_64-linux-gnu/libblas.so.3 (0x00007fbafff1c000)
        libR.so => /lib/libR.so (0x00007fbaffa6c000)
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fbaff89f000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbaff75b000)
        libgomp.so.1 => /lib/x86_64-linux-gnu/libgomp.so.1 (0x00007fbaff71b000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fbaff6ff000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbaff53a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fbb006e1000)
        libopenblas.so.0 => /lib/x86_64-linux-gnu/libopenblas.so.0 (0x00007fbafd235000)
        libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x00007fbafcf7f000)
        libreadline.so.8 => /lib/x86_64-linux-gnu/libreadline.so.8 (0x00007fbafcf28000)
        libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007fbafce98000)
        liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007fbafce6d000)
        libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007fbafce5a000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fbafce3d000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fbafce37000)
        libicuuc.so.67 => /lib/x86_64-linux-gnu/libicuuc.so.67 (0x00007fbafcc4f000)
        libicui18n.so.67 => /lib/x86_64-linux-gnu/libicui18n.so.67 (0x00007fbafc94a000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fbafc926000)
        libquadmath.so.0 => /lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007fbafc8dd000)
        libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fbafc8ae000)
        libicudata.so.67 => /lib/x86_64-linux-gnu/libicudata.so.67 (0x00007fbafad95000)

@david-cortes (Contributor)

Here's an example:

library(rsparse)
library(lgr)
lg = get_logger('rsparse')
lg$set_threshold('debug')
data('movielens100k')
options("rsparse_omp_threads" = 16)

X = movielens100k

set.seed(1)
model = WRMF$new(rank = 100,  lambda = 0.05, dynamic_lambda = TRUE,
                 feedback  = 'explicit', solver = 'conjugate_gradient',
                 with_user_item_bias = TRUE, with_global_bias=TRUE,
                 precision = "float")
user_emb = model$fit_transform(X, n_iter = 10, convergence_tol = -1)
DEBUG [21:32:16.367] initializing biases 
INFO  [21:32:16.374] starting factorization with 16 threads 
INFO  [21:32:16.388] iter 1 loss = NaN 
Error in if (loss_prev_iter/loss - 1 < convergence_tol) { : 
  missing value where TRUE/FALSE needed

@dselivanov (Owner, Author) commented Dec 7, 2020 via email

@dselivanov (Owner, Author)

Here's an example:

On my machine it works fine... Numerical issues on a particular setup?

DEBUG [23:39:40.707] initializing biases 
INFO  [23:39:40.748] starting factorization with 16 threads 
INFO  [23:39:40.787] iter 1 loss = 0.8346 
INFO  [23:39:40.804] iter 2 loss = 0.6115 
INFO  [23:39:40.821] iter 3 loss = 0.5629 
INFO  [23:39:40.837] iter 4 loss = 0.5554 
INFO  [23:39:40.855] iter 5 loss = 0.5514 
INFO  [23:39:40.873] iter 6 loss = 0.5494 
INFO  [23:39:40.888] iter 7 loss = 0.5483 
INFO  [23:39:40.908] iter 8 loss = 0.5477 
INFO  [23:39:40.924] iter 9 loss = 0.5474 
INFO  [23:39:40.941] iter 10 loss = 0.5473 
> user_emb
# A float32 matrix: 943x102
#    [,1]      [,2]      [,3]       [,4]       [,5]
# 1     1  0.310238 -0.187435  0.0070898  0.3378164
# 2     1  0.318511 -0.032087  0.0918757  0.3566173
# 3     1 -0.061583 -0.164409  0.0510152  0.1838035
# 4     1  0.216120 -0.049797 -0.0742759 -0.0062153
# 5     1  0.246393  0.013174  0.0510887 -0.1232737
# 6     1  0.418677 -0.090236 -0.4091501 -0.2156503
# 7     1  0.185473  0.050313 -0.1264562 -0.1054751
# 8     1  0.044565 -0.153883 -0.1510867 -0.0126681
# 9     1  0.240177 -0.168154 -0.3026042 -0.2351764
# 10    1 -0.033645 -0.017147  0.0695353 -0.0026545
# ...

@david-cortes (Contributor)

I don't think it's an issue with numerical precision, because it works fine under commit a7860f1 and the same problem seems to occur with MKL.

By the way, the current commit on master doesn't compile on Windows; something to do with unsigned integer types being undefined.

@dselivanov (Owner, Author)

I'm not sure what is wrong in your case. I've tried the latest commit on my Ubuntu workstation with OpenBLAS and it works normally, without throwing NaN.
What compilation flags do you use?

@david-cortes (Contributor)

I'm using the default flags:

david@debian:~$ R CMD config CXXFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g
david@debian:~$ R CMD config CFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g
david@debian:~$ R CMD config FFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong
david@debian:~$ R CMD config FCFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong
david@debian:~$ R CMD config CXXPICFLAGS
-fpic

@david-cortes (Contributor)

By the way, the non-negative CD algorithm still works fine when the data is mean centered.

@dselivanov (Owner, Author)

How can the product of two non-negative vectors give a potentially negative value (after centering)?

@david-cortes (Contributor) commented Dec 8, 2020

The thing is, if the data is non-negative, the mean will also be non-negative.

Sorry, I misunderstood - it won't give a negative value, but the algorithm still works by outputting something close to zero.

@dselivanov (Owner, Author) commented Dec 8, 2020

Sorry, I misunderstood - it won't give a negative value, but the algorithm still works by outputting something close to zero.

Yes, it kind of works, but the loss is huge and the model is not that useful.

@david-cortes (Contributor) commented Dec 9, 2020

I'm not sure what is wrong in your case. I've tried the latest commit on my Ubuntu workstation with OpenBLAS and it works normally, without throwing NaN.
What compilation flags do you use?

I think this has to do with AVX instructions and array padding. If I disable newer instructions by setting -march=x86-64 it works correctly, but with -march=native it won't. These are the instruction set extensions in my CPU:

Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca

Perhaps something to do with the unsafe keyword?

EDIT: hmm, actually now it's working correctly with -march=native as of the current commit at master.

EDIT2: ok, I actually realize now that commit 7fcb1d39b6ff9ff7acabe3cf5af8f9e2fa208d34 replaced subview with Mat. That's what fixes it.

@dselivanov (Owner, Author)

EDIT2: ok, I actually realize now that commit 7fcb1d3 replaced subview with Mat

Yes, I had segfaults with these subviews as well...

dselivanov changed the title "user and item biases in WRMF" to "user and item biases in WRMF and explicit feedback" on Dec 9, 2020
@dselivanov (Owner, Author)

I think we are done here. There will be a separate thread for a model with biases with implicit feedback data.

At the moment I'm a little bit short on time. @david-cortes feel free to take a shot at biases for the implicit-feedback model (if you are interested, of course).

@david-cortes (Contributor)

@dselivanov I’m not so sure it’s actually something desirable to have. I tried playing with centering and biases with implicit-feedback data, and I see that adding user biases usually gives a very small lift in metrics like HR@5, but item biases make them much worse.

You can play with cmfrec (the version from git; the one from CRAN has bugs for this use case) like this with e.g. the lastFM data or similar; it fits the same model as WRMF with feedback = "implicit":

library(cmfrec)
# Xcoo: implicit-feedback interactions as a sparse matrix in triplet (COO) format
Xvalues <- Xcoo@x                     # keep the raw counts to use as weights
Xcoo@x <- rep(1, length(Xcoo@x))      # binarize the matrix itself
model <- CMF(Xcoo, weight = Xvalues, NA_as_zero = TRUE,
             center = TRUE, user_bias = TRUE, item_bias = TRUE)
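
For completeness, a hedged sketch of preparing Xcoo for the snippet above, assuming counts is a hypothetical users x items sparse matrix of raw interaction counts (e.g. lastFM play counts):

library(Matrix)
# counts: users x items sparse matrix of interaction counts (hypothetical input)
Xcoo <- as(counts, "TsparseMatrix")  # triplet (COO) representation, matching the Xcoo@x usage above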
