We consider the classical matrix factorization problem, which can be formulated as the following regularized optimization problem: $$ \min_{X,Y} \ \sum_{(i,j)\in \Omega} \mathrm{loss}(A_{ij}, (XY^T)_{ij}) + \lambda \|X\|_F^2 + \lambda \|Y\|_F^2 $$
We use the following three loss functions; the algorithms are based on stochastic gradient descent with a learning rate.
Square Loss: $\mathrm{loss}(a, b) = (a - b)^2$
To build and run the code:

```
make
./multithread
```