Commit

new version, full rewrite

michalovadek committed Aug 23, 2023
1 parent 010aa7b commit 66216f8
Showing 10 changed files with 704 additions and 300 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -6,7 +6,7 @@ Authors@R: c(person(given = "Michal",
                    role = c("aut", "cre", "cph"),
                    email = "michal.ovadek@gmail.com",
                    comment = c(ORCID = "0000-0002-2552-2580")))
-Description: What the package does (one paragraph).
+Description: Factorize binary matrices into rank-k components.
License: MIT + file LICENSE
Encoding: UTF-8
Language: en-GB
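
For intuition about that one-line description, here is a minimal sketch (illustrative only; the logistic link, the dimensions, and all variable names are assumptions, not the package's documented model) of the kind of rank-k structure a binary matrix factorization targets:

# a binary matrix X that a rank-k model could plausibly generate (assumed setup)
set.seed(42)
n <- 6; m <- 4; k <- 2
W <- matrix(runif(n * k), n, k)        # n x k basis matrix
H <- matrix(runif(k * m), k, m)        # k x m coefficient matrix
P <- 1 / (1 + exp(-(W %*% H)))         # entrywise probabilities in (0, 1)
X <- matrix(rbinom(n * m, 1, P), n, m) # observed 0/1 matrix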
4 changes: 4 additions & 0 deletions NEWS.md
@@ -1,3 +1,7 @@
# nmfbin 0.2.0

* Full rewrite, simplification, improved terminology

# nmfbin 0.1.0

* Initial experimental release, buggy and incomplete
53 changes: 53 additions & 0 deletions R/binary_cross_entropy.R
@@ -0,0 +1,53 @@
#' Sigmoid function
#'
#' @param z A numeric value or vector.
#' @return The sigmoid of z.
#' @noRd
sigmoid <- function(z) {
  1 / (1 + exp(-z))
}

#' Binary cross-entropy loss function
#'
#' @param p Predicted probabilities.
#' @param y Actual labels (0 or 1).
#' @return The binary cross-entropy loss.
#' @noRd
binary_crossentropy <- function(p, y) {
  # assumes p lies strictly in (0, 1); p = 0 or 1 would produce log(0) = -Inf
  -sum(y * log(p) + (1 - y) * log(1 - p))
}

#' Gradient of binary cross-entropy with respect to weight w
#'
#' @param x Input features.
#' @param y Actual labels (0 or 1).
#' @param w Weight.
#' @return The gradient of the loss with respect to w.
#' @noRd
gradient <- function(x, y, w) {
  p <- sigmoid(w * x)
  # chain rule: with p = sigmoid(w * x), dL/dw collapses to sum(x * (p - y))
  sum(x * (p - y))
}

#' Gradient Descent for minimizing binary cross-entropy
#'
#' @param x Input features.
#' @param y Actual labels (0 or 1).
#' @param starting_point Initial weight value.
#' @param learning_rate Learning rate for gradient descent.
#' @param n_iterations Number of iterations for the gradient descent.
#' @return Estimated weight after gradient descent.
#' @noRd
gradient_descent <- function(x, y, starting_point, learning_rate, n_iterations) {
  w <- starting_point
  for (i in 1:n_iterations) {
    grad <- gradient(x, y, w)
    w <- w - learning_rate * grad

    # Print progress
    p <- sigmoid(w * x)
    loss <- binary_crossentropy(p, y)
    cat(sprintf("Iteration %d: w = %f, Loss = %f\n", i, w, loss))
  }
  return(w)
}
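
For a quick check of the optimiser, here is a minimal sketch (not part of the commit; the toy `x` and `y` below are assumed for illustration) of how `gradient_descent()` might be exercised:

# toy one-dimensional data: labels roughly follow the sign of x
x <- c(-2, -1, -0.5, 0.5, 1, 2)
y <- c(0, 0, 0, 1, 1, 1)

# fit a single weight by gradient descent on the binary cross-entropy
w_hat <- gradient_descent(x, y, starting_point = 0, learning_rate = 0.1, n_iterations = 50)

# fitted probabilities should move towards the labels in y
sigmoid(w_hat * x)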