This repository has been archived by the owner on Apr 4, 2024. It is now read-only.
# Loss Functions
Benny Nottonson edited this page Dec 29, 2023
Loss functions are crucial in machine learning because they quantify the difference between predicted and actual values, guiding the model as it optimizes its parameters. The following loss functions are implemented in this Mojo machine learning library.
### `mse`

**Function Signature:**

```mojo
fn mse(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Mean Squared Error (MSE) between the predicted and expected tensors. MSE is a common loss function for regression problems, measuring the average squared difference between corresponding elements.
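To make the arithmetic concrete, here is a minimal pure-Python sketch of the same computation. The library's version operates on Mojo Tensors; the list-based helper below is illustrative only.

```python
def mse(predicted: list[float], expected: list[float]) -> float:
    """Mean of the squared element-wise differences."""
    return sum((p - e) ** 2 for p, e in zip(predicted, expected)) / len(predicted)

# Example: the element-wise errors are (0, 0, -2), so MSE = (0 + 0 + 4) / 3
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ≈ 1.3333
```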
### `mae`

**Function Signature:**

```mojo
fn mae(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Mean Absolute Error (MAE) between the predicted and expected tensors. MAE is another loss function for regression, measuring the average absolute difference between corresponding elements.
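A pure-Python sketch of the same formula (the Tensor-based Mojo version is what the library actually provides):

```python
def mae(predicted: list[float], expected: list[float]) -> float:
    """Mean of the absolute element-wise differences."""
    return sum(abs(p - e) for p, e in zip(predicted, expected)) / len(predicted)

# Example: the absolute errors are (0, 0, 2), so MAE = 2 / 3
print(mae([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ≈ 0.6667
```

Unlike MSE, MAE grows linearly with the error, so it is less sensitive to outliers.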
### `mape`

**Function Signature:**

```mojo
fn mape(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Mean Absolute Percentage Error (MAPE) between the predicted and expected tensors. MAPE is a loss function suitable for regression tasks, measuring the average absolute difference between corresponding elements expressed as a percentage of the expected values. Note that it is undefined when any expected value is zero.
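The percentage scaling is easiest to see in a small sketch, assuming the standard MAPE formulation (average of |error / expected|, times 100):

```python
def mape(predicted: list[float], expected: list[float]) -> float:
    """Mean absolute error as a percentage of the expected values."""
    return 100.0 * sum(
        abs((e - p) / e) for p, e in zip(predicted, expected)
    ) / len(predicted)

# Example: relative errors are (0, 0, 2/5), so MAPE = 100 * 0.4 / 3
print(mape([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ≈ 13.33 (percent)
```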
### `msle`

**Function Signature:**

```mojo
fn msle(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Mean Squared Logarithmic Error (MSLE) between the predicted and expected tensors. MSLE is often used when the target variable spans several orders of magnitude, since errors are compared on a logarithmic scale; it also penalizes under-prediction more heavily than over-prediction of the same absolute size.
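A sketch of the computation, assuming the common log(1 + x) formulation (as used by most frameworks, so that zero-valued targets stay well-defined):

```python
import math

def msle(predicted: list[float], expected: list[float]) -> float:
    """Mean of squared differences between log(1 + x) of the inputs."""
    return sum(
        (math.log1p(p) - math.log1p(e)) ** 2
        for p, e in zip(predicted, expected)
    ) / len(predicted)

# log1p keeps a zero-valued target well-defined: log(1 + 0) = 0
print(msle([1.0], [0.0]))  # (log 2)^2 ≈ 0.4805
```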
### `bce`

**Function Signature:**

```mojo
fn bce(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Binary Cross-Entropy (BCE) between the predicted and expected tensors. BCE is a common loss function for binary classification problems, measuring the cross-entropy between the predicted probabilities and the true labels.
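The standard BCE formula, −mean(y·log p + (1−y)·log(1−p)), in a pure-Python sketch (the Mojo version works on Tensors and would also need to clip probabilities away from exactly 0 or 1 for numerical safety):

```python
import math

def bce(predicted: list[float], expected: list[float]) -> float:
    """Average negative log-likelihood of the true binary labels."""
    return -sum(
        e * math.log(p) + (1.0 - e) * math.log(1.0 - p)
        for p, e in zip(predicted, expected)
    ) / len(predicted)

# A prediction of 0.5 for a positive label costs -log(0.5) = log 2
print(bce([0.5], [1.0]))  # ≈ 0.6931
```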
### `cce`

**Function Signature:**

```mojo
fn cce(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Categorical Cross-Entropy (CCE) between the predicted and expected tensors. CCE is a widely used loss function for multi-class classification problems, measuring the cross-entropy between predicted class probabilities and true one-hot encoded labels.
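With one-hot labels, CCE reduces to the negative log of the probability assigned to the true class, averaged over samples. A pure-Python sketch of that reduction (rows here stand in for the Tensor dimensions the Mojo version uses):

```python
import math

def cce(predicted: list[list[float]], expected: list[list[float]]) -> float:
    """Average cross-entropy between probability rows and one-hot label rows."""
    total = 0.0
    for p_row, e_row in zip(predicted, expected):
        # Only the true class (e == 1) contributes for one-hot labels
        total -= sum(e * math.log(p) for p, e in zip(p_row, e_row) if e > 0.0)
    return total / len(predicted)

# The true class is assigned probability 0.8, so the loss is -log(0.8)
print(cce([[0.1, 0.8, 0.1]], [[0.0, 1.0, 0.0]]))  # ≈ 0.2231
```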
### `cfce`

**Function Signature:**

```mojo
fn cfce(predicted: Tensor, expected: Tensor) raises -> Tensor
```

**Brief Explanation:** Computes the Center-Focused Categorical Cross-Entropy (CFCE) between the predicted and expected tensors. This appears to be a custom loss function specific to this library, designed to weight the cross-entropy toward center classes in multi-class classification problems.
These loss functions provide a diverse set of options for different types of machine learning tasks, helping to guide the training process towards optimal model parameters.