Why do artificial neurons use a bias term? Shouldn't weights alone be enough?
The bias term in artificial neurons provides a way to shift the activation function to the left or right, independently of the inputs. Geometrically, it lets the model capture patterns whose decision boundaries do not pass through the origin. Without a bias, the weighted sum of the inputs is always zero when all inputs are zero, so the neuron's output is fixed at that point no matter what the weights are. This is particularly important in deep networks, where many layers of neurons are composed. Without bias terms, it would be more difficult for the network to learn complex relationships between inputs and outputs, so biases improve both the representational power and the generalization ability of artificial neural networks.
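As a minimal sketch of this point (assuming a sigmoid activation and NumPy, both chosen just for illustration), notice that with zero inputs the weights have no effect at all, while the bias still shifts the output:

```python
import numpy as np

def neuron(x, w, b):
    # Pre-activation: weighted sum of inputs plus bias.
    z = np.dot(w, x) + b
    # Sigmoid activation squashes z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

x = np.zeros(3)                  # all inputs are zero
w = np.array([0.5, -1.2, 0.8])   # arbitrary weights

print(neuron(x, w, b=0.0))   # 0.5 -- without a bias, no choice of weights can change this
print(neuron(x, w, b=-2.0))  # ~0.12 -- the bias shifts the output even for zero input
```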
What is the relationship between weights and biases in artificial neurons?
The relationship between weights and biases in artificial neurons is complementary: the weights determine the strength of the input-output relationship, while the bias determines the activation threshold. In other words, the weights control how much each input contributes to the final output, and the bias shifts the point at which the activation function begins to produce a significant output.

Weights are assigned to each input, with a larger magnitude indicating that the corresponding input has a greater impact on the output. The bias, on the other hand, is a single value added to the weighted sum of the inputs, shifting the activation function along its input axis.

Together, the weights and biases determine how an artificial neuron responds to its inputs. By adjusting them during training, the network learns to recognize patterns and perform its task.
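As an illustrative sketch of that adjustment process (the AND dataset, sigmoid activation, cross-entropy gradient, and learning rate here are all assumptions chosen for the demo, not anything specific from the answer above), a single neuron can learn both its weights and its bias by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = rng.normal(size=2)   # weights: one per input
b = 0.0                  # bias: shifts the activation threshold
lr = 0.5                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    p = sigmoid(X @ w + b)           # forward pass
    grad = p - y                     # gradient of cross-entropy w.r.t. pre-activation
    w -= lr * (X.T @ grad) / len(y)  # update weights
    b -= lr * grad.mean()            # update bias

print(np.round(sigmoid(X @ w + b)))  # -> [0. 0. 0. 1.]
```

Note that the learned bias ends up negative, acting as a threshold that only the combined weight of both active inputs can overcome, which is exactly the "bias as activation threshold" role described above.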
What are the pros and cons of the different commonly used activation functions?