One of the simplest first-order optimization techniques is the gradient descent method (Hao, 2020). This approach relies on the gradient $\nabla f(\theta)$ of the objective function $f$ with respect to the parameter $\theta$. Given a step size (learning rate) $\eta > 0$, the update rule is

$$\theta_{t+1} = \theta_t - \eta \, \nabla f(\theta_t),$$

where $\theta_t$ denotes the parameter value at iteration $t$.
This procedure adjusts the parameter in the direction of steepest descent of the objective function, that is, along the negative gradient, a behavior commonly illustrated through geometric representations (see Figure 1).
Figure 1. Illustration of the gradient descent method
Despite its simplicity, this method may sometimes take updates in undesirable directions or cause certain parameters to become stuck at fixed values within the model (for example, parameters trapped at zero).
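The update rule above can be sketched in a few lines of Python. This is a minimal illustrative example on the quadratic $f(\theta) = \theta^2$, whose gradient is $2\theta$; the function and parameter names are hypothetical and are not part of the app itself.

```python
def gradient_descent(grad, theta0, eta=0.1, steps=50):
    """Iterate theta <- theta - eta * grad(theta) for a fixed number of steps."""
    theta = theta0
    for _ in range(steps):
        theta = theta - eta * grad(theta)  # move along the negative gradient
    return theta

# f(theta) = theta**2 has gradient 2 * theta and its minimum at theta = 0.
minimum = gradient_descent(lambda t: 2 * t, theta0=5.0)
print(minimum)  # approaches the minimizer theta = 0
```

With $\eta = 0.1$ each step multiplies $\theta$ by $0.8$, so the iterates shrink geometrically toward the minimizer; a step size that is too large would instead cause the iterates to diverge.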
To cite this Streamlit application in your academic work, teaching, or research:
Llinás Marimón, H., & Llinás Solano, H. (2026). Interactive gradient descent in two (2D) and three (3D) dimensions [Streamlit application]. Streamlit. https://gradientdescent-jkz8f8lzb49vc9efxjye5h.streamlit.app/
@misc{llinas2026gradientdescent,
author = {Humberto Llinás Marimón and Humberto Llinás Solano},
title = {Interactive Gradient Descent in Two (2D) and Three (3D) Dimensions},
year = {2026},
howpublished = {\url{https://gradientdescent-jkz8f8lzb49vc9efxjye5h.streamlit.app/}},
note = {Streamlit application}
}
