Improved comments and documentation
- moved HTML documentation to a separate folder
- wrote better explanatory docstrings comments to some algorithms
jerela committed Nov 5, 2023
1 parent ed9b319 commit 737b64f
Showing 10 changed files with 41 additions and 21 deletions.
9 changes: 6 additions & 3 deletions mola.clustering.html → documentation/mola.clustering.html
Original file line number Diff line number Diff line change
@@ -36,17 +36,20 @@
Arguments:<br>
p1&nbsp;--&nbsp;list:&nbsp;the&nbsp;first&nbsp;point<br>
p2&nbsp;--&nbsp;list:&nbsp;the&nbsp;second&nbsp;point</tt></dd></dl>
<dl><dt><a name="-find_c_means"><strong>find_c_means</strong></a>(data: mola.matrix.Matrix, num_centers=2, max_iterations=100, distance_function=&lt;function distance_euclidean_pow at 0x00000256295E9550&gt;, initial_centers=None)</dt><dd><tt>Return&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;and&nbsp;the&nbsp;membership&nbsp;matrix&nbsp;of&nbsp;points&nbsp;using&nbsp;soft&nbsp;k-means&nbsp;clustering&nbsp;(also&nbsp;known&nbsp;as&nbsp;fuzzy&nbsp;c-means).<br>
<dl><dt><a name="-find_c_means"><strong>find_c_means</strong></a>(data: mola.matrix.Matrix, num_centers=2, max_iterations=100, distance_function=&lt;function distance_euclidean_pow at 0x000002348F1974C0&gt;, initial_centers=None)</dt><dd><tt>Return&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;and&nbsp;the&nbsp;membership&nbsp;matrix&nbsp;of&nbsp;points&nbsp;using&nbsp;soft&nbsp;k-means&nbsp;clustering&nbsp;(also&nbsp;known&nbsp;as&nbsp;fuzzy&nbsp;c-means).<br>
&nbsp;<br>
This&nbsp;algorithm&nbsp;is&nbsp;well-suited&nbsp;to&nbsp;cluster&nbsp;data&nbsp;that&nbsp;is&nbsp;not&nbsp;clearly&nbsp;separable&nbsp;into&nbsp;distinct&nbsp;clusters.<br>
Fuzzy&nbsp;c-means&nbsp;clustering&nbsp;is&nbsp;an&nbsp;iterative&nbsp;algorithm&nbsp;that&nbsp;finds&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;by&nbsp;first&nbsp;assigning&nbsp;each&nbsp;point&nbsp;to&nbsp;each&nbsp;cluster&nbsp;center&nbsp;with&nbsp;a&nbsp;certain&nbsp;membership&nbsp;value&nbsp;(0&nbsp;to&nbsp;1)&nbsp;and&nbsp;then&nbsp;updating&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;to&nbsp;be&nbsp;the&nbsp;weighted&nbsp;mean&nbsp;of&nbsp;the&nbsp;points&nbsp;assigned&nbsp;to&nbsp;them.&nbsp;This&nbsp;process&nbsp;is&nbsp;repeated&nbsp;for&nbsp;a&nbsp;set&nbsp;number&nbsp;of&nbsp;iterations&nbsp;or&nbsp;until&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;converge.&nbsp;The&nbsp;initial&nbsp;cluster&nbsp;centers&nbsp;are&nbsp;either&nbsp;randomized&nbsp;or&nbsp;given&nbsp;by&nbsp;the&nbsp;user.<br>
A&nbsp;major&nbsp;difference&nbsp;between&nbsp;hard&nbsp;k-means&nbsp;clustering&nbsp;and&nbsp;fuzzy&nbsp;c-means&nbsp;clustering&nbsp;is&nbsp;that&nbsp;in&nbsp;fuzzy&nbsp;c-means&nbsp;clustering,&nbsp;the&nbsp;points&nbsp;may&nbsp;belong&nbsp;partially&nbsp;to&nbsp;several&nbsp;clusters&nbsp;instead&nbsp;of&nbsp;belonging&nbsp;completely&nbsp;to&nbsp;one&nbsp;cluster,&nbsp;like&nbsp;in&nbsp;hard&nbsp;k-means&nbsp;clustering.&nbsp;Therefore,&nbsp;this&nbsp;algorithm&nbsp;is&nbsp;well-suited&nbsp;to&nbsp;cluster&nbsp;data&nbsp;that&nbsp;is&nbsp;not&nbsp;clearly&nbsp;separable&nbsp;into&nbsp;distinct&nbsp;clusters&nbsp;(e.g.,&nbsp;symmetric&nbsp;distribution&nbsp;of&nbsp;data&nbsp;points).<br>
&nbsp;&nbsp;&nbsp;&nbsp;<br>
Arguments:<br>
data&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;data&nbsp;containing&nbsp;the&nbsp;points&nbsp;to&nbsp;be&nbsp;clustered<br>
num_centers&nbsp;--&nbsp;int:&nbsp;the&nbsp;number&nbsp;of&nbsp;cluster&nbsp;centers&nbsp;to&nbsp;be&nbsp;found&nbsp;(default&nbsp;2)<br>
max_iterations&nbsp;--&nbsp;int:&nbsp;the&nbsp;maximum&nbsp;number&nbsp;of&nbsp;iterations&nbsp;where&nbsp;cluster&nbsp;centers&nbsp;are&nbsp;updated&nbsp;(default&nbsp;100)<br>
distance_function&nbsp;--&nbsp;function:&nbsp;the&nbsp;distance&nbsp;function&nbsp;to&nbsp;be&nbsp;used&nbsp;(default&nbsp;Euclidean&nbsp;distance);&nbsp;options&nbsp;are&nbsp;squared&nbsp;Euclidean&nbsp;distance&nbsp;(distance_euclidean_pow)&nbsp;and&nbsp;taxicab&nbsp;distance&nbsp;(distance_taxicab)<br>
initial_centers&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;initial&nbsp;cluster&nbsp;centers;&nbsp;if&nbsp;not&nbsp;specified,&nbsp;they&nbsp;are&nbsp;initialized&nbsp;randomly&nbsp;(default&nbsp;None)</tt></dd></dl>
<dl><dt><a name="-find_k_means"><strong>find_k_means</strong></a>(data: mola.matrix.Matrix, num_centers=2, max_iterations=100, distance_function=&lt;function distance_euclidean_pow at 0x00000256295E9550&gt;, initial_centers=None)</dt><dd><tt>Return&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;using&nbsp;hard&nbsp;k-means&nbsp;clustering.<br>
<dl><dt><a name="-find_k_means"><strong>find_k_means</strong></a>(data: mola.matrix.Matrix, num_centers=2, max_iterations=100, distance_function=&lt;function distance_euclidean_pow at 0x000002348F1974C0&gt;, initial_centers=None)</dt><dd><tt>Return&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;using&nbsp;hard&nbsp;k-means&nbsp;clustering.<br>
&nbsp;<br>
K-means&nbsp;clustering&nbsp;is&nbsp;an&nbsp;iterative&nbsp;algorithm&nbsp;that&nbsp;finds&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;by&nbsp;first&nbsp;assigning&nbsp;each&nbsp;point&nbsp;to&nbsp;the&nbsp;closest&nbsp;cluster&nbsp;center&nbsp;and&nbsp;then&nbsp;updating&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;to&nbsp;be&nbsp;the&nbsp;mean&nbsp;of&nbsp;the&nbsp;points&nbsp;assigned&nbsp;to&nbsp;them.&nbsp;This&nbsp;process&nbsp;is&nbsp;repeated&nbsp;for&nbsp;a&nbsp;set&nbsp;number&nbsp;of&nbsp;iterations&nbsp;or&nbsp;until&nbsp;the&nbsp;cluster&nbsp;centers&nbsp;converge.&nbsp;The&nbsp;initial&nbsp;cluster&nbsp;centers&nbsp;are&nbsp;either&nbsp;randomized&nbsp;or&nbsp;given&nbsp;by&nbsp;the&nbsp;user.<br>
&nbsp;<br>
Note&nbsp;that&nbsp;there&nbsp;is&nbsp;no&nbsp;guarantee&nbsp;that&nbsp;the&nbsp;algorithm&nbsp;converges.&nbsp;This&nbsp;is&nbsp;why&nbsp;you&nbsp;should&nbsp;use&nbsp;several&nbsp;restarts&nbsp;or&nbsp;fuzzy&nbsp;c-means&nbsp;(function&nbsp;<a href="#-find_c_means">find_c_means</a>()&nbsp;in&nbsp;this&nbsp;module).<br>
&nbsp;<br>
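The membership/update loop documented above for find_c_means can be sketched in plain Python. This is an illustrative sketch, not mola's implementation: it works on points given as plain lists rather than mola's `Matrix`, and the fuzzifier `m` and `seed` parameters are assumptions not present in the documented signature.

```python
import random

def sq_dist(p, q):
    # squared Euclidean distance; the square root is omitted because only
    # ratios and comparisons of distances are needed
    return sum((a - b) ** 2 for a, b in zip(p, q))

def find_c_means(points, num_centers=2, max_iterations=100, m=2.0,
                 initial_centers=None, seed=0):
    centers = [list(c) for c in (initial_centers
                                 or random.Random(seed).sample(points, num_centers))]
    dim = len(points[0])
    for _ in range(max_iterations):
        # membership step: u[j][i] in (0, 1), each row sums to 1
        u = []
        for p in points:
            d = [sq_dist(p, c) for c in centers]
            if min(d) == 0.0:  # point coincides with a center
                u.append([1.0 if i == d.index(0.0) else 0.0
                          for i in range(num_centers)])
                continue
            # u_i = 1 / sum_k (d_i / d_k)^(1/(m-1)); the exponent is 1/(m-1)
            # rather than 2/(m-1) because d already holds squared distances
            u.append([1.0 / sum((d[i] / d[k]) ** (1.0 / (m - 1.0))
                                for k in range(num_centers))
                      for i in range(num_centers)])
        # update step: centers move to the u^m-weighted mean of all points
        for i in range(num_centers):
            w = [u[j][i] ** m for j in range(len(points))]
            total = sum(w)
            centers[i] = [sum(w[j] * points[j][k] for j in range(len(points))) / total
                          for k in range(dim)]
    return centers, u
```

With two well-separated groups, each center settles near one group's mean while every point keeps a small nonzero membership in the far cluster.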
@@ -25,8 +25,9 @@
<font color="#ffffff" face="helvetica, arial"><big><strong>Functions</strong></big></font></td></tr>

<tr><td bgcolor="#eeaa77"><tt>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</tt></td><td>&nbsp;</td>
<td width="100%"><dl><dt><a name="-eigend"><strong>eigend</strong></a>(S: mola.matrix.Matrix)</dt><dd><tt>Calculate&nbsp;the&nbsp;eigenvalue&nbsp;decomposition&nbsp;of&nbsp;matrix&nbsp;S&nbsp;and&nbsp;return&nbsp;the&nbsp;matrix&nbsp;of&nbsp;eigenvalues&nbsp;E&nbsp;and&nbsp;matrix&nbsp;of&nbsp;eigenvectors&nbsp;V.<br>
Uses&nbsp;the&nbsp;Jacobi&nbsp;eigendecomposition&nbsp;algorithm.<br>
<td width="100%"><dl><dt><a name="-eigend"><strong>eigend</strong></a>(S: mola.matrix.Matrix)</dt><dd><tt>Return&nbsp;the&nbsp;matrix&nbsp;of&nbsp;eigenvalues&nbsp;E&nbsp;and&nbsp;matrix&nbsp;of&nbsp;eigenvectors&nbsp;V&nbsp;from&nbsp;the&nbsp;eigendecomposition&nbsp;of&nbsp;matrix&nbsp;S.<br>
&nbsp;<br>
This&nbsp;implementation&nbsp;uses&nbsp;the&nbsp;Jacobi&nbsp;eigendecomposition&nbsp;algorithm&nbsp;to&nbsp;compute&nbsp;the&nbsp;eigenvalue&nbsp;decomposition.<br>
&nbsp;<br>
Arguments:<br>
S&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;matrix&nbsp;whose&nbsp;eigenvalue&nbsp;decomposition&nbsp;is&nbsp;to&nbsp;be&nbsp;calculated<br>
@@ -38,8 +39,9 @@
Arguments:<br>
A&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;matrix&nbsp;whose&nbsp;dominant&nbsp;eigenvector&nbsp;is&nbsp;to&nbsp;be&nbsp;calculated</tt></dd></dl>
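The dominant-eigenvector computation documented above is typically done with power iteration; the sketch below illustrates the idea. The function name `power_iteration` and the plain list-of-rows matrix format are assumptions for illustration, not mola's API.

```python
def power_iteration(A, num_iterations=100):
    # A is a square matrix given as a list of rows; returns an approximation
    # of the eigenvector for the eigenvalue of largest magnitude.
    n = len(A)
    v = [1.0] * n  # arbitrary nonzero starting vector
    for _ in range(num_iterations):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # normalize to prevent overflow
    return v
```

Repeated multiplication by A amplifies the component along the dominant eigenvector; normalizing each step keeps the iterate bounded.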
<dl><dt><a name="-qrd"><strong>qrd</strong></a>(A_original: mola.matrix.Matrix)</dt><dd><tt>Return&nbsp;a&nbsp;two-element&nbsp;tuple&nbsp;of&nbsp;matrices.<br>
&nbsp;<br>
The&nbsp;elements&nbsp;of&nbsp;the&nbsp;tuple&nbsp;are&nbsp;the&nbsp;Q&nbsp;and&nbsp;R&nbsp;matrices&nbsp;from&nbsp;the&nbsp;QR&nbsp;decomposition&nbsp;of&nbsp;the&nbsp;input&nbsp;matrix.<br>
The&nbsp;original&nbsp;input&nbsp;matrix&nbsp;is&nbsp;decomposed&nbsp;into&nbsp;a&nbsp;rotation&nbsp;matrix&nbsp;Q&nbsp;and&nbsp;an&nbsp;upper&nbsp;triangular&nbsp;matrix&nbsp;R.<br>
The&nbsp;original&nbsp;input&nbsp;matrix&nbsp;is&nbsp;decomposed&nbsp;into&nbsp;an&nbsp;orthogonal&nbsp;matrix&nbsp;Q&nbsp;and&nbsp;an&nbsp;upper&nbsp;triangular&nbsp;matrix&nbsp;R&nbsp;using&nbsp;Householder&nbsp;reflections.<br>
The&nbsp;decomposition&nbsp;is&nbsp;valid&nbsp;for&nbsp;any&nbsp;real&nbsp;square&nbsp;matrix.<br>
&nbsp;<br>
Arguments:<br>
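The Householder-based QR decomposition described above can be sketched as follows. This is an illustrative reimplementation on plain lists of rows, not mola's actual code: each reflection zeroes the subdiagonal of one column of R while the same reflections accumulate into Q.

```python
def qrd(A):
    # QR decomposition of a real square matrix via Householder reflections.
    # A is a list of rows; returns (Q, R) with Q orthogonal, R upper triangular.
    n = len(A)
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        # Householder vector that zeroes R[k+1:, k]
        x = [R[i][k] for i in range(k, n)]
        alpha = -(1 if x[0] >= 0 else -1) * sum(xi * xi for xi in x) ** 0.5
        v = x[:]
        v[0] -= alpha  # v = x - alpha * e1, signed to avoid cancellation
        vnorm2 = sum(vi * vi for vi in v)
        if vnorm2 == 0.0:
            continue
        # apply H = I - 2 v v^T / (v^T v) to R from the left ...
        for j in range(n):
            dot = sum(v[i] * R[k + i][j] for i in range(n - k))
            for i in range(n - k):
                R[k + i][j] -= 2.0 * dot * v[i] / vnorm2
        # ... and accumulate it into Q from the right
        for i in range(n):
            dot = sum(v[j] * Q[i][k + j] for j in range(n - k))
            for j in range(n - k):
                Q[i][k + j] -= 2.0 * dot * v[j] / vnorm2
    return Q, R
```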
7 changes: 4 additions & 3 deletions mola.html → documentation/mola.html
@@ -17,9 +17,10 @@
<font color="#ffffff" face="helvetica, arial"><big><strong>Package Contents</strong></big></font></td></tr>

<tr><td bgcolor="#aa55cc"><tt>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</tt></td><td>&nbsp;</td>
<td width="100%"><table width="100%" summary="list"><tr><td width="25%" valign=top><a href="mola.decomposition.html">decomposition</a><br>
<td width="100%"><table width="100%" summary="list"><tr><td width="25%" valign=top><a href="mola.clustering.html">clustering</a><br>
<a href="mola.decomposition.html">decomposition</a><br>
</td><td width="25%" valign=top><a href="mola.matrix.html">matrix</a><br>
</td><td width="25%" valign=top><a href="mola.regression.html">regression</a><br>
<a href="mola.regression.html">regression</a><br>
</td><td width="25%" valign=top><a href="mola.utils.html">utils</a><br>
</td></tr></table></td></tr></table>
</td><td width="25%" valign=top></td></tr></table></td></tr></table>
</body></html>
File renamed without changes.
7 changes: 6 additions & 1 deletion mola.regression.html → documentation/mola.regression.html
@@ -19,6 +19,8 @@
<tr><td bgcolor="#eeaa77"><tt>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</tt></td><td>&nbsp;</td>
<td width="100%"><dl><dt><a name="-fit_nonlinear"><strong>fit_nonlinear</strong></a>(independent_values, dependent_values, h, J, initial=None, max_iters=100)</dt><dd><tt>Return&nbsp;the&nbsp;estimated&nbsp;parameters&nbsp;of&nbsp;a&nbsp;nonlinear&nbsp;model&nbsp;using&nbsp;the&nbsp;Gauss-Newton&nbsp;iteration&nbsp;algorithm.<br>
&nbsp;<br>
The&nbsp;algorithm&nbsp;uses&nbsp;Gauss-Newton&nbsp;iteration&nbsp;to&nbsp;find&nbsp;the&nbsp;parameters&nbsp;that&nbsp;minimize&nbsp;the&nbsp;least&nbsp;squares&nbsp;criterion&nbsp;||y-h(theta)||^2,&nbsp;where&nbsp;y&nbsp;is&nbsp;the&nbsp;vector&nbsp;of&nbsp;dependent&nbsp;values,&nbsp;h&nbsp;is&nbsp;the&nbsp;model&nbsp;function,&nbsp;and&nbsp;theta&nbsp;is&nbsp;the&nbsp;vector&nbsp;of&nbsp;the&nbsp;function's&nbsp;parameters.&nbsp;The&nbsp;estimates&nbsp;are&nbsp;improved&nbsp;iteratively&nbsp;by&nbsp;evaluating&nbsp;the&nbsp;gradient&nbsp;of&nbsp;the&nbsp;least&nbsp;squares&nbsp;criterion&nbsp;and&nbsp;using&nbsp;that&nbsp;gradient&nbsp;to&nbsp;update&nbsp;the&nbsp;parameter&nbsp;estimates&nbsp;in&nbsp;small&nbsp;steps.&nbsp;The&nbsp;gradient&nbsp;is&nbsp;approximated&nbsp;by&nbsp;Jacobian&nbsp;matrices.<br>
&nbsp;<br>
Arguments:<br>
independent_values&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;matrix&nbsp;of&nbsp;independent&nbsp;values<br>
dependent_values&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;matrix&nbsp;of&nbsp;dependent&nbsp;values<br>
@@ -42,9 +44,12 @@
<dl><dt><a name="-linear_least_squares"><strong>linear_least_squares</strong></a>(H: mola.matrix.Matrix, z: mola.matrix.Matrix, W=None)</dt><dd><tt>Return&nbsp;the&nbsp;parameters&nbsp;of&nbsp;a&nbsp;first-order&nbsp;polynomial&nbsp;in&nbsp;a&nbsp;tuple.<br>
The&nbsp;parameters&nbsp;are&nbsp;the&nbsp;slope&nbsp;(first&nbsp;element)&nbsp;and&nbsp;the&nbsp;intercept&nbsp;(second&nbsp;element).<br>
&nbsp;<br>
This&nbsp;implementation&nbsp;uses&nbsp;the&nbsp;least&nbsp;squares&nbsp;criterion&nbsp;to&nbsp;find&nbsp;the&nbsp;parameters&nbsp;that&nbsp;minimize&nbsp;||y-H*theta||^2,&nbsp;where&nbsp;y&nbsp;is&nbsp;the&nbsp;vector&nbsp;of&nbsp;dependent&nbsp;values,&nbsp;H&nbsp;is&nbsp;the&nbsp;observation&nbsp;matrix,&nbsp;and&nbsp;theta&nbsp;is&nbsp;the&nbsp;vector&nbsp;of&nbsp;parameters.<br>
In&nbsp;common&nbsp;terms,&nbsp;the&nbsp;algorithm&nbsp;finds&nbsp;the&nbsp;function&nbsp;parameters&nbsp;that&nbsp;minimize&nbsp;the&nbsp;squared&nbsp;sum&nbsp;of&nbsp;differences&nbsp;between&nbsp;the&nbsp;observed&nbsp;values&nbsp;and&nbsp;the&nbsp;values&nbsp;given&nbsp;by&nbsp;the&nbsp;function&nbsp;with&nbsp;the&nbsp;estimated&nbsp;parameters&nbsp;for&nbsp;the&nbsp;given&nbsp;independent&nbsp;values.<br>
&nbsp;<br>
Arguments:<br>
H&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;observation&nbsp;matrix&nbsp;of&nbsp;the&nbsp;linear&nbsp;system&nbsp;of&nbsp;equations<br>
z&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;measured&nbsp;values&nbsp;depicting&nbsp;the&nbsp;right&nbsp;side&nbsp;of&nbsp;the&nbsp;linear&nbsp;system&nbsp;of&nbsp;equations<br>
z&nbsp;--&nbsp;Matrix:&nbsp;the&nbsp;observed&nbsp;or&nbsp;dependent&nbsp;values&nbsp;depicting&nbsp;the&nbsp;right&nbsp;side&nbsp;of&nbsp;the&nbsp;linear&nbsp;system&nbsp;of&nbsp;equations<br>
W&nbsp;--&nbsp;Matrix:&nbsp;a&nbsp;weight&nbsp;matrix&nbsp;containing&nbsp;the&nbsp;weights&nbsp;for&nbsp;observations&nbsp;in&nbsp;its&nbsp;diagonals<br>
&nbsp;<br>
If&nbsp;no&nbsp;'W'&nbsp;is&nbsp;given,&nbsp;an&nbsp;identity&nbsp;matrix&nbsp;is&nbsp;assumed&nbsp;and&nbsp;all&nbsp;observations&nbsp;are&nbsp;equally&nbsp;weighted.</tt></dd></dl>
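The weighted least squares criterion described above leads to the normal equations (H^T W H) theta = H^T W z; for a two-column observation matrix the system is 2x2 and can be solved in closed form. A sketch under those assumptions (plain-list inputs with rows [x, 1] instead of mola's `Matrix`, diagonal `W`), not the library's actual code:

```python
def linear_least_squares(H, z, W=None):
    # Weighted least squares for a two-column observation matrix H:
    # solves (H^T W H) theta = H^T W z with a closed-form 2x2 inverse.
    n = len(H)
    if W is None:
        w = [1.0] * n  # identity weight matrix: all observations weighted equally
    else:
        w = [W[i][i] for i in range(n)]  # diagonal weights
    # normal-equation terms: [[a, b], [b, c]] theta = [p, q]
    a = sum(w[i] * H[i][0] * H[i][0] for i in range(n))
    b = sum(w[i] * H[i][0] * H[i][1] for i in range(n))
    c = sum(w[i] * H[i][1] * H[i][1] for i in range(n))
    p = sum(w[i] * H[i][0] * z[i] for i in range(n))
    q = sum(w[i] * H[i][1] * z[i] for i in range(n))
    det = a * c - b * b
    slope = (c * p - b * q) / det
    intercept = (a * q - b * p) / det
    return slope, intercept
```

For points on y = 2x + 1 this recovers slope 2 and intercept 1 exactly.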
File renamed without changes.
4 changes: 1 addition & 3 deletions mola/__init__.py
@@ -1,3 +1 @@

from mola.matrix import Matrix
from copy import deepcopy
from mola.matrix import Matrix
12 changes: 8 additions & 4 deletions mola/clustering.py
@@ -1,12 +1,13 @@
from mola.matrix import Matrix
from mola.utils import zeros, get_mean, uniques, randoms
from random import random
from copy import deepcopy
import math
from mola.matrix import Matrix
from mola.utils import zeros, get_mean, uniques, randoms

# Calculate the squared Euclidean distance between two points. Because the result is only used to compare distances, omitting the square root does not change the outcome and is faster to compute.
def distance_euclidean_pow(p1,p2) -> float:
"""Return the squared Euclidean distance between two points.
"""
Return the squared Euclidean distance between two points.
If you want to retrieve the actual Euclidean distance, take the square root of the result. However, using this squared version is computationally more efficient.
Arguments:
@@ -43,6 +44,8 @@ def find_k_means(data: Matrix, num_centers = 2, max_iterations = 100, distance_f
"""
Return the cluster centers using hard k-means clustering.
K-means clustering is an iterative algorithm that finds the cluster centers by first assigning each point to the closest cluster center and then updating the cluster centers to be the mean of the points assigned to them. This process is repeated for a set number of iterations or until the cluster centers converge. The initial cluster centers are either randomized or given by the user.
Note that there is no guarantee that the algorithm converges. This is why you should use several restarts or fuzzy c-means (function find_c_means() in this module).
Arguments:
Expand Down Expand Up @@ -115,7 +118,8 @@ def find_c_means(data: Matrix, num_centers = 2, max_iterations = 100, distance_f
"""
Return the cluster centers and the membership matrix of points using soft k-means clustering (also known as fuzzy c-means).
This algorithm is well-suited to cluster data that is not clearly separable into distinct clusters.
Fuzzy c-means clustering is an iterative algorithm that finds the cluster centers by first assigning each point to each cluster center with a certain membership value (0 to 1) and then updating the cluster centers to be the weighted mean of the points assigned to them. This process is repeated for a set number of iterations or until the cluster centers converge. The initial cluster centers are either randomized or given by the user.
A major difference between hard k-means clustering and fuzzy c-means clustering is that in fuzzy c-means clustering, the points may belong partially to several clusters instead of belonging completely to one cluster, like in hard k-means clustering. Therefore, this algorithm is well-suited to cluster data that is not clearly separable into distinct clusters (e.g., symmetric distribution of data points).
Arguments:
data -- Matrix: the data containing the points to be clustered
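The assign-then-update loop documented for find_k_means can be sketched as below. This is an illustrative reimplementation on plain lists, not mola's code; the `seed` parameter and the choice to leave an empty cluster's center in place are assumptions.

```python
import random

def find_k_means(data, num_centers=2, max_iterations=100, initial_centers=None, seed=0):
    # Hard k-means: assign each point to its closest center, then move each
    # center to the mean of its assigned points; repeat until convergence.
    rng = random.Random(seed)
    centers = [list(p) for p in (initial_centers or rng.sample(data, num_centers))]
    dim = len(data[0])
    for _ in range(max_iterations):
        # assignment step: index of the closest center for each point
        labels = [min(range(num_centers),
                      key=lambda i: sum((p[k] - centers[i][k]) ** 2 for k in range(dim)))
                  for p in data]
        # update step: each center becomes the mean of its assigned points
        new_centers = []
        for i in range(num_centers):
            members = [p for p, l in zip(data, labels) if l == i]
            new_centers.append(
                [sum(p[k] for p in members) / len(members) for k in range(dim)]
                if members else centers[i])  # keep an empty cluster's center in place
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers
```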
8 changes: 5 additions & 3 deletions mola/decomposition.py
@@ -9,8 +9,9 @@
def qrd(A_original: Matrix):
"""
Return a two-element tuple of matrices.
The elements of the tuple are the Q and R matrices from the QR decomposition of the input matrix.
The original input matrix is decomposed into a rotation matrix Q and an upper triangular matrix R.
The original input matrix is decomposed into an orthogonal matrix Q and an upper triangular matrix R using Householder reflections.
The decomposition is valid for any real square matrix.
Arguments:
Expand Down Expand Up @@ -62,8 +63,9 @@ def qrd(A_original: Matrix):

def eigend(S: Matrix):
"""
Calculate the eigenvalue decomposition of matrix S and return the matrix of eigenvalues E and matrix of eigenvectors V.
Uses the Jacobi eigendecomposition algorithm.
Return the matrix of eigenvalues E and matrix of eigenvectors V from the eigendecomposition of matrix S.
This implementation uses the Jacobi eigendecomposition algorithm to compute the eigenvalue decomposition.
Arguments:
S -- Matrix: the matrix whose eigenvalue decomposition is to be calculated
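A compact sketch of the Jacobi eigendecomposition referenced in the docstring above: each plane rotation zeroes the currently largest off-diagonal element of a symmetric matrix until the matrix is (nearly) diagonal. Plain lists instead of mola's `Matrix`, and the sweep limit is an assumption for illustration.

```python
import math

def eigend(S, sweeps=50):
    # Jacobi eigendecomposition of a real symmetric matrix S (list of rows).
    n = len(S)
    A = [row[:] for row in S]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps * n * n):
        # locate the largest off-diagonal entry
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < 1e-12:
            break  # off-diagonal mass is negligible: A is diagonal
        # rotation angle that annihilates A[p][q]
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # columns p and q of A (A <- A J)
            Akp, Akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * Akp - s * Akq, s * Akp + c * Akq
        for k in range(n):  # rows p and q of A (A <- J^T A)
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * Apk - s * Aqk, s * Apk + c * Aqk
        for k in range(n):  # accumulate rotations into V (V <- V J)
            Vkp, Vkq = V[k][p], V[k][q]
            V[k][p], V[k][q] = c * Vkp - s * Vkq, s * Vkp + c * Vkq
    return A, V  # A is now (nearly) diagonal: eigenvalues on the diagonal
```

The eigenvalues end up on the diagonal of the first returned matrix; the columns of the second are the corresponding eigenvectors.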
7 changes: 6 additions & 1 deletion mola/regression.py
@@ -7,9 +7,12 @@ def linear_least_squares(H: Matrix, z: Matrix, W=None):
Return the parameters of a first-order polynomial in a tuple.
The parameters are the slope (first element) and the intercept (second element).
This implementation uses the least squares criterion to find the parameters that minimize ||y-H*theta||^2, where y is the vector of dependent values, H is the observation matrix, and theta is the vector of parameters.
In common terms, the algorithm finds the function parameters that minimize the squared sum of differences between the observed values and the values given by the function with the estimated parameters for the given independent values.
Arguments:
H -- Matrix: the observation matrix of the linear system of equations
z -- Matrix: the measured values depicting the right side of the linear system of equations
z -- Matrix: the observed or dependent values depicting the right side of the linear system of equations
W -- Matrix: a weight matrix containing the weights for observations in its diagonals
If no 'W' is given, an identity matrix is assumed and all observations are equally weighted.
@@ -70,6 +73,8 @@ def fit_nonlinear(independent_values, dependent_values, h, J, initial=None, max_
"""
Return the estimated parameters of a nonlinear model using the Gauss-Newton iteration algorithm.
The algorithm uses Gauss-Newton iteration to find the parameters that minimize the least squares criterion ||y-h(theta)||^2, where y is the vector of dependent values, h is the model function, and theta is the vector of the function's parameters. The estimates are improved iteratively by evaluating the gradient of the least squares criterion and using that gradient to update the parameter estimates in small steps. The gradient is approximated by Jacobian matrices.
Arguments:
independent_values -- Matrix: the matrix of independent values
dependent_values -- Matrix: the matrix of dependent values
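The Gauss-Newton iteration described in the fit_nonlinear docstring can be illustrated for the scalar-parameter case, where the Jacobian is a column vector and (J^T J)^(-1) reduces to a scalar division. The name `gauss_newton` and this simplified signature are assumptions for illustration; the module's fit_nonlinear handles parameter vectors via full Jacobian matrices.

```python
def gauss_newton(x, y, h, J, theta0, max_iters=100):
    # Gauss-Newton for a one-parameter model h(x, theta):
    # theta <- theta + (J^T J)^(-1) J^T r, with residuals r = y - h(x, theta).
    theta = theta0
    for _ in range(max_iters):
        r = [yi - h(xi, theta) for xi, yi in zip(x, y)]
        j = [J(xi, theta) for xi in x]  # Jacobian (a column vector here)
        jtj = sum(ji * ji for ji in j)
        if jtj == 0.0:
            break  # flat model: no descent direction
        step = sum(ji * ri for ji, ri in zip(j, r)) / jtj
        theta += step
        if abs(step) < 1e-12:
            break  # converged
    return theta
```

For a model that happens to be linear in theta, such as h(x, theta) = theta * x^2, the iteration recovers the exact least squares solution in a single step.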
