docs: fix typos
DanielVandH committed Oct 1, 2023
1 parent 0865eb8 commit 1d80fcd
Showing 9 changed files with 16 additions and 16 deletions.
10 changes: 5 additions & 5 deletions Changelog.md
@@ -48,11 +48,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- * The `AdaptiveWNGrad` stepsize is now availablbe as a new stepsize functor.
+ * The `AdaptiveWNGrad` stepsize is now available as a new stepsize functor.

### Fixed

- * Levenberg-Marquardt now posesses its parameters `initial_residual_values` and
+ * Levenberg-Marquardt now possesses its parameters `initial_residual_values` and
`initial_jacobian_f` also as keyword arguments, such that their default initialisations
can be adapted, if necessary
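A hedged sketch of what these keyword arguments enable (the manifold `M`, vector-valued cost `F`, Jacobian `jacF`, and start point `p0` are illustrative placeholders, not part of this commit):

``` julia
# Sketch, assuming Manopt.jl with a nonlinear least-squares cost F(M, p)
# and Jacobian jacF(M, p): pre-compute the initial values yourself so the
# default initialisations can be adapted if necessary.
q = LevenbergMarquardt(M, F, jacF, p0;
    initial_residual_values=F(M, p0),  # replaces the default residual initialisation
    initial_jacobian_f=jacF(M, p0),    # replaces the default Jacobian initialisation
)
```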

@@ -84,7 +84,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- * More details on the Count and Cache toturial
+ * More details on the Count and Cache tutorial

### Changed

@@ -104,7 +104,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
using LRU Caches as a weak dependency. For now this works with cost and gradient evaluations
* A `ManifoldCountObjective` as a decorator for objectives to enable counting of calls to for example the cost and the gradient
* adds a `return_objective` keyword, that switches the return of a solver to a tuple `(o, s)`,
-   where `o` is the (possibly decorated) objective, and `s` os the “classical” solver return (state or point).
+   where `o` is the (possibly decorated) objective, and `s` is the “classical” solver return (state or point).
This way the counted values can be accessed and the cache can be reused.
* change solvers on the mid level (form `solver(M, objective, p)`) to also accept decorated objectives
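A minimal sketch of the new `return_objective` behaviour described above (the manifold, cost, and gradient here are illustrative placeholders, not from the changelog):

``` julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]                                 # illustrative cost
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])  # its Riemannian gradient (sketch)
p0 = [1.0, 0.0, 0.0]

# With return_objective=true the solver returns the tuple (o, s):
# o is the (possibly decorated) objective, s the “classical” return value.
o, s = gradient_descent(M, f, grad_f, p0; return_objective=true)
```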

@@ -123,7 +123,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- * the sub solver for `trust_regions` is now costumizable, i.e. can be exchanged.
+ * the sub solver for `trust_regions` is now customizable, i.e. can be exchanged.

### Changed

2 changes: 1 addition & 1 deletion docs/src/plans/problem.md
@@ -18,7 +18,7 @@ Usually, such a problem is determined by the manifold or domain of the optimisat
DefaultManoptProblem
```

- The exception to these are the primal dual-based solvers ([Chambolle-Pock](@ref ChambollePockSolver) and the [PD Semismooth Newton](@ref PDRSSNSolver)]), which both need two manifolds as their domain(s), hence thre also exists a
+ The exception to these are the primal dual-based solvers ([Chambolle-Pock](@ref ChambollePockSolver) and the [PD Semismooth Newton](@ref PDRSSNSolver)]), which both need two manifolds as their domain(s), hence there also exists a

```@docs
TwoManifoldProblem
2 changes: 1 addition & 1 deletion docs/src/references.md
@@ -1,6 +1,6 @@
# Literature

- This is all literature mentioned / referenced in the `Manopt.jl` documenation.
+ This is all literature mentioned / referenced in the `Manopt.jl` documentation.
Usually you will find a small reference section at the end of every documentation page that contains references.

```@bibliography
2 changes: 1 addition & 1 deletion docs/src/solvers/index.md
@@ -101,7 +101,7 @@ Then it would call the iterate process.

### The manual call

- If you generate the correctsponding `problem` and `state` as the previous step does, you can
+ If you generate the corresponding `problem` and `state` as the previous step does, you can
also use the third (lowest level) and just call

```
2 changes: 1 addition & 1 deletion docs/src/solvers/truncated_conjugate_gradient_descent.md
@@ -109,7 +109,7 @@ is to stop as soon as an iteration ``k`` is reached for which

holds, where ``0 < κ < 1`` and ``θ > 0`` are chosen in advance. This is
realized in this method by [`StopWhenResidualIsReducedByFactorOrPower`](@ref).
- It can be shown shown that under appropriate conditions the iterates ``x_k``
+ It can be shown that under appropriate conditions the iterates ``x_k``
of the underlying trust-region method converge to nondegenerate critical
points with an order of convergence of at least ``\min \left( θ + 1, 2 \right)``,
see [Absil, Mahony, Sepulchre, Princeton University Press, 2008](@cite AbsilMahonySepulchre:2008).
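The inequality elided above this hunk is, in the standard Steihaug–Toint form this stopping criterion usually takes (a reconstruction under that assumption, not verbatim from the file):

``` math
\Vert r_k \Vert \leq \Vert r_0 \Vert \min \left( κ, \Vert r_0 \Vert^{θ} \right)
```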
8 changes: 4 additions & 4 deletions docs/src/tutorials/GeodesicRegression.md
@@ -34,7 +34,7 @@ highlighted = 4;
## Time Labeled Data

If for each data item $d_i$ we are also given a time point $t_i\in\mathbb R$, which are pairwise different,
- then we can use the least squares error to state the objetive function as [Fletcher:2013](@cite)
+ then we can use the least squares error to state the objective function as [Fletcher:2013](@cite)

``` math
F(p,X) = \frac{1}{2}\sum_{i=1}^n d_{\mathcal M}^2(γ_{p,X}(t_i), d_i),
@@ -362,7 +362,7 @@ where $t = (t_1,\ldots,t_n) \in \mathbb R^n$ is now an additional parameter of t
We write $F_1(p, X)$ to refer to the function on the tangent bundle for fixed values of $t$ (as the one in the last part)
and $F_2(t)$ for the function $F(p, X, t)$ as a function in $t$ with fixed values $(p, X)$.

- For the Euclidean case, there is no neccessity to optimize with respect to $t$, as we saw
+ For the Euclidean case, there is no necessity to optimize with respect to $t$, as we saw
above for the initialization of the fixed time points.

On a Riemannian manifold this can be stated as a problem on the product manifold $\mathcal N = \mathrm{T}\mathcal M \times \mathbb R^n$, i.e.
@@ -380,7 +380,7 @@ N = M × Euclidean(length(t2))
```

In this tutorial we present an approach to solve this using an alternating gradient descent scheme.
- To be precise, we define the cost funcion now on the product manifold
+ To be precise, we define the cost function now on the product manifold

``` julia
struct RegressionCost2{T}
@@ -430,7 +430,7 @@ function (a::RegressionGradient2a!)(N, Y, x)
end
```

- Finally, we addionally look for a fixed point $x=(p,X) ∈ \mathrm{T}\mathcal M$ at
+ Finally, we additionally look for a fixed point $x=(p,X) ∈ \mathrm{T}\mathcal M$ at
the gradient with respect to $t∈\mathbb R^n$, i.e. the second component, which is given by

``` math
2 changes: 1 addition & 1 deletion docs/src/tutorials/HowToDebug.md
@@ -74,7 +74,7 @@ There are two more advanced variants that can be used. The first is a tuple of a

We can for example change the way the `` is printed by adding a format string
and use [`DebugCost`](@ref)`()` which is equivalent to using `:Cost`.
- Especially with the format change, the lines are more coniststent in length.
+ Especially with the format change, the lines are more consistent in length.

``` julia
p2 = exact_penalty_method(
2 changes: 1 addition & 1 deletion docs/src/tutorials/InplaceGradient.md
@@ -1,7 +1,7 @@
# Speedup using Inplace Evaluation
Ronny Bergmann

- When it comes to time critital operations, a main ingredient in Julia is given by
+ When it comes to time critical operations, a main ingredient in Julia is given by
mutating functions, i.e. those that compute in place without additional memory
allocations. In the following, we illustrate how to do this with `Manopt.jl`.

2 changes: 1 addition & 1 deletion joss/paper.md
@@ -108,7 +108,7 @@ since its norm is approximately `0.858`. But even projecting this back onto the

In the following figure the data `pts` (teal) and the resulting mean (orange) as well as the projected Euclidean mean (small, cyan) are shown.

- ![40 random points `pts` and the result from the gradient descent to compute the `x_mean` (orange) compared to a projection of their (Eucliean) mean onto the sphere (cyan).](src/img/MeanIllustr.png)
+ ![40 random points `pts` and the result from the gradient descent to compute the `x_mean` (orange) compared to a projection of their (Euclidean) mean onto the sphere (cyan).](src/img/MeanIllustr.png)

In order to print the current iteration number, change, and cost every iteration, as well as the stopping reason, you can provide a `debug` keyword with the corresponding symbols interleaved with strings. The Symbol `:Stop` indicates that the reason for stopping should be printed at the end. The last integer in this array specifies that debugging information should be printed only every $i$th iteration.
While `:x` could be used to also print the current iterate, this usually takes up too much space.
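A hedged sketch of such a `debug` keyword (the solver, cost, gradient, and start point are placeholders; the symbol/string/integer pattern follows the description above):

``` julia
# Print iteration number, change, and cost, interleaved with separator strings,
# only every 10th iteration, plus the stopping reason (:Stop) at the end.
x_mean = gradient_descent(M, f, grad_f, p0;
    debug=[:Iteration, " | ", :Change, " | ", :Cost, "\n", 10, :Stop],
)
```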