
Getting the docs up and running #225

Merged: 10 commits merged into master from pkm/docs on Jun 21, 2016

Conversation

@pkofod (Member) commented Jun 14, 2016

It's easier to comment on the docs inline if there's a PR, as mentioned in #224.

matrix. In Gradient Descent, $P$ is simply an appropriately dimensioned identity matrix.
This means that we go in the exact opposite direction of the gradient. This means
that we do not use the curvature information from the Hessian, or an approximation
of it. While it does seem quite logical to go in the opposite direction of the fastest
A Contributor commented on this hunk:

Repetition of "This means" in two consecutive sentences.
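
To make the documented step concrete: below is a minimal, self-contained Julia sketch of the gradient descent update this hunk describes. It is illustrative only, not Optim.jl's implementation; the test function `f`, its gradient `∇f`, and the fixed step size `α` are assumptions for the example.

```julia
# Minimal sketch (not Optim.jl code): with P = I, the search direction is
# just the negative gradient, so no curvature information enters the step.
f(x)  = (x[1] - 1)^2 + 4 * (x[2] + 2)^2    # illustrative test function
∇f(x) = [2 * (x[1] - 1), 8 * (x[2] + 2)]   # its gradient, hand-coded

function gradient_descent(∇f, x0; α = 0.1, maxiter = 200)
    x = copy(x0)
    for _ in 1:maxiter
        x -= α * ∇f(x)   # move in the exact opposite direction of the gradient
    end
    return x
end

gradient_descent(∇f, [0.0, 0.0])   # converges toward the minimizer [1.0, -2.0]
```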

@codecov-io commented Jun 14, 2016

Current coverage is 84.17%

Merging #225 into master will decrease coverage by 0.04%

@@             master       #225   diff @@
==========================================
  Files            30         30          
  Lines          1895       1896     +1   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
  Hits           1596       1596          
- Misses          299        300     +1   
  Partials          0          0          

Powered by Codecov. Last updated by 2e7d238...966a89e


$ m_k(s) = f(x_n) + \nabla f(x_n)^\top \textbf{s} + \frac{1}{2} \textbf{s}^\top H(x_n) \textbf{s} $

For functions where $H(x_n)$ is difficult, or computationally expensive, to optain, we might
A Contributor commented on this hunk:

-> "or computationally expensive to obtain"

@pkofod (Member, Author) commented Jun 17, 2016

I think Documenter might have been updated :) It looks different now, but in a good way.

@pkofod (Member, Author) commented Jun 17, 2016

Commented out some sections I hadn't written yet. I need to be a bit more sure what is going on in conjugate gradient to formulate something sensible. I know how it works, but I'm not sure how to present it in a paragraph or two.

@pkofod (Member, Author) commented Jun 21, 2016

I'm going to push a version of this ASAP, and then the missing solvers can be added over the summer. I am just trying to figure out how to get the Travis settings for Documenter set up without admin rights :)

@pkofod merged commit 24ad090 into master on Jun 21, 2016
@pkofod (Member, Author) commented Jun 21, 2016

Let's see if it works (and if it doesn't, hope that Hatherly is not too busy with JCon :))

@pkofod deleted the pkm/docs branch on April 8, 2017