first pass on the module #14
Conversation
README.md
Outdated
pkg> test KnetNLPModels
```

This step-by-step example suppose prior knowledge [julia](https://julialang.org/) and [Knet.jl](https://github.com/denizyuret/Knet.jl.git).
Suggested change:
```diff
- This step-by-step example suppose prior knowledge [julia](https://julialang.org/) and [Knet.jl](https://github.com/denizyuret/Knet.jl.git).
+ ## Example
+ This step-by-step example assumes prior knowledge with [julia](https://julialang.org/) and [Knet.jl](https://github.com/denizyuret/Knet.jl.git).
```
- the values of the neural network variable $w$;
- the objective function $\mathcal{L}(X,Y;w)$ of the loss function $\mathcal{L}$ at the point $w$ for a given minibatch $X,Y$
- the gradient $\nabla \mathcal{L}(X,Y;w)$ of the loss function at the point $w$ for a given mini-batch $X,Y$
A `KnetNLPModel` gives the user access to:
Suggested change:
```diff
- A `KnetNLPModel` gives the user access to:
+ ## Synopsis
+ A `KnetNLPModel` gives the user access to:
```
Move the "Synopsis" section above the "Example" header.
README.md
Outdated
- the gradient $\nabla \mathcal{L}(X,Y;w)$ of the loss function at the point $w$ for a given mini-batch $X,Y$
A `KnetNLPModel` gives the user access to:
- the values of the neural network variables/weights `w`;
- the objective/loss function `L(X, Y; w)` of the loss function `L` at the point `w` for a given minibatch `(X,Y)`
Suggested change:
```diff
- - the objective/loss function `L(X, Y; w)` of the loss function `L` at the point `w` for a given minibatch `(X,Y)`
+ - the value of the objective/loss function `L(X, Y; w)` at `w` for a given minibatch `(X,Y)`;
```
README.md
Outdated
A `KnetNLPModel` gives the user access to:
- the values of the neural network variables/weights `w`;
- the objective/loss function `L(X, Y; w)` of the loss function `L` at the point `w` for a given minibatch `(X,Y)`
- the gradient `∇L(X, Y; w)` of the objective/loss function at the point `w` for a given mini-batch `(X,Y)`
Suggested change:
```diff
- - the gradient `∇L(X, Y; w)` of the objective/loss function at the point `w` for a given mini-batch `(X,Y)`
+ - the gradient `∇L(X, Y; w)` of the objective/loss function at `w` for a given mini-batch `(X,Y)`.
```
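For context, the access described in this list goes through the standard NLPModels.jl API. A minimal sketch, assuming a `KnetNLPModel` named `nlp` has already been constructed from a Knet chain (the variable names here are illustrative, not from the README):

```julia
using NLPModels  # provides obj and grad for any AbstractNLPModel

# assuming `nlp` is a previously built KnetNLPModel
w = copy(nlp.meta.x0)  # current values of the network weights
fw = obj(nlp, w)       # loss L(X, Y; w) on the current training minibatch
gw = grad(nlp, w)      # gradient ∇L(X, Y; w) on the same minibatch
```

Because the model satisfies the NLPModels interface, any JuliaSmoothOptimizers solver that consumes `obj`/`grad` can train the network directly.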
README.md
Outdated
- Switch the minibatch used to evaluate the neural network
- Measure the neural network's accuracy at the current point for a given testing mini-batch
In addition, it provides tools to:
- Switch the minibatch used to evaluate the neural network;
Suggested change:
```diff
- - Switch the minibatch used to evaluate the neural network;
+ - switch the minibatch used to evaluate the neural network;
```
README.md
Outdated
## Default behavior
By default, the training minibatch that evaluates the neural network doesn't change between evaluations.
To change the training minibatch use:
Suggested change:
```diff
- To change the training minibatch use:
+ To change the training minibatch, use:
```
README.md
Outdated
```
The size of the new minibatch is the size define previously.
Suggested change:
```diff
- The size of the new minibatch is the size define previously.
+ The size of the new minibatch is the size defined earlier.
```
README.md
Outdated
The size of the training and testing minibatch can be set to `1/denominator` the size of the dataset with:
Suggested change:
```diff
- The size of the training and testing minibatch can be set to `1/denominator` the size of the dataset with:
+ The size of the training and test minibatches can be set to `1/p` the size of the dataset with:
```
README.md
Outdated
The size of the training and testing minibatch can be set to `1/denominator` the size of the dataset with:
```julia
set_size_minibatch!(DenseNetNLPModel, denominator) # denominator::Int > 1
```
Suggested change:
```diff
- set_size_minibatch!(DenseNetNLPModel, denominator) # denominator::Int > 1
+ set_size_minibatch!(DenseNetNLPModel, p) # p::Int > 1
```
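To illustrate the semantics being discussed: with a dataset of `N` training samples, passing `p` should yield minibatches of roughly `N ÷ p` samples. A hypothetical usage sketch (the model name and dataset size are assumptions for illustration, not taken from the README):

```julia
# hypothetical: DenseNetNLPModel wraps a dataset of 60_000 training samples
set_size_minibatch!(DenseNetNLPModel, 100)  # p = 100 → minibatches of ≈ 600 samples
```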
@@ -1,118 +1,118 @@
# KnetNLPModels.jl Tutorial
Same changes here.
You skipped a few of my comments. Please also apply them to the tutorial.
README.md
Outdated
A `KnetNLPModel` gives the user access to:
- the values of the neural network variables/weights `w`;
- the value of the objective/loss function `L(X, Y; w)` at `w` for a given minibatch `(X,Y)`
Suggested change:
```diff
- - the value of the objective/loss function `L(X, Y; w)` at `w` for a given minibatch `(X,Y)`
+ - the value of the objective/loss function `L(X, Y; w)` at `w` for a given minibatch `(X,Y)`;
```
README.md
Outdated
A `KnetNLPModel` gives the user access to:
- the values of the neural network variables/weights `w`;
- the value of the objective/loss function `L(X, Y; w)` at `w` for a given minibatch `(X,Y)`
- the gradient `∇L(X, Y; w)` of the objective/loss function at `w` for a given mini-batch `(X,Y)`
Suggested change:
```diff
- - the gradient `∇L(X, Y; w)` of the objective/loss function at `w` for a given mini-batch `(X,Y)`
+ - the gradient `∇L(X, Y; w)` of the objective/loss function at `w` for a given mini-batch `(X,Y)`.
```
I already suggested the changes above.
README.md
Outdated
## Define the layers of interest
The following code define a dense layer as an evaluable julia structure.
This step-by-step example suppose prior knowledge [julia](https://julialang.org/) and [Knet.jl](https://github.com/denizyuret/Knet.jl.git).
Suggested change:
```diff
- This step-by-step example suppose prior knowledge [julia](https://julialang.org/) and [Knet.jl](https://github.com/denizyuret/Knet.jl.git).
+ This step-by-step example assumes prior knowledge of [julia](https://julialang.org/) and [Knet.jl](https://github.com/denizyuret/Knet.jl.git).
```
Also suggested before.
Sorry, I did not see those ones.
I made the PR #16 from this PR.
Please don't do that. Keep PRs minimal and orthogonal. We're ready to merge this one.