
Add way to do cross check implementations #133

Closed
zhoonit opened this issue Jun 8, 2020 · 7 comments
Labels: RELEASE CRITICAL Need SR after merge!

@zhoonit
Contributor

zhoonit commented Jun 8, 2020

Currently there is only a limited way to check whether the current implementation is logically correct (aside from the fact that it runs and produces fairly good results).

We need to cross-check the implementation against other frameworks.

@kparichay
Member

kparichay commented Jun 25, 2020

Make tensorflow operations for each layer:

  1. Forward output
  2. Gradient for the weights in layer
  3. Updated weight

Networks to use:

  1. 1 FC layer with varied activations and loss, with training - tests the FC layer, activations, and loss
  2. 2 FC layers with training - tests back-propagation of the gradient from the FC layer

Add more for other layers.
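The three per-layer reference values above (forward output, weight gradient, updated weight) can be sketched for a single FC layer trained with MSE loss and plain SGD. This is an illustrative numpy stand-in for what a tensorflow golden-data generator would compute, not nntrainer's actual test harness; `fc_reference` and all shapes here are hypothetical:

```python
import numpy as np

def fc_reference(x, W, b, y_true, lr=0.1):
    """Reference values for one FC layer trained with MSE loss and SGD:
    forward output, weight/bias gradients, and the updated parameters."""
    y = x @ W + b                       # 1. forward output
    d_y = 2.0 * (y - y_true) / y.size   # dL/dy for mean-squared-error loss
    d_W = x.T @ d_y                     # 2. gradient for the layer weights
    d_b = d_y.sum(axis=0)
    W_new = W - lr * d_W                # 3. updated weight (plain SGD)
    b_new = b - lr * d_b
    return y, d_W, d_b, W_new, b_new

# Hypothetical shapes: batch of 4, 3 inputs, 2 outputs.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
b = np.zeros(2)
y_true = rng.standard_normal((4, 2))
y, d_W, d_b, W_new, b_new = fc_reference(x, W, b, y_true)
```

Dumping these arrays per layer would give the golden values that an nntrainer unit test could compare against.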

@jijoongmoon
Collaborator

We checked the bullet items below against tensorflow:

- conv2d: done
- fc: done
- pooling2d: done
- flatten: done
- optimizer: sgd done
- loss: mse, cross-sigmoid, cross-softmax done
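A minimal sketch of how such a cross-check could compare an nntrainer tensor against a tensorflow golden value within tolerance; `cross_check` and the tolerance defaults are illustrative assumptions, not the project's actual test code:

```python
import numpy as np

def cross_check(name, nntrainer_out, tf_golden, rtol=1e-4, atol=1e-6):
    """Compare an nntrainer tensor against the tensorflow golden value
    and report the largest absolute difference on mismatch."""
    a = np.asarray(nntrainer_out)
    g = np.asarray(tf_golden)
    if not np.allclose(a, g, rtol=rtol, atol=atol):
        diff = np.max(np.abs(a - g))
        raise AssertionError(f"{name}: max abs diff {diff:.3e} exceeds tolerance")
    return True

# Hypothetical usage: both tensors would come from nntrainer and a tf dump.
a = np.array([1.0, 2.0, 3.0])
cross_check("fc.forward", a, a + 5e-7)
```

Floating-point results differ slightly across frameworks, so an exact-equality check would be too strict; relative plus absolute tolerance is the usual compromise.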


@kparichay kparichay reopened this Jul 21, 2020
@jijoongmoon jijoongmoon modified the milestones: TBD2020, Sprint2007B Jul 22, 2020
@jijoongmoon jijoongmoon pinned this issue Jul 22, 2020
@jijoongmoon jijoongmoon self-assigned this Jul 22, 2020
@jijoongmoon
Collaborator

Adam validation is done with #376
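For reference, a single bias-corrected Adam step per the standard formulation can be sketched as below; the epsilon placement and default hyperparameters follow the common textbook form and may differ in detail from nntrainer's implementation:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias correction (t starts at 1).
    Returns the updated weight and the new first/second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of grads)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (uncentered var)
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Hypothetical first step from zero moment estimates.
w = np.array([0.1, -0.2])
g = np.array([0.5, -0.25])
w1, m1, v1 = adam_step(w, g, np.zeros_like(w), np.zeros_like(w), t=1)
```

On the very first step the bias correction makes the update approximately `lr * sign(grad)`, which is a convenient sanity check for a cross-framework comparison.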

@jijoongmoon jijoongmoon added the RELEASE CRITICAL Need SR after merge! label Jul 29, 2020
@zhoonit
Contributor Author

zhoonit commented Aug 3, 2020

Can this be closed now?

@kparichay
Member

Once the MNIST comparison with tensorflow is done, add this as a test that runs on every PR.
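One way such a per-PR test could work is to dump reference tensors from tensorflow into a golden file and compare nntrainer's outputs against it at test time. The raw-float32 file format and helper names below are assumptions for illustration, not the project's actual format:

```python
import os
import tempfile
import numpy as np

def save_golden(path, tensor):
    """Write a tensor to a golden file as raw little-endian float32."""
    np.asarray(tensor, dtype=np.float32).tofile(path)

def load_golden(path, shape):
    """Read a golden file back, given the known tensor shape."""
    return np.fromfile(path, dtype=np.float32).reshape(shape)

# Round-trip demo with a stand-in "final loss per epoch" tensor.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mnist_loss.golden")
    save_golden(path, [0.123, 0.456])
    golden = load_golden(path, (2,))
```

The CI test would then load the golden tensors and assert closeness (within a tolerance) against the values nntrainer produces for the same model and initialization.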

kparichay added a commit to kparichay/nntrainer that referenced this issue Aug 20, 2020
Update tensorflow training example for mnist application
With the same initialization as nntrainer (shown with zero initialization), this
matches nntrainer's final accuracy as well as its loss.

See also nnstreamer#133

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
kparichay added a commit to kparichay/nntrainer that referenced this issue Aug 20, 2020
Update tensorflow training example for mnist application
@jijoongmoon
Collaborator

We are closing this issue. We will reopen it or create a new issue after implementing transpose convolution.

@jijoongmoon jijoongmoon unpinned this issue Aug 25, 2020
kparichay added a commit to kparichay/nntrainer that referenced this issue Aug 25, 2020
Update tensorflow training example for mnist application
jijoongmoon pushed a commit that referenced this issue Aug 25, 2020
Update tensorflow training example for mnist application

3 participants