
Add padding layer. #979

Merged: 16 commits merged into tiny-dnn:master on Aug 27, 2018

Conversation

@hzxie (Contributor) commented Aug 17, 2018

I created a padding layer because tiny-dnn doesn't support adding padding in convolutional layers.
The padding layer is similar to tf.pad in TensorFlow.

@hzxie (Contributor, Author) commented Aug 17, 2018

Here's the code for testing.

#include <cstdlib>
#include <iostream>
#include <vector>

#include "tiny_dnn/tiny_dnn.h"

// A minimal network that contains only the new padding layer.
class TestNet : public tiny_dnn::network<tiny_dnn::sequential> {
 public:
  explicit TestNet(const std::string &name = "")
    : tiny_dnn::network<tiny_dnn::sequential>(name) {
    using pad = tiny_dnn::layers::pad;
    // Pad a 5x5 input with 2 channels; each channel becomes 7x9
    // (see the output below).
    *this << pad(5, 5, 2, 1, 2);
  }
};

int main() {
  TestNet nn;

  // Fill the 5x5x2 input with the values 0..49.
  int init_value = 0;
  tiny_dnn::vec_t in(5 * 5 * 2);
  for (auto itr = in.begin(); itr != in.end(); ++itr) {
    *itr = init_value++;
  }

  // Print the padded output: 7 values per row, with a blank line
  // between the two channels (7 x 9 = 63 values each).
  int cnt = 0;
  auto res = nn.predict(in);
  for (auto itr = res.begin(); itr != res.end(); ++itr) {
    std::cout << *itr << "\t";
    if (++cnt % 7 == 0) {
      std::cout << std::endl;
    }
    if (cnt % 63 == 0) {
      std::cout << std::endl;
    }
  }
  std::cout << std::endl;
}

The output is listed below. Each 5x5 channel is padded to 7x9: one column of zeros on each side and two rows of zeros at the top and bottom.

0	0	0	0	0	0	0	
0	0	0	0	0	0	0	
0	0	1	2	3	4	0	
0	5	6	7	8	9	0	
0	10	11	12	13	14	0	
0	15	16	17	18	19	0	
0	20	21	22	23	24	0	
0	0	0	0	0	0	0	
0	0	0	0	0	0	0	

0	0	0	0	0	0	0	
0	0	0	0	0	0	0	
0	25	26	27	28	29	0	
0	30	31	32	33	34	0	
0	35	36	37	38	39	0	
0	40	41	42	43	44	0	
0	45	46	47	48	49	0	
0	0	0	0	0	0	0	
0	0	0	0	0	0	0

@beru (Contributor) commented Aug 19, 2018

@hzxie

Here's the code for testing.

Adding unit tests would be better; that way, future changes can be tested against them.

The Travis CI jobs failed:
https://travis-ci.org/tiny-dnn/tiny-dnn/builds/417261964?utm_source=github_status&utm_medium=notification

Running clang-format with scripts/clang-format.sh is required, but as I mentioned in #978 (comment), this requirement is cumbersome and I personally think it should be dropped for contributors.

@hzxie (Contributor, Author) commented Aug 22, 2018

@beru @edgarriba
All unit tests for the padding layer have been created and pass.
BTW, CI often fails because of the unit test case named train2.

Maybe we should enlarge the threshold; is 1e-4 too small?
See: https://travis-ci.org/tiny-dnn/tiny-dnn/builds/419060831
(screenshot from 2018-08-22 17-02-02)
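
For context, here is a rough sketch (not the actual train2 test, just an illustration) of the kind of numerical gradient check where a fixed absolute threshold such as 1e-4 tends to be brittle; a relative-error check is usually more forgiving:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Central-difference numerical gradient of f with respect to x[i].
template <typename F>
double numerical_grad(F f, std::vector<double> &x, std::size_t i,
                      double eps = 1e-4) {
  const double orig = x[i];
  x[i] = orig + eps;
  const double fp = f(x);
  x[i] = orig - eps;
  const double fm = f(x);
  x[i] = orig;
  return (fp - fm) / (2.0 * eps);
}

int main() {
  // Toy loss f(x) = sum(x_i^2); its analytical gradient is 2 * x_i.
  auto f = [](const std::vector<double> &x) {
    double s = 0.0;
    for (double v : x) s += v * v;
    return s;
  };

  std::vector<double> x = {0.3, -1.2, 2.5};
  for (std::size_t i = 0; i < x.size(); ++i) {
    const double num = numerical_grad(f, x, i);
    const double ana = 2.0 * x[i];
    // A relative error is usually more robust than an absolute
    // threshold such as |num - ana| < 1e-4, especially in float32.
    const double rel = std::fabs(num - ana) /
                       std::max(1.0, std::fabs(num) + std::fabs(ana));
    std::cout << "i=" << i << " relative error = " << rel << std::endl;
  }
  return 0;
}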

@edgarriba (Member) left a comment

The PR looks good. However, I would like to see a more specific name, since your implementation is ZeroPad2d (in PyTorch terminology). We can leave it prepared to handle more padding modes for now.

UPDATE: https://pytorch.org/docs/stable/nn.html?highlight=padding#zeropad2d
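
For illustration only, something along these lines would keep the door open for other modes. This is just a sketch, not the actual tiny-dnn API; the pad_mode enum and the class name are made up here:

#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical padding modes; only "zero" would be implemented for now.
enum class pad_mode { zero, reflect, replicate };

// Sketch of a pad layer interface that is prepared for more modes.
class pad_layer_sketch {
 public:
  pad_layer_sketch(std::size_t in_w, std::size_t in_h, std::size_t in_c,
                   std::size_t pad_w, std::size_t pad_h,
                   pad_mode mode = pad_mode::zero)
    : in_w_(in_w), in_h_(in_h), in_c_(in_c),
      pad_w_(pad_w), pad_h_(pad_h), mode_(mode) {
    if (mode_ != pad_mode::zero) {
      throw std::runtime_error("only zero padding is implemented");
    }
  }

  // Copies each channel into the center of a zero-filled
  // (w + 2*pad_w) x (h + 2*pad_h) plane.
  std::vector<float> forward(const std::vector<float> &in) const {
    const std::size_t out_w = in_w_ + 2 * pad_w_;
    const std::size_t out_h = in_h_ + 2 * pad_h_;
    std::vector<float> out(out_w * out_h * in_c_, 0.0f);
    for (std::size_t c = 0; c < in_c_; ++c) {
      for (std::size_t y = 0; y < in_h_; ++y) {
        for (std::size_t x = 0; x < in_w_; ++x) {
          out[c * out_w * out_h + (y + pad_h_) * out_w + (x + pad_w_)] =
            in[c * in_w_ * in_h_ + y * in_w_ + x];
        }
      }
    }
    return out;
  }

 private:
  std::size_t in_w_, in_h_, in_c_, pad_w_, pad_h_;
  pad_mode mode_;
};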

@hzxie (Contributor, Author) commented Aug 22, 2018

@edgarriba
The layer name has been changed to zero_pad_layer.
But Travis CI failed because of the train2 test case (the reason was mentioned above).
Coverage decreased for an unknown reason (it increased in the previous commit).

@hzxie (Contributor, Author) commented Aug 24, 2018

@edgarriba @beru
When you get a chance, please merge my PR.
Thank you.

@edgarriba (Member) commented

@hzxie can you investigate this issue with train2? I remember @prlz77 mentioning at some point that this was due to the gradient check he implemented.

@hzxie (Contributor, Author) commented Aug 24, 2018

Actually, #978 also fails on train2 sometimes.
Nothing in this PR is related to train2.
I think I can fix that issue in another PR.

@hzxie (Contributor, Author) commented Aug 24, 2018

@edgarriba
Can you give me some hints on how to fix this problem? train2 only fails on some platforms.
What did @prlz77 say?

@hzxie (Contributor, Author) commented Aug 24, 2018

@edgarriba
It seems that CI errored on all platforms.

Here's the error log:

CMake Error in CMakeLists.txt:
  export called with target "gmock" which requires target "gtest" that is not
  in the export set.
  If the required target is not easy to reference in this call, consider
  using the APPEND option with multiple separate calls.
-- Generating done
-- Build files have been written to: /home/travis/build/tiny-dnn/tiny-dnn
The command "if [ "$TRAVIS_OS_NAME" == "linux" ] && [ "$CXX" == "g++-4.9" ]; then lcov --directory . --zerocounters; cmake -DUSE_TBB=$USE_TBB -DUSE_SSE=$USE_SSE -DUSE_AVX=$USE_AVX -DUSE_DOUBLE=$USE_DOUBLE -DBUILD_TESTS=$BUILD_TESTS -DCOVERALLS=$COVERALLS -DUSE_ASAN=ON -DBUILD_EXAMPLES=$BUILD_EXAMPLES .; fi" failed and exited with 1 during .
Your build has been stopped.

@hzxie (Contributor, Author) commented Aug 24, 2018

#986 also failed.
Maybe something is wrong with Travis CI?

@beru (Contributor) commented Aug 25, 2018

@hzxie It looks like a recent googletest change is the cause; see polybar/polybar#1393.

I'm wondering if adding

option(INSTALL_GMOCK "Install Googletest's GMock?" OFF)
option(INSTALL_GTEST "Install Googletest's GTest?" OFF)

before

add_subdirectory(${googletest_SOURCE_DIR} ${googletest_BINARY_DIR})

might solve the problem...

@beru (Contributor) commented Aug 25, 2018

Speaking of unit tests:

https://github.com/catchorg/Catch2 is also a well-known test framework, but I guess rewriting the unit tests isn't trivial.

@edgarriba (Member) commented Aug 25, 2018

@beru Catch2 looks pretty good and fits perfectly with our header-only policy. If anyone wants to take on this task, why not.

UPDATE: I suggest discussing it first in
https://groups.google.com/forum/#!forum/tiny-dnn-dev

@hzxie (Contributor, Author) commented Aug 25, 2018

Rewriting the unit test cases is time-consuming.
I think we need to fix the CI failure first.

@edgarriba (Member) commented Aug 25, 2018

@hzxie I agree. This PR, #968, tries to fix this; take a look at it. Besides, what makes sense now is to rethink how to set up a proper infrastructure with Travis, since things have lately become overcomplicated with all the third-party stuff we have added. At some point we should introduce Docker-based testing. We had this discussion internally; we should probably start a proper public discussion about it. Let me open a thread, since some more folks are interested in this as well: @Ravenwater @chrberger

UPDATE: There's already an open thread where we shared very few opinions.
https://groups.google.com/forum/#!topic/tiny-dnn-dev/ZdZLAE7Go30

@beru (Contributor) commented Aug 25, 2018

@hzxie I added a wiki page about how to update the AUTHORS file.

https://github.com/tiny-dnn/tiny-dnn/wiki/How-to-update-AUTHORS-file

Please check it out.

@beru (Contributor) commented Aug 26, 2018

@hzxie

Rewriting the unit test cases is time-consuming.
I think we need to fix the CI failure first.

I initially thought so too, but I couldn't figure out the cause of the googletest error with CMake, so I've decided to change the test framework to Catch2.

Please review #987.

@beru (Contributor) commented Aug 26, 2018

It seems that CI errored on all platforms.

Actually, the VC++ (Windows) builds are OK, but I'm not sure why.

@beru (Contributor) commented Aug 26, 2018

@edgarriba

Catch2 looks pretty good and fits perfectly with our header-only policy. If anyone wants to take on this task, why not.

I want to get rid of the CI job problems, so I made a pull request: #987.

UPDATE: I suggest discussing it first in
https://groups.google.com/forum/#!forum/tiny-dnn-dev

Maybe next time.

@hzxie (Contributor, Author) commented Aug 26, 2018

@beru

The AUTHORS file has been updated.
I will push it after PR #987 is merged.

BTW, I suggest that the last line of this file be changed to:

Generated by https://github.com/nodejs/node/blob/master/tools/update-authors.sh

What do you think?

@edgarriba (Member) commented
@hzxie why don't we just reuse this script?

@beru (Contributor) commented Aug 26, 2018

@hzxie

BTW, I suggest that the last line of this file be changed to:

Generated by https://github.com/nodejs/node/blob/master/tools/update-authors.sh

What do you think?

I think it's fine. That way, we can pay homage to the Node.js team.

It's a short script, so adding it to the scripts folder seems OK.
That way, we'll be able to customize it, and we can also mention where it originally came from in the commit message.

@beru (Contributor) commented Aug 26, 2018

@edgarriba

why don't we just reuse this script?

The script appends comments after the authors list, but those comments were manually removed.

https://github.com/tiny-dnn/tiny-dnn/commits/master/AUTHORS

It would be easier to understand if you run the script.

@edgarriba (Member) commented
@beru reuse or adapt

@beru (Contributor) commented Aug 26, 2018

@edgarriba

reuse or adapt

I see. So adding the file to the scripts folder seems fine.
That way we can leave the trailing comments in naturally, and the script can be adjusted as well.

@hzxie (Contributor, Author) commented Aug 27, 2018

@beru @edgarriba
Please review this PR.
I updated my test code and added the script for updating authors.

BTW, the new test output is not as clear as before.
I don't know whether my newly added tests are run or not.
I think we should print the name of each test case before running it.
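
For example, a minimal sketch of how each test case name could be printed, assuming the Catch2 v2 single-header listener interface (I haven't wired this into our test runner, so treat it as an illustration):

#define CATCH_CONFIG_MAIN
#include "catch.hpp"

#include <iostream>

// Event listener that prints each test case name before it runs.
struct NamePrinter : Catch::TestEventListenerBase {
  using TestEventListenerBase::TestEventListenerBase;

  void testCaseStarting(Catch::TestCaseInfo const &testInfo) override {
    std::cout << "Running: " << testInfo.name << std::endl;
  }
};
CATCH_REGISTER_LISTENER(NamePrinter)

TEST_CASE("zero_pad_layer smoke test") {
  REQUIRE(1 + 1 == 2);
}

If I remember correctly, running the test binary with --durations yes would also print each test name along with its duration.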

@edgarriba (Member) commented
@hzxie LGTM. However, you should split the commits: one for the padding layer and another for the update-authors script.

@hzxie (Contributor, Author) commented Aug 27, 2018

@edgarriba
The script for updating authors has been removed.
The AUTHORS file has been reverted to its original version.

Please consider merging this PR.

@hzxie (Contributor, Author) commented Aug 27, 2018

@edgarriba @beru
Please also review PR #989.

.gitignore Outdated
@@ -219,4 +219,4 @@ docs/_build
*.pb.h
CMakeFiles*
build*
DartConfiguration.tcl
DartConfiguration.tcl
Review comment (Contributor):

No newline at end of file.

AUTHORS Outdated
@@ -72,4 +72,4 @@ Karan Desai <karandesai_96@live.com>
MeiHui FAN <mhfan@ustc.edu>
Rui Huang <vowstar@users.noreply.github.com>
Siddhartha Rao Kamalakara <srk97c@gmail.com>
Jaakko Rantala <jaakko.rantala@gmail.com>
Jaakko Rantala <jaakko.rantala@gmail.com>
Review comment (Contributor):

No newline at end of file.

@@ -102,6 +103,8 @@ using fc = tiny_dnn::fully_connected_layer;

using dense = tiny_dnn::fully_connected_layer;

using zreo_pad = tiny_dnn::zero_pad_layer;
Review comment (Contributor):

I've found a typo. zreo_pad should be zero_pad.

Reply (Contributor, Author):

OMG.
Thank you!

@beru (Contributor) left a comment

LGTM.

@beru merged commit 1ce9b17 into tiny-dnn:master on Aug 27, 2018
@hzxie deleted the feat/pad_layer branch on August 29, 2018