
Adding Copy Ctor to ANN #519

Closed

Conversation

theSundayProgrammer
Contributor

All the original unit tests ran to completion. Added a few more to test copy constructor for FFN (feed forward) and CNN (convolutional).

@zoq
Member

zoq commented Feb 11, 2016

Ooh this is a really neat idea, this totally eliminates the function parameter. This would also make the serialization easier, the only drawback I see so far is that any layer that currently holds an optimizer has to implement three more functions.

In #513 @stereomatchingkiss suggested a switch from a reference to pointers to hold the function to be optimized. Primarily, to support serialization for the existing layer. Since @stereomatchingkiss worked on the problem I would like to hear his opinion about this.

@stereomatchingkiss
Contributor

Ooh this is a really neat idea, this totally eliminates the function parameter.

This idea looks pretty cool to me too; it makes the code more intuitive. Regarding the C++14 features, could we postpone their use? Many C++14 features are not supported by VC2013, and although VC2015 is out, CUDA does not support VC2015 yet.

@zoq
Member

zoq commented Feb 11, 2016

Okay, great. I think we should merge this code first and then head over to #516. We have to slightly modify #516; I can do that if nobody else wants to.

Regarding the C++14 features, could we postpone their use? Many C++14 features are not supported by VC2013, and although VC2015 is out, CUDA does not support VC2015 yet.

The test should build using a C++11 compiler because of the conditional-compilation directives Joseph used. But I agree, we should avoid C++14 features for the moment, we can easily rewrite the test. I can do that once the code is merged.

@theSundayProgrammer
Contributor Author

The reason for using C++14 is automatic deduction of the return type in the unit tests. I could create a global FFN and use 'decltype' on it instead.


@rcurtin
Member

rcurtin commented Feb 11, 2016

I agree with @stereomatchingkiss, we should hold off on C++14 support for a few years (probably) because compiler support and stability are still lacking. The decltype solution is maybe a bit ugly, though. Maybe there is a better option? (I'm not sure, I haven't thought about it much.)

@zoq
Member

zoq commented Feb 16, 2016

We can explicitly define the network type; since we are using a small network for the test, this should be fairly easy. I can do that once we merge this pull request. It looks like the only modification left is the update of the RNN structure.

@zoq
Member

zoq commented Feb 24, 2016

In 69f97e5, f6c27ed and 43f0f13 we removed the optimizer from each layer in favor of a single optimizer in the base class (FFN, CNN, RNN). Not only does this change make the layers simpler, it also lets us use the existing mlpack optimizers for the neural network code.

Since this pull request was written against the "one optimizer for each layer" design, it no longer applies cleanly. However, I would like to merge the copy ctor test, so maybe you can open another pull request? If not, I'll go and extract the changes from this pull request.

Please comment if something doesn't make sense and I'll reopen it.

@zoq closed this Feb 24, 2016