Revamp ann modules #831

Closed. Wants to merge 88 commits.

Changes shown are from 12 of the 88 commits.

Commits
4727298
Fix explicitly specialized template issue.
zoq Jun 2, 2016
05b36fc
Merge remote-tracking branch 'upstream/master'
zoq Jun 5, 2016
00e867f
edge_boxes: feature extraction
nilayjain Jun 5, 2016
c8d5766
Properly resetting auxBound. Start using a Reset() method, to avoid f…
MarcosPividori Jun 3, 2016
5149efd
backported ind2sub and sub2ind
nilayjain Jun 6, 2016
61e63b9
backported ind2sub and sub2ind
nilayjain Jun 6, 2016
5f01b84
Revert "edge_boxes: feature extraction"
nilayjain Jun 6, 2016
8907d5a
backported sub2ind & ind2sub
nilayjain Jun 6, 2016
b8da5c6
fix doc tutorial
keon Jun 7, 2016
0d6d3af
Use appveyor cache (nuget and armadillo).
zoq Jun 7, 2016
45e8cd6
fix typo
keon Jun 7, 2016
01e699c
added test for ind2sub and sub2ind
nilayjain Jun 7, 2016
7e8abed
Minor style fixes for ind2sub() test.
rcurtin Jun 7, 2016
7bbd897
Add new contributors.
rcurtin Jun 7, 2016
29fcf0a
Try debugging symbols for AppVeyor build to see if it is faster.
rcurtin Jun 7, 2016
cbbd671
Merge remote-tracking branch 'upstream/master'
zoq Jun 14, 2016
ee7ff36
Merge remote-tracking branch 'upstream/master'
zoq Jul 4, 2016
80cc90f
Merge remote-tracking branch 'upstream/master'
zoq Nov 6, 2016
055775d
Remove the RMVA model.
zoq Nov 6, 2016
6d73df2
Remove unused ann functions.
zoq Nov 7, 2016
5051083
Remove unused ann layer.
zoq Nov 8, 2016
c5d74f1
Increase the number of template arguments for the boost list class.
zoq Nov 9, 2016
120263d
Move pooling rules into the pooling class. So that we can use the Max…
zoq Nov 9, 2016
2016462
Use the stride parameter inside the convolution function.
zoq Nov 26, 2016
ac174b1
Increase the number of template arguments for the boost list class.
zoq Dec 3, 2016
e9f9eb4
Remove stride paramater from svd and fft convolution rule.
zoq Dec 3, 2016
63a6c4e
Refactor ann layer.
zoq Dec 3, 2016
ed538ba
Remove the rmva model for the CmakeLists file.
zoq Dec 3, 2016
2db9ef7
Add visitor function set; which abstracts away the different types of…
zoq Dec 5, 2016
9134309
Minor style fixes.
zoq Dec 5, 2016
148a8a7
Refactor recurrent network test.
zoq Dec 6, 2016
c0311c4
Remove unused pooling test.
zoq Dec 7, 2016
efa533a
Refactor FNN class; works for CNNs and FFNs
zoq Dec 8, 2016
b00fc86
Refactor RNN class; works will all current modules including the upda…
zoq Dec 9, 2016
558d05b
Include all layer modules.
zoq Dec 10, 2016
30adb3e
Minor style fixes.
zoq Dec 12, 2016
e075387
Add layer traits to check for the input width, height and model funct…
zoq Dec 12, 2016
a7059a1
Refactor neural visual attention modules.
zoq Dec 12, 2016
acd05e3
Use refactored rnn,ffn classes for the ann tests.
zoq Dec 12, 2016
1172374
Add ann module test.
zoq Dec 13, 2016
00c43d9
Split layer modules into definition and implementation.
zoq Dec 14, 2016
4c11a6c
Merge remote-tracking branch 'upstream/master'
zoq Dec 15, 2016
aa04427
Remove the RMVA model.
zoq Nov 6, 2016
e989fb2
Remove unused ann functions.
zoq Nov 7, 2016
251288a
Remove unused ann layer.
zoq Nov 8, 2016
acf9d9e
Increase the number of template arguments for the boost list class.
zoq Nov 9, 2016
46e6bc7
Move pooling rules into the pooling class. So that we can use the Max…
zoq Nov 9, 2016
27b46ab
Use the stride parameter inside the convolution function.
zoq Nov 26, 2016
9a6c234
Increase the number of template arguments for the boost list class.
zoq Dec 3, 2016
86fac9d
Remove stride paramater from svd and fft convolution rule.
zoq Dec 3, 2016
7ede24f
Refactor ann layer.
zoq Dec 3, 2016
2f3c448
Remove the rmva model for the CmakeLists file.
zoq Dec 3, 2016
e99e0f4
Add visitor function set; which abstracts away the different types of…
zoq Dec 5, 2016
1f95e03
Minor style fixes.
zoq Dec 5, 2016
b09d22b
Refactor recurrent network test.
zoq Dec 6, 2016
89dd57b
Remove unused pooling test.
zoq Dec 7, 2016
9d3d878
Refactor FNN class; works for CNNs and FFNs
zoq Dec 8, 2016
e362608
Refactor RNN class; works will all current modules including the upda…
zoq Dec 9, 2016
f54949c
Include all layer modules.
zoq Dec 10, 2016
f5bfe20
Minor style fixes.
zoq Dec 12, 2016
18fefb3
Add layer traits to check for the input width, height and model funct…
zoq Dec 12, 2016
919ee11
Refactor neural visual attention modules.
zoq Dec 12, 2016
ca472a6
Use refactored rnn,ffn classes for the ann tests.
zoq Dec 12, 2016
d178103
Add ann module test.
zoq Dec 13, 2016
4c565a4
Split layer modules into definition and implementation.
zoq Dec 14, 2016
d8e3ff2
Merge branch 'ann' of github.com:zoq/mlpack into ann
zoq Dec 15, 2016
36b47f4
Increase the number of template arguments for the boost list class.
zoq Nov 9, 2016
e45c115
Remove unused ann layer.
zoq Nov 8, 2016
96fbde2
Use the stride parameter inside the convolution function.
zoq Nov 26, 2016
d5a5b3a
Increase the number of template arguments for the boost list class.
zoq Dec 3, 2016
52b8a13
Remove stride paramater from svd and fft convolution rule.
zoq Dec 3, 2016
f025886
Refactor ann layer.
zoq Dec 3, 2016
8d9de82
Minor style fixes.
zoq Dec 5, 2016
1dfe0c6
Refactor recurrent network test.
zoq Dec 6, 2016
24748b0
Minor style fixes.
zoq Dec 12, 2016
f3d48b8
Refactor neural visual attention modules.
zoq Dec 12, 2016
e4e73e6
Use refactored rnn,ffn classes for the ann tests.
zoq Dec 12, 2016
4ed0e6f
Split layer modules into definition and implementation.
zoq Dec 14, 2016
552e74c
Remove merge relics.
zoq Dec 15, 2016
07945e6
Vectorise isn't supported through all armadillo versions.
zoq Dec 15, 2016
53b4855
Decrease the number of parallel builds.
zoq Dec 15, 2016
eb7b266
Add Train() function that uses a default optimizer to train the model.
zoq Dec 23, 2016
c79f26b
Remove comment.
zoq Dec 23, 2016
1dd7652
Minor style fix; remove extra space.
zoq Jan 19, 2017
5acbcd3
Store/Restore the input when saving/loading the network model.
zoq Jan 19, 2017
7e759a2
Simplify the input and target parameter.
zoq Jan 19, 2017
826a399
Minor style fix; move up comment to avoid potential licence parsing p…
zoq Jan 19, 2017
ec10c75
Remove unused parameter comments.
zoq Jan 19, 2017
Files changed
@@ -104,7 +104,15 @@ class NaiveConvolution
filter.n_cols - 1 + input.n_cols - 1) = input;

NaiveConvolution<ValidConvolution>::Convolution(inputPadded, filter,
<<<<<<< HEAD
<<<<<<< HEAD
output, 1, 1);
=======
output, dW, dH);
>>>>>>> Use the stride parameter inside the convolution function.
=======
output, 1, 1);
>>>>>>> Remove stride paramater from svd and fft convolution rule.
}

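The hunk above ends with an unresolved, doubly nested conflict in the full-convolution specialization of NaiveConvolution: the "Use the stride parameter inside the convolution function." commit threads dW and dH through to the inner valid-convolution call, while the "Remove stride paramater from svd and fft convolution rule." commit pins them back to (1, 1). The markers themselves are merge debris from the branch history; the later commit 552e74c, "Remove merge relics.", cleans them up. For orientation, here is a minimal sketch of the computation a stride-aware valid convolution performs in this style; the function name, the dW-as-column/dH-as-row stride mapping, and the loop layout are illustrative assumptions, not mlpack's actual implementation:

#include <armadillo>

// Sketch only: a stride-aware "valid" convolution over Armadillo matrices.
// With dW = dH = 1 this reduces to the classic valid convolution, which is
// why the full-convolution wrapper above can pad the input and then pass
// (1, 1). As is common in neural-network code, the filter is applied
// without flipping (i.e. cross-correlation).
template<typename eT>
void ValidConvolutionSketch(const arma::Mat<eT>& input,
                            const arma::Mat<eT>& filter,
                            arma::Mat<eT>& output,
                            const size_t dW = 1,  // column stride (assumed)
                            const size_t dH = 1)  // row stride (assumed)
{
  output = arma::zeros<arma::Mat<eT> >(
      (input.n_rows - filter.n_rows) / dH + 1,
      (input.n_cols - filter.n_cols) / dW + 1);

  for (size_t j = 0; j < output.n_cols; ++j)
  {
    for (size_t i = 0; i < output.n_rows; ++i)
    {
      // Element-wise product of the filter with the input patch anchored
      // at the strided position, summed up with accu().
      output(i, j) = arma::accu(filter %
          input.submat(i * dH, j * dW,
                       i * dH + filter.n_rows - 1,
                       j * dW + filter.n_cols - 1));
    }
  }
}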
64 changes: 64 additions & 0 deletions src/mlpack/methods/ann/layer/add.hpp
@@ -39,7 +39,18 @@ class Add
*
* @param outSize The number of output units.
*/
<<<<<<< HEAD
<<<<<<< HEAD
Add(const size_t outSize);
=======
Add(const size_t outSize) : outSize(outSize)
{
weights.set_size(outSize, 1);
}
>>>>>>> Refactor ann layer.
=======
Add(const size_t outSize);
>>>>>>> Split layer modules into definition and implementation.

/**
* Ordinary feed forward pass of a neural network, evaluating the function
@@ -49,7 +60,18 @@ class Add
* @param output Resulting output activation.
*/
template<typename eT>
<<<<<<< HEAD
<<<<<<< HEAD
void Forward(const arma::Mat<eT>&& input, arma::Mat<eT>&& output);
=======
void Forward(const arma::Mat<eT>&& input, arma::Mat<eT>&& output)
{
output = input + weights;
}
>>>>>>> Refactor ann layer.
=======
void Forward(const arma::Mat<eT>&& input, arma::Mat<eT>&& output);
>>>>>>> Split layer modules into definition and implementation.

/**
* Ordinary feed backward pass of a neural network, calculating the function
@@ -63,7 +85,18 @@
template<typename eT>
void Backward(const arma::Mat<eT>&& /* input */,
const arma::Mat<eT>&& gy,
<<<<<<< HEAD
<<<<<<< HEAD
arma::Mat<eT>&& g);
=======
arma::Mat<eT>&& g)
{
g = gy;
}
>>>>>>> Refactor ann layer.
=======
arma::Mat<eT>&& g);
>>>>>>> Split layer modules into definition and implementation.

/*
* Calculate the gradient using the output delta and the input activation.
@@ -75,7 +108,18 @@
template<typename eT>
void Gradient(const arma::Mat<eT>&& /* input */,
arma::Mat<eT>&& error,
<<<<<<< HEAD
<<<<<<< HEAD
arma::Mat<eT>&& gradient);
=======
arma::Mat<eT>&& gradient)
{
gradient = error;
}
>>>>>>> Refactor ann layer.
=======
arma::Mat<eT>&& gradient);
>>>>>>> Split layer modules into definition and implementation.

//! Get the parameters.
OutputDataType const& Parameters() const { return weights; }
@@ -106,7 +150,18 @@
* Serialize the layer
*/
template<typename Archive>
<<<<<<< HEAD
<<<<<<< HEAD
void Serialize(Archive& ar, const unsigned int /* version */);
=======
void Serialize(Archive& ar, const unsigned int /* version */)
{
ar & data::CreateNVP(weights, "weights");
}
>>>>>>> Refactor ann layer.
=======
void Serialize(Archive& ar, const unsigned int /* version */);
>>>>>>> Split layer modules into definition and implementation.

private:
//! Locally-stored number of output units.
@@ -131,7 +186,16 @@
} // namespace ann
} // namespace mlpack

<<<<<<< HEAD
<<<<<<< HEAD
// Include implementation.
#include "add_impl.hpp"

=======
>>>>>>> Refactor ann layer.
=======
// Include implementation.
#include "add_impl.hpp"

>>>>>>> Split layer modules into definition and implementation.
#endif
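In this 12-commit view the "Split layer modules into definition and implementation." side of each conflict leaves only declarations, but the inline "Refactor ann layer." side shows what every member of Add does. The companion add_impl.hpp itself is not part of this excerpt, so the following is a reconstruction sketch assembled from those inline bodies; the InputDataType/OutputDataType template parameter names are an assumption based on mlpack's usual layer convention:

// Reconstruction sketch of add_impl.hpp (not the literal file contents);
// pairs with the Add class definition shown above.
template<typename InputDataType, typename OutputDataType>
Add<InputDataType, OutputDataType>::Add(const size_t outSize) :
    outSize(outSize)
{
  weights.set_size(outSize, 1);
}

template<typename InputDataType, typename OutputDataType>
template<typename eT>
void Add<InputDataType, OutputDataType>::Forward(
    const arma::Mat<eT>&& input, arma::Mat<eT>&& output)
{
  // The layer adds its (outSize x 1) bias weights to the input activation.
  output = input + weights;
}

template<typename InputDataType, typename OutputDataType>
template<typename eT>
void Add<InputDataType, OutputDataType>::Backward(
    const arma::Mat<eT>&& /* input */,
    const arma::Mat<eT>&& gy,
    arma::Mat<eT>&& g)
{
  // A bias addition is the identity in the backward direction.
  g = gy;
}

template<typename InputDataType, typename OutputDataType>
template<typename eT>
void Add<InputDataType, OutputDataType>::Gradient(
    const arma::Mat<eT>&& /* input */,
    arma::Mat<eT>&& error,
    arma::Mat<eT>&& gradient)
{
  // The gradient with respect to the bias is the backpropagated error.
  gradient = error;
}

template<typename InputDataType, typename OutputDataType>
template<typename Archive>
void Add<InputDataType, OutputDataType>::Serialize(
    Archive& ar, const unsigned int /* version */)
{
  ar & data::CreateNVP(weights, "weights");
}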
58 changes: 58 additions & 0 deletions src/mlpack/methods/ann/layer/add_merge.hpp
@@ -38,7 +38,18 @@ class AddMerge
{
public:
//! Create the AddMerge object.
<<<<<<< HEAD
<<<<<<< HEAD
AddMerge();
=======
AddMerge()
{
// Nothing to do here.
}
>>>>>>> Refactor ann layer.
=======
AddMerge();
>>>>>>> Split layer modules into definition and implementation.

/**
* Ordinary feed forward pass of a neural network, evaluating the function
@@ -48,7 +59,23 @@ class AddMerge
* @param output Resulting output activation.
*/
template<typename InputType, typename OutputType>
<<<<<<< HEAD
<<<<<<< HEAD
void Forward(const InputType&& /* input */, OutputType&& output);
=======
void Forward(const InputType&& /* input */, OutputType&& output)
{
output = boost::apply_visitor(outputParameterVisitor, network.front());

for (size_t i = 1; i < network.size(); ++i)
{
output += boost::apply_visitor(outputParameterVisitor, network[i]);
}
}
>>>>>>> Refactor ann layer.
=======
void Forward(const InputType&& /* input */, OutputType&& output);
>>>>>>> Split layer modules into definition and implementation.

/**
* Ordinary feed backward pass of a neural network, calculating the function
@@ -62,7 +89,18 @@
template<typename eT>
void Backward(const arma::Mat<eT>&& /* input */,
arma::Mat<eT>&& gy,
<<<<<<< HEAD
<<<<<<< HEAD
arma::Mat<eT>&& g);
=======
arma::Mat<eT>&& g)
{
g = gy;
}
>>>>>>> Refactor ann layer.
=======
arma::Mat<eT>&& g);
>>>>>>> Split layer modules into definition and implementation.

/*
* Add a new module to the model.
@@ -106,7 +144,18 @@
* Serialize the layer.
*/
template<typename Archive>
<<<<<<< HEAD
<<<<<<< HEAD
void Serialize(Archive& ar, const unsigned int /* version */);
=======
void Serialize(Archive& ar, const unsigned int /* version */)
{
ar & data::CreateNVP(network, "network");
}
>>>>>>> Refactor ann layer.
=======
void Serialize(Archive& ar, const unsigned int /* version */);
>>>>>>> Split layer modules into definition and implementation.

private:
std::vector<LayerTypes> network;
@@ -133,7 +182,16 @@
} // namespace ann
} // namespace mlpack

<<<<<<< HEAD
<<<<<<< HEAD
// Include implementation.
#include "add_merge_impl.hpp"

=======
>>>>>>> Refactor ann layer.
=======
// Include implementation.
#include "add_merge_impl.hpp"

>>>>>>> Split layer modules into definition and implementation.
#endif
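The same split applies to AddMerge, whose inline conflict bodies above likewise pin down each member. A matching reconstruction sketch of add_merge_impl.hpp, under the same template-parameter assumption as the Add sketch:

// Reconstruction sketch of add_merge_impl.hpp (not the literal file
// contents); pairs with the AddMerge class definition shown above.
template<typename InputDataType, typename OutputDataType>
AddMerge<InputDataType, OutputDataType>::AddMerge()
{
  // Nothing to do here.
}

template<typename InputDataType, typename OutputDataType>
template<typename InputType, typename OutputType>
void AddMerge<InputDataType, OutputDataType>::Forward(
    const InputType&& /* input */, OutputType&& output)
{
  // Each held module has already run; merge their stored outputs by
  // summation, reading them through the output-parameter visitor.
  output = boost::apply_visitor(outputParameterVisitor, network.front());

  for (size_t i = 1; i < network.size(); ++i)
  {
    output += boost::apply_visitor(outputParameterVisitor, network[i]);
  }
}

template<typename InputDataType, typename OutputDataType>
template<typename eT>
void AddMerge<InputDataType, OutputDataType>::Backward(
    const arma::Mat<eT>&& /* input */,
    arma::Mat<eT>&& gy,
    arma::Mat<eT>&& g)
{
  // Addition passes the incoming gradient through unchanged.
  g = gy;
}

template<typename InputDataType, typename OutputDataType>
template<typename Archive>
void AddMerge<InputDataType, OutputDataType>::Serialize(
    Archive& ar, const unsigned int /* version */)
{
  ar & data::CreateNVP(network, "network");
}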