ANN Saving the network and reloading #531

Closed
sudarshan85 opened this issue Mar 1, 2016 · 21 comments

@sudarshan85

Hey,

I'm working with the ANN module and have started with the feed-forward test. I'm trying to return the built network to the caller so that I can use it there (I was running into a lot of problems because of the template programming, more details here, which I was able to solve by compiling with -std=c++1y and returning auto). However, when I tried to train the returned network, it threw a matrix multiplication exception, which I assume means something got corrupted during the return. So I thought I could instead train the network and save it, so that I could reload it later for prediction. The code below does that and saves the network parameters to a file named "test". The problem is that when loading I now have to specify the type of the network (which was determined by decltype(modules) and decltype(classOutputLayer)). Furthermore, the XML file that was saved contains just parameters, which I'm guessing are the values of the weight matrix, and does not indicate the type of the FFN. Is there a way to fix this problem?

#include <iostream>
#include <fstream>

#include <boost/archive/xml_oarchive.hpp>

#include <mlpack/core.hpp>

#include <mlpack/methods/ann/activation_functions/logistic_function.hpp>
#include <mlpack/methods/ann/activation_functions/tanh_function.hpp>

#include <mlpack/methods/ann/init_rules/random_init.hpp>

#include <mlpack/methods/ann/layer/bias_layer.hpp>
#include <mlpack/methods/ann/layer/linear_layer.hpp>
#include <mlpack/methods/ann/layer/base_layer.hpp>
#include <mlpack/methods/ann/layer/dropout_layer.hpp>
#include <mlpack/methods/ann/layer/binary_classification_layer.hpp>

#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/performance_functions/mse_function.hpp>

using namespace mlpack;

template <typename PerformanceFunction,
         typename OutputLayerType,
         typename PerformanceFunctionType,
         typename MatType = arma::mat
         >
void BuildFFN(MatType& trainData, MatType& trainLabels, MatType& testData, MatType& testLabels, const size_t hiddenLayerSize)
{
    // input layer
    ann::LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
    ann::BiasLayer<> inputBiasLayer(hiddenLayerSize);
    ann::BaseLayer<PerformanceFunction> inputBaseLayer;

    // hidden layer
    ann::LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
    ann::BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
    ann::BaseLayer<PerformanceFunction> outputLayer;

    // output layer
    OutputLayerType classOutputLayer;

    auto modules = std::tie(inputLayer, inputBiasLayer, inputBaseLayer, hiddenLayer1, hiddenBiasLayer1, outputLayer);
    ann::FFN<decltype(modules), decltype(classOutputLayer), ann::RandomInitialization, PerformanceFunctionType> net(modules, classOutputLayer);

    net.Train(trainData, trainLabels);
    //arma::mat prediction;
    //net.Predict(testData, prediction);

    std::ofstream ofs("test", std::ios::binary);
    boost::archive::xml_oarchive o(ofs);
    net.Serialize(o, 1);
    //o << data::CreateNVP(net, "N");
    //ofs.close();

    //return net;
}

int main(int argc, char** argv)
{
    arma::mat dataset;
    data::Load("../data/thyroid_train.csv", dataset, true);
    arma::mat trainData = dataset.submat(0, 0, dataset.n_rows - 4, dataset.n_cols - 1);
    arma::mat trainLabels = dataset.submat(dataset.n_rows - 3, 0, dataset.n_rows - 1, dataset.n_cols - 1);

    data::Load("../data/thyroid_test.csv", dataset, true);
    arma::mat testData = dataset.submat(0, 0, dataset.n_rows - 4, dataset.n_cols - 1);
    arma::mat testLabels = dataset.submat(dataset.n_rows - 3, 0, dataset.n_rows - 1, dataset.n_cols - 1);

    std::cout << "Loaded the training and testing datasets" << std::endl;

    const size_t hiddenLayerSize = 8;

    //std::ifstream ifs("test2", std::ios::binary);
    //boost::archive::xml_iarchive i(ifs);

    //auto net = BuildFFN<ann::LogisticFunction, ann::BinaryClassificationLayer, ann::MeanSquaredErrorFunction>
        //(trainData, trainLabels, testData, testLabels, hiddenLayerSize);
    BuildFFN<ann::LogisticFunction, ann::BinaryClassificationLayer, ann::MeanSquaredErrorFunction>
        (trainData, trainLabels, testData, testLabels, hiddenLayerSize);

    //double classificationError;
    //for (size_t i = 0; i < testData.n_cols; i++)
    //{
        //if (arma::sum(arma::sum(arma::abs(prediction.col(i) - testLabels.col(i)))) != 0)
        //{
            //classificationError++;
        //}
    //}

    //std::cout << "Classification Error = " << (double(classificationError) / testData.n_cols) * 100 << "%" << std::endl;

    return 0;
}

Thanks.

@stereomatchingkiss
Contributor

Hi, the serialization of the ann module hasn't been finished yet. I will open a pull request for it in a few days; after that you should be able to save and load the network.

Furthermore, the XML file that was saved contains just parameters, which I'm guessing are the values of the weight matrix, and does not indicate the type of the FFN

I am afraid this cannot be done, or at least it is not an easy task. The ann module of mlpack relies on templates, which means the type of your network must be specified at compile time.

@sudarshan85
Author

Thank you.
If the serialization isn't complete yet, what does the data stored in the test XML file represent (see below)? And once the serialization is finished, what details will be stored in the XML file?

If the various parameters of the FFN (such as the input layers, hidden layers, output layer type, etc.) are stored in the XML file, then we can definitely specify the type of network to load just by looking at the file. If not, I'm still not 100% sure how to specify the type of network to load when the type is decided just before saving it.

Thanks.

<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE boost_serialization>
<boost_serialization signature="serialization::archive" version="10">
<parameter class_id="0" tracking_level="1" version="0" object_id="_0">
    <n_rows>203</n_rows>
    <n_cols>1</n_cols>
    <n_elem>203</n_elem>
    <vec_state>0</vec_state>
    <item>-0.16960125232877565</item>
    <item>0.24735236041378075</item>
    <item>0.002766670129351563</item>
    <item>-0.19591114961641509</item>
    <item>-0.089737718232213742</item>
    <item>-0.17892329578751906</item>
    <item>-0.87561555897261822</item>
    <item>0.60517609568085684</item>
    <item>0.60969001498519482</item>
    <item>-0.34192157359063585</item>
    <item>0.096796056078446865</item>
    <item>0.50302726271757325</item>
    <item>0.16059196341625362</item>
    <item>0.1920232169745838</item>
    <item>0.61379467397984055</item>
    <item>0.719987692298378</item>
    <item>7.9112473762668483</item>
    <item>10.739234161895087</item>
    <item>-1.0400511893084701</item>
    <item>4.7481340689437195</item>
    <item>3.3553232328552802</item>
    <item>-2.384153180489553</item>
    <item>1.5788595362804594</item>
    <item>6.1803814575829144</item>
    <item>1.9873670350436798</item>
    <item>4.0695654255577836</item>
    <item>-0.4845686612882501</item>
    <item>5.8621901052967775</item>
    <item>3.0425400099270541</item>
    <item>1.1503520133176406</item>
    <item>0.73072398364933522</item>
    <item>1.171102559216296</item>
    <item>8.2134501892960596</item>
    <item>9.0212173995664902</item>
    <item>-2.3766010107393276</item>
    <item>7.8818573127885747</item>
    <item>7.8042502078070539</item>
    <item>0.65198964695885098</item>
    <item>7.0973711468836465</item>
    <item>7.3694018269119379</item>
    <item>2.7651890925707221</item>
    <item>-0.50670655871364334</item>
    <item>0.57123634403294732</item>
    <item>5.2894678882131227</item>
    <item>1.4615080403540375</item>
    <item>0.73352824742917044</item>
    <item>5.5518960510516697</item>
    <item>2.3265792476443981</item>
    <item>9.5737501224328252</item>
    <item>10.106788984177399</item>
    <item>-3.0981770870534429</item>
    <item>8.6852767640367023</item>
    <item>9.0124299678676003</item>
    <item>0.16661617769189099</item>
    <item>7.9890910723736193</item>
    <item>8.8190955179112684</item>
    <item>6.4320123425661651</item>
    <item>8.5455527069361068</item>
    <item>-1.8857914354933081</item>
    <item>6.4260206364723995</item>
    <item>5.0440364299774778</item>
    <item>-1.5047728226105581</item>
    <item>4.7938627008552563</item>
    <item>5.9273264748165628</item>
    <item>1.8797420335864743</item>
    <item>2.0364872050934966</item>
    <item>-0.47155410343535081</item>
    <item>-0.12782638565534488</item>
    <item>-0.10998911030390481</item>
    <item>0.97465924937323933</item>
    <item>0.47975347270377344</item>
    <item>1.168568332502512</item>
    <item>0.60008045967998658</item>
    <item>0.019222501440005139</item>
    <item>-0.021672141807848384</item>
    <item>-1.252630916531595</item>
    <item>-0.097241477953354069</item>
    <item>0.70291553764548376</item>
    <item>2.2025632575224754</item>
    <item>0.40650962697342902</item>
    <item>-0.22561763776076738</item>
    <item>0.39384451687303151</item>
    <item>-0.17361228574714824</item>
    <item>-0.5199806559715765</item>
    <item>0.83091961188507235</item>
    <item>-0.24047364182865358</item>
    <item>1.7241631105417219</item>
    <item>-0.78218688514504298</item>
    <item>5.3066278729164917</item>
    <item>7.5099106487317053</item>
    <item>-0.70891925138053147</item>
    <item>6.5476895355216582</item>
    <item>5.6917475299626128</item>
    <item>-0.36751551723851622</item>
    <item>4.9334183028610639</item>
    <item>5.0084120472932767</item>
    <item>9.4484913394320955</item>
    <item>9.8555537249101892</item>
    <item>-1.9133086392020289</item>
    <item>8.9928311306059729</item>
    <item>8.9451815676509696</item>
    <item>-0.94191145330271786</item>
    <item>8.1682989867608757</item>
    <item>8.760458785260516</item>
    <item>0.69777102144766778</item>
    <item>0.27645541902550813</item>
    <item>-0.16812935366126661</item>
    <item>-0.22542811636274707</item>
    <item>0.6697669372537054</item>
    <item>0.11729341348429148</item>
    <item>0.61337292177845804</item>
    <item>0.18790058093596174</item>
    <item>0.78744875879087428</item>
    <item>0.4799081500823843</item>
    <item>-0.96877692934707182</item>
    <item>0.50019884309588447</item>
    <item>0.94734253900168663</item>
    <item>-0.75924885416229315</item>
    <item>0.15119633244481448</item>
    <item>1.5577262664137852</item>
    <item>5.6510894684568402</item>
    <item>8.9371838216149762</item>
    <item>0.18583462713199048</item>
    <item>8.9364387793651172</item>
    <item>2.0934534392315935</item>
    <item>0.94509111366623144</item>
    <item>5.90257541758979</item>
    <item>5.4027307627194148</item>
    <item>-92.794495276154677</item>
    <item>-62.680250338712391</item>
    <item>77.953464376372096</item>
    <item>-75.372443255054151</item>
    <item>-90.224768157491056</item>
    <item>53.101571255717261</item>
    <item>-36.555959697791572</item>
    <item>-87.138253194827982</item>
    <item>8.322123565714497</item>
    <item>9.9510578937908107</item>
    <item>-4.556132213635502</item>
    <item>14.853496824597995</item>
    <item>12.652218729214278</item>
    <item>-12.345521114124551</item>
    <item>15.900887550046255</item>
    <item>8.3519732828180544</item>
    <item>0.7536657101716463</item>
    <item>7.1055826465525209</item>
    <item>6.2801069319637008</item>
    <item>5.6671971919480546</item>
    <item>6.4084818958795475</item>
    <item>-7.507783350746779</item>
    <item>17.156048814257883</item>
    <item>2.099739807708044</item>
    <item>-3.756886195961513</item>
    <item>-3.4068944862116841</item>
    <item>-1.2540835144936884</item>
    <item>-3.6790045136741987</item>
    <item>-2.8178792131365489</item>
    <item>0.52257005528400469</item>
    <item>-3.7451396126638925</item>
    <item>-0.6626793054506358</item>
    <item>3.9386956936956627</item>
    <item>10.023837709028429</item>
    <item>4.2373332482912671</item>
    <item>7.1170672495013472</item>
    <item>8.8307831465048299</item>
    <item>-11.210201007506946</item>
    <item>20.978713676818458</item>
    <item>3.8321915040385477</item>
    <item>0.031369252284853925</item>
    <item>-2.6520049890641033</item>
    <item>-1.2321681723778659</item>
    <item>-2.0078191813642201</item>
    <item>-1.8913548049718829</item>
    <item>0.12321210123504424</item>
    <item>-0.68338025153016868</item>
    <item>0.89422473130353086</item>
    <item>0.49745279608906773</item>
    <item>-5.8405988046460013</item>
    <item>4.1809096367479679</item>
    <item>-3.1875535351866682</item>
    <item>-7.0342570935057021</item>
    <item>0.802215134332949</item>
    <item>-0.52734998972248304</item>
    <item>4.124616881915939</item>
    <item>-15.922179111090651</item>
    <item>-5.6089548079115543</item>
    <item>-7.0219586167896733</item>
    <item>-1.3772120985778924</item>
    <item>-7.25360327470099</item>
    <item>-7.6735511605066025</item>
    <item>4.3009281234161545</item>
    <item>4.2914933362983136</item>
    <item>-6.7083963515694078</item>
    <item>-12.084011679696763</item>
    <item>-6.2604169323300969</item>
    <item>7.1783866309608104</item>
    <item>2.4982933456808589</item>
    <item>1.6306936177723779</item>
    <item>-0.071443315366760782</item>
    <item>6.288580948192922</item>
    <item>1.8042152941857585</item>
    <item>-1.2673559241642001</item>
    <item>0.73417228083721497</item>
</parameter>
</boost_serialization>

@stereomatchingkiss
Contributor

If the serialization isn't complete yet, what does the data stored in the test XML file represent

It is the parameters of the ann, but I think it still lacks the weights of the other layers.

And once the serialization is finished, what details will be stored in the XML file?

It should store the parameters of the cnn, ffn, rnn and the other layer weights (I may be wrong on this one since I haven't tested it yet).

If not, I'm still not 100% sure how to specify the type of network to load when the type is decided just before saving it.

You need to remember what kind of network you built. The following part is the "type" of your network:

// input layer
    ann::LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
    ann::BiasLayer<> inputBiasLayer(hiddenLayerSize);
    ann::BaseLayer<PerformanceFunction> inputBaseLayer;

    // hidden layer
    ann::LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
    ann::BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
    ann::BaseLayer<PerformanceFunction> outputLayer;

    // output layer
    OutputLayerType classOutputLayer;

    auto modules = std::tie(inputLayer, inputBiasLayer, inputBaseLayer, hiddenLayer1, hiddenBiasLayer1, outputLayer);
    ann::FFN<decltype(modules), decltype(classOutputLayer), ann::RandomInitialization, PerformanceFunctionType> net(modules, classOutputLayer);

After you train the net, you can save the parameters and load them with the same type (the FFN built by your code). It has to be the same network type you used for training; some of the constructor parameters could differ (it depends), but the parameters tied to the features of the input data must be the same.
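For example, loading could look roughly like this (an untested sketch that mirrors the save code from your original post; it assumes Serialize() works the same way with an input archive):

// Rebuild the exact same network type that was used for training.
ann::LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
ann::BiasLayer<> inputBiasLayer(hiddenLayerSize);
ann::BaseLayer<ann::LogisticFunction> inputBaseLayer;

ann::LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
ann::BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
ann::BaseLayer<ann::LogisticFunction> outputLayer;

ann::BinaryClassificationLayer classOutputLayer;

auto modules = std::tie(inputLayer, inputBiasLayer, inputBaseLayer,
    hiddenLayer1, hiddenBiasLayer1, outputLayer);
ann::FFN<decltype(modules), decltype(classOutputLayer),
    ann::RandomInitialization, ann::MeanSquaredErrorFunction>
    net(modules, classOutputLayer);

// Read the saved parameters back into the freshly built network.
std::ifstream ifs("test", std::ios::binary);
boost::archive::xml_iarchive i(ifs);
net.Serialize(i, 1);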

PS: please correct me if anything is wrong.

@rcurtin
Member

rcurtin commented Mar 1, 2016

For what it's worth, this problem is encountered with other mlpack algorithms: sometimes the user wants to do, e.g., nearest neighbor search and save the model, but they can use different types of trees, and the trees are template parameters.

What I've done in those situations is to create a "wrapper" class that serializes an identifier saying what type is being used, then serializes the appropriate type. Take a look here:

https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/neighbor_search/ns_model_impl.hpp#L52

For the situation of an arbitrary neural network architecture, the problem is a lot harder: you need some way to serialize the type information, and then some way to assemble an arbitrary type and return it when deserializing. I think it will be a tricky (but fun) problem to solve... :)

The approach I've taken, in general, towards serializing arbitrary types is this: "if using an mlpack command-line program, the type should be handled for the user; if using the mlpack C++ code, then the user is responsible for serializing and deserializing the correct type". So if the user uses the mlpack_allknn program, then the NSModel class is transparently used to serialize the correct type. If the user is writing C++, then they can use the NSModel class if they want, but whatever they choose to do, they will have to remember to deserialize their model into the correct type.
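To sketch the idea (hypothetical types and names invented for illustration, not the actual NSModel code; simplified to two model types):

#include <fstream>

#include <boost/archive/xml_oarchive.hpp>
#include <boost/serialization/nvp.hpp>

// Two stand-in models representing different compile-time choices.
struct KDTreeModel
{
  int leafSize = 20;
  template<typename Archive>
  void serialize(Archive& ar, const unsigned int)
  { ar & BOOST_SERIALIZATION_NVP(leafSize); }
};

struct BallTreeModel
{
  double radius = 1.0;
  template<typename Archive>
  void serialize(Archive& ar, const unsigned int)
  { ar & BOOST_SERIALIZATION_NVP(radius); }
};

// The wrapper serializes an identifier first, then the model that matches
// it, so a loader can read the identifier and reconstruct the right type.
struct SearchModel
{
  enum TreeType { KD_TREE = 0, BALL_TREE = 1 };

  int treeType = KD_TREE;
  KDTreeModel kd;
  BallTreeModel ball;

  template<typename Archive>
  void serialize(Archive& ar, const unsigned int)
  {
    ar & BOOST_SERIALIZATION_NVP(treeType); // the identifier goes first
    if (treeType == KD_TREE)
      ar & BOOST_SERIALIZATION_NVP(kd);
    else
      ar & BOOST_SERIALIZATION_NVP(ball);
  }
};

int main()
{
  SearchModel m;
  std::ofstream ofs("model.xml");
  boost::archive::xml_oarchive oa(ofs);
  oa << BOOST_SERIALIZATION_NVP(m); // writes <treeType>, then the model
}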

@sudarshan85
Author

Thank you both for your replies.

"if using an mlpack command-line program, the type should be handled for the user; if using the mlpack C++ code, then the user is responsible for serializing and deserializing the correct type"

Moving forward, I see myself using the mlpack C++ code a lot more than the command-line version, so I guess I need to take care of serializing and deserializing this myself. The main problem with the current code is that for every prediction I need to call "BuildFFN", which ends up training the network again and again. That is why I thought I could just return the trained network to where it's being called from and do the prediction later, but I think the data gets corrupted on return and the prediction call segfaults.

You need to remember what kind of network you built. The following part is the "type" of your network

As you can see in that code, decltype(modules) and decltype(classOutputLayer) determine the type of the network at compile time. I understand that I pass the types in my call, and I can get the classOutputLayer type pretty easily, which is the BinaryClassificationLayer. However, the first type, the layer types, comes from std::tie (which I'm fairly new to), which returns a tuple of the input and hidden layers. I'm not sure what type this returns, so I can't write out the network's type. If I can get some help on that, it'll be great!

Thanks.

@stereomatchingkiss
Contributor

I'm not sure what type this returns, so I can't write out the network's type

I think it should be something like

std::tuple<ann::LinearLayer<>&, ann::BiasLayer<>&, ....>

If you call std::make_tuple, it will become

std::tuple<ann::LinearLayer<>, ann::BiasLayer<>, ....>

You can take a look at Boost.TypeIndex too; this library can help you print the name of a type (I read about it in Meyers' book, but haven't tried it yet).
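For instance, something like this (untested) should print both tuple types and make the tie/make_tuple difference visible:

#include <iostream>
#include <tuple>

#include <boost/type_index.hpp>

int main()
{
  int a = 1;
  double b = 2.0;

  auto tied = std::tie(a, b);        // tuple of references
  auto made = std::make_tuple(a, b); // tuple of values

  using boost::typeindex::type_id_with_cvr;
  // Prints something like "std::tuple<int&, double&>".
  std::cout << type_id_with_cvr<decltype(tied)>().pretty_name() << '\n';
  // Prints something like "std::tuple<int, double>".
  std::cout << type_id_with_cvr<decltype(made)>().pretty_name() << '\n';
}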

@theSundayProgrammer
Contributor

As std::tie stores references, not objects, you cannot return a tuple created with tie from objects on the stack. std::make_tuple, however, copies the objects, so you can return such tuples from functions. You do, however, need to know the exact type of the tuple. The type of the RNN in the recurrent_network_test is:

RNN<
  std::tuple<
    LinearLayer<arma::mat, arma::mat>&,
    RecurrentLayer<arma::mat, arma::mat>&,
    BaseLayer<LogisticFunction, arma::mat, arma::mat>&,
    LinearLayer<arma::mat, arma::mat>&,
    BaseLayer<LogisticFunction, arma::mat, arma::mat>&
  >,
  BinaryClassificationLayer,
  RandomInitialization,
  MeanSquaredErrorFunction
>

If you are using C++14, though, you can declare the return type as auto. Hence you could create a model like:

auto BuildNetwork(...) { return FFN(make_tuple(...), ...); }

In fact I wrote serialization code before I considered copy construction, but then I had to cut and paste code from the training module to the test module.

@sudarshan85
Author

I am using C++14 (C++1y), and what I did was train the model and then just return net (which is the FFN) and save it to another auto'd variable in main. However, when I called Predict on that, I got a segfault. My guess is that somewhere, somehow, while being returned from BuildFFN, the variable net got corrupted. Personally, I would prefer that this worked (my machine has around 32 GB of memory, so holding the data is not a problem). Perhaps we could look into why this segfault occurs?

@stereomatchingkiss
Contributor

just return net (which is the FFN) and save it to another auto'd variable in main.

Like theSundayProgrammer said, if you create your net with std::tie, the type of your net will hold references to the layers rather than copies/moves of them (remember to define ARMA_USE_CXX11, otherwise the Armadillo matrices cannot be moved). Returning references to the stack is not a good idea, because the stack frame is destroyed when you exit the scope.

Bad idea:

std::tuple<int&> get_int()
{
  int a = 3;
  return std::tie(a);
}

OK, since it takes a copy/move:

std::tuple<int> get_int()
{
  int a = 3;
  return std::make_tuple(a);
}

Is this the case you are talking about? By the way, I will try to complete the serialization of the ann module today, and I will let you know if I can load and save your FFN without any trouble.

@stereomatchingkiss
Contributor

I tested the serialization of the current implementation (without any change), and it works like magic. Looks like I was wrong: the serialization of the ffn, rnn and cnn already works, and the parameters of the ffn already store the weights of the net (modules). I am sorry for my misunderstanding.

Your problem should be fixed if you create the tuple with std::make_tuple.

@sudarshan85
Author

Thank you for your replies. After reading a bunch about tie and tuple, I think I understand what they do. I changed my code to create the modules using std::make_tuple instead of std::tie, as shown:

auto modules = std::make_tuple(inputLayer, inputBiasLayer, inputBaseLayer, hiddenLayer1, hiddenBiasLayer1, outputLayer);
ann::FFN<decltype(modules), decltype(classOutputLayer), ann::RandomInitialization, PerformanceFunctionType> net(modules, classOutputLayer);

The function's return type is auto, and I return the variable net after training the network. In the main function I save it to another auto'd variable and call Predict on it:

auto net = BuildFFN<ann::LogisticFunction, ann::BinaryClassificationLayer, ann::MeanSquaredErrorFunction>
        (trainData, trainLabels, testData, testLabels, hiddenLayerSize);

arma::mat prediction;
net.Predict(testData, prediction);

I also made sure that ARMA_USE_CXX11 was enabled (after checking out how to do it here):

#if !defined(ARMA_USE_CXX11)
#define ARMA_USE_CXX11
//// Uncomment the above line if you have a C++ compiler that supports the C++11 standard
//// This will enable additional features, such as use of initialiser lists
#endif

However, the compiler doesn't like this; it spits out a huge number of errors:

In file included from /usr/include/c++/4.8/functional:55:0,
                 from /usr/include/c++/4.8/bits/stl_algo.h:66,
                 from /usr/include/c++/4.8/algorithm:62,
                 from /usr/include/boost/math/tools/config.hpp:16,
                 from /usr/include/boost/math/tools/series.hpp:16,
                 from /usr/include/boost/math/special_functions/gamma.hpp:17,
                 from /usr/local/include/mlpack/prereqs.hpp:32,
                 from /usr/local/include/mlpack/core.hpp:190,
                 from /home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:4:
/usr/include/c++/4.8/tuple: In instantiation of ‘constexpr std::_Head_base<_Idx, _Head, false>::_Head_base(const _Head&) [with long unsigned int _Idx = 3ul; _Head = mlpack::ann::LinearLayer<>]’:
/usr/include/c++/4.8/tuple:257:44:   recursively required from ‘constexpr std::_Tuple_impl<_Idx, _Head, _Tail ...>::_Tuple_impl(const _Head&, const _Tail& ...) [with long unsigned int _Idx = 1ul; _Head = mlpack::ann::BiasLayer<>; _Tail = {mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >}]’
/usr/include/c++/4.8/tuple:257:44:   required from ‘constexpr std::_Tuple_impl<_Idx, _Head, _Tail ...>::_Tuple_impl(const _Head&, const _Tail& ...) [with long unsigned int _Idx = 0ul; _Head = mlpack::ann::LinearLayer<>; _Tail = {mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >}]’
/usr/include/c++/4.8/tuple:400:33:   required from ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(const _Elements& ...) [with _Elements = {mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >}]’
/usr/include/c++/4.8/tuple:864:62:   required from ‘constexpr std::tuple<typename std::__decay_and_strip<_Elements>::__type ...> std::make_tuple(_Elements&& ...) [with _Elements = {mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >&}]’
/home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:42:123:   required from ‘auto BuildFFN(MatType&, MatType&, MatType&, MatType&, size_t) [with PerformanceFunction = mlpack::ann::LogisticFunction; OutputLayerType = mlpack::ann::BinaryClassificationLayer; PerformanceFunctionType = mlpack::ann::MeanSquaredErrorFunction; MatType = arma::Mat<double>; size_t = long unsigned int]’
/home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:79:71:   required from here
/usr/include/c++/4.8/tuple:135:25: error: use of deleted function ‘mlpack::ann::LinearLayer<>::LinearLayer(const mlpack::ann::LinearLayer<>&)’
       : _M_head_impl(__h) { }
                         ^
In file included from /home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:12:0:
/usr/local/include/mlpack/methods/ann/layer/linear_layer.hpp:30:7: note: ‘mlpack::ann::LinearLayer<>::LinearLayer(const mlpack::ann::LinearLayer<>&)’ is implicitly declared as deleted because ‘mlpack::ann::LinearLayer<>’ declares a move constructor or move assignment operator
 class LinearLayer
       ^
In file included from /usr/include/c++/4.8/functional:55:0,
                 from /usr/include/c++/4.8/bits/stl_algo.h:66,
                 from /usr/include/c++/4.8/algorithm:62,
                 from /usr/include/boost/math/tools/config.hpp:16,
                 from /usr/include/boost/math/tools/series.hpp:16,
                 from /usr/include/boost/math/special_functions/gamma.hpp:17,
                 from /usr/local/include/mlpack/prereqs.hpp:32,
                 from /usr/local/include/mlpack/core.hpp:190,
                 from /home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:4:
/usr/include/c++/4.8/tuple: In instantiation of ‘constexpr std::_Head_base<_Idx, _Head, false>::_Head_base(const _Head&) [with long unsigned int _Idx = 0ul; _Head = mlpack::ann::LinearLayer<>]’:
/usr/include/c++/4.8/tuple:257:44:   required from ‘constexpr std::_Tuple_impl<_Idx, _Head, _Tail ...>::_Tuple_impl(const _Head&, const _Tail& ...) [with long unsigned int _Idx = 0ul; _Head = mlpack::ann::LinearLayer<>; _Tail = {mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >}]’
/usr/include/c++/4.8/tuple:400:33:   required from ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(const _Elements& ...) [with _Elements = {mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >}]’
/usr/include/c++/4.8/tuple:864:62:   required from ‘constexpr std::tuple<typename std::__decay_and_strip<_Elements>::__type ...> std::make_tuple(_Elements&& ...) [with _Elements = {mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >&}]’
/home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:42:123:   required from ‘auto BuildFFN(MatType&, MatType&, MatType&, MatType&, size_t) [with PerformanceFunction = mlpack::ann::LogisticFunction; OutputLayerType = mlpack::ann::BinaryClassificationLayer; PerformanceFunctionType = mlpack::ann::MeanSquaredErrorFunction; MatType = arma::Mat<double>; size_t = long unsigned int]’
/home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:79:71:   required from here
/usr/include/c++/4.8/tuple:135:25: error: use of deleted function ‘mlpack::ann::LinearLayer<>::LinearLayer(const mlpack::ann::LinearLayer<>&)’
       : _M_head_impl(__h) { }
                         ^
make[2]: *** [CMakeFiles/ff_nn.dir/src/ff_nn.cpp.o] Error 1
make[1]: *** [CMakeFiles/ff_nn.dir/all] Error 2
make: *** [all] Error 2

Based on what I can infer from this, it seems that the layers are references and making a tuple out of this is not working (I could be wrong).

@stereomatchingkiss
Contributor

It works perfectly fine on my laptop (Win8 64-bit, VC2015 64-bit). Maybe updating your mlpack with

git pull https://github.com/mlpack/mlpack

can solve this problem? These errors look like the compiler complaining that it cannot generate the copy constructor/move constructor for you (if you want to know why, google the rule of zero).

@zoq
Member

zoq commented Mar 2, 2016

Once I modified the CMake file to build against C++14, it also worked on my machine. I guess another simple solution would be to write a wrapper class that creates the network and forwards the necessary functions, e.g. Predict(...). This would also work with C++11.
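A hypothetical sketch of such a wrapper (untested; the names are made up): it owns the layers and the network, so the concrete tuple type never leaks out and callers never need to name it.

class VanillaNetwork
{
 public:
  VanillaNetwork(arma::mat& trainData, arma::mat& trainLabels,
                 const size_t hiddenLayerSize) :
      inputLayer(trainData.n_rows, hiddenLayerSize),
      inputBiasLayer(hiddenLayerSize),
      hiddenLayer1(hiddenLayerSize, trainLabels.n_rows),
      hiddenBiasLayer1(trainLabels.n_rows),
      modules(std::tie(inputLayer, inputBiasLayer, inputBaseLayer,
                       hiddenLayer1, hiddenBiasLayer1, outputLayer)),
      net(modules, classOutputLayer)
  { }

  void Train(arma::mat& data, arma::mat& labels) { net.Train(data, labels); }
  void Predict(arma::mat& data, arma::mat& out) { net.Predict(data, out); }

 private:
  // Layers are declared before the tuple and the network because members
  // are initialized in declaration order.
  LinearLayer<> inputLayer;
  BiasLayer<> inputBiasLayer;
  BaseLayer<> inputBaseLayer;
  LinearLayer<> hiddenLayer1;
  BiasLayer<> hiddenBiasLayer1;
  BaseLayer<> outputLayer;
  BinaryClassificationLayer classOutputLayer;

  using Modules = std::tuple<LinearLayer<>&, BiasLayer<>&, BaseLayer<>&,
      LinearLayer<>&, BiasLayer<>&, BaseLayer<>&>;
  Modules modules;

  FFN<Modules, BinaryClassificationLayer, RandomInitialization,
      MeanSquaredErrorFunction> net;
};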

@zoq
Member

zoq commented Mar 2, 2016

Btw. I used the following code to test:

auto GetVanillaNetwork(arma::mat& trainData, arma::mat& trainLabels)
{
  int hiddenLayerSize = 10;

  LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
  BiasLayer<> inputBiasLayer(hiddenLayerSize);
  BaseLayer<> inputBaseLayer;

  LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
  BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
  BaseLayer<> outputLayer;

  BinaryClassificationLayer classOutputLayer;

  auto modules = std::make_tuple(inputLayer, inputBiasLayer, inputBaseLayer,
                          hiddenLayer1, hiddenBiasLayer1, outputLayer);

  FFN<decltype(modules), decltype(classOutputLayer), RandomInitialization,
      MeanSquaredErrorFunction> net(modules, classOutputLayer);

  RMSprop<decltype(net)> opt(net, 0.01, 0.88, 1e-8,
      20 * trainData.n_cols, 1e-18);

  return net;
}

BOOST_AUTO_TEST_CASE(VanillaNetworkTest)
{
  // Load the dataset.
  arma::mat dataset;
  data::Load("thyroid_train.csv", dataset, true);

  arma::mat trainData = dataset.submat(0, 0, dataset.n_rows - 4,
      dataset.n_cols - 1);
  arma::mat trainLabels = dataset.submat(dataset.n_rows - 3, 0,
      dataset.n_rows - 1, dataset.n_cols - 1);

  data::Load("thyroid_test.csv", dataset, true);

  arma::mat testData = dataset.submat(0, 0, dataset.n_rows - 4,
      dataset.n_cols - 1);
  arma::mat testLabels = dataset.submat(dataset.n_rows - 3, 0,
      dataset.n_rows - 1, dataset.n_cols - 1);


  auto net = GetVanillaNetwork(trainData, trainLabels);
  arma::mat prediction;
  net.Predict(dataset, prediction);
}

@sudarshan85
Author

@zoq, in your code, where are you training the network?

Thank you. I am building against C++14 as well. Here is my CMake file:

cmake_minimum_required ( VERSION 2.8 )
project ( mlpack_nn )

set ( CMAKE_CXX_FLAGS "-std=c++1y" )
set ( CMAKE_EXPORT_COMPILE_COMMANDS 1 )
set ( PROJECT_INCLUDE_DIR ${PROJECT_SOURCE_DIR}/include )

set ( MLPACK_INCLUDE_DIR "/usr/local/include/mlpack/" )
set ( MLPACK_LIBRARY "/usr/local/lib/libmlpack.so.2" )

find_package ( Armadillo REQUIRED )
find_package ( Boost COMPONENTS serialization REQUIRED )
include_directories ( ${ARMADILLO_INCLUDE_DIRS} )
include_directories ( ${Boost_INCLUDE_DIR} )
include_directories ( ${MLPACK_INCLUDE_DIR} )

file ( GLOB_RECURSE PROJ_SRCS src/*.cpp )

include_directories ( "${PROJECT_INCLUDE_DIR}" )
add_executable ( ff_nn ${PROJ_SRCS} )
target_link_libraries ( ff_nn ${ARMADILLO_LIBRARIES} ${Boost_LIBRARIES} ${MLPACK_LIBRARY} )

Following stereomatchingkiss's suggestion, I did a pull today and fast-forwarded my branch by 4 commits, then built and ran mlpack_test, which found no errors. After this, when I compile I am still getting errors, but they're different:

In file included from /home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:13:0:
/usr/local/include/mlpack/methods/ann/layer/base_layer.hpp: In instantiation of ‘OutputDataType& mlpack::ann::BaseLayer<ActivationFunction, InputDataType, OutputDataType>::OutputParameter() const [with ActivationFunction = mlpack::ann::LogisticFunction; InputDataType = arma::Mat<double>; OutputDataType = arma::Mat<double>]’:
/usr/local/include/mlpack/methods/ann/ffn.hpp:314:5:   required from ‘double mlpack::ann::FFN<LayerTypes, OutputLayerType, InitializationRuleType, PerformanceFunction>::OutputError(const DataType&, ErrorType&, const std::tuple<_Args2 ...>&) [with DataType = arma::Mat<double>; ErrorType = arma::Mat<double>; Tp = {mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >}; LayerTypes = std::tuple<mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> > >; OutputLayerType = mlpack::ann::BinaryClassificationLayer; InitializationRuleType = mlpack::ann::RandomInitialization; PerformanceFunction = mlpack::ann::MeanSquaredErrorFunction]’
/usr/local/include/mlpack/methods/ann/ffn_impl.hpp:241:28:   required from ‘double mlpack::ann::FFN<LayerTypes, OutputLayerType, InitializationRuleType, PerformanceFunction>::Evaluate(const mat&, size_t, bool) [with LayerTypes = std::tuple<mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> > >; OutputLayerType = mlpack::ann::BinaryClassificationLayer; InitializationRuleType = mlpack::ann::RandomInitialization; PerformanceFunction = mlpack::ann::MeanSquaredErrorFunction; arma::mat = arma::Mat<double>; size_t = long unsigned int]’
/usr/local/include/mlpack/core/optimizers/rmsprop/rmsprop_impl.hpp:54:22:   required from ‘double mlpack::optimization::RMSprop<DecomposableFunctionType>::Optimize(arma::mat&) [with DecomposableFunctionType = mlpack::ann::FFN<std::tuple<mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> > >, mlpack::ann::BinaryClassificationLayer, mlpack::ann::RandomInitialization, mlpack::ann::MeanSquaredErrorFunction>&; arma::mat = arma::Mat<double>]’
/usr/local/include/mlpack/methods/ann/ffn_impl.hpp:137:50:   required from ‘void mlpack::ann::FFN<LayerTypes, OutputLayerType, InitializationRuleType, PerformanceFunction>::Train(const mat&, const mat&) [with OptimizerType = mlpack::optimization::RMSprop; LayerTypes = std::tuple<mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> > >; OutputLayerType = mlpack::ann::BinaryClassificationLayer; InitializationRuleType = mlpack::ann::RandomInitialization; PerformanceFunction = mlpack::ann::MeanSquaredErrorFunction; arma::mat = arma::Mat<double>]’
/home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:45:5:   required from ‘void BuildFFN(MatType&, MatType&, MatType&, MatType&, size_t) [with PerformanceFunction = mlpack::ann::LogisticFunction; OutputLayerType = mlpack::ann::BinaryClassificationLayer; PerformanceFunctionType = mlpack::ann::MeanSquaredErrorFunction; MatType = arma::Mat<double>; size_t = long unsigned int]’
/home/sudarshan/project-yanack/mlpack_nn/src/ff_nn.cpp:79:71:   required from here
/usr/local/include/mlpack/methods/ann/layer/base_layer.hpp:128:52: error: invalid initialization of reference of type ‘arma::Mat<double>&’ from expression of type ‘const arma::Mat<double>’
   OutputDataType& OutputParameter() const { return outputParameter; }
                                                    ^
make[2]: *** [CMakeFiles/ff_nn.dir/src/ff_nn.cpp.o] Error 1
make[1]: *** [CMakeFiles/ff_nn.dir/all] Error 2
make: *** [all] Error 2

I was able to isolate the error to the net.Train(trainData, trainLabels) call. Any suggestions?

Thanks.

@stereomatchingkiss
Contributor

After this, when I compile I am still getting errors, but they're different:

The compiler complains because of const correctness; it is quite easy to fix.

change

OutputDataType& OutputParameter() const { return outputParameter; }

to

OutputDataType const& OutputParameter() const { return outputParameter; }

Pull request #536 already fixes this problem.

Any compiler with good C++11 support should be able to fulfill the rule of zero; check how good your compiler's C++11 support is (you can treat C++14 as a small patch on top of C++11).

You could improve performance a little by using move rather than copy:

auto modules = std::make_tuple(inputLayer, inputBiasLayer, inputBaseLayer,
    hiddenLayer1, hiddenBiasLayer1, outputLayer);

FFN<decltype(modules), decltype(classOutputLayer), RandomInitialization,
    MeanSquaredErrorFunction> net(std::move(modules), std::move(classOutputLayer));

The original solution will copy the modules and classOutputLayer into the FFN; with std::move they are moved into the FFN instead, which could save some memory (remember to define ARMA_USE_CXX11, otherwise the Armadillo matrices will be copied rather than moved).

@sudarshan85
Author

Thank you. I checked out pr/536 and linked the include directories and lib from there. That got rid of all the compiler errors. However, I get a 100% classification error when I use std::make_tuple, but when I use std::tie I get 4.95916%. This is my function:

auto BuildFFN(MatType& trainData, MatType& trainLabels, MatType& testData, MatType& testLabels, const size_t hiddenLayerSize)
{
    // input layer
    ann::LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
    ann::BiasLayer<> inputBiasLayer(hiddenLayerSize);
    ann::BaseLayer<PerformanceFunction> inputBaseLayer;

    // hidden layer
    ann::LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
    ann::BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
    ann::BaseLayer<PerformanceFunction> outputLayer;

    // output layer
    OutputLayerType classOutputLayer;

    auto modules = std::tie(inputLayer, inputBiasLayer, inputBaseLayer, hiddenLayer1, hiddenBiasLayer1, outputLayer);
    ann::FFN<decltype(modules), decltype(classOutputLayer), ann::RandomInitialization, PerformanceFunctionType> net(modules, classOutputLayer);

    net.Train(trainData, trainLabels);
    arma::mat prediction;
    net.Predict(testData, prediction);

    double classificationError = 0;
    for (size_t i = 0; i < testData.n_cols; i++)
    {
        if (arma::sum(arma::sum(arma::abs(prediction.col(i) - testLabels.col(i)))) != 0)
        {
            classificationError++;
        }
    }

    std::cout << "Classification Error = " << (double(classificationError) / testData.n_cols) * 100 << "%" << std::endl;

    return net;
}

@stereomatchingkiss
Contributor

I get a 100% classification error when I use std::make_tuple, but when I use std::tie I get 4.95916%.

Pull request #542 should fix this issue; the problem is that the parameter name "network" hides the data member "network".

If you want to save memory when making the tuple, you can move the layers too:

auto modules = std::make_tuple(std::move(inputLayer),
    std::move(inputBiasLayer), std::move(inputBaseLayer),
    std::move(hiddenLayer1), std::move(hiddenBiasLayer1),
    std::move(outputLayer));

@theSundayProgrammer
Contributor

I had the same result when I used make_tuple rather than tie. I fixed it, though, in my own fork:
http://github.com/theSundayProgrammer/mlpack
branch: mytweaks
SHA1: 540dd7c

Although it is for the CNN, not the FFN, I guess a similar change for the FFN should work too.


@sudarshan85
Author

Thank you all for your help. Working with pr/542 fixed everything: I am able to return net as auto using C++1y and run prediction later in the code, and it works. Closing the issue now.
