
Import ONNX with low_memory setting #333

Closed
lauracanalini opened this issue Mar 2, 2022 · 1 comment
lauracanalini (Contributor) commented:
Hi, I was trying to import the resnet101 ONNX file with the low_memory setting of import_net_from_onnx_file.
However, when I use a batch_size greater than 1, training crashes while resizing the network, specifically in the first Conv layer: ConvolDescriptor::resize fails when it tries to free the ptrI pointer (line 265 of descriptor_conv2D.cpp). I am using EDDL 1.0.4b.
A simple program that reproduces the error:

#include <iostream>

#include "eddl/apis/eddl.h"
#include "eddl/serialization/onnx/eddl_onnx.h"

using namespace eddl;
using namespace std;

int main(int argc, char **argv) {
    auto resnet101 = import_net_from_onnx_file("resnet101.onnx", { 3, 224, 224 }, 0);          // mem = 0
    auto resnet101_low_mem = import_net_from_onnx_file("resnet101.onnx", { 3, 224, 224 }, 2);  // mem = 2 (low_memory)

    auto conv = dynamic_cast<LConv*>(resnet101->layers[1]);
    auto conv_low_mem = dynamic_cast<LConv*>(resnet101_low_mem->layers[1]);

    eddl_free(conv->cd->ptrI);
    cout << "Conv ptrI freed" << endl;
    eddl_free(conv_low_mem->cd->ptrI);  // here the program breaks
    cout << "Conv_low_mem ptrI freed" << endl;
    
    return EXIT_SUCCESS;
}

Regardless of the error, does it make sense to import the ONNX with the low_memory setting, or does passing "low_memory" to the computing service when the network is built do the same thing?

chavicoski (Contributor) commented:
Hi,

If you want to use the model with a "low_mem" configuration, you should specify it in the constructor of the CompServ. The "mem" argument of the ONNX import function is not meant for that, and we will probably remove it in the future.

Here is an example using the CompServ:

#include <iostream>

#include "eddl/apis/eddl.h"
#include "eddl/serialization/onnx/eddl_onnx.h"

using namespace eddl;
using namespace std;

int main(int argc, char **argv) {
    auto resnet101 = import_net_from_onnx_file("resnet101.onnx", { 3, 224, 224 }, 0);
    auto resnet101_low_mem = import_net_from_onnx_file("resnet101.onnx", { 3, 224, 224 }, 0); // <- Use 0

    auto conv = dynamic_cast<LConv*>(resnet101->layers[1]);
    auto conv_low_mem = dynamic_cast<LConv*>(resnet101_low_mem->layers[1]);

    build(resnet101,
          adam(),
          {"softmax_cross_entropy"},
          {"categorical_accuracy"},
          CS_CPU(-1, "full_mem")); // <- "full_mem"

    build(resnet101_low_mem,
          adam(),
          {"softmax_cross_entropy"},
          {"categorical_accuracy"},
          CS_CPU(-1, "low_mem")); // <- "low_mem"

    eddl_free(conv->cd->ptrI);
    cout << "Conv ptrI freed" << endl;
    eddl_free(conv_low_mem->cd->ptrI);  // Now the program doesn't break
    cout << "Conv_low_mem ptrI freed" << endl;

    return EXIT_SUCCESS;
}

And regarding your last question: you can import the model and then use any memory configuration you want. It only changes the amount of memory reserved for the operations; the results should be the same.
