
Error in conversion of Inception Model from Keras to CNTK #19

Closed
coolrishi2005 opened this Issue Dec 5, 2017 · 7 comments

coolrishi2005 commented Dec 5, 2017

Hi kitstar,

I have saved Inception Model and its Weights using the below code:

import numpy as np
import tensorflow as tf
import os
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras import applications

BackEndModel = applications.InceptionV3(include_top=False, weights='imagenet')
model_json = BackEndModel.to_json()
with open("incpetionTopModel.json", "w") as json_file:
    json_file.write(model_json)
BackEndModel.save_weights("incpetionTopModel.h5")

Now, I want to convert this model and its weight files to CNTK. Using the standard procedure described in the MMdnn documentation, I am getting an error in the last step (after generating the CNTK code snippet, converting the code and IR weights file to the original CNTK model). Here is the error trace:

(C:\Program Files\Anaconda3\envs\py35) >python -m mmdnn.conversion.examples.cntk.imagenet_test -n cntkInceptionTopModel.py -w IRIncpetionTopModel.npy --dump cntkInceptionTopModel.dnn
Selected CPU as the process wide default device.
C:\Program Files\Anaconda3\envs\py35\lib\site-packages\cntk\core.py:82: RuntimeWarning: data is not C contiguous; rearrange your data/computation to avoid costly data conversions
  RuntimeWarning)
Traceback (most recent call last):
  File "C:\Program Files\Anaconda3\envs\py35\lib\runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files\Anaconda3\envs\py35\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Program Files\Anaconda3\envs\py35\lib\site-packages\mmdnn\conversion\examples\cntk\imagenet_test.py", line 57, in <module>
    tester = TestCNTK()
  File "C:\Program Files\Anaconda3\envs\py35\lib\site-packages\mmdnn\conversion\examples\cntk\imagenet_test.py", line 22, in __init__
    self.model = self.MainModel.KitModel(self.args.w)
  File "D:\Rishi\Machine Learning\CNTK\MMdnn-master\mmdnn\conversion\cntk\cntkInceptionTopModel.py", line 27, in KitModel
    conv2d_189 = convolution(input_3, strides = (2, 2,), auto_padding = [False, False, False], name = 'conv2d_189')
  File "D:\Rishi\Machine Learning\CNTK\MMdnn-master\mmdnn\conversion\cntk\cntkInceptionTopModel.py", line 374, in convolution
    input = cntk.transpose(input, [dim - 2] + list(range(0, dim - 2)))
  File "C:\Program Files\Anaconda3\envs\py35\lib\site-packages\cntk\internal\swig_helper.py", line 69, in wrapper
    result = f(*args, **kwds)
  File "C:\Program Files\Anaconda3\envs\py35\lib\site-packages\cntk\ops\__init__.py", line 2056, in transpose
    return transpose(x, perm, name)
RuntimeError: invalid vector subscript

Kindly help me out with it.
I have already added -node add, as you suggested, while generating the IR code from the Keras model.

kitstar commented Dec 5, 2017

Hi @coolrishi2005 ,

The input layer of the Keras InceptionV3 model doesn't carry shape information; that is stored in the model config as batch_input_shape. So it can't be parsed correctly.
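One way to see this (a hypothetical sketch, not part of MMdnn; the helper input_layer_has_shape and the hand-written example configs are mine, mimicking the layout of Keras to_json output) is to inspect the saved JSON and check whether the first layer's batch_input_shape has concrete dimensions:

```python
import json

def input_layer_has_shape(config):
    """Return True if the first layer's config defines a fully concrete input shape."""
    first = config["config"]["layers"][0]["config"]
    shape = first.get("batch_input_shape")
    # A usable shape looks like [None, 299, 299, 3]; None spatial/channel dims
    # mean the converter cannot infer the input size.
    return shape is not None and all(d is not None for d in shape[1:])

# Hand-written example configs, mimicking Keras model JSON:
with_shape = {"config": {"layers": [
    {"class_name": "InputLayer",
     "config": {"batch_input_shape": [None, 299, 299, 3]}}]}}
without_shape = {"config": {"layers": [
    {"class_name": "InputLayer",
     "config": {"batch_input_shape": [None, None, None, 3]}}]}}

print(input_layer_has_shape(with_shape))     # True
print(input_layer_has_shape(without_shape))  # False
```

Loading the real incpetionTopModel.json with json.load and passing it to the helper would show the same distinction between the two InceptionV3 constructions.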

To fix it, please change the code

BackEndModel = applications.InceptionV3(include_top=False, weights='imagenet')

to

BackEndModel = applications.InceptionV3(input_shape=(299, 299, 3), include_top=False, weights='imagenet')

I tested it with the following scripts:

$ python -m mmdnn.conversion._script.convertToIR -f keras -d ./kit_imagenet -n incpetionTopModel.json -w incpetionTopModel.h5
$ python -m mmdnn.conversion._script.IRToCode --dstModelFormat cntk --IRModelPath kit_imagenet.pb --dstModelPath kit_imagenet.py --IRWeightPath kit_imagenet.npy
$ python -m mmdnn.conversion.examples.cntk.imagenet_test -n kit_imagenet.py -w kit_imagenet.npy --dump cntkmodel.dnn

CNTK model file is saved as [cntkmodel.dnn], generated by [kit_imagenet.py] and [kit_imagenet.npy].

Hope it can help you.

coolrishi2005 commented Dec 6, 2017

Hi kitstar,

Thanks for the help. The model has been converted to *.dnn, but with the following warning:

(C:\Program Files\Anaconda3\envs\py35) D:\mmdnn\conversion\cntk>python -m mmdnn.conversion.examples.cntk.imagenet_test -n cntkInceptionTopModel.py -w IRInceptionTopModel.npy --dump cntkInceptionTopModel.dnn
Selected CPU as the process wide default device.
C:\Program Files\Anaconda3\envs\py35\lib\site-packages\cntk\core.py:82: RuntimeWarning: data is not C contiguous; rearrange your data/computation to avoid costly data conversions
  RuntimeWarning)
CNTK model file is saved as [cntkInceptionTopModel.dnn], generated by [cntkInceptionTopModel.py] and [IRInceptionTopModel.npy].

Is it fine?

kitstar commented Dec 6, 2017

Supposed to be fine.

coolrishi2005 commented Dec 6, 2017

Great!! Thanks.

@kitstar kitstar added the question label Dec 6, 2017

@kitstar kitstar closed this Dec 6, 2017

coolrishi2005 commented Dec 6, 2017

Hi kitstar,

I have created and saved the Inception model with the following parameters:
BackEndModel = applications.InceptionV3(input_shape=(150, 150, 3), include_top=False, weights='imagenet')

I am trying to load the above saved model (cntkInceptionTopModel.dnn) using cntk CPP Evaluation Example provided by Microsoft (https://github.com/Microsoft/CNTK/blob/release/2.3/Examples/Evaluation/CNTKLibraryCPPEvalCPUOnlyExamples/CNTKLibraryCPPEvalCPUOnlyExamples.cpp)

Now, for evaluation, I am providing binary RGB data as input (of size 150x150x3, i.e. 67,500 bytes). Following is the code:

void EvaluationSingleSampleUsingDense(const wchar_t* modelFile, const DeviceDescriptor& device)
{
    printf("\n===== Evaluate single sample using dense format.\n");

    // Load the model.
    FunctionPtr modelFunc = Function::Load(modelFile, device);

    // Get the input variable. The model has only one input.
    Variable inputVar = modelFunc->Arguments()[0];

    // The model has only one output.
    // If the model has more than one output, use modelFunc->Outputs to get the list of output variables.
    Variable outputVar = modelFunc->Output();
    NDShape outputShape = outputVar.Shape();
    std::vector<size_t> outputShapeDim = outputShape.Dimensions();

    // Prepare input data.
    // For evaluating an image, you first need to perform some image preprocessing to make sure
    // that the input image has the correct size and layout matching the model inputs.
    // Please note that the model used by this example expects the CHW image layout.
    // inputVar.Shape[0] is image width, inputVar.Shape[1] is image height, and inputVar.Shape[2] is channels.
    NDShape inputShape = inputVar.Shape();
    std::vector<size_t> inputShapeDim = inputShape.Dimensions();

    // Read the raw 150x150x3 RGB dump (67,500 bytes).
    unsigned char inputDataChar[67500];
    std::string filepath = "D:\\Rishi\\CNTK Evaluation\\TestCNTK\\TestCNTK\\videoRGBData.rgb";
    std::basic_ifstream<unsigned char> infile(filepath, std::ios::binary | std::ios::in);
    infile.read(inputDataChar, 67500);
    infile.close();

    // Copy the bytes into the float input buffer.
    std::vector<float> inputData(inputVar.Shape().TotalSize());
    for (int i = 0; i < 67500; i++)
        inputData[i] = (float)inputDataChar[i];

    // Create the input value and input data map.
    std::cout << inputVar.Shape();
    ValuePtr inputVal = Value::CreateBatch(inputVar.Shape(), inputData, device);
    std::unordered_map<Variable, ValuePtr> inputDataMap = { { inputVar, inputVal } };

    // Create the output data map. Using null as the Value indicates using system-allocated memory.
    // Alternatively, create a Value object and add it to the data map.
    std::unordered_map<Variable, ValuePtr> outputDataMap = { { outputVar, nullptr } };

    // Start evaluation on the device.
    modelFunc->Evaluate(inputDataMap, outputDataMap, device);

    // Get the evaluation result as dense output.
    ValuePtr outputVal = outputDataMap[outputVar];
    std::vector<std::vector<float>> outputData;
    outputVal->CopyVariableValueTo(outputVar, outputData);

    PrintOutput<float>(outputVar.Shape().TotalSize(), outputData);
}
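As the comments in the example note, the model expects the CHW image layout, while a raw interleaved RGB dump such as videoRGBData.rgb is HWC (R,G,B per pixel). A minimal sketch of the reordering that would be needed before filling inputData (the helper hwc_to_chw is hypothetical, written in Python for brevity; the same index mapping applies in the C++ loop):

```python
def hwc_to_chw(raw, h, w, c=3):
    """raw: flat HWC byte sequence of length h*w*c -> flat CHW float list."""
    assert len(raw) == h * w * c
    out = [0.0] * (h * w * c)
    for y in range(h):
        for x in range(w):
            for ch in range(c):
                # HWC index (y*w + x)*c + ch maps to CHW index ch*h*w + y*w + x.
                out[ch * h * w + y * w + x] = float(raw[(y * w + x) * c + ch])
    return out

# 2x2 RGB demo: pixels (1,2,3), (4,5,6), (7,8,9), (10,11,12) interleaved.
raw = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
print(hwc_to_chw(raw, 2, 2))
# R plane, then G plane, then B plane:
# [1.0, 4.0, 7.0, 10.0, 2.0, 5.0, 8.0, 11.0, 3.0, 6.0, 9.0, 12.0]
```

For the 150x150x3 case above, h = w = 150, and the reordered buffer is what would be passed to Value::CreateBatch.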

But during the call to modelFunc->Evaluate(inputDataMap, outputDataMap, device), I am getting the following runtime error:

Unhandled exception at 0x00007FFE1527FEDE (Cntk.Core-2.3d.dll) in CNTKLibraryCPPEvalCPUOnlyExamples.exe: 0xC00000FD: Stack overflow (parameters: 0x0000000000000001, 0x000000474F603FD0) occurred.

Can you please help?

kitstar commented Dec 6, 2017

Hi,
Does the converted model with the original input shape (299, 299, 3) work?
If it does, resizing your image in your application may be a workaround.

coolrishi2005 commented Dec 6, 2017

Hey, no issue. I am working on it.
Thanks :)
