
load my own trained model error #11

Open
jinfei3459 opened this issue Dec 6, 2018 · 1 comment
jinfei3459 commented Dec 6, 2018

Hi, thanks for your nice work.
I have a question about how to get a working model. I tried to use nwojke/cosine_metric_learning to train my own model, but the C++ program throws an error when I replace the "tt1.pb" model with my own.
The following is the error message:

create graph in session failed: Invalid argument: Cannot assign a device for operation 'map/TensorArray': Could not satisfy explicit device specification '/gpu:4' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices:
TensorArrayReadV3: CPU
TensorArrayV3: CPU
Enter: GPU CPU
Placeholder: GPU CPU
TensorArrayScatterV3: CPU
[[Node: map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_UINT8, dynamic_size=false, element_shape=<unknown>, tensor_array_name="", _device="/gpu:4"](map/strided_slice)]]
CUDA Error: driver shutting down
test: /data4/gcy/ds-master/yoda/darknet/src/cuda.c:36: check_error: Assertion `0' failed.
Aborted (core dumped)

The program works when I comment out this line of code:

tf::graph::SetDefaultDevice("/gpu:4", &graph_def);

But I want to set a GPU id, so I can't comment it out. The workaround I came up with was to train a model the same way "tt1.pb" was trained. I used TensorFlow 1.2.1, 1.4.0, and 1.5 to train models, but none of them can be used. So I would like to know how the "tt1.pb" model was produced.
Can you help me? Thank you very much.

@jinfei3459 jinfei3459 changed the title from "How to get the right model" to "load my own trained model error" on Dec 7, 2018
@jinfei3459 (Author) commented:

I found the solution. Keep the device pin, but also enable soft placement when creating the session:

tf::graph::SetDefaultDevice("/gpu:4", &graph_def);
opts.config.set_allow_soft_placement(true);
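For context, here is a minimal sketch of how these two lines fit into loading a frozen graph, assuming the TensorFlow 1.x C++ API; the file name "tt1.pb" and the device string "/gpu:4" are taken from the issue, and error handling is reduced to TF_CHECK_OK for brevity:

```cpp
#include <memory>

#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/public/session.h"

namespace tf = tensorflow;

int main() {
  // Load the frozen graph from disk.
  tf::GraphDef graph_def;
  TF_CHECK_OK(tf::ReadBinaryProto(tf::Env::Default(), "tt1.pb", &graph_def));

  // Pin ops to GPU 4 by default, as in the issue.
  tf::graph::SetDefaultDevice("/gpu:4", &graph_def);

  tf::SessionOptions opts;
  // Allow TensorFlow to fall back to CPU for ops that have no GPU kernel
  // (e.g. TensorArrayV3 in the error above) instead of failing Create().
  opts.config.set_allow_soft_placement(true);

  std::unique_ptr<tf::Session> session(tf::NewSession(opts));
  TF_CHECK_OK(session->Create(graph_def));
  return 0;
}
```

With soft placement enabled, the explicit "/gpu:4" assignment becomes a preference rather than a hard requirement, which is why the "Cannot assign a device for operation" error goes away without removing the SetDefaultDevice call.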
