emc5ud changed the title from "DL4J Keras Model Import with CUDA backend strange behavior (0.9.2-SNAPSHOT)" to "DL4J Keras model import with CUDA backend strange behavior (0.9.2-SNAPSHOT)" on Jan 26, 2018.
This issue is a little strange.
So I trained a model in Keras with TensorFlow as the backend and would like to import it into DL4J. This works when I set my backend to "nd4j-native", but fails when I set my backend to "nd4j-cuda-8.0".
I create the model in the following way: unet.py (training omitted), and load it into DL4J as follows.
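The loading snippet isn't reproduced above; as a minimal sketch of that step, assuming the standard `KerasModelImport` entry point and a hypothetical `unet.h5` file path:

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;

public class LoadUnet {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; this is the call where the process hangs
        // on the CUDA backend. Functional-API Keras models import as a
        // ComputationGraph; a Sequential model would instead use
        // KerasModelImport.importKerasSequentialModelAndWeights(...).
        ComputationGraph model =
                KerasModelImport.importKerasModelAndWeights("unet.h5");
        System.out.println("import finished");
    }
}
```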
When I load the model in DL4J with GPUs enabled, the process hangs at the step above. I've waited about 10 minutes with no luck, and all the while the process uses a steady 6-7 GB of my GPU's memory.
Now the strange part: when I create the model in Keras with "tf" ordering (i.e., channels last), the hang no longer happens and everything works (unet_tf_ordering.py). On the CPU backend, the ordering doesn't matter.
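For anyone reproducing this: the ordering Keras uses is controlled by the `image_data_format` key in its JSON config file (by default `~/.keras/keras.json`). A small helper to pin it, as a sketch (the function name and path argument are mine, not from the original scripts):

```python
import json
from pathlib import Path

def set_keras_image_data_format(fmt, config_path):
    """Write the image_data_format key into a Keras config file.

    fmt is 'channels_last' ("tf" ordering) or 'channels_first'
    ("th" ordering). Keras reads this file at import time, so it must
    be set before `import keras`.
    """
    if fmt not in ("channels_last", "channels_first"):
        raise ValueError(f"unknown data format: {fmt!r}")
    path = Path(config_path)
    # Preserve any other settings already present in the config file.
    config = json.loads(path.read_text()) if path.exists() else {}
    config["image_data_format"] = fmt
    path.write_text(json.dumps(config, indent=4))
    return config

# Example: pin "tf" (channels-last) ordering, which sidesteps the hang here.
# set_keras_image_data_format("channels_last", Path.home() / ".keras" / "keras.json")
```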
Relevant Python libraries:
Relevant part of my build.sbt:
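The build.sbt fragment itself isn't shown above; as a hedged sketch of what the backend swap typically looks like (artifact version taken from the issue title, module selection assumed):

```scala
// Hypothetical build.sbt fragment; versions and modules are assumptions.
val dl4jVersion = "0.9.2-SNAPSHOT"

libraryDependencies ++= Seq(
  "org.deeplearning4j" % "deeplearning4j-core"        % dl4jVersion,
  "org.deeplearning4j" % "deeplearning4j-modelimport" % dl4jVersion,
  // Swap this line for "nd4j-native-platform" to get the working CPU run.
  "org.nd4j"           % "nd4j-cuda-8.0-platform"     % dl4jVersion
)
```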