No model found in config file when trying to load converted model #25
Comments
Not sure if this is relevant, but I tried some basic inspection with h5py and got this:
Hi @dchouren, are you trying to load the model with the original Keras version, or with this fork?

I have tried both, with the same result.

@MarcBS So it seems that what's being created is just a weights file. Is that correct? I can create a model, say VGG16, and then use model.load_weights('...') and that works. I would suggest changing the output from
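A sketch of that workaround, assuming a Keras install that ships keras.applications ('vgg16_hybrid1365.h5' is a hypothetical filename for the converted file, not one from this thread): rebuild the architecture in code, then load only the weights, which works on a weights-only file where load_model() fails.

```python
# Workaround sketch (assumptions: Keras with keras.applications
# available; 'vgg16_hybrid1365.h5' is a hypothetical filename).
from keras.applications.vgg16 import VGG16

def rebuild_and_load(weights_path):
    """Rebuild VGG16 in code, then load weights from an h5 file."""
    model = VGG16(weights=None)       # architecture only, no pretrained weights
    model.load_weights(weights_path)  # a weights-only h5 file is enough here
    return model
```

Note that the stock VGG16 top is a 1000-way ImageNet classifier, while VGG16-hybrid1365 has a 1365-way output; the final layers would need to be rebuilt to match the prototxt before their weights can load.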
I used caffe2keras to convert the VGG16-hybrid1365 Caffe model to an h5 file. The conversion went fine; I used the caffemodel and prototxt found here: https://github.com/metalbubble/places365.
However, when I try load_model, I get ValueError: No model found in config file.
Theano and Keras are up to date.
Any idea why the model is not converted properly?
The only other related issue I could find was this one, which didn't shed any light: fchollet/deep-learning-models#1
Converted with