Incorrect output dimension on ios #9
Hi,
Yes, I am using that checkpoint. I am not sure exactly how I should proceed with the slicing. I found only one output, model/L0/ResizeBilinear, and used it to freeze the graph. Are you suggesting that I should modify the output before freezing the graph?
Yes, something like: `tf.image.resize_images(self.disp2[:,:,:,0], size)`
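To make the effect of that slice concrete, here is a minimal sketch of the shape semantics using numpy as a stand-in for the TensorFlow tensor (the `disp2` array and its dimensions here are illustrative, not taken from the actual checkpoint). Note that `tf.image.resize_images` interprets a 3-D input as a single `[height, width, channels]` image, so keeping an explicit channel axis with a `0:1` slice is the safer form:

```python
import numpy as np

# Stand-in for the network output disp2 with shape (batch, H, W, 2);
# the values are arbitrary, only the shape matters here.
disp2 = np.random.rand(1, 256, 512, 2).astype(np.float32)

# disp2[:, :, :, 0] keeps only the first channel and DROPS the last axis,
# producing a 3-D array.
single = disp2[:, :, :, 0]
print(single.shape)  # (1, 256, 512)

# A 0:1 slice keeps the channel axis, which is what a 4-D resize op expects.
single_4d = disp2[:, :, :, 0:1]
print(single_4d.shape)  # (1, 256, 512, 1)
```

The practical upshot: slicing with `0:1` instead of `0` preserves the 4-D `[batch, height, width, channels]` layout that the rest of the graph (and the converter) expects.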
I am not experienced in neural networks, so I am quite confused. I don't manipulate any outputs, nor do I run any inference in Python. I just used the script in this post to export the .pb file: Since it has only one output with two channels, I don't know how I am supposed to slice after freezing the graph.
You have two options:
I tried reconverting the latest pretrained model from the Pydnet repository to iOS via tfcoreml. The conversion succeeds, but the output shape has the wrong dimensions:
(1, 512, 256, 2)
I expected the last dimension to be 1 instead of 2. I know a prebuilt iOS Core ML file is already provided here, but I plan to retrain the Pydnet model on my own dataset later, which is why I am attempting the conversion myself.
@GZaccaroni did you encounter such an issue when you did the conversion for the iOS part? Due to the incorrect dimension, I am not able to transform the output into a valid image.
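As a stopgap until the graph is re-exported with a single-channel output, the extra channel can also be dropped after inference. A minimal sketch with numpy (the array here is random and only mirrors the (1, 512, 256, 2) shape reported above; the normalization step is one common way to get a displayable grayscale map, not part of Pydnet itself):

```python
import numpy as np

# Hypothetical model output with the shape reported in this issue.
output = np.random.rand(1, 512, 256, 2).astype(np.float32)

# Keep only the first channel (the disparity map) and drop the batch axis.
disparity = output[0, :, :, 0]
print(disparity.shape)  # (512, 256)

# Normalize to 0-255 so it can be converted into a grayscale image.
lo, hi = disparity.min(), disparity.max()
gray = ((disparity - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
print(gray.dtype, gray.shape)  # uint8 (512, 256)
```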
Here is the full conversion log from tfcoreml: