
Exporting MaX-DeepLab - iterating over tf.Tensor is not allowed: AutoGraph did convert this function. #50

Closed
louisquinn opened this issue Aug 25, 2021 · 6 comments


@louisquinn

Hey guys,

Thanks for your awesome work in this repo.

Getting the following error when attempting to export a Max-DeepLab model:

```
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError:
iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```

The error is raised from this line, while attempting to iterate over a `Tensor("strided_slice:0", shape=(), dtype=int32)`:
https://github.com/google-research/deeplab2/blob/main/model/post_processor/max_deeplab.py#L389
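
For anyone else hitting this, here is a minimal standalone sketch of how this class of error arises (my own repro, not the repo's code):

```python
import tensorflow as tf

@tf.function
def bad(x):
    # AutoGraph rewrites plain `for` loops over tensors, but a list
    # comprehension calls Tensor.__iter__ directly, which graph mode forbids.
    return [y * 2 for y in x]

bad(tf.constant([1, 2, 3]))
# OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: ...
```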

I'm using TensorFlow 2.6.0; this also happens with TF 2.5.0.

I've also attempted to export max_deeplab_s_os16_res641_400k using the provided config and checkpoint and got the same error.

I'll keep investigating in case it's an issue with my environment.

@aquariusjay
Contributor

Hi louisquinn,

Thanks for reporting the issue.
Currently, export_model.py only supports exporting Panoptic-DeepLab.
We will add support for MaX-DeepLab soon.
In the meantime, please feel free to prepare a PR if you figure out the issue on your end. We will be happy to incorporate your contribution.
Additionally, based on the provided error log, I think you could safely remove the for loop, since we usually use a batch size of 1 during inference.

Cheers,

@louisquinn
Author

Hey @aquariusjay thanks for your reply!

Yep, since posting this issue I made a change similar to what you're suggesting: I changed the loop to `for i in range(n):`, where `n` is my batch size. This exports fine and I can run inference in the TensorFlow runtime!
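
For reference, the change looks roughly like this (a sketch only; `postprocess_one` and `predictions` are stand-ins for the actual per-sample post-processing at that line):

```python
n = 1  # my batch size, fixed as a Python int at export time

# A Python-int bound lets the loop unroll at trace time, instead of
# iterating over a symbolic scalar tensor from tf.shape(...).
results = []
for i in range(n):
    results.append(postprocess_one(predictions[i]))
```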

However, anything beyond that looks a bit more complex to hack through.
For example, freezing with `convert_variables_to_constants_v2` (for TF-TRT conversion) results in this:

```
ValueError: Node 'StatefulPartitionedCall/DeepLab/max_deeplab_s/stage4/block2/attention/height_axis/query_rpe/Gather/axis' is not unique
```
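
For context, this is roughly how I'm invoking the freeze (paths are placeholders):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2)

# Load the exported SavedModel and freeze its serving signature.
loaded = tf.saved_model.load('max_deeplab_export')
concrete_fn = loaded.signatures['serving_default']
frozen_fn = convert_variables_to_constants_v2(concrete_fn)  # raises the ValueError above
```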

I didn't want to pull the architecture apart too much trying to fix it before you eventually make an update, so for now Panoptic-DeepLab works for us in situations where we want to use anything other than the TF runtime!

If I find time to put together a more elegant solution, I'll raise a PR.

Do you guys have an ETA on when some of these extensions would be supported?

Thanks again!

@aquariusjay
Contributor

Hi @louisquinn,

Glad to know that you managed to make it work, except for TF-TRT.
We do not have an exact ETA for fixing these issues.
We will work on the export_model issue ASAP; the TF-TRT issue is not on our radar and may take longer.

Cheers,

@louisquinn
Author

Thanks @aquariusjay

As an update: I was able to export Panoptic-DeepLab with a variable batch size, convert it to TF-TRT, and serve it in Triton, so thanks for your recommendations!

I found it was sufficient to just comment out this line (plenty of angry warnings from the TF-TRT converter, but it still works):
https://github.com/google-research/deeplab2/blob/main/model/post_processor/panoptic_deeplab.py#L314
and also make a few modifications to the DeepLab model definition.
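
In case it helps anyone else, the TF-TRT step is just the standard TrtGraphConverterV2 flow (paths here are placeholders, precision left at default):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='panoptic_deeplab_export')
converter.convert()  # this is where the angry (but apparently harmless) warnings appear
converter.save('panoptic_deeplab_trt')  # serve this SavedModel from Triton
```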

One last question: the Panoptic-DeepLab docs here specify some MobileNetV3 checkpoints for Cityscapes. Are the base ImageNet-pretrained checkpoints available for MobileNetV3?

@aquariusjay
Contributor

Hi @louisquinn,

Great job on making it work!

We are working on updating the Panoptic-DeepLab w/ MobileNet-v3 results (better performance than the ones in the model zoo) along with ImageNet-pretrained checkpoints for MobileNet-v3. They will be ready soon. Please stay tuned.

Cheers,

@louisquinn
Author

@aquariusjay great! Well thanks for answering all my questions! I'll close this issue now.
