Support for batch_size > 1 for inference with datasets with images of different sizes #1370
Comments
It is not supported yet. We do not have a clear plan for batch inference due to a lack of development resources.

Got it, thanks for the info!
aravind-h-v pushed a commit to aravind-h-v/mmsegmentation that referenced this issue on Mar 27, 2023:
…ab#1370) * updates img2img_inpainting README * Adds example image to community pipeline README
xiexinch added a commit that referenced this issue on Jul 20, 2023:
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and easier to review. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

## Motivation

#3181 #2965 #2644 #1645 #1444 #1370 #125

## Modification

Remove the assertion in the data_preprocessor.

## BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repos? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

## Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

## Checklist

1. Pre-commit or other linting tools are used to fix potential lint issues.
2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
3. If the modification could affect downstream projects, this PR should be tested with them, e.g. MMDet or MMDet3D.
4. The documentation has been modified accordingly, e.g. docstrings or example tutorials.
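The modification described above removes the assertion that all inputs in a batch share one shape, which implies the preprocessor must then pad variable-sized images to a common size before stacking. A hedged sketch of how that can work (the function name `stack_batch` and the `size_divisor` parameter are illustrative assumptions here, not necessarily the repo's actual API):

```python
import torch
import torch.nn.functional as F

def stack_batch(inputs, size_divisor=32, pad_value=0):
    """Pad a list of CHW tensors to a shared size, then stack to NCHW.

    Illustrative sketch only: pads each tensor on the bottom/right so that
    the batch height and width match the largest image, rounded up to a
    multiple of `size_divisor` (common for segmentation backbones).
    """
    max_h = max(t.shape[-2] for t in inputs)
    max_w = max(t.shape[-1] for t in inputs)
    # Round the common size up to a multiple of size_divisor.
    max_h = (max_h + size_divisor - 1) // size_divisor * size_divisor
    max_w = (max_w + size_divisor - 1) // size_divisor * size_divisor
    padded = [
        # F.pad order is (left, right, top, bottom) for the last two dims.
        F.pad(t, (0, max_w - t.shape[-1], 0, max_h - t.shape[-2]), value=pad_value)
        for t in inputs
    ]
    return torch.stack(padded)

# Usage: two images of different sizes are padded to (512, 640) and stacked.
inputs = [torch.randn(3, 500, 600), torch.randn(3, 480, 640)]
out = stack_batch(inputs)
```

Padding on the bottom/right (rather than symmetrically) keeps the original pixel coordinates intact, so predictions can simply be cropped back to each image's original size afterwards.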
nahidnazifi87 pushed a commit to nahidnazifi87/mmsegmentation_playground that referenced this issue on Apr 5, 2024.
Following up on closed issue: #125
Hi, are there any updates since this issue was closed on handling padding when a dataset consists of images of different sizes? When modifying samples_per_gpu in test.py, there is a call to torch.stack that requires all tensors in the batch to be the same size. I believe this failure therefore occurs for such datasets regardless of single- or multi-GPU inference, and I wanted to check whether there is an existing solution that supports batch_size > 1 at inference time for datasets such as ADE20K, whose images have different sizes.
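The torch.stack failure described above can be reproduced and worked around with a padding collate function. This is a hedged sketch, not mmsegmentation's API: `pad_collate` is a hypothetical helper that pads each image to the batch maximum before stacking, the kind of function one could pass as a DataLoader's `collate_fn`:

```python
import torch
import torch.nn.functional as F

def pad_collate(images):
    """Pad a list of CHW tensors to the max H/W in the batch, then stack.

    torch.stack alone raises a RuntimeError when the tensors' shapes
    differ; zero-padding them to a common size makes stacking legal.
    Hypothetical helper for illustration, not part of mmsegmentation.
    """
    max_h = max(img.shape[-2] for img in images)
    max_w = max(img.shape[-1] for img in images)
    padded = [
        # F.pad order is (left, right, top, bottom): pad bottom/right only.
        F.pad(img, (0, max_w - img.shape[-1], 0, max_h - img.shape[-2]))
        for img in images
    ]
    return torch.stack(padded)  # shape (N, C, max_h, max_w)

# Two differently sized images, as in ADE20K, batch into one tensor.
imgs = [torch.randn(3, 512, 683), torch.randn(3, 512, 512)]
batch = pad_collate(imgs)
```

Note that the model must then either ignore the padded region (e.g. via a mask) or have its predictions cropped back to each image's original size, otherwise the padding pixels leak into the results.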