
Fix variable batch size for list of tensors. Make Constant op constant again #2637

Merged
merged 1 commit into NVIDIA:master on Jan 27, 2021

Conversation

@mzient (Contributor) commented Jan 26, 2021

Signed-off-by: Michał Zientkiewicz mzient@gmail.com

Why do we need this PR?

  • It fixes a bug: an assertion failure when providing a variable batch as a list of tensors.
  • It does not recreate constants when the batch size changes.

What happened in this PR?


  • What solution was applied:
    • changed the condition in BackendImpl from `== p->batch_size()` to `<= p->max_batch_size()`
    • the TensorList/TensorVector is populated at max_batch_size and then only trimmed to the required length
  • Affected modules and functionalities:
    • Python interface for Pipeline
    • Constant op
    • Constant op tests
  • Key points relevant for the review:
    • N/A
  • Validation and testing:
    • Python test
  • Documentation (including examples):
    • the documentation stated that Constants are never reallocated, which is true... again

JIRA TASK: N/A
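The gist of the fix described above (populate the constant's buffer once at max_batch_size, then only trim it for smaller batches, accepting any batch size with `<=` rather than requiring `==`) can be sketched in plain Python. `CachedConstantBuffer` is a hypothetical illustration, not DALI's actual TensorList/TensorVector:

```python
class CachedConstantBuffer:
    """Sketch of the allocate-once, trim-per-batch pattern.

    Hypothetical stand-in for the storage backing a Constant node;
    not actual DALI code.
    """

    def __init__(self, value, max_batch_size):
        self.max_batch_size = max_batch_size
        # Populate the full buffer once, at the maximum batch size.
        self._storage = [value] * max_batch_size

    def get(self, batch_size):
        # Accept any batch size up to the maximum; the fixed check is
        # `batch_size <= max_batch_size`, not equality with the current size.
        if batch_size > self.max_batch_size:
            raise ValueError("batch size exceeds max_batch_size")
        # Return a trimmed slice of the same storage - no reallocation.
        return self._storage[:batch_size]
```

With varying batch sizes (e.g. 3, then 8, then 5) the same storage is reused every iteration, which is why the Constant is never recreated when the batch size changes.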

…again.

Signed-off-by: Michał Zientkiewicz <mzient@gmail.com>
@mzient mzient requested a review from a team January 26, 2021 14:34
@mzient (Contributor, Author) commented Jan 26, 2021

!build

```diff
@@ -162,3 +163,23 @@ def test_constant_fn():
 def test_scalar_constant_promotion():
     yield _test_scalar_constant_promotion, "cpu"
     yield _test_scalar_constant_promotion, "gpu"
```

Contributor: Maybe this should go to test_dali_variable_batch_size.py?

Contributor Author: It's just my opinion, but I don't like the whole idea of test_dali_variable_batch_size - it's a bag of very shallow tests. When we have variable batch size support across the board, we might just use variable batch size in new tests for other ops.

Contributor: So do we want to extend every operator's test to cover batch size variability more thoroughly?

Contributor Author: When it doesn't require jumping through too many hoops, then yes. Currently, any test that uses readers is not easily convertible to variable batch size.

@JanuszL JanuszL self-assigned this Jan 26, 2021
@dali-automaton (Collaborator): CI MESSAGE: [2015851]: BUILD STARTED

@dali-automaton (Collaborator): CI MESSAGE: [2015851]: BUILD PASSED

@mzient mzient merged commit dbd081d into NVIDIA:master Jan 27, 2021