fix: clamp keep input size in update_cache for causal conv #5732
Conversation
@@ -148,7 +148,9 @@ def update_cache(self, x, cache=None, cache_next=None):
    x = torch.cat((needed_cache, x), dim=-1)

    if cache_next is not None:
        input_x_kept = input_x[:, :, : input_x.size(-1) - self.cache_drop_size]
        input_x_size = torch.tensor(input_x.size(-1) - self.cache_drop_size, dtype=torch.int64)
Check failure — Code scanning / CodeQL: Potentially uninitialized local variable
@@ -148,7 +148,9 @@ def update_cache(self, x, cache=None, cache_next=None):
    x = torch.cat((needed_cache, x), dim=-1)

    if cache_next is not None:
        input_x_kept = input_x[:, :, : input_x.size(-1) - self.cache_drop_size]
        input_x_size = torch.tensor(input_x.size(-1) - self.cache_drop_size, dtype=torch.int64)
        input_x_size = input_x_size.clip(min=1, max=input_x.size(-1))
There is no need to specify max for the clip.
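A minimal sketch (hypothetical shapes, not the NeMo code itself) illustrating the reviewer's point: `clip(min=1)` alone is enough, because the unclamped value can never exceed `input_x.size(-1)` as long as `cache_drop_size >= 0`, so an explicit `max` is redundant:

```python
import torch

# Hypothetical [batch, channels, time] tensor; time dim smaller than the drop size
input_x = torch.zeros(2, 4, 3)
cache_drop_size = 5

# Unclamped keep size is negative here, which would produce an empty [2, 4, 0] slice
raw = torch.tensor(input_x.size(-1) - cache_drop_size, dtype=torch.int64)

# clip(min=1) guarantees at least one frame; raw <= input_x.size(-1) already,
# so no upper bound is needed
kept_size = raw.clip(min=1)
kept = input_x[:, :, :kept_size]
print(kept.shape)  # torch.Size([2, 4, 1])
```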
input_x_kept = input_x[:, :, : input_x.size(-1) - self.cache_drop_size]
input_x_size = torch.tensor(input_x.size(-1) - self.cache_drop_size, dtype=torch.int64)
input_x_size = input_x_size.clip(min=1, max=input_x.size(-1))
input_x_kept = input_x[:, :, :input_x_size]
Could you please check it with NeMo export to make sure the ONNX conversion still works?
@@ -148,7 +148,9 @@ def update_cache(self, x, cache=None, cache_next=None):
    x = torch.cat((needed_cache, x), dim=-1)

    if cache_next is not None:
        input_x_kept = input_x[:, :, : input_x.size(-1) - self.cache_drop_size]
        input_x_size = torch.tensor(input_x.size(-1) - self.cache_drop_size, dtype=torch.int64)
Do you get the error with a specific file, or does any file cause this?
What are the values of self.cache_drop_size and input_x.size(-1) here?
This PR is stale because it has been open for 14 days with no activity. Remove the stale label, comment, or update the PR, or it will be closed in 7 days.
Closing. Will recreate.
What does this PR do ?
Sometimes in CausalConv1D.update_cache, input_x_kept ends up with no frames (i.e., a size of [M, N, 0]). Make sure that we keep at least one frame.
Collection: asr
Changelog
Usage
# Add a code snippet demonstrating how to use this
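A self-contained sketch of the clamped keep logic described above (this is not the actual NeMo `CausalConv1D` class; the standalone `keep_frames` helper is hypothetical, with names following the diff):

```python
import torch

def keep_frames(input_x: torch.Tensor, cache_drop_size: int) -> torch.Tensor:
    """Return the kept slice of input_x along the last (time) dimension,
    clamped so that at least one frame survives (mirrors the PR's fix)."""
    size = torch.tensor(input_x.size(-1) - cache_drop_size, dtype=torch.int64)
    size = size.clip(min=1)  # avoid an empty [M, N, 0] slice
    return input_x[:, :, :size]

x = torch.randn(2, 8, 4)
print(keep_frames(x, 2).shape)   # torch.Size([2, 8, 2])
print(keep_frames(x, 10).shape)  # torch.Size([2, 8, 1]) -- clamped, not empty
```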
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines list specific people who can review PRs to various areas.
@VahidooX
Additional Information
To reproduce the original issue in main run:
Error output: