
BUG: [jax2tf] None of the tests in shape_poly_test.py are run for enable_xla=False #11929

Closed
marcvanzee opened this issue Aug 16, 2022 · 1 comment · Fixed by #11913
Assignees: marcvanzee
Labels: bug (Something isn't working)

Comments

marcvanzee (Collaborator) commented Aug 16, 2022

Description

In shape_poly_test.py, we use the function `_make_harness` to create a test harness. If the flag `enable_and_disable_xla=True`, we recursively create two harnesses here: one with `enable_xla=True` and one with `enable_xla=False`. The `enable_xla` argument reaches `_make_harness` through the `**params` kwargs, but it is never forwarded when constructing the `Harness` here. As a result, we currently create two harnesses that both have `enable_xla=True`.
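
For illustration, here is a minimal sketch of the bug pattern. The real `_make_harness` and `Harness` live in the jax2tf test utilities; the names below follow the description above, but the bodies are hypothetical reductions, not the actual code:

```python
class Harness:
  """Hypothetical stand-in for the jax2tf test Harness."""
  def __init__(self, name, *, enable_xla=True, **kwargs):
    self.name = name
    self.enable_xla = enable_xla

def _make_harness(name, *, enable_and_disable_xla=False, **params):
  if enable_and_disable_xla:
    # Recurse once per enable_xla setting; the value travels in **params.
    return [_make_harness(name, enable_xla=v, **params)
            for v in (True, False)]
  # BUG: **params (which carries enable_xla=False for the second
  # harness) is dropped here instead of being forwarded, so both
  # harnesses end up with the default enable_xla=True.
  return Harness(name)
  # Fix: return Harness(name, **params)
```

With the forwarding fix, `[h.enable_xla for h in _make_harness("f", enable_and_disable_xla=True)]` yields `[True, False]` instead of `[True, True]`.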

What jax/jaxlib version are you using?

No response

Which accelerator(s) are you using?

No response

Additional System Info

No response

@marcvanzee added the bug (Something isn't working) label Aug 16, 2022
@marcvanzee self-assigned this Aug 16, 2022
marcvanzee (Collaborator, Author) commented:

@gnecula FYI

copybara-service bot pushed a commit that referenced this issue Aug 16, 2022
* Fixes #11804: we only supported `lax.reduce_window` without batch and channel dimensions, which was too restrictive: XLA does support them, and in fact most users rely on them (this case is just not spelled out in the [operational semantics for XLA::ReduceWindow](https://www.tensorflow.org/xla/operation_semantics#reducewindow)). I have fixed this and clarified a number of test cases with batch and channel dimensions.

* Also, @sdenton4 gave a failing example in a Colab using polymorphic dimensions. I've added this as a test case to make sure it works now.

* Adds support for explicit padding using the existing padding logic from convolutions.

* Fixes #11874: we were not handling SAME padding for summation (`lax.add`) correctly, because we used `tf.nn.avg_pool`, which excludes padding cells from its averages (see the issue for details; a sketch follows this commit message). I resolved it by adding manual padding, plus additional tests for this case.

* Ensures we call `eval_shape` on a shape containing polynomials before calling a TF op.

* Fixes #11929 (comment): we weren't running any of the `shape_poly_test.py` tests for `enable_xla=False`.

PiperOrigin-RevId: 467532166
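
To make the `tf.nn.avg_pool` point concrete, here is a minimal sketch of why a SAME-padded windowed sum cannot be emulated as `avg_pool * window_size`. The values are illustrative and the emulation is a simplification of the converter's approach, not the actual jax2tf code:

```python
import numpy as np
import tensorflow as tf
import jax.numpy as jnp
from jax import lax

x = np.ones((1, 4, 1), np.float32)  # (batch, spatial, channel)

# Ground truth: SAME-padded windowed sum; padding contributes zeros,
# so each edge window sums only two ones.
want = lax.reduce_window(jnp.asarray(x), 0., lax.add,
                         window_dimensions=(1, 3, 1),
                         window_strides=(1, 1, 1),
                         padding="SAME")
print(want[0, :, 0])                 # [2. 3. 3. 2.]

# Broken emulation: tf.nn.avg_pool excludes padded cells from its
# divisor, so average * window_size over-counts at the edges.
avg = tf.nn.avg_pool1d(x, ksize=3, strides=1, padding="SAME")
print((avg * 3.)[0, :, 0].numpy())   # [3. 3. 3. 3.]  (wrong at edges)
```

Padding the input with explicit zeros and then pooling with VALID padding (the manual padding mentioned above) makes the two results agree.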