BUG: [jax2tf] None of the tests in shape_poly_test.py are run for enable_xla=False #11929
Labels: bug (Something isn't working)

Comments
@gnecula FYI
copybara-service bot pushed a commit that referenced this issue on Aug 16, 2022:

* Fixes #11804: we only supported `lax.reduce_window` without batch and channel dimensions, which is wrong. Batch and channel dimensions are supported, and in fact are what most users use (this case is not explained in the [operational semantics for XLA::ReduceWindow](https://www.tensorflow.org/xla/operation_semantics#reducewindow)). I have fixed this and clarified a number of test cases with batch and channel dimensions.
* Also, @sdenton4 gave a failing example in a Colab using polymorphic dimensions. I've added this as a test case to make sure it works now.
* Adds support for explicit padding, using the existing padding logic from convolutions.
* Fixes #11874: we were not handling SAME padding for `lax.add` correctly, since we used `tf.nn.avg_pool`, which does not include non-padding tokens (see the issue for more details). I resolved this by adding manual padding, and added some additional tests for it.
* Ensures we call `eval_shape` on a shape containing polynomials before calling a TF op.
* Fixes #11929 (comment): we weren't running any of the shape_poly_test.py tests for `enable_xla=False`.

PiperOrigin-RevId: 467532166
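The batch/channel case from the first bullet can be illustrated with a small sketch. This emulates a VALID-padded max reduce-window in plain NumPy (it is an illustration of the semantics, not jax2tf's implementation; `reduce_window_max` is a hypothetical helper name):

```python
import numpy as np

def reduce_window_max(x, window, strides):
    """Emulate a VALID-padded max reduce-window over an NHWC array.

    `window` and `strides` are per-dimension tuples; size-1 windows for
    the batch (N) and channel (C) dimensions leave those dimensions
    untouched, which is the common pooling case the commit fixes.
    """
    out_shape = tuple(
        (s - w) // st + 1 for s, w, st in zip(x.shape, window, strides)
    )
    out = np.empty(out_shape, dtype=x.dtype)
    for idx in np.ndindex(out_shape):
        start = tuple(i * st for i, st in zip(idx, strides))
        slices = tuple(slice(s, s + w) for s, w in zip(start, window))
        out[idx] = x[slices].max()
    return out

# 2x2 max pooling over the spatial dims of an NHWC tensor:
x = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)
y = reduce_window_max(x, window=(1, 2, 2, 1), strides=(1, 2, 2, 1))
print(y.shape)  # (2, 2, 2, 3): batch and channel dims preserved
```

In `lax.reduce_window` terms this corresponds to window dimensions `(1, 2, 2, 1)`, i.e. pooling only over H and W while the N and C dimensions pass through.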
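The SAME-padding discrepancy mentioned for #11874 can also be sketched in NumPy (illustrative only; the function names are hypothetical). A sum-based reduce-window with SAME padding sums zero-padded full windows, whereas an average pool that excludes padded elements from its divisor gives different edge values:

```python
import numpy as np

def sum_pool_same(x, window):
    """SAME-padded sum reduce-window: pad with zeros, sum full windows."""
    pad = (window - 1) // 2
    xp = np.pad(x, (pad, window - 1 - pad))
    return np.array([xp[i:i + window].sum() for i in range(len(x))])

def avg_pool_excl_pad_times_window(x, window):
    """Average over only the valid (non-padded) elements, then scale by
    the window size -- reconstructing a sum pool this way is wrong at
    the edges, which is the kind of mismatch the commit describes."""
    pad = (window - 1) // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - pad), min(len(x), i - pad + window)
        out.append(x[lo:hi].mean() * window)
    return np.array(out)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(sum_pool_same(x, 3))                   # [3. 6. 9. 7.]
print(avg_pool_excl_pad_times_window(x, 3))  # [ 4.5  6.   9.  10.5]
```

The interior windows agree, but the edge windows differ, which is why the fix adds manual padding before pooling rather than relying on the average-pool divisor.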
Description
In shape_poly_test.py, we use the function `_make_harness` to create a test harness. If the flag `enable_and_disable_xla=True`, we recursively create two harnesses, one with the argument `enable_xla=True` and one with `enable_xla=False`. This argument is passed on to `_make_harness` in the kwargs argument `**params`. However, it is never forwarded when constructing the `Harness`. As a result, we currently create two harnesses, both with `enable_xla=True`.

What jax/jaxlib version are you using?
No response
Which accelerator(s) are you using?
No response
Additional System Info
No response
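The dropped-kwarg pattern described above can be sketched as follows. This is a minimal standalone sketch; `Harness`, `make_harness_buggy`, and `make_harness_fixed` are simplified stand-ins, not the actual jax2tf test code:

```python
class Harness:
    """Simplified stand-in for the test harness class."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params

def make_harness_buggy(name, **params):
    # Bug: **params (e.g. enable_xla=False) is silently dropped here,
    # so both recursively created harnesses behave as enable_xla=True.
    return Harness(name)

def make_harness_fixed(name, **params):
    # Fix: forward the kwargs so enable_xla reaches the Harness.
    return Harness(name, **params)

buggy = make_harness_buggy("pool", enable_xla=False)
fixed = make_harness_fixed("pool", enable_xla=False)
print(buggy.params.get("enable_xla", True))  # True: the flag was lost
print(fixed.params.get("enable_xla", True))  # False
```

Because Python accepts and discards the unused `**params` without error, both the buggy and fixed versions run cleanly, which is why the problem only shows up as tests silently not covering `enable_xla=False`.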