[torch] Segmented Polynomial Product #97
Conversation
cuequivariance_torch/cuequivariance_torch/primitives/segmented_polynomial_product.py
cuequivariance_torch/cuequivariance_torch/primitives/segmented_polynomial_product.py
cuequivariance_torch/tests/primitives/segmented_polynomial_test.py
I don't get why those pipelines are failing. Any input, @mariogeiger?
Oh crap, sorry about that. On the CI I've installed a version of cuequivariance-ops that is compatible with the "advanced batching" but not with main. Let me switch back.
cuequivariance-torch (ubuntu-latest, 3.10) is failing because you don't have an escape mechanism to run the tests on CPU. The simplest fix is to skip the tests when no GPU is available.
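The skip-when-no-GPU escape hatch suggested here can be sketched with a pytest marker (a minimal sketch; the actual test names in segmented_polynomial_test.py may differ):

```python
import pytest

try:
    import torch
    HAS_CUDA = torch.cuda.is_available()
except ImportError:  # torch absent on this runner
    HAS_CUDA = False

# Reusable marker: tests carrying it are skipped on CPU-only runners.
requires_cuda = pytest.mark.skipif(not HAS_CUDA, reason="no CUDA GPU available")


@requires_cuda
def test_segmented_polynomial_product():
    ...  # GPU-only test body goes here
```

Applying the marker at module or class level would skip the whole file in one place instead of decorating each test.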
I changed the cuequivariance-ops packages in the CI; now cuequivariance-torch (self-hosted, 3.12) is failing with a different error.
@mariogeiger looks like the remaining failure is not even in code I changed. Do you know what's going on?
cuequivariance_torch/.coverage
Please remove this file
done
```python
if operand_extent is None:
    (operand_extent,) = list(o.get_dims(0))
else:
    torch._assert(
        operand_extent == list(o.get_dims(0))[0],
        "all operands must have the same extent",
    )
```

Suggested change:

```python
if operand_extent is None:
    (operand_extent,) = o.segment_shape
else:
    torch._assert(
        (operand_extent,) == o.segment_shape,
        "all operands must have the same extent",
    )
```
Sorry, my bad, this is nicer. (segment_shape is a property that exists only if all_same_segment_shape() is true.)
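The property pattern described here can be sketched as follows (a toy Operand class for illustration, not the real cuequivariance API):

```python
class Operand:
    """Toy stand-in for a segmented operand holding a list of segment shapes."""

    def __init__(self, segments):
        self.segments = segments  # e.g. [(4,), (4,), (4,)]

    def all_same_segment_shape(self):
        # True when every segment has an identical shape tuple.
        return len(set(self.segments)) == 1

    @property
    def segment_shape(self):
        # Only meaningful when all segments share one shape.
        assert self.all_same_segment_shape(), "segments differ in shape"
        return self.segments[0]


o = Operand([(4,), (4,)])
print(o.segment_shape)  # (4,)
```

Guarding the property behind the assertion makes misuse on mixed-shape operands fail loudly instead of silently returning the first segment's shape.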
done
| ) | ||
|
|
||
|
|
||
| class SegmentedPolynomial(nn.Module): |
Can you add a docstring?
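For illustration only, a docstring answering this request might look like the following (hypothetical wording, shown on a plain class so the sketch stays self-contained; the real module subclasses nn.Module):

```python
class SegmentedPolynomial:
    """Evaluate a segmented polynomial as a Torch module.

    The module takes the polynomial's input tensors and returns its
    output tensors, asserting that all operands share the same extent
    (segment shape).
    """
```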
done
No description provided.