Add tensor mask for parity tensor. #10
Conversation
Pull Request Overview
This PR adds tensor mask functionality to the ParityTensor class to track parity information across tensor dimensions. The mask appears to identify odd-parity elements within the tensor structure.
- Adds a mask field to store parity information as a boolean tensor
- Implements helper methods for generating edge masks and tensor masks
- Automatically computes the mask during object initialization
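For intuition, the parity bookkeeping described above can be sketched in plain Python. This is a hypothetical sketch, not the PR's implementation: the actual code operates on torch tensors, and the snippet reviewed later in this thread seeds its XOR reduce with an all-True tensor, which inverts the sense shown here.

```python
from functools import reduce
from itertools import product

# Hypothetical sketch: each edge of the tensor is an (even, odd) pair of
# segment sizes. The per-edge mask marks the odd-parity block of that axis.
def edge_mask(even, odd):
    return [False] * even + [True] * odd

# Combining the per-axis masks with XOR flags elements whose index tuple
# contains an odd number of odd-block indices.
def tensor_mask(edges):
    masks = [edge_mask(even, odd) for even, odd in edges]
    shape = [even + odd for even, odd in edges]
    return {
        idx: reduce(lambda a, b: a ^ b, (m[i] for m, i in zip(masks, idx)), False)
        for idx in product(*(range(s) for s in shape))
    }

mask = tensor_mask([(1, 1), (1, 1)])
# mask[(0, 1)] and mask[(1, 0)] are True; mask[(0, 0)] and mask[(1, 1)]
# are False, since two odd-block indices cancel under XOR.
```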
Comments suppressed due to low confidence (1)
parity_tensor/parity_tensor.py:35

The method name `_unqueeze` is misleading, as it performs a view operation, not an unsqueeze operation. Consider renaming to `_reshape_for_broadcast` or `_expand_dim` to better reflect its purpose.

```python
return tensor.view([-1 if i == index else 1 for i in range(dim)])
```
```python
@classmethod
def _unqueeze(cls, tensor: torch.Tensor, index: int, dim: int) -> torch.Tensor:
    return tensor.view([-1 if i == index else 1 for i in range(dim)])
```
Copilot AI · Jul 30, 2025
Using -1 in tensor.view() when the tensor might not have the expected total number of elements could cause runtime errors. The -1 should only be used when the total size is known to be compatible with the new shape.
Suggested change, replacing the single `view` call with an explicit shape-inference check:

```python
shape = [-1 if i == index else 1 for i in range(dim)]
total_elements = tensor.numel()
inferred_dim = total_elements // functools.reduce(lambda x, y: x * y, (s for s in shape if s != -1), 1)
shape = [inferred_dim if s == -1 else s for s in shape]
if total_elements != functools.reduce(lambda x, y: x * y, shape, 1):
    raise ValueError(f"Cannot reshape tensor of size {total_elements} into shape {shape}.")
return tensor.view(shape)
```
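The shape-inference logic in this suggestion can be exercised in isolation. Below is a plain-Python sketch with a hypothetical `infer_shape` helper (no torch dependency) of the same check:

```python
from functools import reduce

# Resolve a single -1 in `shape` against the element count, mirroring the
# validation the suggestion performs before calling tensor.view().
def infer_shape(total_elements, shape):
    known = reduce(lambda x, y: x * y, (s for s in shape if s != -1), 1)
    inferred = total_elements // known
    resolved = [inferred if s == -1 else s for s in shape]
    if total_elements != reduce(lambda x, y: x * y, resolved, 1):
        raise ValueError(f"Cannot reshape tensor of size {total_elements} into shape {resolved}.")
    return resolved

infer_shape(6, [1, -1, 1])  # → [1, 6, 1]
```

For example, `infer_shape(6, [4, -1])` raises, since 6 elements cannot fill a `(4, 1)` shape — the failure mode the suggestion guards against.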
```python
return functools.reduce(
    torch.logical_xor,
    (self._unqueeze(self._edge_mask(even, odd), index, self.tensor.dim()) for index, (even, odd) in enumerate(self.edges)),
    torch.ones_like(self.tensor, dtype=torch.bool),
)
```
Copilot AI · Jul 30, 2025
The generator expression creates temporary tensors for each dimension during the reduce operation. Consider using torch.meshgrid or broadcasting-based operations for more efficient tensor mask generation.
Suggested change, replacing the `functools.reduce` over `_unqueeze`-expanded edge masks:

```python
# Create a grid of indices for each dimension
grids = torch.meshgrid(
    [torch.arange(even + odd) for even, odd in self.edges], indexing="ij"
)
# Generate masks for each dimension
masks = [
    (grid >= even) for grid, (even, odd) in zip(grids, self.edges)
]
# Combine masks using logical XOR
return functools.reduce(torch.logical_xor, masks)
```
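One behavioral difference worth flagging between the original and suggested snippets: the original reduce seeds with `torch.ones_like(..., dtype=torch.bool)` (all True), while the suggestion reduces the masks alone. XOR with a True seed negates the combined result, so the two versions are logical complements. A minimal plain-Python illustration:

```python
from functools import reduce

# Stand-ins for the per-axis mask values at one index position.
masks = [True, False, True]

seeded = reduce(lambda a, b: a ^ b, masks, True)     # mirrors the original (ones_like seed)
unseeded = reduce(lambda a, b: a ^ b, masks, False)  # mirrors the suggestion
# seeded is always the negation of unseeded: XOR with True flips the result.
```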
No description provided.