Prioritize coverage for the core ATen opset.
aten.copy
aten._scaled_dot_product_flash_attention
aten.remainder.Scalar
aten.arange.start_step
aten.sort
aten.reflection_pad1d
aten.remainder.Tensor
aten.roll
aten._pdist_forward
aten.any
aten.topk
aten.replication_pad3d
aten.reflection_pad2d
aten.clamp.Tensor
aten.pixel_shuffle
aten.scalar_tensor
aten.upsample_nearest2d.vec
aten.replication_pad2d
aten.constant_pad_nd
aten.adaptive_avg_pool1d
aten.reflection_pad3d
aten.amin
aten.argmin
aten.flip
aten.any.dim
aten.isnan
aten.as_strided
aten._cdist_forward
aten.select_scatter
aten.slice_scatter
aten.sym_size.int
aten.randn
aten.rand
aten.randperm
aten.log1p
aten.diagonal
aten.scatter.value
aten.gather
aten._local_scalar_dense
aten.scatter_add
aten.scatter_reduce.two
aten.index_put
aten.native_dropout
aten.scatter.src
aten.empty_strided
aten.empty.memory_format
aten.nonzero
aten.sym_storage_offset
aten.resize_
aten.sym_stride.int
Based on my intuition about common usage, the following ops should be prioritized:
Element-wise ops (I'm currently writing these): torch.ops.aten.ne.Scalar, torch.ops.aten.ne.Tensor, torch.ops.aten.ge.Scalar, torch.ops.aten.ge.Tensor, torch.ops.aten.le.Scalar, torch.ops.aten.le.Tensor, torch.ops.aten.bitwise_and.Scalar, torch.ops.aten.bitwise_and.Tensor, torch.ops.aten.bitwise_and.Scalar_Tensor, torch.ops.aten.bitwise_or.Scalar, torch.ops.aten.bitwise_or.Tensor, torch.ops.aten.bitwise_or.Scalar_Tensor, torch.ops.aten.bitwise_xor.Scalar, torch.ops.aten.bitwise_xor.Tensor, torch.ops.aten.bitwise_xor.Scalar_Tensor, torch.ops.aten.bitwise_not
Padding-related ops: torch.ops.aten.pad.default, torch.ops.aten.constant_pad_nd.default, torch.ops.aten.reflection_pad1d.default, torch.ops.aten.reflection_pad2d.default, torch.ops.aten.reflection_pad3d.default, torch.ops.aten.replication_pad1d.default, torch.ops.aten.replication_pad2d.default, torch.ops.aten.replication_pad3d.default
Others: torch.ops.aten.amin, torch.ops.aten.argmin, torch.ops.aten.arange.start_step, torch.ops.aten.native_dropout, torch.ops.aten.rand, torch.ops.aten.randn, torch.ops.aten.sort, torch.ops.aten.copy, torch.ops.aten.topk, torch.ops.aten.clamp, torch.ops.aten.isnan, torch.ops.aten.nonzero, torch.ops.aten.index_select, torch.ops.aten.flip, torch.ops.aten.trunc
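One rough, data-driven way to cross-check a priority list like this is to export a representative model and count which ATen ops it actually decomposes to. Below is a minimal sketch assuming a recent PyTorch where `torch.export` and `ExportedProgram.run_decompositions()` are available; `TinyModel` and the input shape are placeholders for a real workload.

```python
from collections import Counter

import torch


class TinyModel(torch.nn.Module):
    """Placeholder workload; substitute a representative model here."""

    def forward(self, x):
        padded = torch.nn.functional.pad(x, (1, 1))
        return torch.topk(padded, k=2).values


model = TinyModel().eval()
example_input = torch.randn(4, 8)

# Export the model and decompose it down to the ATen opset.
exported = torch.export.export(model, (example_input,))
exported = exported.run_decompositions()

# Count how often each aten op appears in the lowered graph.
op_counts = Counter(
    str(node.target)
    for node in exported.graph_module.graph.nodes
    if node.op == "call_function"
)

for op, count in op_counts.most_common():
    print(f"{op}: {count}")
```

Running this over the models one cares about gives a frequency ranking of ops that can be compared against the task list in this issue.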
Additional context #1809
TL;DR
Prioritize coverage for the core ATen opset.
Goal(s)
Tasks
aten.copy #2435
aten._scaled_dot_product_flash_attention #2427
aten.remainder.Scalar #2564
aten.arange.start_step #2492
aten.sort #2508
aten.remainder.Tensor #2565
aten.roll #2567
aten._pdist_forward #2568
aten.any #2545
aten.topk #2499
aten.clamp.Tensor #2517
aten.pixel_shuffle #2594
aten.scalar_tensor #2593
aten.adaptive_avg_pool1d #2603
aten.amin #2493
aten.argmin #2498
aten.flip #2535
aten.isnan #2712
aten.as_strided #2734
aten._cdist_forward #2725
aten.select_scatter #2436
aten.slice_scatter #2434
aten.sym_size.int #2496
aten.randn #2571
aten.rand #2572
aten.randperm #2573
aten.log1p #2760
aten.diagonal #2873
aten.select_scatter #2919
aten.scatter.value #2705
aten.gather #2534
aten._local_scalar_dense #2743
aten.scatter_add #2737
aten.scatter_reduce.two #2739
aten.index_put #2544
aten.native_dropout #2494
aten.scatter.src #2920
aten.empty_strided #2758
aten.empty.memory_format #2738
aten.nonzero #2516
aten.sym_storage_offset #2757
aten.resize_ #2872
aten.sym_stride.int #2497