[Dynamo] module tests + operator support #148
Conversation
AndreSlavescu commented on Mar 25, 2023 (edited)
- non-linear activation operators + tests
- convolution operators + tests
- conditional operator tests
Thanks @AndreSlavescu,
I left some comments.
One thing that might need your help: could you help me validate whether torch.compile will dispatch the module to the backend when the module contains only one operator? Say, we directly optimize
torch.compile(torch.nn.Conv2d(...), backend='hidet')
by using this api: https://docs.hidet.org/stable/gallery/tutorials/optimize-pytorch-model.html#print-the-input-graph
If so, we might need another way to write the check_module function in our test script to make sure that we really test the mapping.
python/hidet/graph/ops/definitions/conv3d_transpose/conv3d_transpose.py
Thanks for the PR.
A general comment on testing strategy: aim for code coverage. I think you have covered most things in your tests, but one exception is perhaps the conv operators.
There are three notable convolution cases, as far as I know:
- common: 3x3 (or 5x5) kernel sizes, stride 1 or 2, and groups 1; the thing you often see in resnet
- grouped conv: similar to the common case but with groups > 1; the thing you often see in resnext
- depthwise separable conv: a conv where # of groups == # of input channels, followed by a conv where the kernel size is 1x1, something like this: https://github.com/seungjunlee96/Depthwise-Separable-Convolution_Pytorch
I think these would be a good stress test on the correctness for those index calculations. :)
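For the third case, a minimal depthwise separable block (a sketch, not taken from the linked repo) could look like:

```python
import torch

class DepthwiseSeparableConv(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # depthwise: groups == in_channels, each channel convolved independently
        self.depthwise = torch.nn.Conv2d(
            in_channels, in_channels, kernel_size=3, padding=1, groups=in_channels
        )
        # pointwise: 1x1 conv mixes information across channels
        self.pointwise = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```

Running this through `check_module` would exercise both the groups == in_channels and the 1x1-kernel index paths in one test.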
python/hidet/graph/ops/definitions/conv1d_transpose/conv1d_transpose.py
tests/frontends/torch/test_conv_transpose/test_torch_conv2d_transpose.py
tests/frontends/torch/test_conv_transpose/test_torch_conv3d_transpose.py
Hi @AndreSlavescu, let me know by replying to this PR when it is ready for review. Thanks!
Hi @yaoyaoding, the PR is ready for review.
Hi @AndreSlavescu, overall it looks good, but there are still some minor issues.
python/hidet/graph/ops/definitions/conv1d_transpose/conv1d_transpose.py
python/hidet/graph/ops/definitions/conv1d_transpose/conv1d_transpose.py
python/hidet/graph/ops/definitions/conv1d_transpose/conv1d_transpose.py
python/hidet/graph/ops/definitions/conv2d_transpose/conv2d_transpose.py
@pytest.mark.parametrize('groups', [1])
@pytest.mark.parametrize('dtype', [torch.float32])
def test_conv1d_transpose(in_shape, w_shape, stride, padding, output_padding, groups, dtype):
    check_module(
Add `cudnn.allow_tf32 = False`.
To make it consistent, could you add the pair `cudnn.allow_tf32 = False` and `cudnn.allow_tf32 = True` to all conv/conv_transpose tests?
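One way to keep that pairing consistent across tests (a sketch, not part of the PR; the test body would go inside the `with` block) is a small context manager that restores the flag even when a test fails:

```python
import torch
from contextlib import contextmanager

@contextmanager
def tf32_disabled():
    # save, disable, and always restore the cudnn TF32 flag
    prev = torch.backends.cudnn.allow_tf32
    torch.backends.cudnn.allow_tf32 = False
    try:
        yield
    finally:
        torch.backends.cudnn.allow_tf32 = prev
```

This avoids a stray `allow_tf32 = False` leaking into later tests if an assertion raises between the set and the reset.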
New commits: b7944b3 to cb2fc6b
Thanks @AndreSlavescu! This PR looks good to me now.
…idet-org#148)

**Overview**
Specialize function `Constant._binary()` for compilation speedup.

**Compilation time improvement results**
matmul_f16 with `max_parallel_jobs=1`:
- Before: 2m 11.2s
- After: 2m 4.4s
- Speedup: 5.5%

**Additional test**
matmul_f16 has 177 candidates. I checked that all of them remained the same (no functional changes).
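The general idea behind that kind of specialization (a hypothetical sketch; the real `Constant._binary()` in hidet may work quite differently) is to replace a generic per-call dispatch with a table of operators resolved once up front, so constant folding does less repeated lookup work:

```python
import operator

# hypothetical dispatch table: resolve each Python operator once, up front
_BINARY_OPS = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
}

def fold_constant_binary(op_symbol, lhs, rhs):
    # specialized fast path: a single dict lookup per fold instead of
    # re-deriving the operation from scratch on every call
    return _BINARY_OPS[op_symbol](lhs, rhs)
```

The names here are illustrative only; the point is the shape of the optimization (precomputed dispatch), not the actual hidet internals.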