When I use opt_einsum to optimize torch.einsum, the running time after optimization increases #202
Hey @edwin-zft, I get:

```python
%%timeit
y = naive(x, w1, w2, w3, w4, w5, w6)
# 536 µs ± 4.06 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```

vs.

```python
%%timeit
y = optimized(x, w1, w2, w3, w4, w5, w6)
# 470 µs ± 2.07 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```

and as a bonus:

```python
expr = contract_expression(
    'bkxy,ikj,jxm,myf,fpl,lqz,zri->bpqr',
    x.shape, w1.shape, w2.shape, w3.shape, w4.shape, w5.shape, w6.shape,
    optimize='dp',
)

%%timeit
y = expr(x, w1, w2, w3, w4, w5, w6)
# 72.2 µs ± 758 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
```

so maybe it's just a warm-up issue for you. Are you using
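The same idea, paying the path-search cost once and reusing the result, can be sketched with NumPy's `einsum_path` for the expression from the thread. The shapes here are made up (the issue does not give them); every index dimension is set to a hypothetical size of 4:

```python
import numpy as np

# Hypothetical small dimensions; the issue does not state the real shapes.
d = 4
rng = np.random.default_rng(0)
x  = rng.random((d, d, d, d))   # bkxy
w1 = rng.random((d, d, d))      # ikj
w2 = rng.random((d, d, d))      # jxm
w3 = rng.random((d, d, d))      # myf
w4 = rng.random((d, d, d))      # fpl
w5 = rng.random((d, d, d))      # lqz
w6 = rng.random((d, d, d))      # zri

eq = 'bkxy,ikj,jxm,myf,fpl,lqz,zri->bpqr'

# Unoptimized: evaluate the whole expression in one pass.
naive = np.einsum(eq, x, w1, w2, w3, w4, w5, w6, optimize=False)

# Compute a contraction path once, then reuse it on every call.
path, info = np.einsum_path(eq, x, w1, w2, w3, w4, w5, w6,
                            optimize='optimal')
fast = np.einsum(eq, x, w1, w2, w3, w4, w5, w6, optimize=path)

# Both orderings give the same tensor.
assert np.allclose(naive, fast)
```

As with `contract_expression` above, the speedup comes from caching the contraction order, not from changing the math.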
Thank you for your reply!

The improvement in running speed after optimization is not obvious; I guess it is due to the particularity of this expression.

Finally, thank you very much for your answers and your work!
Some of the recent PRs/issues etc. in

If I increase to
I don't know the intricacies of timeit, but I guess it's running the path optimization to produce
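One way to see how much of a timed call is path search rather than contraction is to separate the two costs by hand. A minimal sketch using NumPy (a toy chain contraction, not the expression from the issue):

```python
import time
import numpy as np

rng = np.random.default_rng(1)
ops = [rng.random((8, 8)) for _ in range(4)]
eq = 'ab,bc,cd,de->ae'

# Path search + contraction together: roughly what a naive timing of
# a single optimized call measures on its first run.
t0 = time.perf_counter()
path, _ = np.einsum_path(eq, *ops, optimize='optimal')
y1 = np.einsum(eq, *ops, optimize=path)
t_total = time.perf_counter() - t0

# Reuse the cached path: only the contraction itself is timed.
t0 = time.perf_counter()
y2 = np.einsum(eq, *ops, optimize=path)
t_contract = time.perf_counter() - t0

# Same result either way; only the bookkeeping cost differs.
assert np.allclose(y1, y2)
```

For tiny operands the path search can dominate, which would make an "optimized" call look slower than the naive one.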
FYI torch indeed does default to using opt_einsum if it's found in the environment.
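Recent PyTorch versions expose this through a backends module; a sketch of inspecting it (attribute availability depends on your torch version):

```python
import torch

# torch.backends.opt_einsum reports whether the opt_einsum package was
# found and whether torch.einsum will use it for path optimization.
print("opt_einsum found:", torch.backends.opt_einsum.is_available())
print("enabled:", torch.backends.opt_einsum.enabled)
print("strategy:", torch.backends.opt_einsum.strategy)
```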
Super cool!
The respective running times:

I want to know what caused this. Thanks!