Support top-level sequential for-loop #4421
We should also consider supporting …
Right, that's a slightly different problem from what I was originally thinking. In my understanding, users generally want their loops to be parallelized, but for a specific loop in a specific kernel they might want to make it sequential 😅
We probably need to support both modes (everything serialized, and a single loop serialized). For serializing a single loop, I slightly prefer option 1 for its simplicity; option 2 creates an extra indentation level :-)
I wonder if option 2 might somehow be more "Pythonic"? I suspect that for many Python users it might look a bit unfamiliar that … (On the other hand, perhaps Python users simply must learn this pattern to become fluent in Taichi, since there are other Taichi functions that act as compiler hints, like …)
Why not add a new keyword?
Either way, I suggest we converge on a more generic compiler-hint approach. Instead of adding a new API for each knob, we can just make them parameters to a function such as …
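A minimal plain-Python sketch of the "generic compiler hint with knob parameters" idea being suggested here (the name `loop_config` and its keyword arguments are assumptions for illustration, not an API taken from this thread):

```python
# Toy model of a compiler-hint function: it records knobs that apply to
# the next loop, the way a Taichi-style hint would inform code generation.
_pending_hints = {}

def loop_config(**knobs):
    """Record hints (e.g. serialize=True, block_dim=128) for the next loop."""
    _pending_hints.update(knobs)

def consume_hints():
    """The 'compiler' reads and clears the pending hints when lowering the loop."""
    hints = dict(_pending_hints)
    _pending_hints.clear()
    return hints

# Usage: one generic entry point covers many knobs instead of one API each.
loop_config(serialize=True)
print(consume_hints())  # {'serialize': True}
```

The design point is that adding a new knob later means adding a keyword argument, not a new top-level API.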
For sequential loops I do:

```python
for _ in range(1):  # one thread
    for i in range(100):
        dostuff()
```

No need for a decorator, …
Yep, we do view that as a workaround rather than a solution.
Another cent from me: …
Resolved in #4525; now we can write …
Right now the top-level for-loop in kernels will be parallelized automatically. This is useful in most cases, but sometimes users indeed want sequential semantics.

There are two approaches I can think of:

1. A compiler-hint call, similar to `ti.block_dim(...)`
2. A `with` statement: `with ...:`
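To illustrate why sequential semantics matters, here is a plain-Python example (no Taichi; the helper name is mine) of a loop-carried dependency: each iteration reads the accumulator produced by the previous one, so the iterations cannot be run in parallel without changing the result.

```python
def running_sum(x):
    # Each iteration reads the accumulator written by the previous one,
    # so the loop body must execute in order -- exactly the "sequential
    # semantics" that auto-parallelizing the top-level loop would break.
    out = []
    acc = 0
    for v in x:  # must NOT be parallelized
        acc += v
        out.append(acc)
    return out

print(running_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```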