Support top-level sequential for-loop #4421

Closed · k-ye opened this issue on Mar 1, 2022 · 10 comments

Labels: discussion (Welcome discussion!), feature request (Suggest an idea on this project)

Comments

@k-ye
Member

k-ye commented Mar 1, 2022

Right now, the top-level for-loops in Taichi kernels are parallelized automatically. This is useful in most cases, but sometimes users do want sequential semantics.
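
For context, a minimal sketch of the current behavior (the field and kernel names are made up for illustration):

import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(ti.f32, shape=100)

@ti.kernel
def fill():
    # The outermost (top-level) for loop in a kernel is parallelized
    # automatically; iterations may run on different threads, in any order.
    for i in range(100):
        x[i] = 2.0 * i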

There are two approaches I can think of:

  1. A loop-level decorator (we already have a few of these, e.g. ti.block_dim):

@ti.kernel
def foo():
    ti.seq_loop()
    for i in range(100):
        pass  # runs in a single thread

  2. Use with:

@ti.kernel
def foo():
    with ti.loop_config(sequential=True, block_dim=128):
        for i in range(100):
            pass  # runs in a single thread
k-ye added the "feature request" and "discussion" labels on Mar 1, 2022
@yuanming-hu
Member

We should also consider supporting ti.init(serial=True) so that everything becomes serial on CPUs :-)
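
A sketch of what that might look like (the serial flag is the proposal here, not an existing parameter; cpu_max_num_threads is, to my knowledge, an existing knob with a similar effect):

import taichi as ti

# Proposed (hypothetical) global switch: run everything serially on CPU.
# ti.init(arch=ti.cpu, serial=True)

# Existing knob with a similar effect, if I recall correctly: restrict the
# CPU backend to a single thread.
ti.init(arch=ti.cpu, cpu_max_num_threads=1)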

@k-ye
Member Author

k-ye commented Mar 1, 2022

> We should also consider supporting ti.init(serial=True) so that everything becomes serial on CPUs :-)

Right, that's a slightly different problem from what I was originally thinking of. In my understanding, users generally do want their loops to be parallelized, but for a specific loop in a specific kernel they might want it to be sequential 😅

@yuanming-hu
Member

We probably need to support both modes (everything serialized and single loop serialized).

For serializing a single loop, I slightly prefer option 1 for its simplicity. Option 2 creates an extra indentation level :-)

@bcolloran

I wonder if option 2 might be somehow more "Pythonic"? I suspect that for many Python users it would look a bit unfamiliar for ti.seq_loop() to alter the behavior of the following loop, because that isn't a pattern possible in ordinary Python code, whereas a with block often signals that something special is happening in the enclosed block.

(On the other hand, perhaps Python users simply must learn this pattern to become fluent in Taichi, since other Taichi functions that act as compiler hints, like ti.block_dim and ti.block_local, don't use a with block.)
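
For reference, a sketch of how those existing hint-style calls are typically written (the field layout here is just for illustration; ti.block_local additionally requires a block-structured field layout, so it is only mentioned in a comment):

import taichi as ti

ti.init(arch=ti.gpu)

x = ti.field(ti.f32, shape=1024)

@ti.kernel
def scale():
    ti.block_dim(128)  # hint for the following top-level loop, no with block
    for i in x:
        x[i] *= 2.0
    # ti.block_local(...) follows the same pattern: a call placed before a
    # struct-for, applying to the loop that comes next.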

@95833
Contributor

95833 commented Mar 1, 2022

Why not add a new keyword?

qiao-bo added this to "To Triage" in Lang Features & Python via automation on Mar 2, 2022
@k-ye
Member Author

k-ye commented Mar 2, 2022

Either way, I suggest we converge on a more generic compiler-hint approach. Instead of adding a new API for each knob, we could make them parameters to a single function such as ti.loop_config().
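
A sketch of that direction (this is close to what eventually landed, per the resolution below; treat the exact parameter names as illustrative):

import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(ti.f32, shape=1000)

@ti.kernel
def foo():
    # One generic hint call carrying the per-loop knobs (block size,
    # serialization, ...) instead of a separate API per knob.
    ti.loop_config(block_dim=128, serialize=False)
    for i in range(1000):
        x[i] = i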

@mackrol

mackrol commented Mar 3, 2022

For sequential loops I do

for _ in range(1): # one thread
    for i in range(100):
        dostuff()

No need for a decorator.
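
For completeness, a minimal sketch of that workaround inside a kernel (the field name is made up for illustration):

import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(ti.i32, shape=100)

@ti.kernel
def foo():
    # Only the outermost loop is parallelized; wrapping the real loop in a
    # single-iteration outer loop forces the inner one onto one thread.
    for _ in range(1):
        for i in range(100):
            x[i] = i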

@k-ye
Member Author

k-ye commented Mar 3, 2022

> For sequential loops I do
>
> for _ in range(1): # one thread
>     for i in range(100):
>         dostuff()
>
> No need for a decorator.

Yep, we do view that as a workaround rather than a solution.

@yuanming-hu
Member

Another cent from me: serial may be a better name compared to sequential. It's clearer and shorter.

@lin-hitonami
Contributor

lin-hitonami commented Mar 25, 2022

Resolved in #4525. Now we can write ti.loop_config(serialize=True) before a loop to make it serial.
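
A minimal, self-contained example of the API described above (the prefix-sum body is just an illustration of a loop that needs serial order):

import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(ti.i32, shape=100)

@ti.kernel
def prefix_sum():
    # loop_config applies to the next top-level for loop; serialize=True
    # runs it on a single thread, in order, so reading x[i - 1] is safe.
    ti.loop_config(serialize=True)
    for i in range(1, 100):
        x[i] += x[i - 1]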

Lang Features & Python automation moved this from "To Triage" to "Done" on Mar 25, 2022