
No parallelization when accessing NonSend/NonSync storage in workload #101

Closed
poconn opened this issue Jun 24, 2020 · 7 comments
Labels
enhancement New feature or request

Comments

@poconn
Contributor

poconn commented Jun 24, 2020

Currently a system with a NonSend and/or NonSync input always runs alone, but it should be possible to run other non-conflicting systems in parallel while running that system on the main thread.

I'm happy to post a PR, seems like it should be relatively straightforward, just checking first since it involves some core code that I'm not too familiar with. Does this conflict with anything you're working on/have planned, or is there some reason it's not feasible?

@leudz
Owner

leudz commented Jun 24, 2020

I used !Send + !Sync's restrictions because it was simple and optimizations could be added at any time. I'm not working on anything related to the scheduler currently so if you want to implement it, have fun =)

The restrictions go like this:

  • Send + !Sync can only be accessed from one thread at a time, so View and ViewMut have the same restriction
  • !Send + Sync can only be added/removed from the thread World is in, so ViewMut<T> would have to be limited to this thread (I'm not sure this is correctly enforced in World's methods; it might return an error/panic)
  • !Send + !Sync can only be accessed from one thread at a time, and that thread has to be the same one World is in
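As a concrete anchor for these combinations, a few standard-library types exemplify them (an illustrative aside, not from the thread; the probe functions are compile-time checks only):

```rust
use std::cell::Cell;
use std::rc::Rc;
use std::sync::MutexGuard;

// Compile-time probes: these only accept types that are Send / Sync.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    // Cell<T>: Send + !Sync — can move between threads, but only one thread
    // may access it at a time (like views of a Send + !Sync storage).
    assert_send::<Cell<i32>>();

    // MutexGuard<'_, T>: !Send + Sync — shareable across threads, but must
    // stay on (and be dropped by) the thread that created it.
    assert_sync::<MutexGuard<'static, i32>>();

    // Rc<T>: !Send + !Sync — neither probe accepts it; it is pinned to its
    // creating thread, like storage that must stay on World's thread.
    println!("probes compiled");
}
```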

@poconn
Contributor Author

poconn commented Jun 24, 2020

I used !Send + !Sync's restrictions because it was simple and optimizations could be added at any time.

To be clear, this applies to !Send + !Sync too: such systems are always put in a batch of their own, so they're not run in parallel with anything else, while we could run other systems on other threads at the same time, right?

use shipyard::*;
use std::thread::sleep;
use std::time::Duration;

struct NonSendSyncComponent;

struct OtherComponent;

let world = World::new();

world
    .add_workload("workload")
    .with_system(system!(|_: NonSendSync<View<'_, NonSendSyncComponent>>| {
        println!("system 1");
        sleep(Duration::from_secs(1));
    }))
    .with_system(system!(|_: View<'_, OtherComponent>| {
        println!("system 2");
        sleep(Duration::from_secs(1));
    }))
    .build();
world.run_workload("workload");
// output:
// system 1
// [1s passes]
// system 2

Anyway !Send + !Sync is the case I mainly care about for now, so I was thinking I would make this change for everything other than Send + Sync, treating the other three cases the same for now (missing out on the additional possible parallelization for !Send + Sync and Send + !Sync).

@leudz
Owner

leudz commented Jun 24, 2020

Oh, I understand now.
Yes, it's possible, but does rayon allow specifying which thread a system should run on?
We'd at least need a way to choose the "main" thread.

@poconn
Contributor Author

poconn commented Jun 24, 2020

I was thinking we'd kick off the other systems asynchronously through rayon, run the NonSendSync one right there in try_run_workload_index without going through rayon, and then join the other threads after it finishes. I'm not very familiar with rayon's API, so I'm not sure how trivial that is.
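A minimal sketch of that flow, using plain std threads rather than rayon (the system bodies are placeholders, not shipyard code; std::thread::scope requires Rust 1.63+):

```rust
use std::rc::Rc; // Rc is !Send + !Sync, standing in for NonSendSync storage
use std::thread;

fn main() {
    thread::scope(|s| {
        // Kick off the Send systems on worker threads...
        s.spawn(|| println!("other system (worker thread)"));

        // ...run the !Send + !Sync system right here on the main thread...
        let local = Rc::new("non-send-sync system (main thread)");
        println!("{local}");
    });
    // ...and the scope joins the workers before returning.
}
```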

@leudz
Owner

leudz commented Jun 24, 2020

rayon probably doesn't have an API for it. We could spawn a thread when creating World and send/receive info when needed, but that wouldn't be super clean, and rayon might fight with the main thread for resources.

@poconn
Contributor Author

poconn commented Jun 24, 2020

Huh, too bad; I just assumed that part would be straightforward 😕 I'll look into it some more later then. Thanks for the feedback!

@leudz leudz added the enhancement New feature or request label Oct 6, 2020
@leudz
Owner

leudz commented Aug 11, 2021

Turns out rayon does have a method for this: in_place_scope.

@leudz leudz closed this as completed in 89c0e3d Aug 12, 2021