
--parallel [amount] flag on pueue group --add [group_name] #249

Closed
Nukesor opened this issue Oct 4, 2021 · 5 comments · Fixed by #250
Labels
t: Enhancement (Something that exists, but could be made better), t: Feature (A new feature that needs implementation)
Comments

@Nukesor
Owner

Nukesor commented Oct 4, 2021

Add a --parallel flag, which allows users to set the number of parallel tasks for a group during its creation.

An optional parameter could be added to group add to specify the number of parallel tasks, e.g. pueue group add -p 10 gpu

Originally posted by @loewenheim in #240 (comment)
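For illustration, a minimal sketch of how the CLI side could look with clap's derive syntax; the struct and field names here are assumptions for the example, not pueue's actual definitions:

use clap::Parser;

/// Hypothetical argument struct for the `group add` subcommand
/// (illustrative only, not pueue's real CLI definitions).
#[derive(Parser)]
struct AddGroup {
    /// Name of the group to create.
    name: String,
    /// Maximum number of tasks this group may run in parallel.
    #[arg(short, long)]
    parallel: Option<usize>,
}

fn main() {
    let args = AddGroup::parse();
    println!("group: {}, parallel: {:?}", args.name, args.parallel);
}

Under these assumptions, pueue group add -p 10 gpu would parse into name = "gpu" and parallel = Some(10), while omitting the flag yields None and leaves the default untouched.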

@Nukesor added the t: Enhancement and t: Feature labels on Oct 4, 2021
@SpyrosRoum
Contributor

SpyrosRoum commented Oct 14, 2021

Hey, this seems like a relatively easy thing to add, and fairly convenient too, so I'm gonna give it a try!

Should I base my changes on master or on branch 1.1.0?

@SpyrosRoum
Contributor

SpyrosRoum commented Oct 14, 2021

And one more question:
I believe I have two choices for how to achieve this.
One would be to edit pueue-lib so GroupMessage::Add takes both the name and the number of parallel tasks.
The other would be to change get_message_from_opt so it can return multiple messages that are then sent to the server.
The easiest way would be to have it return Result<Vec<Message>>, but another approach would be to add an enum like

enum Messages {
    // The common case: a single request.
    Single(Message),
    // Several requests to send to the daemon in sequence.
    Multiple(Vec<Message>),
}

This way we wouldn't have to allocate a Vec with just one element for the most common case, a single message.

Thoughts?

@loewenheim
Copy link
Contributor

I think the simplest way to do it is to give GroupMessage::Add a second parameter of type Option<usize>.
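A minimal sketch of that suggestion; the variant shape and field names are assumptions for illustration, not pueue-lib's actual definitions:

// Illustrative only: the real pueue-lib enum has more variants and may
// use different names.
pub enum GroupMessage {
    Add {
        name: String,
        // None falls back to the daemon's default parallel task count.
        parallel_tasks: Option<usize>,
    },
    Remove(String),
}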

@SpyrosRoum
Contributor

I agree it's simpler, but it touches more places (daemon, client, and the pueue-lib crate). I'm not sure if there is currently a system to check that the versions of all these components are compatible with each other, but one would probably be useful.
Finally, being able to return multiple messages from get_message_from_opt might be useful in the future as well.

@Nukesor
Owner Author

Nukesor commented Oct 14, 2021

The changes should be based off the v1.1.0 branch.
That branch also uses the pueue-lib master branch, which makes development a lot more comfortable.

Adding an Option<usize> definitely is the way to go for now.
pueue-lib isn't at v1.0.0 for this exact reason: I expected there would be quite a few breaking changes in the internal library structure, which is why it isn't marked as stable yet.

I would like to keep get_message_from_opt as it is for now. The protocol is designed as a simple ping-pong in most cases.
There are some cases where we do special handling, but those are things like streaming or local-file-specific logic.
Sending multiple non-special requests in quick succession would need a partial rewrite of the current protocol logic.

I see how this has its own advantages, as one could simply chain requests and thereby dynamically extend the client-facing interface. However, this isn't how Pueue was designed. I would postpone this for now, as it would probably result in a major rewrite of a lot of things.
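For context, the ping-pong shape boils down to one request and one response per round trip. The sketch below uses made-up types to illustrate that, not pueue-lib's actual protocol API:

use std::io::{Read, Write};

// Illustrative types only; pueue-lib's real messages are serialized enums.
struct Message(String);
struct Response(String);

// One request in, one response out. Returning multiple Messages from
// get_message_from_opt would mean several of these round trips per
// invocation, which the current protocol logic doesn't account for.
fn round_trip<S: Read + Write>(stream: &mut S, request: Message) -> std::io::Result<Response> {
    stream.write_all(request.0.as_bytes())?; // send exactly one request
    let mut buf = String::new();
    stream.read_to_string(&mut buf)?; // block until the single response arrives
    Ok(Response(buf))
}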
