
[Feature Request]: DSP input parameter groupings, rendering order #1760

Open
yuanchi2807 opened this issue Sep 6, 2023 · 6 comments

Labels
community · feature/ds-pipelines · kind/enhancement · needs-info · priority/normal

Comments

@yuanchi2807

Feature description

Pipeline input parameters are rendered in alphabetical order today.
The user experience would improve if they were divided into mandatory and optional fields to accommodate both general and expert users.
General users would only need to fill in the mandatory section, while the optional fields would let experts further configure a pipeline.
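
For illustration, a minimal sketch of the idea (assuming the KFP v2 SDK; the parameter names here are hypothetical): parameters declared without defaults are required and could form the mandatory section, while defaulted ones stay optional for expert tuning.

    from kfp import dsl

    @dsl.pipeline(name="llm-pipeline")
    def llm_pipeline(
        # mandatory: no default, every user must supply these
        input_path: str,
        output_path: str,
        # optional: sensible defaults, only experts need to touch them
        min_worker: int = 2,
        max_worker: int = 2,
        cluster_up_tmout: int = 5,
    ):
        ...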

Describe alternatives you've considered

No response

Anything else?

No response

@yuanchi2807 yuanchi2807 added the kind/enhancement, priority/normal, and untriaged labels on Sep 6, 2023
@andrewballantyne
Member

@yuanchi2807 I'm fairly certain (looking at the code) we just render them in the order they appear in the params section... have you tried reordering your params to see if it reorders?

@andrewballantyne andrewballantyne added the needs-info label on Sep 6, 2023
@yuanchi2807
Author

@andrewballantyne
We put in

    name: str = "LLM-pipeline",  # name of Ray cluster
    min_worker: int = 2,  # min number of workers
    max_worker: int = 2,  # max number of workers
    cluster_up_tmout: int = 5,  # minutes to wait for the Ray cluster to reach the min_worker count
    wait_cluster_ready_tmout: int = 0,  # seconds to wait for Ray cluster to become available
    wait_cluster_nodes_ready_tmout: int = 1,  # seconds to wait for cluster nodes to be ready
    ....

and the Pipeline input parameters section begins like this.
[screenshot of the rendered Pipeline input parameters section]

@andrewballantyne
Member

Ooooh, most interesting. Thanks for the additional information @yuanchi2807

I am going to pull in some backend/notebook help to understand what is happening here. cc @harshad16 @HumairAK

If things are all generated by Kubeflow & the SDK -- is there any way for them to order the items? It seems the params might be sorted on that side of things (either in Elyra or in some other fashion).

@roytman

roytman commented Sep 6, 2023

@andrewballantyne, you're right. It is done by the compiler; I see the input parameters in alphabetical order in the YAML file.
e.g.

    - name: execute-ray-jobs
      params:
      - name: checkpoint_path
        value: $(params.checkpoint_path)
      - name: cos_access_secret
        value: $(params.cos_access_secret)
      - name: http_retries
        value: $(params.http_retries)
      - name: init-notifier-output
        value: $(tasks.init-notifier.results.output)
      - name: input_path
        value: $(params.input_path)
      - name: output_path
        value: $(params.output_path)
      - name: script_name
        value: $(params.script_name)
      - name: start-ray-cluster-output
        value: $(tasks.start-ray-cluster.results.output)
      - name: switch
        value: $(params.switch)
      - name: wait_job_ready_retries
        value: $(params.wait_job_ready_retries)
      - name: wait_job_ready_tmout
        value: $(params.wait_job_ready_tmout)
      - name: wait_print_tmout
        value: $(params.wait_print_tmout)
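
A quick way to confirm the ordering the compiler emits (a sketch, assuming the compiled output is a Tekton PipelineRun saved as pipeline.yaml; the file name and exact key path may differ in your setup):

    import yaml

    with open("pipeline.yaml") as f:
        doc = yaml.safe_load(f)

    # Print each task's params in exactly the order the compiler wrote them.
    for task in doc["spec"]["pipelineSpec"]["tasks"]:
        print(task["name"], [p["name"] for p in task.get("params", [])])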

@andrewballantyne
Member

If there is metadata you can get on the run -- we could sort them optionally based on that.
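
A sketch of what that sort could look like (assuming a hypothetical ui_order hint in the run metadata; no such field exists today): use the hint when present and fall back to the current alphabetical order otherwise.

    def sort_params(params):
        """Order params by a hypothetical 'ui_order' metadata hint, then by name."""
        return sorted(
            params,
            key=lambda p: (p.get("metadata", {}).get("ui_order", float("inf")), p["name"]),
        )

    params = [
        {"name": "min_worker", "metadata": {"ui_order": 2}},
        {"name": "name", "metadata": {"ui_order": 1}},
        {"name": "wait_print_tmout"},  # no hint: sorted alphabetically after hinted params
    ]
    print([p["name"] for p in sort_params(params)])
    # ['name', 'min_worker', 'wait_print_tmout']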

@DaoDaoNoCode DaoDaoNoCode added the feature/ds-pipelines and needs-ux labels and removed the untriaged label on Sep 11, 2023
@DaoDaoNoCode
Member

cc @yannnz
