
mlos_webui design requirements #838

Open

bpkroth opened this issue Aug 12, 2024 · 4 comments


bpkroth commented Aug 12, 2024

For starters, mlos_webui seems an appropriate moniker to me. It should support

  1. Visualization of ExperimentData from a shared Storage location (a read-only access sketch follows this list).
  2. Execution of existing mlos_bench configs it has access to.
    (e.g., via SP access to a git repo).
    • We should allow it to be parallelized across Experiments via GridSearch (e.g., to schedule different instances of a tuning Experiment across different scenarios, such as storage type, VM size, or benchmark type).
    • Error logs should be visible.
    • Experiments should be resumable.
  3. Editing of such configs.
    (with versioning)
  • Authentication should be required.
  • Fine-grained authorization (e.g., restricting access to experiments by group or some such) is out of scope for now (i.e., if you can authenticate, you can perform any action).
  • Code should be modular enough to allow easily adding features (e.g., new visualizations).

Originally posted by @bpkroth in #824 (comment)
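
For item 1, a minimal sketch of what the read path might look like, assuming the mlos_bench storage API (`from_config`, `Storage.experiments`, `ExperimentData.results_df`); treat these names as assumptions to be checked against the actual module rather than a confirmed interface:

```python
# Minimal sketch: read-only access to ExperimentData from a shared Storage
# location, for visualization. API names here (from_config, .experiments,
# .results_df) are assumptions based on mlos_bench's storage docs.
from mlos_bench.storage import from_config

# Hypothetical storage config pointing at the shared backend.
storage = from_config(config="storage/sqlite.jsonc")

for exp_id, exp in storage.experiments.items():
    df = exp.results_df  # one row per trial: config params + result metrics
    print(exp_id, df.shape)
```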


bpkroth commented Aug 12, 2024

@eujing
@yshady


bpkroth commented Aug 12, 2024

See Also #732


yshady commented Aug 12, 2024

Thanks for compiling a list of tasks/features, @bpkroth.

I'd also love for users to be able to:

  • configure benchmarks from a GUI
  • see the last x lines of error logs, with potential error lines parsed out and highlighted (user preference; a tail/highlight sketch follows this list)
  • monitor active experiments and see which launched experiments quit before reaching the required trial repeat count; this can be used as a tracking method
  • get automated correlation detection / validation between Params and a target metric, which would be heaven-sent (a correlation sketch follows below)
  • have an experiment grid showing a pie chart for each cell, where the user can see pending, failed, and succeeded trial counts
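
For the log-tail bullet, a minimal sketch of what the backend might do, assuming plain-text per-trial log files; the path and the error patterns are placeholders and should be exposed as a user preference:

```python
import re
from collections import deque
from pathlib import Path

# Hypothetical error patterns; make these a user preference in the UI.
ERROR_RE = re.compile(r"\b(ERROR|FATAL|Traceback|Exception)\b")

def tail_log(path: Path, num_lines: int = 50) -> list[tuple[bool, str]]:
    """Return the last num_lines of a log, flagging likely error lines."""
    with path.open(errors="replace") as fh:
        last = deque(fh, maxlen=num_lines)  # keeps only the tail in memory
    return [(bool(ERROR_RE.search(line)), line.rstrip("\n")) for line in last]

# Usage: render flagged lines highlighted in the web UI (path is hypothetical).
for is_error, line in tail_log(Path("trial-42/stderr.log")):
    print("!!" if is_error else "  ", line)
```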

Essentially, if we can map out everything we do into a web UI with no code, I think this will be a far better customer experience for those who may want to create their own benchmarks and their own tuning teams.
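
For the correlation bullet, a rough pandas sketch, assuming trial results land in a DataFrame with one numeric column per tunable param plus a target metric column (all column names below are hypothetical):

```python
import pandas as pd

def param_correlations(df: pd.DataFrame, target: str) -> pd.Series:
    """Rank numeric tunable params by |Pearson r| against a target metric."""
    params = df.drop(columns=[target]).select_dtypes("number")
    corr = params.corrwith(df[target])
    # Sort by absolute correlation, strongest candidates first.
    return corr.reindex(corr.abs().sort_values(ascending=False).index)

# Usage with hypothetical params/metric:
df = pd.DataFrame(
    {"buffer_pool_mb": [512, 1024, 2048, 4096],
     "io_threads": [2, 4, 8, 16],
     "latency_ms": [9.8, 7.1, 5.5, 5.2]}
)
print(param_correlations(df, target="latency_ms"))
```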

@yshady
Copy link

yshady commented Aug 12, 2024

I think creating a validation system, where control and optimized configs are run automatically on some level, will be super valuable for a web page.

Instead of launching many experiments manually, we might want to create a system to find and validate a number of configs given an estimated monetary budget (a budget-gating sketch is below).

Along with this, we also need to consider detecting when param selection is ineffective at changing a target metric.
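
A rough sketch of that budget gate, assuming a per-trial cost estimate is available; all names and the simple greedy policy here are hypothetical, not an existing mlos_bench feature:

```python
from dataclasses import dataclass

@dataclass
class ValidationPlan:
    config_id: str
    repeats: int               # trial repeats needed for confidence
    est_cost_per_trial: float  # e.g., VM $/hour * expected runtime (assumed known)

def select_within_budget(plans: list[ValidationPlan],
                         budget_usd: float) -> list[ValidationPlan]:
    """Greedily schedule validation runs (controls + candidates) under a budget."""
    chosen, spent = [], 0.0
    # Cheapest full validation first, so controls and quick candidates fit.
    for plan in sorted(plans, key=lambda p: p.repeats * p.est_cost_per_trial):
        cost = plan.repeats * plan.est_cost_per_trial
        if spent + cost <= budget_usd:
            chosen.append(plan)
            spent += cost
    return chosen

plans = [ValidationPlan("control", 5, 1.20),
         ValidationPlan("best-by-p50", 5, 1.20),
         ValidationPlan("best-by-p99", 5, 2.50)]
print([p.config_id for p in select_within_budget(plans, budget_usd=15.0)])
```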

@bpkroth bpkroth mentioned this issue Oct 3, 2024