Performance harness for MirageOS 3 #685
Comments
Hi there. It seems you've mistaken me for one of your devs :)
If only my skills were up to the task! :) Best of luck.
Manuel, you may need to manually unsubscribe yourself from this thread. Otherwise you'll keep getting notifications. :)
@TImada IIRC you were awaiting permission to publish the performance harness work you've done - is that still in progress somewhere?
@yomimono sorry, I'm still awaiting permission. It was supposed to be done by this week. I hope I can publish it soon. :)
We need a performance harness to evaluate all the MirageOS 3 backends and to determine whether changesets cause regressions or improvements.
- Many parameters to vary: individual library versions, OCaml compiler revisions, compiler flags, options such as tracing, and the backends we target (Xen, KVM, Unix). A sketch of how such a sweep might be driven follows this list.
- Initially look at on-host network tests, then move to off-host ones once we have physical machines.
- Possibly use Owl to do some multivariate data analysis on the script results (see the analysis sketch after this list).
- Push the results to a single performance summary page.
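To make the parameter matrix concrete, here is a minimal sketch of a sweep driver in OCaml. It is not part of any existing repository: it assumes opam 2.x and the mirage tool are installed, and the switch names, target list (ukvm being the KVM target at the time) and unikernel directory are illustrative assumptions.

```ocaml
(* Hypothetical sweep driver: shells out to opam and the mirage tool.
   Switch names, targets and the unikernel directory are illustrative. *)

let switches = [ "4.03.0"; "4.04.2" ]     (* OCaml compiler revisions to test *)
let targets  = [ "unix"; "xen"; "ukvm" ]  (* Mirage 3 backends: Unix, Xen, KVM via ukvm *)
let unikernel_dir = "./test/iperf"        (* assumed location of a test unikernel *)

(* Run a shell command, printing it first and reporting failures. *)
let run cmd =
  Printf.printf "+ %s\n%!" cmd;
  match Sys.command cmd with
  | 0 -> ()
  | n -> Printf.eprintf "command failed (%d): %s\n%!" n cmd

let () =
  List.iter (fun switch ->
      (* select the compiler; `opam switch set` assumes opam 2.x *)
      run (Printf.sprintf "opam switch set %s" switch);
      List.iter (fun target ->
          (* configure and build the unikernel for each backend *)
          run (Printf.sprintf
                 "cd %s && eval $(opam env) && mirage configure -t %s && make depend && make"
                 unikernel_dir target))
        targets)
    switches
```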
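As a starting point for post-processing, here is a hedged sketch of what an Owl-based check on the results could look like. It assumes each benchmark run appends a "backend,throughput_mbps" line to a results.csv file; the file name, column layout and the simple two-sigma threshold are illustrative assumptions, and this is a univariate regression check rather than the full multivariate analysis mentioned above.

```ocaml
(* Hypothetical post-processing sketch; requires the owl package.
   Assumed log format: one "<backend>,<throughput_mbps>" line per run. *)

let read_samples file =
  let ic = open_in file in
  let rec loop acc =
    match input_line ic with
    | line ->
      (match String.split_on_char ',' line with
       | [ backend; v ] -> loop ((backend, float_of_string v) :: acc)
       | _ -> loop acc)
    | exception End_of_file -> close_in ic; List.rev acc
  in
  loop []

(* Compare the latest run of each backend against the mean of earlier runs;
   flag anything more than two standard deviations below the historical mean. *)
let check_regressions samples =
  let backends = List.sort_uniq compare (List.map fst samples) in
  List.iter (fun b ->
      let vs = List.filter_map (fun (b', v) -> if b = b' then Some v else None) samples in
      match List.rev vs with
      | latest :: (_ :: _ as history) ->
        let h = Array.of_list history in
        let mean = Owl.Stats.mean h and std = Owl.Stats.std h in
        if latest < mean -. 2. *. std then
          Printf.printf "%s: possible regression (%.1f vs mean %.1f)\n" b latest mean
        else
          Printf.printf "%s: ok (%.1f vs mean %.1f)\n" b latest mean
      | _ -> Printf.printf "%s: not enough data\n" b)
    backends

let () = check_regressions (read_samples "results.csv")
```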
- @avsm to sort out physical machine infrastructure (initially hosted at CUCL)
- @avsm to build a DataKit module to run perf tests via OPAM
- Takayuki Imada to start on Mirage unikernels for individual network performance tests (e.g. iperf, ping); a minimal unikernel sketch follows this list
- @mor1 to resurrect https://github.com/mirage/mirage-perf, which we used for MirageOS 2 (but never ran on an ongoing basis)
- someone to determine the format we log in for post-processing
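For the iperf-style unikernel, a minimal sketch against the Mirage 3 API of the time might look as follows. The module and unikernel names are illustrative assumptions; it simply drains a TCP connection on port 5001 and logs the byte count, and a real benchmark would also need timing and throughput reporting.

```ocaml
(* config.ml -- illustrative sketch, not an existing unikernel. *)
open Mirage

let main = foreign "Unikernel.Iperf_server" (stackv4 @-> job)
let stack = generic_stackv4 default_network

let () = register "iperf" [ main $ stack ]
```

```ocaml
(* unikernel.ml -- count bytes received on TCP port 5001 and log the total. *)
open Lwt.Infix

module Iperf_server (S : Mirage_types_lwt.STACKV4) = struct
  let start s =
    S.listen_tcpv4 s ~port:5001 (fun flow ->
        let rec drain total =
          S.TCPV4.read flow >>= function
          | Ok (`Data buf) -> drain (total + Cstruct.len buf)
          | Ok `Eof | Error _ ->
            Logs.info (fun m -> m "received %d bytes" total);
            S.TCPV4.close flow
        in
        drain 0);
    S.listen s
end
```

Pairing a receiver like this with a conventional iperf client on the host would exercise the on-host network path mentioned above, before moving to off-host runs on physical machines.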
Useful notes: