
Allow running benchmarks from a different repo #370

Closed
punchagan wants to merge 6 commits

Conversation

@punchagan (Contributor)

This is a PR that corresponds to the changes in tarides/ocaml-benching#2. This is still a WIP for the UI/frontend, but the backend plumbing should be in place.

@punchagan requested a review from art-w on June 30, 2022 07:09
@punchagan force-pushed the ocaml-benching branch 3 times, most recently from 063c843 to 6bbeb8d on July 18, 2022 06:28
@punchagan marked this pull request as ready for review on July 18, 2022 06:40
@punchagan force-pushed the ocaml-benching branch 6 times, most recently from 3fc8316 to b873e47 on March 13, 2023 11:04
@punchagan changed the title from "WIP: Allow running benchmarks in a different repo" to "Allow running benchmarks from a different repo" on Mar 14, 2023
@ElectreAAS (Contributor)

With 26 files changed, this is a hard PR to review without context. Can you elaborate on what your goals were?
I can guess that the addition of target_name and target_version is the major thing going on, but I don't understand what they really mean for us.

@punchagan (Contributor, Author)

> With 26 files changed, this is a hard PR to review without context. Can you elaborate on what your goals were?

Yes, apologies @ElectreAAS! I just cleaned up an old work-in-progress PR and didn't realize that there wasn't enough context for a review. Thanks for flagging this! :)

The PR aims to allow keeping the benchmarks in a different repository from the code that is actually being benchmarked. The original work was done so that the benchmarks for ocaml/ocaml could run without making any changes to the ocaml/ocaml repository, including installing the GitHub application on the original/target repository.

With these changes, current-bench would be configured to watch the benchmark repository (art-w/benching, for instance), and the make bench target in that repository would clone the target repository and run the benchmarks. The current-bench code as it stands doesn't allow for this easily, since current-bench requires commit hashes to be unique within a repository, and we'd like to be able to run the same benchmarks against different commits/versions of the target repository. The changes in this PR essentially allow for this, while also making some changes in the UI to link the graphs, etc., to the target repository instead of the benchmarks repository.

@ElectreAAS (Contributor)

I just rebased on main for a simpler review. This was an automatic rebase, so I assume nothing broke.

@punchagan (Contributor, Author)

The current approach tries to support having the benchmarks in a completely different repository, plus some configuration on current-bench, without having to make any changes to the target repository. But this doesn't support watching the target repository's PRs, etc.; the step of adding the GitHub application to the target repository would still be needed to support that. The changes in this PR were done specifically to support building benchmarks for ocaml/ocaml, and to allow running the benchmarks on different versions of OCaml.

Currently, we plan to run the benchmarks on ocaml/ocaml on a weekly basis, but we'd also want to run the benchmarks on PRs to the project. Would it be better to take the approach where we add a minimal set of changes to ocaml/ocaml and then use a separate repository like ocaml-benching to store the benchmarks and run them? This is the approach taken by Steve for the dune monorepo benchmarks (PR), and it seems like a better approach in the long run. Also, I think we wouldn't need as many tweaks and shenanigans in the UI, etc., as we do in this PR, if we took that approach.

Thoughts, @art-w, @ElectreAAS?

Review thread on the following diff hunk:

match conf.Config.target_name with
| Some name -> ("TARGET_NAME=" ^ name) :: build_args
| _ -> build_args
in

Comment from a reviewer (Contributor):

I'm not sure I understand the motivation for the new fields in the configuration? (can't we just configure the existing conf.Config.build_args appropriately?)

Reply from @punchagan (Contributor, Author):

The additional metadata would make it easier to update the corresponding DB fields before a benchmark run has finished and the JSON is ready.
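
To illustrate the exchange above, here is a minimal, hypothetical sketch of how such optional fields could sit in the configuration and feed the Docker build arguments. Only the target_name/target_version field names and the TARGET_NAME match come from this PR; the record layout, the helper name, and the TARGET_VERSION handling are assumptions, not the PR's actual code:

```ocaml
(* Hypothetical sketch only; not the PR's actual Config module. *)
module Config = struct
  type t = {
    build_args : string list;
    target_name : string option;     (* e.g. Some "ocaml" *)
    target_version : string option;  (* e.g. Some "5.1.0" *)
  }
end

(* Thread the optional metadata into the Docker build arguments,
   mirroring the match in the hunk above. *)
let build_args_with_target (conf : Config.t) =
  let args =
    match conf.Config.target_name with
    | Some name -> ("TARGET_NAME=" ^ name) :: conf.Config.build_args
    | None -> conf.Config.build_args
  in
  match conf.Config.target_version with
  | Some v -> ("TARGET_VERSION=" ^ v) :: args
  | None -> args
```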

@art-w (Contributor) commented Mar 24, 2023

Hm yes, I see what you mean with the shenanigans. I like the idea of configuring on our end that ocaml/ocaml has a secondary repo ocaml-benching that contains the benchmark code to run, such that we can keep the "run a benchmark on every PR / every week" logic without requiring tricks on the UI side. It's not entirely obvious, but it seems doable?
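
As a rough sketch of the idea art-w describes, with invented type and field names (this is not current-bench's API, just an illustration of the mapping):

```ocaml
(* Hypothetical sketch: a watched repository with an optional secondary
   repository holding the benchmark code. *)
type repo_config = {
  repo : string;              (* e.g. "ocaml/ocaml" *)
  bench_repo : string option; (* e.g. Some "ocaml-benching" *)
}

(* Fetch the benchmark code from the secondary repo when one is
   configured, otherwise from the watched repository itself. *)
let bench_source (c : repo_config) =
  Option.value c.bench_repo ~default:c.repo
```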

punchagan and others added 6 commits March 24, 2023 16:52
To support running benchmarks from a different repository than the original repository, we allow specifying the name and the version of the original (target) project in the benchmarks config. We capture this version in a separate column in the benchmarks table in the DB so it can be displayed in the front-end.

target_version and target_name are parsed from the JSON output of the benchmarks run, but this information comes too late. The DB has a unique constraint on (repo_id, commit, worker, docker_image) on the metadata table to detect duplicate runs. This makes it tricky to configure benchmark runs for a project where the benchmark repository is separate from the code repository. Including target_version in that unique constraint would allow the same benchmark-repo commit to be recorded once per target version.
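
Taken together, the two commit messages above amount to extending the identity used for duplicate detection. A hypothetical sketch using the column names from the message (the actual table and constraint in current-bench may differ):

```ocaml
(* Hypothetical sketch: the fields that identify a benchmark run for
   duplicate detection, per the commit message above. Adding
   target_version lets the same benchmark-repo commit be recorded once
   per version of the target project. *)
type run_key = {
  repo_id : string;               (* repository watched by current-bench *)
  commit : string;                (* commit hash in that repository *)
  worker : string;
  docker_image : string;
  target_version : string option; (* proposed addition to the constraint *)
}

(* Two runs collide only if every field, including target_version, matches. *)
let duplicates (a : run_key) (b : run_key) = a = b
```
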
@ElectreAAS (Contributor)

I rebased again on main, with the newest API. Hopefully my manual conflict resolutions didn't break anything.

@punchagan (Contributor, Author)

> I like the idea of configuring on our end that ocaml/ocaml has a secondary repo ocaml-benching that contains the benchmark code to run […]

I implemented something along these lines in #427.

@punchagan (Contributor, Author)

Closed in favor of #427

@punchagan closed this Apr 3, 2023