
feat: benchmark util for assessing performance #787

Merged
merged 9 commits into from
Dec 21, 2023

Conversation

Nischay-Pro
Contributor

@Nischay-Pro Nischay-Pro commented Aug 3, 2023

Description

This PR adds a new benchmarking script to quickly assess the performance of FlexMeasures, so devs can check how their changes affect it.

To use, simply have FlexMeasures configured and activated in your local environment, and run

`python3 benchmark.py`

located in the `ci` folder at the repository root.
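For reference, a minimal sketch of what such a timing script could look like. This is an illustration only, not the actual contents of `ci/benchmark.py`; the command list is a stand-in for real FlexMeasures CLI calls:

```python
import subprocess
import sys
import time


def time_command(cmd):
    """Run a command as a subprocess and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start


if __name__ == "__main__":
    # Stand-in workload; a real benchmark script would time FlexMeasures
    # CLI commands instead (e.g. data-reading or scheduling commands).
    commands = [
        [sys.executable, "-c", "pass"],
    ]
    for cmd in commands:
        duration = time_command(cmd)
        print(f"{' '.join(cmd)}: {duration:.2f}s")
```

Note that timing a subprocess like this includes interpreter and app startup cost, which becomes relevant later in this thread.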


  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on code under the GPL or another license that is incompatible with FlexMeasures.

Signed-off-by: Nischay Ram Mamidi <NischayPro@gmail.com>
@Nischay-Pro Nischay-Pro changed the title initial benchmark script feat: benchmark util for accessing performance Aug 3, 2023
@Nischay-Pro Nischay-Pro self-assigned this Aug 3, 2023
Signed-off-by: Nischay Ram Mamidi <NischayPro@gmail.com>
Contributor

@nhoening nhoening left a comment


I tested it and it works fine. Thanks!

Could you round the measured duration to a shorter representation (e.g. 2 decimal places)?

Maybe @Flix6x has ideas for the commands:

  • Under which circumstances do these work? (I believe this assumes our production db dump.)
  • Could we maintain the list of sensors outside of the main code, or above in a handy-to-edit list?
  • What other commands would be useful, e.g. `add schedule for-storage`?
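The rounding asked for above amounts to a format specifier when printing the measured duration, for example:

```python
# A raw perf_counter delta has far more precision than a benchmark report needs.
duration = 12.3456789  # seconds (example value)

# Two decimal places keep the output readable.
report = f"Command took {duration:.2f} seconds"
print(report)  # → Command took 12.35 seconds
```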

@nhoening nhoening requested a review from Flix6x August 10, 2023 11:38
@Nischay-Pro
Contributor Author

> I tested it and it works fine. Thanks!
>
> Could you round the measured duration to a shorter representation (e.g. 2 decimal places)?
>
> Maybe @Flix6x has ideas for the commands:
>
> * Under which circumstances do these work? (I believe this assumes our production db dump.)
> * Could we maintain the list of sensors outside of the main code, or above in a handy-to-edit list?
> * What other commands would be useful, e.g. `add schedule for-storage`?

  • Currently it assumes the production db dump.
  • That actually looks good. Something similar to the toy tutorial, with its own dataset.
  • Yes. That would be helpful.

@Flix6x
Contributor

Flix6x commented Aug 11, 2023

I would like it if the sensor ID (for me, one is enough) could be passed when calling the benchmark script, so the script can be used on any FlexMeasures database. For me, it would then make sense to also allow passing a custom start, duration and max_iterations (possibly with the current values as defaults). I do like the benchmark showing results for several durations, but a single sensor is all I need.
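A sketch of the interface Flix6x describes, using `argparse`. All option names and defaults below are illustrative, not the merged interface:

```python
import argparse


def parse_args(argv=None):
    """Parse benchmark options so the script works against any FlexMeasures database."""
    parser = argparse.ArgumentParser(
        description="Benchmark FlexMeasures commands against a given sensor."
    )
    # Names and defaults are assumptions for illustration.
    parser.add_argument("--sensor-id", type=int, required=True,
                        help="ID of the sensor to benchmark against")
    parser.add_argument("--start", default="2023-01-01T00:00:00+00:00",
                        help="start of the queried time window (ISO 8601)")
    parser.add_argument("--duration", default="P1D",
                        help="duration of the queried window (ISO 8601 duration)")
    parser.add_argument("--max-iterations", type=int, default=10,
                        help="number of times to repeat each timed command")
    return parser.parse_args(argv)


if __name__ == "__main__":
    print(parse_args())
```

With such defaults in place, `python3 benchmark.py --sensor-id 42` would suffice for the single-sensor case Flix6x mentions.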

Signed-off-by: F.N. Claessen <felix@seita.nl>
@Flix6x Flix6x added this to the 0.17.0 milestone Sep 27, 2023
@nhoening nhoening assigned nhoening and unassigned Nischay-Pro Oct 17, 2023
…mmand, as well

Signed-off-by: Nicolas Höning <nicolas@seita.nl>
@nhoening nhoening modified the milestones: 0.17.0, 0.18.0 Nov 8, 2023
@Flix6x Flix6x merged commit 82949bf into main Dec 21, 2023
9 checks passed
@Flix6x Flix6x deleted the benchmark-util branch December 21, 2023 10:57
@nhoening
Contributor

@Flix6x I don't agree this PR was ready. Let's re-open it.

It should not call the CLI, as that takes too long by itself. I have not yet found the best way to benchmark otherwise. Maybe via the API, or by calling into functions directly. But as you can see in one of my commits, the client is not a good way either, IMO.
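To illustrate the overhead concern: timing a function call in-process measures only the work itself, whereas shelling out to the CLI pays interpreter startup and app initialization on every run. A generic sketch (the workload function is a stand-in, not FlexMeasures code):

```python
import time


def timed(func, *args, repeat=5, **kwargs):
    """Call func repeatedly in-process and return the best wall-clock time in seconds.

    Unlike spawning a CLI subprocess, this excludes interpreter startup,
    import time, and app initialization from the measurement.
    """
    best = float("inf")
    for _ in range(repeat):
        start = time.perf_counter()
        func(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best


# Stand-in workload; a real benchmark would call FlexMeasures functions
# (e.g. a scheduling routine) directly instead.
def workload(n):
    return sum(range(n))


if __name__ == "__main__":
    print(f"best of 5 runs: {timed(workload, 100_000):.4f}s")
```

Taking the best of several repeats is a common way to reduce noise from caching and scheduling jitter.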

@Flix6x Flix6x restored the benchmark-util branch December 21, 2023 12:54
Flix6x added a commit that referenced this pull request Dec 21, 2023
This reverts commit 82949bf.

Signed-off-by: F.N. Claessen <felix@seita.nl>
@Flix6x Flix6x removed this from the 0.18.0 milestone Dec 21, 2023
@Flix6x Flix6x added this to the 0.19.0 milestone Dec 21, 2023
@Flix6x
Contributor

Flix6x commented Dec 21, 2023

Apologies. I reverted the merge commit on the main branch: 8ed96e1.

@nhoening nhoening mentioned this pull request Jan 2, 2024
@nhoening
Contributor

How can we re-open a PR?

@nhoening
Contributor

Oh I see #949 is the new one for this.

@Flix6x Flix6x changed the title feat: benchmark util for accessing performance feat: benchmark util for assessing performance Feb 15, 2024