
Proposal on benchmark flow declaration and separation of concerns #8

tchataigner opened this issue Sep 13, 2023 · 1 comment

@tchataigner

Hey everyone! I have been interacting with the Gomu Gomu project for a few days now as I was implementing the cache layer over Madara. This gave me some time to think about some ideas for benchmark flows, and after discussing with @d-roak we decided to open an issue to talk about it.

Separation of concerns

I do not think this brings anything new under the sun compared to the initial issue by @abdelhamidbakhta, but I wanted to lay out my whole thinking process.

From my understanding there are a few goals driving the development of the project, but the main one seems to be a standard benchmark toolkit for every Starknet sequencer implementation. This pushes for a CLI that can be used against any sequencer RPC endpoint.
However, I think each sequencer team will also need to easily implement its own benchmark flows, and that should be straightforward to do either in the sequencer repo or in this one.
By splitting the Gomu Gomu implementation into three crates (or two, if three seems overkill), I think we could achieve that. My idea was:

  • cli crate: contains the interface to drive the shooter logic from a terminal. Practical for CI integration and use in a dev environment
  • lib crate: contains the core logic of the shooter. Responsible for running the benchmark flows, measuring performance and generating reports
  • flow crate: contains the logic for each benchmark flow, along with its own ephemeral state during testing.

Here lib and flow could be merged, but I liked the idea of each crate being responsible for a different concern.

Implementation of the flow execution

I think the current GatlingShooter is really good at what it does and we should not touch it too much. However, I would change the way we run benchmarks.

For starters, I would try to abstract the benchmark logic as much as possible out of GatlingShooter and make it generic over a trait we define. Taking inspiration from tester::Bencher, we could add this method to run flows:

impl GatlingShooter {
    // Arguments available in the closure should be iterated upon to identify
    // everything needed during a benchmark.
    fn iter<T>(&mut self, flow: impl Flow<T>, count: u64) {
        // Run the flow the requested number of times, measuring the time taken
        // on each iteration.
        // Save all measured performances in the GatlingShooter.
    }
}
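To make the idea concrete, here is a minimal, self-contained sketch of how iter could time each run. The Flow trait, the measurements field, and NoopFlow below are simplified stand-ins for illustration, not the actual Gomu Gomu types:

```rust
use std::time::{Duration, Instant};

// Simplified stand-in for the trait discussed below; the real version would
// receive the RPC client and accounts.
trait Flow<T> {
    fn execution(&mut self) -> T;
}

struct GatlingShooter {
    // One measured duration per iteration, later turned into a report.
    measurements: Vec<Duration>,
}

impl GatlingShooter {
    // Run the flow `count` times, timing each iteration and saving the
    // measurements in the shooter.
    fn iter<T>(&mut self, mut flow: impl Flow<T>, count: u64) {
        for _ in 0..count {
            let start = Instant::now();
            let _result = flow.execution();
            self.measurements.push(start.elapsed());
        }
    }
}

// A trivial flow used only to exercise the harness.
struct NoopFlow;
impl Flow<()> for NoopFlow {
    fn execution(&mut self) {}
}

fn main() {
    let mut shooter = GatlingShooter { measurements: Vec::new() };
    shooter.iter(NoopFlow, 10);
    assert_eq!(shooter.measurements.len(), 10);
    println!("recorded {} measurements", shooter.measurements.len());
}
```

Keeping the measurements inside GatlingShooter means the report generation can stay exactly where it is today.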

Then we could declare each flow via a trait:

trait Flow<T> {
    // Called by GatlingShooter::iter() before running each closure
    fn initialize(gatling_shooter: GatlingShooter) -> Result<Self, Error>
    where
        Self: Sized;

    // Method to be run by GatlingShooter::iter()
    fn execution(
        rpc: Arc<JsonRpcClient<HttpTransport>>,
        accounts: Vec<SingleOwnerAccount<Arc<JsonRpcClient<HttpTransport>>, LocalWallet>>,
    ) -> T;

    // Maybe add finalize() here?
}
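As a sketch of what implementing such a trait could look like: the RPC client and account types are elided here behind a made-up BenchContext, and TransferFlow is a purely hypothetical flow name, not something that exists in Gomu Gomu today:

```rust
// Hypothetical, simplified context standing in for the RPC client and
// accounts that GatlingShooter would normally provide.
struct BenchContext {
    account_count: usize,
}

trait Flow<T>: Sized {
    // Called once before the iterations start.
    fn initialize(ctx: &BenchContext) -> Result<Self, String>;
    // Run once per iteration by GatlingShooter::iter().
    fn execution(&mut self, ctx: &BenchContext) -> T;
}

// A hypothetical transfer benchmark: each execution "sends" one transfer
// per available account and returns how many were sent.
struct TransferFlow {
    total_sent: usize,
}

impl Flow<usize> for TransferFlow {
    fn initialize(ctx: &BenchContext) -> Result<Self, String> {
        if ctx.account_count == 0 {
            return Err("need at least one funded account".to_string());
        }
        Ok(TransferFlow { total_sent: 0 })
    }

    fn execution(&mut self, ctx: &BenchContext) -> usize {
        // The flow keeps its own ephemeral state across iterations.
        self.total_sent += ctx.account_count;
        ctx.account_count
    }
}

fn main() {
    let ctx = BenchContext { account_count: 3 };
    let mut flow = TransferFlow::initialize(&ctx).expect("init failed");
    let sent = flow.execution(&ctx);
    assert_eq!(sent, 3);
    assert_eq!(flow.total_sent, 3);
}
```

Taking &mut self in execution is what lets each flow carry its own ephemeral state across iterations, which matches the flow crate responsibility described above.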

If we go with a flow crate, all standard flows defined in Gomu Gomu could be exported through an enumeration to simplify the logic in the cli crate. We could define a proc macro to auto-generate it at build time.
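Hand-written, the generated enumeration could look like the sketch below; the variant names are made up for illustration, and the proc macro would emit the equivalent boilerplate, one variant per flow type exported by the flow crate:

```rust
// Hand-written version of what a proc macro could generate at build time.
enum StandardFlow {
    Transfer,
    Deploy,
}

impl StandardFlow {
    // Parse a CLI argument like `--flow transfer` into a variant.
    fn from_name(name: &str) -> Option<Self> {
        match name {
            "transfer" => Some(StandardFlow::Transfer),
            "deploy" => Some(StandardFlow::Deploy),
            _ => None,
        }
    }

    // Name used when listing available flows in the cli crate.
    fn name(&self) -> &'static str {
        match self {
            StandardFlow::Transfer => "transfer",
            StandardFlow::Deploy => "deploy",
        }
    }
}

fn main() {
    let flow = StandardFlow::from_name("transfer").expect("unknown flow");
    assert_eq!(flow.name(), "transfer");
    assert!(StandardFlow::from_name("unknown").is_none());
}
```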

With this kind of setup it would also be quite easy for any project using the lib crate directly to define their own flows on top of Gomu Gomu.

Miscellaneous thoughts

  • It might be interesting to be able to re-initialize the state of the node we are running against, so as to test against a consistent baseline. Is it really worth it? Could we achieve it?
  • Is there a need to pass specific command line arguments for a given benchmark flow?

Looking forward to hearing your thoughts on this, and happy to answer any questions!

@github-actions

There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍
Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs.
Thank you for your contributions!
