
Creating benchmarks for piccolo ORM #144

Open
aminalaee opened this issue Jul 31, 2021 · 7 comments

Comments

@aminalaee
Member

I'm usually not a fan of benchmarks, but I think it would be a good idea to have some comparing Piccolo with other sync/async ORMs.
It would also help with cases like #143, where Piccolo is compared against itself under different configurations.

@dantownsend
Member

@aminalaee I like the idea of having benchmarks to catch performance regressions made by changes to the Piccolo codebase.

I find them quite tricky to implement though, as there are no guarantees about the performance of the CI infrastructure, so one build might be slower than another without that having anything to do with the code being tested.

As for testing it against other frameworks, it's hard to know what to test. Piccolo is fastest in this situation:

# `freeze` caches a lot of the work in generating the SQL:
QUERY = MyTable.select().output(as_json=True).freeze()

async def some_endpoint(request):
    # Letting Piccolo serialise the JSON means orjson will be used if available, which is super fast.
    data = await QUERY.run()
    return Response(data, content_type="application/json")

Other frameworks might not have comparable features, or might have their own performance optimisations we're not aware of.

What do you think?

@aminalaee
Member Author

@dantownsend
I think that for comparing frameworks, benchmarking basic INSERT/SELECT/UPDATE/DELETE queries without any special configuration would be a good start. I agree that optimizing each framework can be complicated, and probably unfair to the others.

For regression testing Piccolo, I think we can try pytest-benchmark, and if the load on the GitHub runners affects the numbers, move to dedicated hardware for testing. Increasing the number of queries and averaging the results should also minimize that effect.
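Something along these lines could be a starting point (just a sketch, assuming a hypothetical Band table and an already-configured test database; pytest-benchmark runs the callable many times and reports min/max/mean/stddev, which should help with the noisy-CI concern):

# A rough sketch of a regression benchmark using pytest-benchmark.
# The Band table and the database engine setup are assumed to exist.
from piccolo.columns import Varchar
from piccolo.table import Table


class Band(Table):
    name = Varchar(length=100)


def test_select_benchmark(benchmark):
    # The benchmark fixture calls the function repeatedly and records
    # timing statistics, so a single slow run skews the result less.
    benchmark(lambda: Band.select().run_sync())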

@dantownsend
Member

@aminalaee Yeah, that makes sense. Do you think the benchmarks should be part of this repo, or a separate repo?

@aminalaee
Member Author

aminalaee commented Aug 1, 2021

@dantownsend I think for comparing different frameworks we could have a separate repo, so anyone can see how it works and run the benchmarks locally. We could then show the results in the Piccolo docs. That would also keep Piccolo's history clean of benchmarking commits.

And for the Piccolo regression tests, I think pytest-benchmark can do a good job in a GitHub workflow, but as you said this is a bit tricky and needs more testing.

If you think we need both of them, we can do them separately.
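For the comparison repo, the "no configuration" version of a query could be timed with something as plain as this (just a sketch; the Band table and the engine configuration are placeholders, and each ORM in the repo would get its own equivalent loop):

# A sketch of a plain SELECT benchmark; the Band table and the database
# engine configuration are assumed to already exist.
import asyncio
import time

from piccolo.columns import Varchar
from piccolo.table import Table


class Band(Table):
    name = Varchar(length=100)


async def benchmark_select(iterations: int = 1000) -> float:
    start = time.perf_counter()
    for _ in range(iterations):
        await Band.select().run()
    # Average over many queries to smooth out per-run noise.
    return (time.perf_counter() - start) / iterations


if __name__ == "__main__":
    print(f"Average SELECT time: {asyncio.run(benchmark_select()):.6f}s")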

@dantownsend
Member

Sounds like a good plan. It would be nice to have both, but having either of them would be useful.

@aminalaee
Member Author

@dantownsend If we want the extra repository for the comparisons, please create the repo and I'll open an MR to get it started.

@dantownsend
Member

Here's a repo in case you feel like doing some performance testing:

https://github.com/piccolo-orm/piccolo_performance
