
Jormungandr performance issue #1759

Open
kinnou02 opened this issue Sep 12, 2016 · 2 comments

kinnou02 (Contributor) commented Sep 12, 2016

I'm creating this issue to discuss jormungandr's performance problems. One of the slowest parts of jormungandr is the marshalling phase, which converts the protocol buffer response into a dict before serializing it to JSON. Another slow part, already resolved, was the deserialization of protobuf; this was solved by using the cpp implementation.
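For context, the hot path described above looks roughly like this (a minimal sketch with stand-in objects, not the actual jormungandr code or the real navitia schema):

```python
import json

# Stand-ins for a decoded protobuf response (names are illustrative).
class Section:
    def __init__(self, mode, duration):
        self.mode = mode
        self.duration = duration

class Journey:
    def __init__(self, duration, sections):
        self.duration = duration
        self.sections = sections

def marshal_journey(journey):
    # This per-field dict building, repeated for every object in the
    # response, is the slow marshalling phase discussed above.
    return {
        "duration": journey.duration,
        "sections": [
            {"mode": s.mode, "duration": s.duration}
            for s in journey.sections
        ],
    }

journey = Journey(600, [Section("walking", 120), Section("bus", 480)])
payload = json.dumps(marshal_journey(journey))
```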

I tested a few solutions:

  • Pypy is another Python interpreter that uses a JIT for better performance.
  • Cython compiles Python code and gives us the ability to type some variables, which can greatly increase performance. In this test I only activated Cython compilation, without adding types. In almost every test this was equivalent or slower, so I will not include it in the table below.
  • Serpy is another marshaller; it should be faster than the flask_restful marshaller (which has been deprecated in a recent release).
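To illustrate the declarative style serpy encourages, here is a toy sketch of such a serializer, written in plain Python so it stands alone (this is not serpy's actual implementation, and the field and type names are made up):

```python
# Toy sketch of a declarative serializer: fields are declared once per
# type, then read off the object in a single flat pass.

class Field:
    def __init__(self, attr):
        self.attr = attr

class Serializer:
    def to_dict(self, obj):
        # Collect the Field declarations from the class and read the
        # matching attributes off the object.
        return {
            name: getattr(obj, field.attr)
            for name, field in vars(type(self)).items()
            if isinstance(field, Field)
        }

# One such class has to be written for every response type, which is
# why the migration is slow.
class LineSerializer(Serializer):
    id = Field("uri")
    name = Field("name")

class Line:
    def __init__(self, uri, name):
        self.uri = uri
        self.name = name

result = LineSerializer().to_dict(Line("line:1", "Line 1"))
```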

These tests were run with ab; 100 requests were made each time, and the table gives the average response time.

|                        | journey | stop_schedule | lines | lines?count=500 |
|------------------------|---------|---------------|-------|-----------------|
| cpython                | 159     | 155           | 66    | 1679            |
| pypy                   | 172     | 117           | 55    | 954             |
| cpython-serpy          | NC      | NC            | 44    | 1075            |
| pypy-serpy             | NC      | NC            | 58    | 867             |
| cpython-serpy-protocpp | NC      | NC            | 27    | 529             |
| pypy-serpy-protocpp    | NA      | NA            | NA    | NA              |
| cpython-protocpp       | 136     | 102           | 45    | 1082            |
| pypy-protocpp          | NA      | NA            | NA    | NA              |
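For reference, the kind of ab run behind these figures would look roughly like this (the host, port, and coverage path are assumptions, not taken from the issue):

```shell
# 100 requests, one at a time; the mean time per request is read
# from ab's report.
ab -n 100 -c 1 "http://localhost:5000/v1/coverage/fr/lines?count=500"
```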

protobuf doesn't currently support the cpp optimization with pypy. Pypy is also a little slower on journey; it's possible that this is because we use numpy, I will check that later.

The migration process from marshal to the new serializer is slow: the serializer needs to be rewritten for every type.

pbougue (Contributor) commented Sep 12, 2016

Nice! Could be nice to also have the original figures (current "prod-context" times).

kinnou02 (Contributor, Author) commented

production is "cpython-protocpp"
