I'm creating this issue to discuss the performance problems in jormungandr. One of the slowest parts of jormungandr is the marshalling phase, which converts protocol buffer responses into dicts before serializing them to JSON. Another slow part, already resolved, was the deserialization of protobuf; this was solved by using the C++ implementation.
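For reference, here is a minimal sketch of how the protobuf C++ backend is usually selected in Python; the exact setup in jormungandr may differ:

```python
import os

# The backend must be chosen before the first protobuf import.
# With the C++ extension installed, message parsing runs in native code.
os.environ.setdefault("PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION", "cpp")

from google.protobuf.internal import api_implementation

# Returns "cpp" when the C++ implementation is active, "python" otherwise.
print(api_implementation.Type())
```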
I tested a few solutions:
PyPy is another Python interpreter that uses a JIT for better performance.
Cython compiles Python code and gives us the ability to type some variables, which can greatly increase performance. In this test I only enabled Cython compilation. In almost every test this was equivalent or slower, so I will not include it in the table below.
Serpy is another marshaller; it should be faster than the flask_restful marshaller (which has been deprecated in a recent release).
These tests were run with ab; 100 requests were made each time and the table shows the average response time.
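For example, a typical ab invocation for the lines endpoint could look like this (host, port and path are illustrative, not the actual benchmark setup):

```
ab -n 100 "http://localhost:5000/v1/coverage/default/lines?count=500"
```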
|                         | journey | stop_schedule | lines | lines?count=500 |
|-------------------------|---------|---------------|-------|-----------------|
| cpython                 | 159     | 155           | 66    | 1679            |
| pypy                    | 172     | 117           | 55    | 954             |
| cpython-serpy           | NC      | NC            | 44    | 1075            |
| pypy-serpy              | NC      | NC            | 58    | 867             |
| cpython-serpy-protocpp  | NC      | NC            | 27    | 529             |
| pypy-serpy-protocpp     | NA      | NA            | NA    | NA              |
| cpython-protocpp        | 136     | 102           | 45    | 1082            |
| pypy-protocpp           | NA      | NA            | NA    | NA              |
protobuf doesn't currently support the C++ optimization with PyPy. PyPy is also a little slower on journey; it's possible that this is because we use numpy, I will check that later.
The migration from marshal to serpy is slow: the serializer needs to be rewritten for every type.
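To give an idea of the per-type work involved, here is a hedged sketch of what one such rewrite could look like (the Line type and field names are illustrative, not the actual jormungandr serializers):

```python
import serpy
from flask_restful import fields, marshal

# Current approach: one marshal fields dict per type (illustrative fields).
line_fields = {
    'uri': fields.String,
    'name': fields.String,
    'code': fields.String,
}

def marshal_line(pb_line):
    # flask_restful builds a dict from the protobuf object.
    return marshal(pb_line, line_fields)

# serpy equivalent: each type needs its own Serializer class.
class LineSerializer(serpy.Serializer):
    uri = serpy.StrField()
    name = serpy.StrField()
    code = serpy.StrField()

def serialize_line(pb_line):
    # .data gives the plain dict ready for JSON serialization.
    return LineSerializer(pb_line).data

# For collections: LineSerializer(pb_lines, many=True).data
```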