This repository has been archived by the owner on Oct 29, 2024. It is now read-only.
I found low performance in the `json` library.
There are about 1,441,440 data points in my single series.
When I run the query `select host,last(value) from node_cpu group by host`, parsing the JSON body of the response is extremely slow: the query takes about 30 s to complete.
When I modified the source code to use the `ujson` library instead of `json`, parsing the HTTP response became much faster.
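The swap described above can be sketched roughly as follows. This is not the actual patch; it just illustrates the idea, relying on the fact that `ujson.loads` is a drop-in replacement for `json.loads` for plain deserialization, with a fallback when `ujson` is not installed:

```python
# Hedged sketch of the drop-in swap; names here are illustrative,
# not taken from the library's source.
try:
    import ujson as json  # C-accelerated parser, fast on CPython
except ImportError:
    import json  # stdlib fallback

def parse_response(body: str):
    """Parse the raw JSON body of an HTTP query response."""
    return json.loads(body)
```

Because both modules expose the same `loads(str)` call for this use, the rest of the response-handling code does not need to change.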
I'm not sure that is a good idea: whether ujson or json performs better really depends on which Python runtime you are using and on the size of the payloads. On CPython, ujson is a good choice; on PyPy, however, the JIT lets the stdlib json exceed the performance of ujson.
I'd like to see the library configurable at runtime, letting us choose between ujson and json depending on our use case and chosen runtime.
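One way the runtime-configurable approach could look is a small module-level switch. The function names `set_json_backend` and `loads` below are assumptions for illustration, not part of the library's actual API:

```python
import json

# Hypothetical sketch of a runtime-selectable JSON backend.
_loads = json.loads  # default: stdlib json (the better choice under PyPy)

def set_json_backend(name: str) -> None:
    """Pick the deserializer at runtime: "json" or "ujson"."""
    global _loads
    if name == "ujson":
        import ujson  # imported lazily so it stays an optional dependency
        _loads = ujson.loads
    elif name == "json":
        _loads = json.loads
    else:
        raise ValueError(f"unknown JSON backend: {name}")

def loads(payload: str):
    """Deserialize using whichever backend is currently selected."""
    return _loads(payload)
```

Keeping the stdlib as the default means nothing breaks for users without ujson installed, while CPython users with large payloads can opt in with a single call.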
Here is the patch file:
use_ujson_instead_of_json.txt