Hi!
First of all, thanks for this library! It sounds like a very good idea.
However, what I noticed is that since it evaluates fields lazily, comparing it directly to other JSON libraries that give you the full dictionary right away is a bit unfair.
Assuming you will use the whole dict anyway, IMHO a fairer comparison would be with an .export() call. Then, however, it is slower than orjson. It gets a lot faster if you do not use the whole dictionary, but in my experience that is rarely the case.
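The comparison proposed above can be sketched as a small micro-benchmark. This is an illustrative sketch only, not a benchmark from the library itself: the .export() name follows this discussion, a hasattr check covers pysimdjson versions that expose as_dict() instead, and the script falls back to the stdlib json module when pysimdjson is not installed.

```python
import json
import timeit

try:
    import simdjson  # pysimdjson; optional, stdlib fallback below
except ImportError:
    simdjson = None

if simdjson is not None:
    parser = simdjson.Parser()

    def lazy_field(buf):
        # Lazy path: only the accessed field is materialized.
        return parser.parse(buf)["user"]

    def full_dict(buf):
        # Full conversion to a Python dict, comparable to orjson.loads().
        doc = parser.parse(buf)
        # .export() follows the discussion above; newer versions call it as_dict().
        return doc.export() if hasattr(doc, "export") else doc.as_dict()
else:
    def lazy_field(buf):
        return json.loads(buf)["user"]  # stdlib json has no lazy mode

    def full_dict(buf):
        return json.loads(buf)

payload = json.dumps({"user": "alice", "items": list(range(1000))}).encode()

for name, fn in [("lazy one-field access", lazy_field), ("full export", full_dict)]:
    t = timeit.timeit(lambda: fn(payload), number=1000)
    print(f"{name}: {t:.4f}s for 1000 parses")
```

On the lazy path only the "user" field is ever converted to a Python object, which is why the gap between the two timings grows with document size.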
Hi,
yes, this is because the conversion to a Python dict (and other types) is the most expensive part of the whole JSON parsing process.
The idea behind this is to harness the raw power of SIMDJSON in Python, not to race against orjson.
It is also, as you correctly point out, not a universal replacement; you need to accept some trade-offs (the parsing output is read-only and not a true Python dictionary).
The message is: (1) these speeds are possible in Python, and (2) you need to adjust your design if you want to be in this performance range.
In our case, we parse rather big (~10 kB) JSONs at very high frequency (>50,000 per second); we access only a small fraction of the attributes and never need to modify the dictionary.
For this, SIMDJSON is an ideal choice.
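To put that use case in perspective: ~10 kB documents at >50,000 per second is upwards of 500 MB/s of JSON, which is far beyond what a typical full-dict parse budget allows. A quick, purely illustrative stdlib baseline (document shape and numbers are invented for this sketch; results are machine-dependent):

```python
import json
import timeit

# Build a roughly 10 kB document (250 small records).
payload = json.dumps(
    {"records": [{"id": i, "value": "x" * 16} for i in range(250)]}
)
print(f"document size: ~{len(payload) / 1024:.1f} kB")

# Measure full-dict parsing throughput with the stdlib parser.
n = 2000
t = timeit.timeit(lambda: json.loads(payload), number=n)
print(f"stdlib json: {n / t:,.0f} docs/sec")
```

Comparing that docs/sec figure against the >50,000/s requirement shows why a SIMD parser combined with lazy, partial field access is needed rather than any full-materialization library.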