Some analysis about memory allocator in json-c #552
linux/slab.h, or anything else from Linux, will not be usable due to incompatible licenses. I also suspect something got missed in your analysis: are all arrays in your sample data smaller than 32 entries? If not, there should be more allocations for "Arraylist->size" (which I assume is actually "Arraylist->array") than for "Arraylist". Similarly, if any object has more than 16 fields, the lh_entry count should exceed the lh_table count. Also, any performance analysis seems rather incomplete without details about what else is taking up time. If memory allocation is only 23% of the time (4.379/18.657), wouldn't it make more sense to focus on the other 77% instead?
Yes, imiskolee/mempool is inappropriate; I just took it as an example. Also, I missed the memory allocation time of
…ed at https://github.com/json-c/json-c/wiki/Proposal:-struct-json_object-split The current changes split out _only_ json_type_object, and thus include a number of hacks to allow the code to continue to build and work. Originally mentioned in issue #535. When complete, this will probably invalidate #552. This is likely to cause notable conflicts with any other significant un-merged changes, such as PR#620.
Fyi, I just pushed commit e26a119, which significantly improves the memory used in cases with a lot of small arrays. That, plus the changes from the json_object-split branch which I just merged in, might make a slab allocator less useful.
When json-c parses a JSON string, it has to request memory for many structs and strings with `malloc()`/`calloc()`, and the larger the JSON file, the more frequent these requests become. But `malloc()`/`calloc()` have several disadvantages: they are relatively slow, the memory must be freed later, and they cause fragmentation. I think we do not need to worry about freeing or fragmentation here, because a short-lived parse will not create a serious fragmentation problem; allocation speed is the key factor for performance optimization. I parsed a 631 KB JSON file from miloyip/nativejson-benchmark and got the following results:

parse time: 18.657 ms

As we can see,
`json_object`, `lh_table`, `lh_entry`, `Arraylist`, `Arraylist->size`, and `json->o.c_string.str.ptr` are allocated very frequently. Take `json_object` as an example: it is allocated 11968 times, costing 4.379 ms. So memory allocation takes a significant share of the whole parse, and if we want to improve json-c's performance we could try to reduce the time spent on memory requests.

A slab allocator is a memory-management mechanism intended for efficient allocation of objects. It is very suitable for the
`json_object`, `lh_table`, `lh_entry`, and `Arraylist` structs. However, when I tried to use it, I could not find or include `linux/slab.h`; I think it is not exposed to user space. Maybe I am using it the wrong way? In addition, a slab allocator needs a `kmem_cache` created and destroyed for every struct, whereas a memory pool only needs one memory block created at the beginning. What is your opinion on this? We can discuss the implementation details later.

A memory pool is a good choice. I used imiskolee/mempool to request memory for
`json_object`, and the parse time was reduced. Maybe we could find a suitable open-source memory pool, or just write one; that is a straightforward engineering task.

Finally, as json_init_library #540 mentioned, many libraries support wrapping `malloc`/`calloc` in a custom function and let the user solve the memory-optimization problem themselves. cJSON and jansson are examples, and it would be easy to implement.

Everyone's comments are welcome!