After some experimenting and talking to Nicolas Pierron at the Paris office (he works on IonMonkey performance), I found an easy performance gain to implement on the site: caching our big arrays of translations per repo as JSON files on first use, then reading the JSON file instead of including the PHP file. JSON is a strict and simple format, so parsing it is less work for the engine than including a PHP file that contains just an array, because a PHP file can contain any piece of code and the engine doesn't know what to expect.
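A minimal sketch of the idea, with hypothetical file paths and a hypothetical getTranslations() helper; cache invalidation is done here with a simple mtime comparison, which the actual patch may handle differently:

```php
<?php
// Sketch only: paths and helper name are hypothetical.

function getTranslations($repo)
{
    $phpFile  = __DIR__ . "/data/{$repo}.php";   // hypothetical: PHP file returning the big array
    $jsonFile = __DIR__ . "/cache/{$repo}.json"; // hypothetical: JSON cache location

    // Cache hit: JSON is strict and simple, so parsing it is cheap.
    // The mtime check invalidates the cache when the source changes.
    if (is_file($jsonFile) && filemtime($jsonFile) >= filemtime($phpFile)) {
        return json_decode(file_get_contents($jsonFile), true);
    }

    // First use: include the PHP file once (it returns the array),
    // then write the JSON cache for subsequent requests.
    $translations = include $phpFile;
    file_put_contents($jsonFile, json_encode($translations));

    return $translations;
}
```

The cache-miss path still pays the cost of including the PHP file once, but every later request only parses JSON.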
Here is the before/after result on our most intensive view (entity search across all locales for Firefox Desktop):
Before:
Memory peak: 18874368 (17.25MB)
Elapsed time (s): 5.7142
After:
Memory peak: 13369344 (12.57MB)
Elapsed time (s): 1.5571
We get performance and memory gains on all the views working with strings; here are the numbers for the main search view:
Before:
Memory peak: 22020096 (19.57MB)
Elapsed time (s): 0.1894
After:
Memory peak: 20709376 (19.09MB)
Elapsed time (s): 0.0837
On my local server, if I simulate 500 requests to the API in batches of 100 simultaneous requests, I get 12 requests/s before the patch and 29 after.
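The throughput and latency figures below are formatted like ApacheBench output; assuming that is the tool used, a comparable run would look like this (the URL is a placeholder):

```sh
ab -n 500 -c 100 http://localhost/api/endpoint
```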
On our real server, for the same API call:
Before:
Requests per second: 22.85 #/sec
Time per request: 4377.272 ms
After:
Requests per second: 38.30 #/sec
Time per request: 2610.636 ms