After some experimenting and talking to Nicolas Pierron at the Paris office (he works on IonMonkey performance), I found an easy performance gain to implement on the site: caching our big arrays of translations, per repo, as JSON files on first use, and then loading the JSON file instead of including the PHP file. The reason is that JSON is a strict and simple format, so parsing it is less work for the engine than including a PHP file that contains just an array: a PHP file can contain any piece of code, so the engine doesn't know what to expect.
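The idea can be sketched as follows; this is a minimal illustration, not the actual site code, and the file paths and function name are assumptions:

```php
<?php
// Hypothetical sketch of the caching idea described above.
// Assumes the per-repo PHP file ends with `return [ ... ];`.
function getTranslations(string $repo): array
{
    $phpFile  = __DIR__ . "/cache/{$repo}.php";   // big translation array
    $jsonFile = __DIR__ . "/cache/{$repo}.json";  // created on first use

    if (!file_exists($jsonFile)) {
        // First use: include the PHP file once, then serialize it as JSON.
        $translations = include $phpFile;
        file_put_contents($jsonFile, json_encode($translations));
        return $translations;
    }

    // Subsequent uses: decoding strict, simple JSON is cheaper than
    // compiling a PHP file that could contain arbitrary code.
    return json_decode(file_get_contents($jsonFile), true);
}
```

The cache file is written once and reused on every later request, which is where the memory and time savings below come from.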
On the entity view, here is the before/after result for our most intensive case (an entity search across all locales for Firefox Desktop):
Before:
Memory peak: 18874368 bytes (18 MB)
Elapsed time (s): 5.7142

After:
Memory peak: 13369344 bytes (12.75 MB)
Elapsed time (s): 1.5571
We get performance and memory gains on all the views that work with strings. Here are the numbers for the main search view:
Before:
Memory peak: 22020096 bytes (21 MB)
Elapsed time (s): 0.1894

After:
Memory peak: 20709376 bytes (19.75 MB)
Elapsed time (s): 0.0837
On my local server, if I simulate 500 requests to the API in batches of 100 concurrent requests, I get 12 requests/s before the patch and 29 after.
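A load test of that shape could be reproduced with ApacheBench; the tool choice and the endpoint URL are assumptions, since the original doesn't name them:

```shell
# Hypothetical reproduction: 500 total requests, 100 concurrent,
# against a local API endpoint (URL is illustrative).
ab -n 500 -c 100 "http://localhost/api/v1/search?query=home"
```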