HTTP Cache parallelism experiment #23553
Conversation
highfive commented Jun 12, 2019
Heads up! This PR modifies the following files:
highfive commented Jun 12, 2019
Ok, need to deal with redirects...
Force-pushed from 3e1c217 to b830741.
Opened new PR for upstreamable changes. Completed upstream sync of web-platform-test changes at web-platform-tests/wpt#17311.
Error syncing changes upstream. Logs saved in error-snapshot-1560410894923.
No upstreamable changes; closed existing PR. Completed upstream sync of web-platform-test changes at web-platform-tests/wpt#17311.
Ok, so this actually exposed a bug: we would previously do two "stores" of a response. Actually, the spec says to "update" the data of the stored resource on revalidation, and storing instead meant we kept 304 responses when we actually had a 200 that could be refreshed. So with the current changes, a bunch of 304-related tests started failing, because the cache started returning 304 responses while a 200 was expected. cc @jdm
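For reference, a minimal sketch of the "update, don't store" behavior described above. The types here are hypothetical simplifications for illustration; the real Servo cache works on full `Response` structures with typed header maps:

```rust
/// Hypothetical simplified stand-in for a cached response.
#[derive(Clone, Debug, PartialEq)]
pub struct CachedResponse {
    pub status: u16,
    pub headers: Vec<(String, String)>,
}

/// On revalidation, a 304 must not be stored as a new response:
/// its headers are merged into the stored 200, whose status and
/// body stay as they were. Any other status replaces the entry.
pub fn handle_revalidation(stored: &mut CachedResponse, fresh: &CachedResponse) {
    if fresh.status == 304 {
        for (name, value) in &fresh.headers {
            match stored.headers.iter_mut().find(|(n, _)| n == name) {
                Some(existing) => existing.1 = value.clone(),
                None => stored.headers.push((name.clone(), value.clone())),
            }
        }
        // `stored.status` remains 200; the 304 itself is never stored.
    } else {
        *stored = fresh.clone();
    }
}
```

The bug described above amounts to taking the `else` branch for a 304 as well, overwriting the stored 200.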
Ok, this one is ready for review. @jdm r? Is this progress? I can't say I have done any scientific measurements. On the other hand, the cache appears "faster", if only because the three "HTTP cache updates returned headers(...)" tests in fetch/http-cache/304-update.html started failing due to a previously hidden bug. It looks like this:
Note that 2-5 are done from the same fetch thread, while 6 is done in a parallel fetch (started after the one that does 2-5, but running in parallel with it).

So previously, the test would always pass and return the 200 at 6. The 304 was stored, but somehow never constructed and returned at 6. With these changes, that same test started consistently failing, returning the stored 304 at 6. The fix was removing the bug and not storing the 304 when we should instead use it to update a 200.

So logically, it appears that the contention on the cache as a whole prevented the stored 304 from being constructed in between the parallel steps 4/5 and 6. With the current change, it would seem 6 could "quickly" grab the 304 in between 4/5. Note that these fetches were contending on the same "entry".

So this in a way shows that "per entry" parallelism is pretty good. The "different entry" case hasn't been tested, but should logically be even better, since there is no contention between different fetches using different entries (only when getting the entry from the cache). There was also the issue of the...
Not sure how relevant this is in light of #23494 (comment), although the test will require an update in any case...
Closing in favor of #23494. Moved the update of the test to that PR.
gterzian commented Jun 12, 2019 (edited by SimonSapin)
Experiment to see if the parallelism of the cache can be improved, by having parallel fetches not contend on the cache as a whole for each operation; instead they just get an entry corresponding to their request and then operate on the entry, which should thus only be contended by fetches for similar resources. Also trying to have the various resources in an entry have their own locks, so that there will be less contention even in the case of requests for similar resources (which could be reduced further if entries are double-keyed by top-level origin).
I'm only worried that request URLs change during a fetch, which would complicate matters...
Part of #23495
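The "per entry" locking idea above can be sketched as follows. This is a minimal illustration with hypothetical simplified types (`CacheKey`, `Entry`); the actual cache keys on more than a string and stores richer per-resource state. The point is that the whole-cache lock is held only to look up or create an entry, and all further work contends only on that entry's own lock:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};

/// Hypothetical key type; the real key is derived from the request.
type CacheKey = String;

/// Hypothetical per-entry state, guarded by its own lock so that only
/// fetches for the same resource contend with each other.
#[derive(Default)]
pub struct Entry {
    pub responses: Vec<String>,
}

#[derive(Default)]
pub struct HttpCache {
    /// The whole-cache lock is held only long enough to look up or
    /// insert an entry; it is released before any per-entry work.
    entries: RwLock<HashMap<CacheKey, Arc<Mutex<Entry>>>>,
}

impl HttpCache {
    /// Return the entry for `key`, creating it if necessary. Callers
    /// then lock only the entry, not the cache as a whole.
    pub fn entry_for(&self, key: &str) -> Arc<Mutex<Entry>> {
        // Fast path: a read lock suffices when the entry exists.
        if let Some(entry) = self.entries.read().unwrap().get(key) {
            return entry.clone();
        }
        // Slow path: take the write lock to insert a fresh entry.
        self.entries
            .write()
            .unwrap()
            .entry(key.to_string())
            .or_default()
            .clone()
    }
}
```

Two fetches for the same URL get clones of the same `Arc` and contend on that one `Mutex`; fetches for different URLs only briefly contend on the outer `RwLock` while fetching their entries.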
./mach build -d does not report any errors
./mach test-tidy does not report any errors