
ci: evaluate terms in parallel #464

Merged
merged 4 commits into from
Jan 27, 2017

Conversation

talex5
Contributor

@talex5 talex5 commented Jan 26, 2017

This PR allows up to 50 terms to be evaluated at the same time. This may be useful if you have a high-latency link to the database. Testing with SelfCI against a DB in another country, this improved startup time from about 90s to 45s.

/cc @avsm
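The "up to 50 at the same time" idea is plain bounded concurrency. A minimal sketch in Python (illustrative only; DataKitCI itself is OCaml/Lwt, and `evaluate` and the term names here are hypothetical stand-ins for a high-latency database round trip):

```python
# Bounded-concurrency sketch: evaluate many independent "terms" with at
# most MAX_PARALLEL in flight, mirroring the PR's limit of 50.
# Illustrative Python only -- not DataKitCI's actual OCaml/Lwt code.
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL = 50  # the bound chosen in this PR

def evaluate(term):
    # Placeholder for a slow, independent evaluation (e.g. a DB fetch).
    return f"result-of-{term}"

def evaluate_all(terms):
    # The pool's worker limit acts as the concurrency bound: at most
    # MAX_PARALLEL evaluations run at once; the rest wait in the queue.
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        return list(pool.map(evaluate, terms))

print(evaluate_all(["a", "b"]))  # ['result-of-a', 'result-of-b']
```

With a high-latency link, keeping 50 requests in flight hides most of the per-request round-trip time, which is consistent with the roughly 2x startup improvement reported above.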

Thomas Leonard added 2 commits January 26, 2017 13:49
Instead, lock each term individually. This should mean a quicker
response when rebuilding, and begins to move to parallel term
evaluation.

Signed-off-by: Thomas Leonard <thomas.leonard@docker.com>
Signed-off-by: Thomas Leonard <thomas.leonard@docker.com>
@samoht
Member

samoht commented Jan 26, 2017

Why 50? You can allow an unlimited number of fids when connecting to the server if needed; see https://github.com/docker/datakit/blob/master/bridge/github/main.ml#L134 for an example.

@talex5
Contributor Author

talex5 commented Jan 27, 2017

Unlimited buffering means increased latency: e.g. if you have 1000 9p requests queued up and the user tries to view a log, they'll have to wait for the queue to clear first. Also, we just want to keep the server busy, not overload it. The ideal value will depend a bit on the deployment, but I think 50 should be plenty.
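The latency argument can be made concrete with a back-of-the-envelope sketch (all numbers hypothetical; `rtt_ms` stands for one 9p round trip):

```python
# Rough worst-case wait for a newly enqueued request behind `queued`
# outstanding operations, with `in_flight` served concurrently.
# Illustrative arithmetic only, not a model of the real scheduler.
import math

def worst_case_wait_ms(queued, rtt_ms, in_flight):
    # A new arrival waits roughly ceil(queued / in_flight) round trips.
    return math.ceil(queued / in_flight) * rtt_ms

print(worst_case_wait_ms(1000, 50, 50))  # unbounded queue, 1000 ahead: 1000 ms
print(worst_case_wait_ms(50, 50, 50))    # queue capped at 50: 50 ms
```

This is why capping the number of outstanding requests keeps an interactive request (like viewing a log) responsive while still saturating the server.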

@samoht
Member

samoht commented Jan 27, 2017

My point is that you might need to increase the number of fids that the server will accept from the client (by default it is set to 10, I think) if you want your 50 requests to really be processed in parallel.

@samoht
Member

samoht commented Jan 27, 2017

(my bad, it is set to 100 by default: https://github.com/mirage/ocaml-9p/blob/master/lib/protocol_9p_client.ml#L552 so it should be ok)

Thomas Leonard added 2 commits January 27, 2017 11:06
Allows loading multiple branches from disk in parallel.

Signed-off-by: Thomas Leonard <thomas.leonard@docker.com>
Allows more outstanding 9p operations at the same time.

Signed-off-by: Thomas Leonard <thomas.leonard@docker.com>
@talex5
Contributor Author

talex5 commented Jan 27, 2017

Worth increasing it anyway (I didn't realise the default was still 100). I've also changed the cache to use per-branch locks, which helps a lot.

@talex5 talex5 merged commit 1e8e287 into moby:master Jan 27, 2017