Speed up API generation #461
Conversation
LGTM! Just the one comment worth considering.
service_blob = blob(repo_name, service["sha"]; auth=auth)
service = JSON.parse(String(base64decode(service_blob.content)))
# Remove old service files to ensure services that no longer exist are removed.
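The comment in the diff above describes a cleanup step. A minimal sketch of that idea (illustrative names and an assumed `services_dir` output directory, not the actual AWS.jl code) might look like:

```julia
# Hypothetical sketch: delete previously generated high-level service
# files before regenerating, so services that no longer exist upstream
# are not left behind. `clean_service_files` is an illustrative name.
function clean_service_files(services_dir::AbstractString)
    for path in readdir(services_dir; join=true)
        # Only remove generated Julia source files.
        endswith(path, ".jl") && rm(path)
    end
end
```

Deleting before regenerating makes the output directory an exact mirror of the current upstream definitions, rather than an accumulation of every definition that ever existed.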
I'm down for this change. A couple of things maybe worth commenting on: has AWS ever removed an existing service before? We may also need to consider how to handle this on the Julia side.
AWS's semver most likely wouldn't mark this as breaking, given what they have done historically (an assumption). But I'm not sure we would want to follow that in our Julia packages.
It is doubtful this would happen on the AWS side. Where this does occur with AWS.jl is when we rename the service files.
Maybe I should update this comment?
Forgot to respond here... The existing files would not be removed. If we wanted to rename files we would need to manually empty the directory and regenerate them.
We can bring this change along, seems to make sense!
bors r+
461: Speed up API generation r=mattBrzezinski a=omus

In working on some other PRs it was necessary for me to regenerate the service definitions locally. I noticed the process was kind of slow, so I looked into some minor improvements and this PR was the outcome.

I started off by adding threading support to the high-level and low-level loops. Here are the resulting timings:

No threading: 1m 45s
Threading (1): 1m 38s
Threading (2): 60s
Threading (4): 37s

I then noticed we were downloading the service definition files twice. To fix this I introduced the `ServiceFile` type, which handles fetching the service definition JSON files and caches the results. This means we only download the service definition files for the low-level API generation. Additionally, this change reduces how many GitHub API calls we make and avoids rate limiting over multiple calls.

Finally, I added some logic to remove the high-level service definitions before building the new ones. This was done to handle the scenario in which AWS actually removes a service definition file.

Co-authored-by: Curtis Vogt <curtis.vogt@gmail.com>
Co-authored-by: mattBrzezinski <matt.brzezinski@invenia.ca>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
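The threading change described in the commit message can be sketched roughly as follows. `generate_service` here is a stand-in for the real per-service generation work, which is independent across services and therefore safe to parallelize (run Julia with e.g. `-t 4` to get multiple threads, mirroring the timings above):

```julia
# Rough sketch of threading the generation loop; illustrative names,
# not the actual AWS.jl functions.
generate_service(name) = "generated $name"  # stand-in for the real work

function generate_all(services::Vector{String})
    results = Vector{String}(undef, length(services))
    # Each iteration writes to its own preallocated slot, so no
    # locking is needed and output order is preserved.
    Threads.@threads for i in eachindex(services)
        results[i] = generate_service(services[i])
    end
    return results
end
```

Because each service definition is generated independently, the speedup scales with thread count until download or I/O becomes the bottleneck, consistent with the 1m 45s → 37s numbers reported.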
Canceled.
bors r+
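A hedged sketch of the download-once idea behind the `ServiceFile` type described in the PR: the real type fetches the JSON definitions from GitHub, while here the downloader is injected as a plain function so the caching behavior is visible on its own. All names below are illustrative.

```julia
# Illustrative cache-on-first-use wrapper in the spirit of the
# `ServiceFile` type described above; not the actual AWS.jl code.
mutable struct CachedServiceFile
    name::String
    definition::Union{Nothing,String}  # populated on first fetch
end
CachedServiceFile(name::String) = CachedServiceFile(name, nothing)

# Return the cached definition, calling `fetch` at most once. The
# high-level and low-level generation passes can then share one
# download instead of fetching the same file twice.
function definition!(sf::CachedServiceFile, fetch::Function)
    if sf.definition === nothing
        sf.definition = fetch(sf.name)
    end
    return sf.definition
end
```

Beyond the time saved, fetching each file once halves the number of GitHub API requests, which is what keeps the generator under the rate limit across repeated runs.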