Currently, we hit the GitHub API rate limit extremely quickly for the queries that find all CRD types in a repository (search), and fairly quickly when parsing each of those CRDs (get file contents). These results should be cached so that we do not have to hit GitHub every time they are requested.
A first step could be authenticating requests, which raises the limit for fetching CRD file content from 60 to 5,000 requests per hour, and the limit for finding CRDs in a repository from 10 to 30 requests per minute.
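As a minimal sketch of what authenticating looks like, the snippet below hits GitHub's real `/rate_limit` endpoint and attaches a personal access token via the `Authorization` header; `GITHUB_TOKEN` is an assumed env var, not something our code reads today. The response shows the quota jump from 60 to 5,000 core requests per hour (and 10 to 30 per minute for search):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// /rate_limit is a free endpoint that reports the caller's current quota,
	// so it directly demonstrates the effect of authenticating.
	req, err := http.NewRequest("GET", "https://api.github.com/rate_limit", nil)
	if err != nil {
		panic(err)
	}

	// GITHUB_TOKEN is assumed to hold a personal access token. Without it,
	// the request is anonymous and subject to the 60/hour core limit.
	if token := os.Getenv("GITHUB_TOKEN"); token != "" {
		req.Header.Set("Authorization", "token "+token)
	}
	req.Header.Set("Accept", "application/vnd.github.v3+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // includes "core" and "search" limit objects
}
```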
The next step will be to cache results and re-crawl repos on a periodic basis, driven by incoming requests for content.
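A rough sketch of that caching layer, assuming an in-memory map keyed by repository with a TTL (the names `crdCache` and `Fetch` are hypothetical, not from the actual codebase). Stale entries trigger a re-crawl on the next request, so GitHub is only hit when cached content has expired:

```go
package crdcache

import (
	"sync"
	"time"
)

type cacheEntry struct {
	crds    []string // parsed CRD contents, simplified to strings here
	fetched time.Time
}

// crdCache holds per-repo results and re-crawls once they exceed the TTL.
type crdCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cacheEntry
}

func newCRDCache(ttl time.Duration) *crdCache {
	return &crdCache{ttl: ttl, entries: make(map[string]cacheEntry)}
}

// Fetch returns cached CRDs for repo, calling crawl (which would hit the
// GitHub API) only when the entry is missing or stale.
func (c *crdCache) Fetch(repo string, crawl func(string) ([]string, error)) ([]string, error) {
	c.mu.Lock()
	e, ok := c.entries[repo]
	c.mu.Unlock()
	if ok && time.Since(e.fetched) < c.ttl {
		return e.crds, nil
	}

	crds, err := crawl(repo)
	if err != nil {
		return nil, err
	}

	c.mu.Lock()
	c.entries[repo] = cacheEntry{crds: crds, fetched: time.Now()}
	c.mu.Unlock()
	return crds, nil
}
```

With request-driven expiry like this, frequently requested repos stay warm while rarely requested ones simply age out, which matches the "crawl based on incoming requests" idea above.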