Cache schemas to disk #21
Hi Nitive! You can actually run kubeconform without an internet connection. You would need to copy the right folder from https://github.com/instrumenta/kubernetes-json-schema to a local folder, and then run kubeconform like this:

```
./bin/kubeconform -schema-location '/path/to/your/local/copy/{{ .NormalizedKubernetesVersion }}-standalone{{ .StrictSuffix }}/{{ .ResourceKind }}{{ .KindSuffix }}.json' folder
```

If you stick to a single Kubernetes version, it shouldn't be big enough to be a concern! Note: I might add an integration test + documentation for this use case.
Yeah, I can download the schemas and use a local copy, but it's not very convenient. It would be okay if downloading schemas was a one-time thing, but it isn't.
I was hoping to upload the schemas to some server and let kubeconform download and cache them. When a new version of the schemas is released, I would change the schemas URL for kubeconform to redownload them:

```diff
 kubeconform -strict -schema-location https://kubernetesjsonschema.dev -schema-location \
-  https://kubernetesjsonschema.my-company.com/v1
+  https://kubernetesjsonschema.my-company.com/v2
```

Also, kubeconform can do caching more efficiently because it can download only the schemas for resources that are actually used, not everything that Kubernetes supports.
Agree - just pointing out that "everything Kubernetes supports" is only about 50MB for the whole folder from kubernetes-json-schema (per k8s version) - less if you strip out the files you don't need (for example the non -strict files if you use -strict, etc). It's a bit of work to maintain, that's true. There are a couple of ways to implement a cache. There actually already is an in-memory cache - but it caches the parsed schemas, not the raw schemas. I could persist that to disk, but that would mean a binary cache. Or I could add a second layer of cache to the HTTP registry driver 🤔 This will take a little bit of time to implement, I need to think about it a bit more :)
@Nitive I made a PoC here: https://github.com/yannh/kubeconform/pull/24/files - you specify a folder to cache schemas with -cache; the filename is the md5 checksum of the URL, and the file contains the schema in clear text. The existing cache also caches things like 404s and would probably never detect new files. The cache I implemented here never expires; it assumes the files at a given URL never change. Would this work for you? Maybe as a future iteration I could implement the Cache-Control header... EDIT: Refactored this a little bit so that the in-memory and on-disk caches use the same interface. If you don't mind building this from source, feel free to give the branch a shot. I'll merge it in some time, once I have given this another round of thought :)
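To see which on-disk cache file corresponds to a given schema URL, one could compute the name by hand - a sketch based on the PoC description above, assuming the cache key is the md5 hex digest of the raw URL bytes (the cache directory and the example schema URL are placeholders):

```shell
# Compute the on-disk cache filename for a schema URL.
# Assumption: the key is md5(<url>) with no trailing newline, so printf
# is used instead of echo.
cache_file() {
  url="$1"
  cache_dir="$2"
  printf '%s' "$url" | md5sum | cut -d' ' -f1 | sed "s|^|$cache_dir/|"
}

cache_file "https://kubernetesjsonschema.dev/v1.18.0-standalone/deployment-apps-v1.json" "/tmp/kubeconform-cache"
```

This can be handy when debugging why a particular schema keeps being re-fetched: delete just that one file and re-run.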
Merged. I made a new minor release, v0.4.2 - feel free to try it out!
Thank you very much! Validation works much faster with the disk cache! I tried it out and have some proposals for improvements:
Refs' schemas don't seem to be cached. I have written a test to show the problem and I'll create a PR soon (updated: #26)
This could be a problem, especially if 5xx errors are cached. I took a look at HTTP status codes, and it seems like only 200, 201, 202, 203, 204, 206, 207, 208, and 226 can safely be cached.
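The status-code guard described above could be sketched as a small helper - a hypothetical illustration of the check, not kubeconform's actual implementation (which is written in Go):

```shell
# Return success (0) only for the HTTP status codes listed above as safe
# to cache; everything else (404s, 5xx, redirects) should not be written
# to the disk cache.
is_cacheable() {
  case "$1" in
    200|201|202|203|204|206|207|208|226) return 0 ;;
    *) return 1 ;;
  esac
}

is_cacheable 200 && echo "cache it"
is_cacheable 503 || echo "do not cache"
```

Restricting the cache to success codes avoids the failure mode mentioned earlier, where a cached 404 or 5xx masks a schema that later becomes available.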
Thank you, it suits me very well. I really like how caching is implemented in the Deno JS runtime: everything is cached forever, but there is an easy way to clear the cache. I'm ready to contribute those features if you like, starting with the easiest ones to get familiar with the code.
Looks like 404 responses do not get cached; I've added a test for that in #28
It's on purpose 😬 For full offline capabilities I'd really rather work on a lean way to get the required schemas...
1 & 3: I thought about this, and I'm not sure. Creating the folder might mean creating files in the wrong place if it is misconfigured; forcing it to write to a folder that already exists ensures that you are indeed writing to the correct location. On the location itself - all of these would not exist in the context of a Docker container, so we would be building more complex logic here. If you are looking for something cross-platform, you could probably conditionally set a CACHE_FOLDER variable (warning, not tested):
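The snippet that originally followed this comment did not survive the page extraction; a sketch of what such a conditional might look like, using the conventional per-OS cache locations (the paths and the `kubeconform` subdirectory name are assumptions, as is `CACHE_FOLDER` itself beyond its mention above):

```shell
# Pick a per-OS cache directory. This is a reconstruction, not the
# original untested snippet from the comment.
case "$(uname -s)" in
  Darwin) CACHE_FOLDER="$HOME/Library/Caches/kubeconform" ;;
  Linux)  CACHE_FOLDER="${XDG_CACHE_HOME:-$HOME/.cache}/kubeconform" ;;
  *)      CACHE_FOLDER="$HOME/.kubeconform-cache" ;;
esac

mkdir -p "$CACHE_FOLDER"
echo "$CACHE_FOLDER"
```

The result could then be passed to kubeconform as `-cache "$CACHE_FOLDER"`.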
4: Are you sure the caching of refs is not happening? I just merged the "offline" tests - do you think you could write a failing test? 2: I think we can change that when we get to it; I'd rather keep it as simple as possible for now.
I think it would be best if users did not have to configure anything for the cache and kubeconform handled it itself. This is how most tools work. For example, kubectl keeps its cache in
Maybe I'll do that, but I would try to avoid adding complexity and duplication to Makefiles. It's better to have a simple configuration (
No problem. The only thing that bothers me is that when we do it, people will have to redownload the cache, but I don't think preventing that is very important.
Closing as implemented so far - I would consider PRs improving on the current state, but I am unlikely to spend more time on it myself since I personally do not need it. |
Hello!
Please add caching of resource schemas to disk. It's typical for a tool like this to be run across different projects, and downloading the same schemas every time affects performance. It also becomes very slow on a bad internet connection.