Apply many files #63
Conversation
Signed-off-by: Gustavo Coelho <gutorc@hotmail.com>
My suggestion here would be to just use jsonnet functionality to achieve this: you can create a single jsonnet file and have it import and merge the other ones.
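A minimal sketch of that suggestion. The file names and object keys below are illustrative only, not taken from the project; the point is simply that one "root" jsonnet file can `import` and combine the per-dashboard files:

```jsonnet
// main.jsonnet - a hypothetical "root doc" that pulls in per-dashboard files.
// Every new dashboard file must also be referenced here.
{
  frontend: (import 'dashboards/frontend.jsonnet'),
  backend: (import 'dashboards/backend.jsonnet'),
}
```

With this pattern, the tool is only ever pointed at the single root file, and jsonnet's own import mechanism does the aggregation.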
@malcolmholmes I agree with @gutorc92's approach. Having a "root doc" can make the generation complex (instead of building each single group of items, you need to process all items in a single one). Another option would be to have a single arg that can be a glob (e.g. …).
@mavimo why does a "root doc" make things more complex? I'm curious to hear about that. I certainly agree that glob-based targeting is a must. I don't recall having done it.
@malcolmholmes one "complexity" is the fact that every single new file needs to be included and referenced in the "root document". In my use-case I have a folder (let's call it …).

Another topic (which I'm not too sure about) is that generation can be slower and consume much more memory if it needs to process a lot of dashboards every time (we expect to reach hundreds of dashboards in a short time). We are actually planning a "check" in CI to publish only changed dashboards (using some sort of git diff to detect changes). Once we merge all the dashboards into the "root file", we need a more complex algorithm to detect whether a dashboard has changed (e.g. because something changed in the root doc that impacts the specific dashboard, even if the dashboard itself has not changed). Maybe my use-case is too specific to be a "real issue" 😄
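For illustration, here is one way the "git diff in CI" check mentioned above could look when each dashboard lives in its own file. This is a hypothetical sketch, not the project's actual pipeline; the repo layout and file names are made up for the example:

```shell
# Hypothetical CI-style check: with one file per dashboard, "which dashboards
# changed" reduces to a plain git diff. Layout and names are illustrative.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
mkdir dashboards
echo '{}' > dashboards/frontend.json
echo '{}' > dashboards/backend.json
git add -A && git commit -qm 'initial dashboards'

# Simulate a change to one dashboard only.
echo '{"title":"frontend"}' > dashboards/frontend.json
git add -A && git commit -qm 'update frontend'

# List only the dashboard files touched by the last commit.
git diff --name-only HEAD~1 -- dashboards/
```

With a single merged "root file", this simple per-file diff no longer works: a change to shared code in the root doc can affect dashboards whose own files are untouched, which is the extra complexity described above.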
This is a use-case I have been thinking about. Increasingly it is becoming clear to me that Grizzly has as much of a role outside of jsonnet, just dealing with raw JSON files. I haven't really thought through how we would cover the various scenarios. What you are suggesting is an interesting idea: given a directory tree full of files, Grizzly can somehow identify the type of each file, then push it to its destination. Note that a JSON file could be a datasource or a dashboard, YAML could be Prometheus rules, etc.

Once we have this, there's an interesting possibility for Grafana: Grizzly will, via the file system, know the mapping between files and dashboard UIDs. Thus, when a user uses …

It is also a relatively significant divergence from the Tanka/Jsonnet pattern, so we need to make sure that we support both patterns, somehow.
This can now be done by pointing Grizzly at a directory instead of a file. |
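In command-line terms (the command shape below assumes Grizzly's `grr` CLI and an illustrative directory name), the difference is simply the argument passed:

```shell
# Before: apply a single file.
grr apply main.jsonnet

# Now: point Grizzly at a directory and it applies every resource file inside.
grr apply dashboards/
```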
Signed-off-by: Gustavo Coelho <gutorc@hotmail.com>
Solves #64