Add configuration.meta/crossplane.yaml support for dependencies to beta validate #5815
Conversation
Signed-off-by: Mehmet Enes <menes.onus@gmail.com>
ezgidemirel
left a comment
Thanks for the PR @enesonus! It looks good in general. I added a couple of nit comments.
```go
)

var (
	configDep2Yaml = []byte(`apiVersion: meta.pkg.crossplane.io/v1alpha1
```

nit:

```suggestion
	configMetaYaml = []byte(`apiVersion: meta.pkg.crossplane.io/v1alpha1
```

Can you also add a YAML for a configuration package?
I can add a YAML for a configuration package for sure, but what would be the difference between that and the second object inside the extensions arg, which has `pkg.crossplane.io/v1alpha1` as its apiVersion?
Ohh, I assumed you're using this meta object as an input, not as an output. Crossplane packages cannot depend on meta objects. Therefore, it's safe to replace this manifest with a configuration package, to verify that the validate command can download dependencies which are configurations.
@ezgidemirel can you please check the changes made at 75380c0 to manager_test.go? I tried to follow your comments but I'm not 100% sure they are OK.
jbw976
left a comment
Something interesting I noticed is that with this approach we lose the ability to validate the XR (and other types that may be defined in the local package itself). For example:

```
[!] could not find CRD/XRD for: aws.platform.upbound.io/v1alpha1, Kind=XEKS
```

Looks like we will only get the schemas for dependencies mentioned in the crossplane.yaml file, and nothing that is directly in the local package being tested. What do we think of that limitation? 🤔
jbw976
left a comment
Thanks @enesonus! I have some questions about how we could maybe model this data better, and about how we could make the unit test easier to understand and possibly test more in isolation, so we have confidence all the cases are working correctly. 🙇♂️
cmd/crank/beta/validate/manager.go
Outdated
```diff
-	deps  map[string]bool // One level dependency images
-	confs map[string]bool // Configuration images
+	deps  map[string]bool        // Dependency images
+	confs map[string]interface{} // Configuration images
```
A descriptive comment of what is expected/allowed in this map would be helpful for readers - it's a bit opaque as it is now. Just seeing this declaration, I can guess that anything is technically allowed to be inserted, but what types are going to be put into the map in a practical sense? Are there any implications to call out here depending on what type gets inserted at runtime?
75380c0 changes the type of `confs` to `map[string]*metav1.Configuration`. I think the type declaration will be descriptive enough and will increase the readability of this code, which was the concern of this comment as I understood it.
cmd/crank/beta/validate/manager.go
Outdated
```go
		return errors.Wrapf(err, "cannot download package %s", image)
	}
	switch c := m.confs[image].(type) {
	case bool:
```
From this switch statement, I can tell that we expect `bool` or `*metav1.Configuration` to be included in this map. Interestingly enough, both of these types end up with a usable `*metav1.Configuration` after processing occurs in this switch.

Would it make more sense to model `m.confs` as `map[string]*metav1.Configuration` instead? When we find just the image, we could insert the string key with a nil value. When we find the image and a `*metav1.Configuration`, we could insert it with a non-nil value.

That way, we can tell a couple of things while using a stricter limit on the types we allow/expect, which is arguably easier to read and reason about:

- we know we have seen an image before if the key is present
- we know whether we have a final `*metav1.Configuration` to use by whether the value is nil or not

Would that work better in your opinion? 🤔
This seems easier to read and understand, but with that approach we will be assigning each configuration we download to the `m.confs` map.

I initially designed my approach as you described, but it felt like we might use too much memory in cases where the user loads a lot of configurations. Looking at it now, though, I also feel like a bunch of configuration YAML files wouldn't be a problem for us in the memory department unless we have something like 1000 of them. I will change this to your proposed approach.
At 75380c0 I updated the code to use your proposed logic, please let me know if there are any issues!
```go
		"name": "config-pkg",
	},
	"spec": map[string]interface{}{
		"package": "config-dep-2:v1.3.0",
```
My mental model of the dependency graph is already confused by this point 😂 - would you be able to add a comment somewhere that shows a quick little ASCII tree of the dependency graph used in this test?

I tried to construct it myself, but that actually confused me a bit, because I'm not sure if there are 2 separate config packages in play here. Are the 2 configuration entries in this extensions list referring to the same underlying configuration (e.g. one is the pkg and one is the meta, but for the same configuration), or to two entirely separate configurations?
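Such a comment might look something like the sketch below. The exact graph is an assumption on my part, assembled from package names that appear elsewhere in this thread:

```text
// config-pkg (Configuration)
// └── config-dep-2:v1.3.0 (Configuration)
//     └── provider-dep-1:v1.3.0 (Provider)
```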
It may make sense to have a 1st test case for the config pkg and a 2nd test case for the config meta, so you are testing in isolation that they both work independently.

Right now, with them possibly intertwined (but I'm not sure exactly if that's the case 😇), it may be hard to tell whether they are both working OK, or whether you just happened to get all the final dependencies correct because the dependencies overlap between the two. If that were the case, one could work while the other one doesn't, yet the final result would still be the same due to the overlapping dependencies.

Then in a 3rd separate case, if the config meta and config pkg are indeed separate things, it would be useful to make sure it's all working when both are specified.
I updated the tests with 3 cases, which use Configuration.meta and Configuration.pkg separately, plus a 3rd case with both, in commit 75380c0.
Co-authored-by: Ezgi Demirel <ezgi@upbound.io>
Signed-off-by: Mehmet Enes <94247411+enesonus@users.noreply.github.com>
@jbw976 for the scenario we want to cover, the configuration is still under development. If the author wants to test the XR against the XRD, they can give it directly to the validate command just like any other schema. The download and schema extraction logic is for improving the user experience. I don't think it's a limitation in this case.
```go
	configPkg = []byte(`apiVersion: meta.pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
  name: config-pkg
spec:
  dependsOn:
  - provider: provider-dep-1
    version: "v1.3.0"
```
@enesonus, in the test, you're mocking this meta resource as an OCI layer. However, meta resources are not real resources and there are no images for them. Therefore, packages cannot depend on a meta resource.

You can update this meta resource to a real configuration package resource to test the scenario of a configuration depending on a configuration package.
The way I'm reading FetchBaseLayer(), I'm expecting the mock to return the contents of the base layer, which will always have a meta pkg object for extraction in extractPackageContent(). So I think this one is correctly a meta.pkg type, but then providerYaml and funcYaml should be consistent and also be meta.pkg types. So I think those are the ones to update for correctness in this test?
Actually, I reread this code and saw that we don't even return providerYaml and funcYaml. We can simply use a real Configuration.pkg like configuration-aws-network:v0.12.0 and even get rid of the mocking. About the crossplane.yaml: since all extensions get processed and converted to *unstructured.Unstructured, we can provide it directly inside the test case as *unstructured.Unstructured and not use any files. This is because in this test we are only running addDependencies() and PrepExtensions(); these functions don't have anything to do with downloading providers and functions.

cacheDependencies() is the function which needs the provider and function information in order to download them, and we don't use it in this test.

I think directly using FetchBaseLayer() is better than mocking. I will change these tests to use it directly now.
You can see the changes at e74d1fc. It made the code much shorter and easier to follow IMO.
It is easier to read/follow now because it's shorter and more direct, but I do have a couple of concerns with this approach:

- It puts a dependency from the unit test on the `xpkg.upbound.io` service being up and healthy. It's typically better to avoid outside service dependencies in unit tests. Do we have examples you've seen in the code base already where we are doing this (i.e. relying on the real `xpkg.upbound.io`)? E.g., if `xpkg.upbound.io` can't be reached during the unit test, then it fails with something like:

```
--- FAIL: TestConfigurationTypeSupport/SuccessfulConfigPkg (0.05s)
    manager_test.go:164:
        All dependencies should be successfully added from Configuration.pkg
        addDependencies(...): -want error, +got error:
          any(
        +   e`cannot download package xpkg.upbound.io/upbound/configuration-aws-network:v0.12.0: cannot get config: reading image "xpkg.upbound.io/upbound/configuration-aws-network:v0.12.0": Get "https://xpkg.upbound.io/v2/": dial tcp: lookup xpkg.upbound.io: no such host`,
          )
```

- I am not sure this test would keep working when you finish the work for "`validate` command should download/cache full graph of dependencies" (#5803) and "`validate` command should support dependencies of providers/functions too" (#5826). The `SuccessfulConfigMeta` test case would run and then try to fetch `function-dep-1` to look up its dependencies, and fail because that's not a real package that exists.

So I think it's probably still more reliable to isolate our code/runtime dependencies here and mock the fetcher, so we're not relying on a live registry and we can define fake package details entirely within this test. Does that make sense?
So your opinion is that instead of making the test rely on the `xpkg.upbound.io` service being up and healthy, we should continue to use configPkg, providerYaml, and funcYaml with mocking, and change the apiVersion of providerYaml and funcYaml to meta.pkg - am I correct?

I saw some examples of `xpkg.upbound.io` being used [1, 2] and assumed those were using the service, but I checked them now without an internet connection and they worked fine, i.e. they don't depend on `xpkg.upbound.io` being healthy.
Yes sir, that feels like the best approach, and it will also keep working when we extend the implementation to handle deeper dependencies and provider/function dependencies too 🙇♂️
Cool @enesonus! One small thing I just noticed is that the mocked provider/function meta.pkg objects are still using the spec of their pkg counterparts, e.g. they should not have a spec.package.

I think it doesn't affect the functionality of this test because they don't have deeper dependencies, but I would still update them to be consistent. The configuration meta.pkg looks correct, so update the provider/function meta.pkg to be similar 🤓

Probably having a comment saying which package they are for would then be useful for the reader, since that info isn't directly captured in the meta.pkg type itself.
Oh, really sorry about that one. I forgot spec.package there 😅 I removed them and added comments with the package:version info at 83ce98f.
Good point @ezgidemirel, thanks for reminding me of the scope of this scenario! That sounds reasonable to me 👍
```go
	if err != nil {
		return errors.Wrapf(err, "cannot download package %s", image)
	}
	if cfg == nil {
```
I really like the way this function simplified! If we don't have the config yet, we look it up; then we join the flow back and proceed the same way for all cases.
```go
		want  want
	}{
		"SuccessfulConfigPkg": {
			// config-pkg
```
nice, this little dependency tree diagram really helps quickly understand what the test case is setting up and testing! 💪
Description of your changes

This PR enables the `crossplane beta validate` command to use `Configuration.meta` / `crossplane.yaml` for pulling dependencies. With this PR, the `beta validate` command can validate the correctness of Configurations using their `crossplane.yaml` file before the build, without an already published `Configuration.pkg`. Here is what this PR does:

Current usage (with https://github.com/enesonus/configuration-aws-eks/tree/config-meta-validate): we need an already built and published `Configuration.pkg`, specified at examples/configuration.yaml.

New usage (with https://github.com/enesonus/configuration-aws-eks/tree/config-meta-validate): no need for building and publishing the package; dependencies are pulled from `Configuration.meta` / `crossplane.yaml`.

Fixes #5728
I have:

- Run `earthly +reviewable` to ensure this PR is ready for review.
- Added or updated e2e tests.
- Linked a PR or a docs tracking issue to document this change.
- Added `backport release-x.y` labels to auto-backport this PR.

Need help with this checklist? See the cheat sheet.