deployed components should have fixed version #34

Closed · 7 tasks done
phoracek opened this issue Apr 3, 2019 · 19 comments

@phoracek
Member

phoracek commented Apr 3, 2019

We should use images with a specific version instead of :latest. One reason is that we cannot be sure our manifests will work with whatever happens to be shipped in a :latest image. Another is that pinning lets us make sure all the components work well together. Finally, it would make upgrades more explicit: an upgrade goes from components:x to components:y (see the example at the end of this comment).

@SchSeba @booxter does it make sense and is it doable?

  • SR-IOV DP
  • SR-IOV CNI
  • Linux bridge CNI
  • Linux bridge marker
  • Multus
  • kubemacpool
  • nmstate

TODO: keep a table in the README that matches component/image versions to each release.
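For illustration, a hypothetical manifest fragment (registry, image name and tag are made up) of what pinning would look like:

```yaml
# Hypothetical manifest fragment; registry, image name and tag are placeholders.
containers:
- name: kube-multus
  # pinned to a specific version we have tested against our manifests
  image: quay.io/example/multus:v3.2
  # image: quay.io/example/multus:latest   # what we want to move away from
```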

phoracek modified the milestone: v0.4.0 · Apr 4, 2019
@booxter
Contributor

booxter commented Apr 8, 2019

@phoracek my feeling is that SR-IOV upstream is not particularly interested in maintaining release tags / branches. There are several tags (notably release-v1, latest, and snapshot) for some of them, but I don't think there is a plan to properly tag any of that upstream. I am also not sure whether Docker Hub / quay keep all uploaded versions or just latest; when a new version is built, how long does the registry keep the older ones?

Is this a suggestion to go back to upstream and push for clearer release versioning / branch management? Or is it, e.g., a suggestion to produce our own builds in kubevirt repos and use those instead of upstream builds?

@phoracek
Member Author

phoracek commented Apr 8, 2019

@booxter I'm not aware of them removing old tags, never heard of it. We need to stick to a specific version, otherwise there is no guarantee that after a :latest image update, the manifests we keep in the operator will still work. We could pin to a specific image hash, but that sounds risky.

Having proper tags on u/s would be ideal, but I'm not sure we could make that happen. Having image copies under kubevirt sounds like blasphemy, but it is easier to handle.

There is a request/pattern in HCO: they want every operator to expose ImageRegistry and ImageTag in its operator config. That is impossible while we pull various third-party images.

If we copy the set of third-party images under our organization, we can test that exact set with the operator (and with each other), tag them to match the operator version, and keep it all nice and tidy.

@booxter @dankenigsberg what do you think? Do we copy third-party images under our organization and make it simple and easily tested, or do we fight the u/s projects to tag their images (even where they have an u/s image, we still lack a repo for kubemacpool, and we need to build our own bridge CNI with VLAN support)?

edit: also, if we keep our own custom set of images on u/s, it will be way easier to mirror them to d/s, since we keep them all together with known versions

@dankenigsberg
Member

An upstream versioned image is best.

For projects that have upstream versions, a versioned kubevirt image is a close second.

If we need d/s-only patches (e.g. no recent upstream version), it would be cool if we had an RPM-style release number that includes the git hash, so the operator always takes the latest of those.

"latest" sounds fragile, and should be used only if we are too lazy/busy to get a clear tag.

Please fight with u/s until they have versions and formal releases. But until they do, we must have our own images.

If we can avoid maintaining our own image by exposing SriovImageRegistry that defaults to ImageRegistry, I think we should.
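Something along these lines, for example (names are only an illustration):

```yaml
# Hypothetical config fragment; the per-component registry falls back to the global one.
ImageRegistry: quay.io/example
# SriovImageRegistry is optional; when unset, the operator would default it to ImageRegistry.
SriovImageRegistry: quay.io/example-sriov
```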

HTH.

@phoracek
Member Author

phoracek commented Apr 8, 2019

Thanks @dankenigsberg

Could we do this? (rough CR sketch below)

SriovDPImage: registry/image_name:image_tag
...
  • Note that if someone overrides a default image (e.g. on d/s), they need to make sure their version is roughly the same as the one on u/s (so the operator manifests keep working)

  • All images must be versioned

  • If there is a good enough tag on u/s, use it

  • If there is none (or we need a newer version), make a temporary kubevirt/cluster-network-addon-<component_name>:<operator_version>; once we get u/s tagged, we switch to it

Would you agree?
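Putting the above together, a rough sketch of how the CR could look (kind, field names and registries here are only illustrative):

```yaml
# Illustrative sketch only; real API group, kind and field names may differ.
apiVersion: networkaddonsoperator.network.kubevirt.io/v1alpha1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  imagePullPolicy: IfNotPresent
  # every deployed component pinned to a specific, tested tag
  multusImage: registry/multus:image_tag
  linuxBridgeImage: registry/bridge-cni:image_tag
  sriovDPImage: registry/sriov-device-plugin:image_tag
```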

@rthallisey

I think that's fine @phoracek. The HCO only needs a mapping from version, containerRegistry, and imagePullPolicy to whatever you expose (roughly as sketched below the list); you'll just need to add something for imagePullPolicy.

  • version -> image_tag
  • containerRegistry -> registry
  • imagePullPolicy -> ?
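For example (values made up):

```yaml
# Hypothetical HCO-provided values and the operator fields they would map onto.
version: v0.4.0                      # -> image_tag
containerRegistry: quay.io/example   # -> registry
imagePullPolicy: IfNotPresent        # -> needs an equivalent knob in the operator
```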

@phoracek
Member Author

phoracek commented Apr 9, 2019

Thanks. We already have a single imagePullPolicy parameter in the operator's CR that is applied to all deployed images. I will start by adding the images to the CR, making it possible to switch between them live. That should bring us closer to supporting upgrades.

@phoracek
Member Author

kubemacpool resolved in #83

@phoracek
Member Author

Linux bridge CNI resolved in #85

@booxter
Contributor

booxter commented Apr 30, 2019

@phoracek do we just pick the initial version at random, like v0.1.0? Or is there some guidance?

@SchSeba
Contributor

SchSeba commented May 1, 2019

@booxter I think v0.1.0 should be fine

@phoracek
Member Author

phoracek commented May 1, 2019

@SchSeba @booxter I'm not sure about the versions. We cannot use an u/s version, since they do not version it; that's fine. However, if we introduce our own versioning, we would need to maintain a table saying which u/s version each image was built from (otherwise it would get pretty chaotic). Do you think we could just use the short commit hash (first 7 characters of the commit id) instead? There is nothing forcing us to use semver AFAIK.

@dankenigsberg
Member

I think we could still use semver: use the latest upstream version, then add a release suffix, which may include the downstream git hash.

This would give us a chance to re-sync with upstream if it wakes up and releases a new version.

e.g. for https://github.com/intel/sriov-network-device-plugin/releases: v2.0.0-8.gitdeadd00d
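Roughly, the pieces of such a tag would be (illustrative breakdown, not a formal convention):

```yaml
# Illustrative breakdown; registry and image name are placeholders.
# v2.0.0       -> latest upstream release we are based on
# 8            -> downstream release counter on top of that version
# gitdeadd00d  -> 'git' prefix plus the short commit hash of the build
image: registry/sriov-network-device-plugin:v2.0.0-8.gitdeadd00d
```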

@SchSeba
Contributor

SchSeba commented May 1, 2019

@phoracek Sure no problem.

@dankenigsberg before my change, for multus and sriov-cni we pull from latest (not a specific version), and for the device plugin we pull from @booxter's repo.

PR #92

Can we leave the version tag (v0.1.0) together with the commit hash, as requested by Petr?

@phoracek
Member Author

phoracek commented May 1, 2019

@SchSeba if I understood @dankenigsberg properly, for multus we should do v3.2-1.gitbf61002, where v3.2 is the last release, 1 shows this is our first release, and commit id bf61002 is the top of master. If we were to ship a released version (not latest), we would simply drop the git commit ID part.

@SchSeba
Contributor

SchSeba commented May 1, 2019

@booxter what version should I tag the sriov-device-plugin?

@booxter
Contributor

booxter commented May 1, 2019

@SchSeba (answered in the other place, but) since they have release-v1 branches for both the CNI and the DP, I assume anything below v1 is a no-go. I wouldn't suggest going far beyond v1 either, nor introducing our own minor / patch versions w/o their involvement.

So in essence, v1 is probably the way to go (unless we have some constraint in operator upgrades that would require something more semantic).

@rthallisey

I'd expect these vars to be defaulted by your operator if they are not explicitly set on the CR. You probably already have that, but just a reminder in case you don't.

@SchSeba
Contributor

SchSeba commented May 2, 2019

@phoracek
Member Author

All components are now pinned to fixed versions.
