This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

Support new API version allowing charts from Helm repos, per-release git repos #1382

Merged
merged 22 commits into master on Nov 7, 2018

Conversation

squaremo
Member

@squaremo squaremo commented Sep 21, 2018

  • update the CRD and types
  • implement the bit where it releases a chart from a repo spec
  • cache the downloaded charts
  • authentication for chart repos (NB: per-operator, via mounting creds)
  • don't exit or block if there's no git repo configured
  • figure out what to do with requirements.yml

EDIT: qualification on authentication, so I can check it off

@squaremo
Member Author

squaremo commented Sep 27, 2018

Some open questions:

As it stands, this will remove support from fluxd for the v1alpha2 FluxHelmRelease resources.

In other words, you have to do a big changeover and run both the new helm-operator and new fluxd, otherwise you won't be able to see or update FluxHelmRelease workloads via flux; and after doing so, the old resources will be ignored, so you have to carefully convert them at the same time.

NB the API group is different, so there's no automagic translation of the old resources into the new (and Kubernetes < 1.10 doesn't support that anyway).

An alternative would be to support both old and new resources, at least in fluxd (so you can see and update workloads of both versions; but the helm operator only releases things from the new versions). One problem with that is Kubernetes will only respond to kubectl get fluxhelmrelease with resources from one of the versions, which would be a bit confusing.

Should FHRs refer to charts using a repository URL, or a preconfigured repository name (like Helm does)?

Originally I thought all you would need is a repository URL -- and that's true, to some extent. However I'd also understand if people wanted to use, e.g., stable to specify the repo. So, maybe we could allow for referring to names in a repositories.yaml? (Though this introduces some problems with caching, annoyingly).

Are chartPullSecrets the best way to supply credentials?

I haven't implemented authentication yet, but the current design is:

  • in the FluxHelmRelease, you refer to a chartPullSecret
  • that names a secret in the same namespace, which has an entry in the same format as imagePullSecrets
  • .. but it may also have certificates etc.
How does it handle requirements.yml?

For git-path releases, we can just assume that any vendored charts are in the filesystem*. But for charts from a helm repo, the dependencies (as given in requirements.yml) would presumably have to be downloaded first -- how and when do we do that?

*I think?

EDIT: add question on requirements.yml

@stefanprodan
Member

Building with make doesn't work on macOS since there is no sha256sum on High Sierra. For anyone trying to test this on macOS you can do the following:

brew install coreutils
sudo ln -s /usr/local/bin/gsha256sum /usr/local/bin/sha256sum

@dholbach
Member

dholbach commented Oct 2, 2018

Is gsha256sum something you had to install separately as well? (I'm guessing this because of /usr/local.) If it is, it might be good to document that as part of the build instructions.

If it's a tool that's generally around in build environments on MacOS, we should probably special-case this.

@hiddeco
Member

hiddeco commented Oct 2, 2018

@dholbach gsha256sum is included in coreutils


Just want to make a note that installing coreutils is optional.

macOS has a native shasum command, so you can easily mimic sha256sum by adding `function sha256sum() { shasum -a 256 "$@"; } && export -f sha256sum` to your .bashrc.

Or in case make needs access:

$ cat <<'EOF' > /usr/local/bin/sha256sum
#!/bin/bash
shasum -a 256 "$@"
EOF
$ chmod +x /usr/local/bin/sha256sum

@squaremo
Member Author

squaremo commented Oct 2, 2018

Building with make doesn't work on macOS since there is no sha256sum on High Sierra

Oops! I can probably adapt the Makefile to cope with this.

@stefanprodan
Member

@squaremo I've built both flux and helm-op images, but when enabling automation not all formats work.

Working:

apiVersion: flux.weave.works/v1beta1
kind: FluxHelmRelease
metadata:
  name: steerer
  namespace: istio-system
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: steerer
  chart:
    repository: https://stefanprodan.github.io/steerer/
    name: steerer
    version: 0.0.1
  values:
    image:
      repository: stefanprodan/steerer
      tag: 0.0.1-rc.18

Not working:

apiVersion: flux.weave.works/v1beta1
kind: FluxHelmRelease
metadata:
  name: podinfo
  namespace: test
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: podinfo
  chart:
    repository: https://stefanprodan.github.io/steerer/
    name: podinfo-steerer
    version: 1.2.1
  values:
    canary:
      repository: quay.io/stefanprodan/podinfo
      tag: 1.2.0

@squaremo
Member Author

squaremo commented Oct 8, 2018

Not working:

That's not intended to work -- the supported formats are

container1:
  image: foo:v1
container2:
  image: foo
  tag: v1
container3:
  image:
    repository: foo
    tag: v1

of which I think the first and third are in common use, and the middle is my confabulation.
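For illustration, the mapping from those three forms to a single image reference might be sketched like this (a hypothetical helper, not the actual flux code):

```go
package main

import "fmt"

// imageRef extracts an image reference from one container entry in a Helm
// values map, covering the three supported formats listed above. This is a
// hedged sketch for illustration; the real flux implementation differs.
func imageRef(entry map[string]interface{}) (string, bool) {
	switch img := entry["image"].(type) {
	case string:
		// container2 form: image and tag as sibling fields
		if tag, ok := entry["tag"].(string); ok {
			return img + ":" + tag, true
		}
		// container1 form: the image string already includes the tag
		return img, true
	case map[string]interface{}:
		// container3 form: image is a map with repository and tag
		repo, _ := img["repository"].(string)
		tag, _ := img["tag"].(string)
		if repo != "" && tag != "" {
			return repo + ":" + tag, true
		}
	}
	return "", false
}

func main() {
	for _, entry := range []map[string]interface{}{
		{"image": "foo:v1"},
		{"image": "foo", "tag": "v1"},
		{"image": map[string]interface{}{"repository": "foo", "tag": "v1"}},
	} {
		ref, _ := imageRef(entry)
		fmt.Println(ref) // prints "foo:v1" for each of the three forms
	}
}
```

Note that the "Not working" example above (a top-level `canary:` key with `repository`/`tag` but no `image` key) falls through every case, which is why it isn't picked up.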

@squaremo
Member Author

squaremo commented Oct 8, 2018

Alternative design for providing Helm repo credentials: refer to a secret with a repositories.yaml file in it.

This might also let you just give e.g., "stable" as the repository name, if there's an entry for that in the file -- not sure if that's a good idea, though, since that means changing the secret could change the meaning of the FHR.

You could also specify certificates in the repositories.yaml and include them in the secret (idea: a tool which will convert a repositories.yaml file into a secret).

Another alternative: provide this secret to the helm op, rather than each FluxHelmRelease (but it does seem like it belongs with each FHR, since it could be working in different namespaces with different authentication, as the kubernetes controllers do).

@stefanprodan
Member

Regarding authentication, there are two types of auth that we should support: for chartmuseum there is the basic auth option, and for Artifactory there is the access token option.

Basic auth header (chartmuseum):

Authorization: Basic YWRtaW46YW....

Access token header (Jfrog):

Authorization: Bearer eyJ2ZXIikwIj....

The chartPullSecret could look like:

type: Basic/Bearer
token: 
username:
password:
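Packaged as a Kubernetes Secret, that sketch might look like the following (field names are illustrative, not a settled API):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: chart-repo-creds   # referred to by chartPullSecret in the FHR
  namespace: test          # same namespace as the FluxHelmRelease
type: Opaque
stringData:
  type: Basic              # or Bearer
  username: admin
  password: s3cr3t
  token: ""                # set instead of username/password for Bearer
```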

@squaremo
Member Author

squaremo commented Oct 9, 2018

for Artifactory there is the access token option.

What does helm store in repositories.yaml if you add an Artifactory repo?

@stefanprodan
Member

@squaremo I can't answer that since I don't have an Artifactory account. Will get back to you when I get one.

Member

@dholbach dholbach left a comment


Just a few suggestions on the docs part. Thanks for writing this up so clearly!

site/helm-integration.md
```
spec:
  chart:
    gitRepository: git@github.com/weaveworks/flux
```

```
- Kubernetes Custom Resource (CR) manifests contain all the information needed to do a Chart release. There is a 1-2-1 relationship between a Helm Chart and a Custom Resource.
```

@squaremo
Member Author

squaremo commented Oct 9, 2018

Thanks for the suggestions Daniel 🌺

@@ -0,0 +1,4 @@
# NB Helm clients will refuse to play with Tiller that is older. 2.8.2
# is the first release that had checksums, so it's used.
HELM_VERSION=2.8.2


@mellena1

mellena1 commented Oct 9, 2018

@squaremo Just out of curiosity: for this PR, are you planning to add (or have you already added) a way for flux to update the version number of the Helm chart on GitHub? That is, if a new chart version is available in the repo, update the FluxHelmRelease on GitHub, similarly to what happens when a new Docker image is pushed?

@squaremo
Member Author

squaremo commented Oct 9, 2018

Just out of curiosity: for this PR, are you planning to add (or have you already added) a way for flux to update the version number of the Helm chart on GitHub? That is, if a new chart version is available in the repo, update the FluxHelmRelease on GitHub, similarly to what happens when a new Docker image is pushed?

Not in this PR, but it's a desirable feature for sure. The trickiness is this: the helm operator doesn't have access to the config repo -- it just looks at what's happening in the cluster. But perhaps it's something that could be built into fluxd, since that is the Keeper of the Git.

@squaremo
Member Author

Latest commits add support for mentioning a git repo in an FHR:

# ...
spec:
  chart:
    git: git@github.com:squaremo/flux-helm-test
    ref: master
    path: charts/podinfo
  values:
  # ...

This assumes it will be able to clone the git repo with the SSH config and keys available in the container -- which means, by default, any key you've mounted at /etc/fluxd/ssh/identity. In principle you should be able to go further and mount an SSH config mentioning multiple keys, and the multiple keys. (How this can be made easier is TBD).
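For example, a mounted SSH config pointing at two deploy keys might look roughly like this (host aliases and key paths are illustrative):

```
Host github.com-repo-a
  HostName github.com
  User git
  IdentityFile /etc/fluxd/ssh/identity-repo-a
Host github.com-repo-b
  HostName github.com
  User git
  IdentityFile /etc/fluxd/ssh/identity-repo-b
```

An FHR would then use the alias in its git URL, e.g. `git: git@github.com-repo-a:org/repo-a`.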

@squaremo
Member Author

Now the helm operator uses Conditions like a grown-up controller.

@squaremo
Member Author

Image with all of the above: quay.io/squaremo/helm-operator:21e593f

@squaremo
Member Author

  • figure out what to do with requirements.yml

#1450 deals with this for the helm.integrations.flux.weave.works/v1alpha2 operator (the one currently in master), by having the helm command-line client in the image, and calling helm repo up; helm dep build before each release. It's that simple because there's only one chart repo, and all charts are from git.

In the case of the flux.weave.works/v1beta1 operator (i.e., the one in this PR), there's a wider variety of situations. Namely,

  • charts can come from Helm repos

.. but packaged charts come with their own dependencies (helm package won't let you proceed unless charts/ satisfies your requirements.yaml). So we only have to deal with the charts from git.

  • charts can come from more than one git repo

For the purpose of dependencies, I don't think we care where the chart came from, only what it refers to.

  • releases (at some point) may be supplied with different credentials

It seems like the repositories.yaml and index/chart cache belong with the credentials, since those decide what you can access, and all releases sharing credentials will have the same access.
Releases can share a cache, since they will copy what they need from it.

@squaremo
Member Author

squaremo commented Oct 23, 2018

Rebass! This now includes the updating dependencies bit, for charts from git. I pushed a newly (locally) built image: quay.io/squaremo/helm-operator:9de22ad

EDIT: double rebass! I've autosquashed the fixups, now.

@squaremo
Member Author

squaremo commented Nov 5, 2018

I am going to:

  1. remove the section of docs pertaining to authentication (possibly replacing it with an explanation of the lack of authentication);
  2. write a quick guide to upgrading to the new Helm operator;
  3. squash the fixups out;
  4. merge this using the Helm operator alpha "get out of review free" card (though don't let that stop you giving it a look over!).

@demikl

demikl commented Nov 5, 2018

I'm testing with your "nightly" images: quay.io/squaremo/helm-operator:dfb5be1 and accompanying fluxd: quay.io/squaremo/flux:dfb5be1

Installing a Helm release from a HelmRelease object works fine, fetching the chart from a private Helm repo. That's great! But it breaks the auto-update feature of fluxd: although I have a newer image available for one of my deployments (managed by a Deployment manifest, not a Helm release), flux won't update it automatically as soon as there is a HelmRelease object in my git repository. A manual trigger through fluxctl release ... works fine.

In the logs of flux, it seems that the automatic sync loop is interrupted by this error:

ts=2018-11-05T14:55:57.98102016Z caller=images.go:17 component=sync-loop msg="polling images"
ts=2018-11-05T14:55:58.020065631Z caller=images.go:33 component=sync-loop error="checking services for new images: Unsupported kind helmrelease"

The HelmRelease object in my git repository has the following header:

apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: glob:1.0.*.PREPROD
...

@squaremo
Member Author

squaremo commented Nov 5, 2018

@demikl Hey thanks for trying the nightly out, much appreciated!

Well spotted on the automation failure. I realised today that the code as of the last nightly build doesn't properly understand the new API version (there's one line missing ... argh). I've fixed that, and rebased, and so on. I'll do another nightly build er, today.
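To illustrate the failure mode (this is a hypothetical sketch, not the actual flux code): if the sync loop consults a registry of kinds it knows how to poll images for, a single missing entry produces exactly this sort of error.

```go
package main

import (
	"fmt"
	"strings"
)

// kindsWithImages is a hypothetical registry of resource kinds the sync loop
// knows how to poll images for; the real flux code is structured differently.
var kindsWithImages = map[string]bool{
	"deployment":      true,
	"fluxhelmrelease": true,
	// "helmrelease": true, // the sort of "one line missing" described above
}

// checkImages mimics the error from the log when a kind is unregistered.
func checkImages(kind string) error {
	k := strings.ToLower(kind)
	if !kindsWithImages[k] {
		return fmt.Errorf("checking services for new images: Unsupported kind %s", k)
	}
	return nil
}

func main() {
	fmt.Println(checkImages("HelmRelease"))
	// prints: checking services for new images: Unsupported kind helmrelease
}
```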

This sets out the main points of the design for the helm operator (and
the bits of fluxd that interpret FluxHelmReleases).
So that you can use charts from arbitrary repos (in principle), give
the GitChartSource fields for a URL and ref (branch).
For the helm operator to be able to sync from git repos mentioned by
charts, rather than from one pre-configured repo, we need to be able
to dynamically start mirroring git repos, and respond when they have
been refreshed.

To that end, this commit introduces git.Mirrors, essentially a map to
which you can idempotently add repos to be mirrored.
This is a change in design: rather than having a git repo that you
provide to the operator via command-line args, let people specify for
each FluxHelmRelease if it comes from a git repo, and if so, where in
the repo the chart lives (branch and path).

To adapt to this design, instead of having a single *git.Export which
is updated when the single (command-line argument) git repo has new
commits, we use the newly minted git.Mirrors to dynamically keep track
of all git repos in play.
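The idempotent-add behaviour described above can be sketched as follows (illustrative types, not the actual git.Mirrors API in this PR):

```go
package main

import (
	"fmt"
	"sync"
)

// Mirrors is a hedged sketch of the idempotent mirror map described in the
// commit message above; names and fields are illustrative.
type Mirrors struct {
	mu    sync.Mutex
	repos map[string]struct{} // keyed by remote URL
}

func NewMirrors() *Mirrors {
	return &Mirrors{repos: make(map[string]struct{})}
}

// Mirror registers url for mirroring and reports whether it was newly added;
// repeated calls with the same URL are no-ops, so each FluxHelmRelease can
// safely (re-)register the repo it refers to.
func (m *Mirrors) Mirror(url string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, ok := m.repos[url]; ok {
		return false
	}
	m.repos[url] = struct{}{}
	// a real implementation would also start a background fetch loop here,
	// and notify listeners when the mirror has been refreshed
	return true
}

func main() {
	m := NewMirrors()
	fmt.Println(m.Mirror("git@github.com:squaremo/flux-helm-test")) // true: newly added
	fmt.Println(m.Mirror("git@github.com:squaremo/flux-helm-test")) // false: already mirrored
}
```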

Within that scheme there are a few choices to be made:

 * How do you provide keys for authenticating with git+ssh?

I have punted on this for now, and left it to the user to mount a key
into the correct location (same as before). This means you can use at
most one identity, and if that's a "deploy key" for github, at most
one repo, since github doesn't let you reuse keys.

You can of course mount more keys and a ssh_config which points at
them, but it could be made easier.

 * Do we keep a mirror per FluxHelmRelease?

That would be safe; however, it would most likely do much more work
than necessary, since it's expected that some git repos will contain
more than one chart.

Instead, one mirror is kept per git repository. If each FHR came with
its own SSH key, we might need to segregate instances of the same git
repo (since some might have valid creds and some not). But this is
good enough for now.

 * Can we pool clones?

Each FHR can refer to a distinct path and branch, even if it shares
the git repo. So to be safe, I have given each FHR its own clone;
these are updated when the files in the branch and path have changed
since the last update.
I.e., for the recent change to allow each FHR to specify its own git
repo.
Conditions are a way for a controller to report the status of a particular resource. In the case of FluxHelmReleases, there are (at least) a couple of things we want to report on:

 - has the controller managed to get the chart mentioned in the FHR?
 - has the controller managed to release the chart?

These can be used to determine whether the FluxHelmRelease is "ready",
for various definitions of "ready" (e.g., has it released the chart
version given in the FHR? If not, why not?)
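A status carrying those two conditions might look roughly like this (the condition types are the two mentioned above; the other fields follow Kubernetes convention but are illustrative):

```yaml
status:
  conditions:
  - type: ChartFetched
    status: "True"
    lastUpdateTime: "2018-11-05T14:55:57Z"
    reason: ChartFetched
    message: "chart cloned from git and unpacked"
  - type: Released
    status: "False"
    lastUpdateTime: "2018-11-05T14:55:58Z"
    reason: InstallFailed
    message: "helm install failed; see operator logs"
```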

To be able to set conditions reporting the status of charts from Helm
repos, the bit doing the condition setting must know what happens when
attempting to fetch the charts. That bit is the `chartsync` package,
which has the core loop reconciling the state of the cluster, the git
mirroring, and (now) the cached charts. So, move the chart fetching
code, and its call sites, to `chartsync`.

Moving the chart fetching code from `release` to `chartsync` makes
sense for another reason: it means the argument to
`release.Install(..)` can be a path to the chart, uniformly, and no
interpretation of the chart source in FluxHelmRelease is required
there.

When calculating the changes to conditions, we have to take into
account those already in effect. As an approximation to checking the
cluster value each time, mutate the local value, so that later changes
will be considered against those earlier.
We commit the Go code that's generated by
`./bin/helm/update_codegen.sh`, rather than generating it afresh. So
we should at least check that it still corresponds to the source code
it's generated from.
Instead of requiring people to cut across from the old version of the
helm operator to the new, support both in fluxd.

This means you can not only have different versions of the custom
resources, you can keep running the old operator, while you transition
to the new.

To keep the `kind` part of Resource IDs unambiguous, I have renamed
the new custom resource to `HelmRelease` (it was already in the API
`flux.weave.works/v1beta1`). This is the same as the kind bitnami's
helm operator uses, but the `apiVersion` is different (so Kubernetes
won't confuse them), and it's unlikely you'd be running both
operators.
As a stop-gap in the absence of per-_resource_ authentication, this
commit makes it possible to provide repository credentials by mounting
a `repositories.yaml` file into the operator container.

I've replaced the aspirational documentation on using
`chartPullSecret`s with practical documentation on how to mount that
aforementioned file and SSH keys; and, updated the example deployment
with example volumes and mounts.
@squaremo
Member Author

squaremo commented Nov 5, 2018

Nightlies: quay.io/squaremo/flux:c3d0052 and quay.io/squaremo/helm-operator:c3d0052

UPDATE: rebuilt with updated kubeyaml, quay.io/squaremo/flux:5a7b7fe and quay.io/squaremo/helm-operator:5a7b7fe

I have added the new CRD and not removed the old one, since deleting
the old one will also delete the custom resources, which may provoke
the old operator into deleting the corresponding releases.

I've used the annotation

    helm.sh/hook: crd-install

on both CRD manifests. This means they'll get applied before other
resources, and that they won't be removed again if the chart is
removed or changed.
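In a chart, that annotation sits in the CRD manifest's metadata; a minimal sketch (the CRD name follows from the kind and group in this PR, the rest is illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: helmreleases.flux.weave.works
  annotations:
    "helm.sh/hook": crd-install
spec:
  group: flux.weave.works
  version: v1beta1
  names:
    kind: HelmRelease
    plural: helmreleases
  scope: Namespaced
```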
@squaremo squaremo changed the title WIP Support charts from Helm repo Support new API version allowing charts from Helm repos, per-release git repos Nov 7, 2018
@squaremo squaremo merged commit 4586fb5 into master Nov 7, 2018
@squaremo
Member Author

squaremo commented Nov 7, 2018

Merged with helm operator "Get out of review free" card. Now retiring that card.

@demikl

demikl commented Nov 14, 2018

@demikl Hey thanks for trying the nightly out, much appreciated!

Well spotted on the automation failure. I realised today that the code as of the last nightly build doesn't properly understand the new API version (there's one line missing ... argh). I've fixed that, and rebased, and so on. I'll do another nightly build er, today.

Hi! I've spotted that you've released Helm Operator 0.5.0, but there's no associated updated Flux, so creating HelmRelease objects will break the sync loop. Do you plan on releasing an updated Flux soon?

@squaremo
Member Author

there's no associated updated Flux so creating HelmRelease objects will break the sync loop.

Most stuff will work with the current release of the flux daemon (including syncing resources). Image updates, through fluxctl or automation, won't work until flux 1.8.1, which will be along shortly.
