Application auto-discovery #1766

Closed
alexec opened this issue Jun 17, 2019 · 20 comments
Labels
component:applications-set (Bulk application management related) · enhancement (New feature or request) · type:usability (Enhancement of an existing feature)

Comments

@alexec
Contributor

alexec commented Jun 17, 2019

Is your feature request related to a problem? Please describe.

A number of users would like something a bit more than app-of-apps. This would probably be served by auto-discovery of apps within a repo.

Describe the solution you'd like

Point Argo CD at a repo and let it fly.

@alexec alexec added the enhancement (New feature or request) and cluster management labels Jun 17, 2019
@alexmt
Collaborator

alexmt commented Jun 17, 2019

Related issue: #1431

argoproj.io/AppsSource - implements application auto-discovery. Instead of forcing the user to create an application for each directory, the argoproj.io/AppsSource should scan the repo and create apps (and delete obsolete ones) automatically.
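
For illustration only: the AppsSource resource described above was never implemented, so the following is a purely hypothetical sketch of what such a manifest might have looked like (every field name here is an assumption):

apiVersion: argoproj.io/v1alpha1
kind: AppsSource            # hypothetical kind -- this CRD does not exist in Argo CD
metadata:
  name: monorepo-apps
  namespace: argocd
spec:
  repoURL: https://github.com/example-org/deployments.git   # hypothetical repo
  targetRevision: HEAD
  path: apps                # scan this directory; each subdirectory becomes an Application
  project: default          # default AppProject for discovered apps
  prune: true               # delete Applications whose directory disappears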

@wmedlar
Contributor

wmedlar commented Jun 17, 2019

How would an AppsSource determine the corresponding AppProject for autodiscovered apps? A user might want to be able to specify that certain directories belong to a permissive project that allows certain cluster-scoped resources, while other directories belong to a far more restrictive project.

@joshuasimon-taulia

joshuasimon-taulia commented Jun 17, 2019

Here is my current workaround for auto-generating apps-in-app from a Kubernetes manifest monorepo. ./components is symlinked to openshift/templates/components. The general idea is that the Helm template for the parent app loops through components/* and creates an Argo app for each folder on the fly. Then, I manually create a parent app for each deployment environment (e.g. integration, staging, production), each of which sources a unique values-$ENVIRONMENT.yaml file from the parent Helm template folder.

{{- /* generate one Application per components/<name>/configuration.json */ -}}
{{ range $path, $bytes := .Files.Glob "components/*/configuration.json" }}
  {{- $app := base (dir $path) }}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ $app }}
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: {{ $.Values.namespace }}
    server: {{ $.Values.server | default "https://kubernetes.default.svc" }}
  project: {{ $.Values.project | default "default" }}
  source:
    path: openshift/templates/components/{{ $app }}/argotest
    repoURL: git@github.com:ops-stuff/kube-templates
    targetRevision: argocd
  syncPolicy:
{{ end }}
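
To make the per-environment parent apps described above concrete, here is a minimal sketch of what one such parent Application could look like; the name, chart path, and values file are assumptions, while spec.source.helm.valueFiles is the standard Argo CD field for selecting a values file:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: parent-staging                 # hypothetical parent app for the "staging" environment
  namespace: argocd
spec:
  project: default
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  source:
    repoURL: git@github.com:ops-stuff/kube-templates
    targetRevision: argocd
    path: parent-app                   # hypothetical directory containing the chart above
    helm:
      valueFiles:
      - values-staging.yaml            # per-environment values file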

I would imagine some folks keep their Argo app YAML/k8s manifests in the same repo as their application code. Auto-discovery of a configurable filename pattern under a certain GitHub org would probably work well (like the Jenkins GitHub Organization Folder plugin).

@alexmt
Collaborator

alexmt commented Jun 18, 2019

@wmedlar I was thinking that the user might add a .argocd-app file to the app directory and specify app-specific settings like project, and optionally kustomize, ksonnet, etc. properties. The AppsSource might have a default project value.
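
As a rough illustration of that idea (the .argocd-app file was only a proposal and is not an implemented Argo CD feature, so every key below is an assumption), such a file might have looked like:

# .argocd-app -- hypothetical per-directory settings file, never implemented
project: team-a                 # AppProject to use instead of the AppsSource default
kustomize:
  namePrefix: team-a-
helm:
  valueFiles:
  - values-production.yaml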

@dave08

dave08 commented Jun 18, 2019

It might also be nice if a list of folders could be provided to the AppsSource to decide the order in which the applications are applied, since order is important, especially for infrastructure. Then it wouldn't try to auto-detect anything; it would just use that list, avoiding a bunch of Application files.

@OmerKahani
Contributor

@alexmt how is a .argocd-app file different from an application YAML?

@milesarmstrong
Contributor

milesarmstrong commented Jun 18, 2019

Not quite auto-discovery, but I think it's related enough to share. This is our current App-of-Apps approach:

  1. The pipeline that bootstraps the cluster uses kubectl apply to create a single Application CRD pointing at a base repo for that class of cluster.
  2. The base repo contains multiple Application CRDs, each of which represents a team (or some other collection of related services) and points at a repo that team owns.
  3. The team repos contain multiple Application CRDs that point to their service repos, which contain the manifests.
                            Team's
                         Application
                            repos

                            +---+            +---+
                            |   |            |   |
                            | A +------------> M |
 Cluster                    | . |            |   |
bootstrap    +---+      +-->+ . |     +---+  +---+
  repo       |   |      |   | . |     |   |
             | A +------+   | A +---->+ M |
  +---+      | . |          |   |     |   |
  |   |      | . |          +---+     +---+
  | A +----->+ . |
  |   |      | . |          +---+            +---+
  +---+      | . |          |   |            |   |
             | A +------+   | A +------------> M |
             |   |      |   | . |            |   |
             +---+      +-->+ . |     +---+  +---+
                            | . |     |   |
         Base repo for      | A +---->+ M |
         cluster class      |   |     |   |
                            +---+     +---+

                                     Team's service
                                         repos

Every repo is kustomized, and we use overlays to patch the spec.source.path to the correct overlay for the env, e.g. kustomize/overlays/<env> all the way down the tree.
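
As a minimal sketch of the kind of overlay patch this describes (the overlay layout and Application name are assumptions), a kustomization that rewrites spec.source.path for a given environment could look like:

# kustomize/overlays/staging/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- target:
    group: argoproj.io
    version: v1alpha1
    kind: Application
    name: team-app            # hypothetical Application name from the base
  patch: |-
    - op: replace
      path: /spec/source/path
      value: kustomize/overlays/staging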

This gives us a nice logical separation where:

  • The Application in the cluster bootstrap repo rarely has to change
  • Applications in the base repo change infrequently, e.g. if we add/rename a team
  • Applications in the team's repos can change frequently, but they own these repos.

At the moment, we're just missing:

  • The ability to create Application CRDs in any namespace
  • A concept of hierarchy between Applications, and a display of this in the UI similar to the diagram above.

Argo is looking awesome so far, thanks for all your work!

@kwladyka

kwladyka commented Jun 18, 2019

Describe the solution you'd like

But what is the real issue that we are solving?

I think it needs to work this way:

  • Applications are in separate folders as it is now
  • There is a place where we choose Applications to deploy for environment

So the things we can discuss are:

  • How to choose Applications? By name? By path? Auto-detection?

Personally I feel paths make it easy, and it is a common solution. There is no reason to make it more complex in code; it also looks like a maintenance trap for users. But maybe somebody can show a good working example from other software in this context?

If you feel you are repeating yourself in your Application of Applications, make a Helm chart from it.

The only issue I have is: when creating a new Application I copy a manifest and sometimes forget to change the Application name or namespace :/ It could be a disaster in production. If you can prevent it by detecting the possible mistake and pausing to confirm, or something like that, it could be an advantage, but it is not critical.

Going further, maybe people need a good best-practice Helm chart example for this purpose instead of these changes?

Oh ok, I see a second thing to improve:
I have to run my Application of Applications manually when starting a new cluster, after manually deploying Argo CD, by:

kubectl --context=[cluster-name] apply -k k8s/argo-cd/production
kubectl --context=[cluster-name] apply -f manually/argo-cd-clusters-bootstraping/production.yaml

Generally I decided to keep it separate from the Argo CD Application, because when Applications are syncing, the Bootstrap Application also changes status, so I want to keep it alone to avoid confusion about the Argo CD Application status.

So when I want to make a change to the Bootstrap Application manifest I have to do it manually with apply. I would like to keep it as part of GitOps, but I keep the Bootstrap Application separate because of this status confusion.

So maybe some kind of self-upgrading without a loop? Like, from the Bootstrap Application I could point to the Bootstrap Application path in the repo without creating an Application to sync itself (a loop)? It is hard to predict what issues looping it could bring, so I prefer to do it manually, because these are very rare changes.

Visualisation of the issue:
I have to update the Bootstrap Application manually:
Bootstrap Application -> App1, App2, App3

GitOps, but loop:
Bootstrap Application -> Bootstrap Application, App1, App2, App3

@stevesea
Contributor

I currently use my app-of-apps setup to:

  1. define logical groupings of related Applications (e.g. I might have 'infrastructure', 'front-end', 'back-end', etc or team-based groupings)

  2. define common parameters to apply to a set of Applications (e.g. defining destination cluster/namespace, Helm params or Kustomize overlays, setting targetRevision, which project they belong to, etc). There are also multiple cases of overriding things... for instance,

    a) some apps need to override ignoreDifferences to allow for HPA, others don't.

    b) another odd case: I've got N services within my project; some of them use Helm, the rest use Kustomize. For Helm, I'd want to set an env-specific values override; for Kustomize, I'd want to set an env-specific overlay.

  3. In my setup, I've got a 'nonprod' Argo CD deployment. It manages multiple 'test' and 'staging' environments. I deploy the same microservices to many environments from a single Argo CD, so my app-of-apps setup also needs to create unique names for all the AppProject and Application CRDs.

Where do those use cases fit within application auto-discovery?

@johnmarcou
Contributor

johnmarcou commented Jun 20, 2019

Hi,

This is the workaround we are using for Argo CD application auto-discovery.

First, a new plugin called "app-monitor" is created in Argo CD:

#!/usr/bin/env bash
# Argo CD plugin to print the "argoapp.yaml" manifests found in the git repository, targeting the same branch

# The Argo CD app name using this plugin (e.g. project-xx-prod) encodes the env name,
# which is used to determine the branch (dev, stage, prod, master)
BRANCH=${ARGOCD_APP_NAME##*-}

for app in $(find . -name argoapp.yaml); do
  APP_BRANCH=$(yq r "$app" spec.source.targetRevision)
  if [ "$BRANCH" == "$APP_BRANCH" ]; then
    cat "$app"
    echo "---"
  fi
done
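
For context, a sketch of how a plugin like this could have been registered through the legacy configManagementPlugins section of the argocd-cm ConfigMap (the script path and exact plugin wiring here are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: apps-monitor
      generate:
        command: ["/usr/local/bin/apps-monitor.sh"]   # hypothetical path to the script above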

Then, we have multiple "GitOps Git repos" which store the application definitions and the related argoapp.yaml files to deploy them. At that stage, we could just kubectl apply -f argoapp.yaml for each application in the repository to declare the App in Argo CD.

This is what the app-monitor is actually doing for us. Each "GitOps Git repo" is associated with/monitored by a "repo-app-monitor" ArgoCD App (using the "app-monitor" plugin). When a new argoapp.yaml file is pushed to a "GitOps Git repo", the monitor app associated with that repo detects the new ArgoCD manifest and auto-registers it.

For instance:

  • 1 GitOps Git repo called "user-projects"
  • 1 ArgoCD App monitor, called "user-projects-monitor", is watching "user-projects" (this app-monitor is part of the ArgoCD config itself)
  • 3 helm charts + 3 argoapp.yaml files for apps A, B, C in the GitRepo
  • the monitor finds and deploys 3 resources, which are the Argo CD Apps A, B, C
  • the 3 ArgoApps A, B, C deploy their resources

When/if Argo CD is reinstalled from scratch, the configuration sets up the Repositories + Monitors. All the applications are automatically discovered and redeployed. There is no state/config to back up/restore.

REM: I ignore the branch question in the description for readability.

Snippet:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: user-projects-monitor-master
  namespace: argocd
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    path: /
    plugin:
      name: apps-monitor
    repoURL: https://****/um/user-projects.git
    targetRevision: master

@stale

stale bot commented Aug 19, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix (This will not be worked on) label Aug 19, 2019
@alexmt alexmt removed the wontfix (This will not be worked on) label Aug 19, 2019
@alexec alexec pinned this issue Aug 19, 2019
@alexec
Contributor Author

alexec commented Aug 19, 2019

I've pinned this issue as it is interesting.

@tschonnie

tschonnie commented Aug 22, 2019

Not quite auto-discovery, but I think it's related enough to share. This is our current App-of-Apps approach

I wanted to implement quite the same approach as @milesarmstrong in #1766 (comment). Here is my concept for applications-of-applications: https://github.com/tschonnie/argocd-pipeline

However, as @milesarmstrong mentioned, as long as every dev team needs access to the argocd namespace and there is no project inheritance, each dev team can also create evil project definitions and point their team applications at the evil project definition (e.g. destination namespace = "*").

@alexmt alexmt added the backlog label Aug 26, 2019
@alexec alexec unpinned this issue Oct 23, 2019
@alexec alexec pinned this issue Nov 27, 2019
@alexmt alexmt unpinned this issue Jan 27, 2020
@alexmt alexmt removed the backlog label May 13, 2020
@jannfis jannfis added the component:applications-set (Bulk application management related) and type:usability (Enhancement of an existing feature) labels May 14, 2020
@ghost

ghost commented Sep 1, 2020

@jannfis out of interest, how would the ApplicationSet practically solve this?

@alexmt
Collaborator

alexmt commented Sep 2, 2020

@Tpx01 ApplicationSet is going to provide various generators. For example, the Git file generator can scan a repository and create an app for each file that matches some pattern.
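
For anyone landing here later, a minimal sketch of the ApplicationSet Git file generator described above (the repository URL, file pattern, and naming below are assumptions):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: discovered-apps               # hypothetical name
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://github.com/example-org/deployments.git   # hypothetical repo
      revision: HEAD
      files:
      - path: "apps/**/config.json"   # one Application per matching file
  template:
    metadata:
      name: '{{path.basename}}'       # name of the directory containing the matched file
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/deployments.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'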

@ghost

ghost commented Sep 3, 2020

@alexmt I looked into it last night and indeed this concept could be very powerful! Thanks for pointing this out.

@grzegorzjudas

grzegorzjudas commented Mar 30, 2023

Is it still on the radar? Open for almost 4 years, no recent activity. I was thinking I could get rid of my "app of apps" and the shared-resources warnings by just connecting a repo - I was sure this would work out of the box when I did, assuming it would scan my repo for YAMLs and apply them.

@DrummyFloyd

It would be great to have such a feature, especially when you want to bootstrap a cluster from scratch: you only have to point at a repository where all your services live, and everything is deployed and started as you want.
Hope we'll see something similar in the future.

@AmazingTurtle

AmazingTurtle commented Mar 20, 2024

When using the app-of-apps approach, you can easily and frequently run into sync wave issues, especially when nesting your applications; Argo CD refuses to sync when the app-of-apps has sync wave -5 and a child has -6, for example. That's why I'm considering opting for an auto-discovery feature.

@jannfis
Member

jannfis commented Mar 20, 2024

I think this issue is deprecated, because the use case has been implemented by ApplicationSet's Git generator for quite a while. Sync waves for ApplicationSet are practically implemented by ApplicationSet's Progressive Sync feature and will be further augmented by Application dependencies.

I'll be closing this issue.

@jannfis jannfis closed this as completed Mar 20, 2024