
Skupper in Production? #487

Open
RajKuni opened this issue May 19, 2021 · 12 comments

@RajKuni

RajKuni commented May 19, 2021

Hi,

I've been using Skupper in a dev environment to connect services in multiple clusters. It works great and was super easy to set up. So I'm really interested in using this in a production environment. Is it safe to use Skupper in production? Can it handle production-volume traffic?

Also, does a helm chart exist for skupper? And is it possible to run multiple replicas of the skupper router in a cluster for redundancy/high-availability?

Thank you!

@ted-ross
Member

ted-ross commented May 19, 2021 via email

@grs grs closed this as completed May 20, 2021
@grs grs reopened this May 20, 2021
@RajKuni
Author

RajKuni commented May 20, 2021

Do you think it would be possible to create my own helm chart using the output that the skupper utility generates? (does the utility have a dry-run flag that just prints out all the yaml?)

Is it possible to generate my own TLS certs and have the skupper instances use that? (To more easily keep track of certs).

How do we upgrade existing skupper instances? Will whatever configuration they have (connections to peers, currently used certs) persist upon upgrade?

Also, I did try creating a helm chart with the YAML files that define a site controller: https://github.com/skupperproject/skupper/tree/0.6/cmd/site-controller.

But any time I changed the config and rolled out the updated yaml, the changes did not get reflected (e.g. on the first deploy I set console to "true"; on the second rollout I set it to "false", but the console service does not get deleted to reflect this change).

@grs
Member

grs commented May 20, 2021

> Do you think it would be possible to create my own helm chart using the output that the skupper utility generates? (does the utility have a dry-run flag that just prints out all the yaml?)

No, there is no dry-run mode. The primary issue there is that when using a load-balancer service, skupper init waits for the IP/hostname to be written into the status before generating the certs, as those certs need to be valid for that IP/hostname.

You could, however, replace skupper init (or use of the site controller) with manually created yaml.
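For reference, the declarative path driven by the site controller starts from a site ConfigMap; here is a minimal sketch based on the skupper.io site ConfigMap YAML reference (key names and accepted values vary by Skupper version, and the site name is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: skupper-site        # the site controller watches for this ConfigMap
data:
  name: my-site             # logical name of this site (illustrative)
  console: "false"          # whether to deploy the Skupper console
  edge: "false"             # interior vs. edge router mode (key is version-dependent)
```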

> Is it possible to generate my own TLS certs and have the skupper instances use that? (To more easily keep track of certs).

Not when using skupper init. If manually creating the appropriate yaml then the certs would be part of that.

> How do we upgrade existing skupper instances? Will whatever configuration they have (connections to peers, currently used certs) persist upon upgrade?

You can use the skupper update command (which will update to the cli version). If using the site controller, you can just update the version of the site controller and it will then try to update the sites.
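For the CLI path, the upgrade flow is essentially the following (a sketch; skupper version and skupper update are the documented commands, run against the namespace that holds the site):

```shell
# show the CLI version and the version running in the current namespace
skupper version

# upgrade the site components in the current namespace to the CLI's version
skupper update
```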

> Also, I did try creating a helm chart with the YAML files that define a site controller: https://github.com/skupperproject/skupper/tree/0.6/cmd/site-controller.
>
> But any time I changed the config and rolled out the updated yaml, the changes did not get reflected (e.g. on the first deploy I set console to "true"; on the second rollout I set it to "false", but the console service does not get deleted to reflect this change).

At present most options cannot be dynamically updated. That is on the roadmap but we haven't got there yet. The only options that are currently applied dynamically are router-logging and router-debug-mode.
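As a concrete illustration of one of the dynamically applied options, the site ConfigMap can be patched in place (a sketch; the ConfigMap name and key follow the skupper.io YAML reference, and the accepted log levels depend on your version):

```shell
# switch router logging to debug on a running site (sketch; values are version-dependent)
kubectl patch configmap skupper-site --type merge \
  -p '{"data":{"router-logging":"debug"}}'
```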

@RajKuni
Author

RajKuni commented May 20, 2021

Okay, I see. Thank you.

@ted-ross
Member

RajKuni,

Would you be willing to share a little bit about your use case? What is it you wish to accomplish with a Helm chart? What is the scale and speed of your application?

Thanks,
-Ted
email: ted@nethopper.io

@RajKuni
Author

RajKuni commented May 21, 2021

We have pipelines that deploy applications and related resources to GKE clusters via helm charts. It would be nice to fit Skupper under the same umbrella. Having everything defined in code as yaml also makes it easy to replicate the config across multiple clusters, and if we ever run into a disaster-recovery scenario, we just need to run the helm chart to bring everything back up the way it was before.

As for high availability, it would be nice to run at least two skupper router instances per cluster so that if one pod goes down, the link is still maintained. Also, GKE does cluster auto-upgrades. The upgrade happens by introducing a new node into the cluster and then draining an old node. In this scenario, using pod anti-affinity to ensure the pods are scheduled on different nodes, having multiple instances helps with maintaining uptime.
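For illustration, a hedged sketch of the kind of anti-affinity rule meant here, as a fragment that might be patched onto the router deployment (the replica count and label selector are assumptions; check the labels your Skupper version actually sets on the router pods):

```yaml
# sketch: spread router replicas across nodes (labels and replica count are assumptions)
spec:
  replicas: 2
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  skupper.io/component: router
```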

Regarding the scale and speed of our application/platform: we currently get around 30 requests/sec at peak and try to serve requests within ~200-300 ms. The number of requests we serve is growing quickly.

@sahil87

sahil87 commented Nov 3, 2021

We have also used Skupper in production, but GKE auto-upgrades/maintenance brought Skupper down twice.

Is there any way Skupper could reconnect using a long-lived token, maybe configured via a ConfigMap instead of the command line, so that even if nodes get deleted and recreated by GKE (or any other managed K8s service), Skupper comes back online on its own?

@grs
Member

grs commented Nov 3, 2021

Have you seen https://skupper.io/docs/declarative/tutorial.html? It may help in part. However, it depends on what exactly happened in your scenario. Links should be automatically re-established even when lost. If using a load-balancer service and its IP changes, that would prevent further reconnection without updating to the new address. Not sure if that might have been your issue or not?

@waldner
Contributor

waldner commented Jul 27, 2022

Is there any news on this issue, especially the helm chart? Most software is deployed with helm charts nowadays, and having an official skupper chart would really help, especially in those situations where skupper is installed as part of a larger product/deployment, and could be used as a subchart.

@emlagowski

I'm also interested in a helm chart, or a dry-run option to create one. All of our deployments / infra / tools are installed on clusters via GitOps, from git repositories with helm charts. Installing anything via the command line is strongly discouraged in production. Is there perhaps a manifest with all the CRDs etc. that could be used to install Skupper from a yaml file? I could take it further from there.

@DreamingRaven

DreamingRaven commented Jun 5, 2024

I too would be interested in a helm chart for Skupper. I also note that there is a skupper-operator, which may make it easier to create a helm chart for Skupper by letting the operator handle the nitty-gritty details, ideally without having to interact with OLM, which itself does not have a packaged helm chart but does have https://github.com/operator-framework/operator-lifecycle-manager/tree/master/deploy/chart

@sahil87

sahil87 commented Oct 24, 2024

Deploying skupper with Kustomize

Got this right after a few tries. Hope this helps others deploying skupper with Kustomize!
This Kustomization assumes you need a single namespace installation.

Source / Central cluster

  • kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/skupperproject/skupper/1.8/cmd/site-controller/deploy-watch-current-ns.yaml
  # This configmap is based on https://skupper.io/docs/yaml/index.html#site-configmap-yaml-reference
  # We have set edge: "false" here as we use a hub-and-spoke structure. The value depends on your use case.
  - resources/source-site-configmap.yaml
  - resources/source-secret-request.yaml
```

  • resources/source-secret-request.yaml:

```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token-request
  name: source-secret
```

Destination / Edge cluster(s)

  • kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/skupperproject/skupper/1.8/cmd/site-controller/deploy-watch-current-ns.yaml
  # This configmap is based on https://skupper.io/docs/yaml/index.html#site-configmap-yaml-reference
  # We have set edge: "true" here as we use a hub-and-spoke structure. The value depends on your use case.
  - resources/skupper-site-configmap.yaml
  - resources/source-secret.yaml
```
  • resources/source-secret.yaml:
    This is the secret generated using the source cluster's kubectl context and copied into the destination cluster's resources.
    It can be produced with:

```shell
kubectl get secret -n skupper_namespace -o json source-secret | \
  jq 'del(.metadata.namespace,.metadata.resourceVersion,.metadata.uid) | .metadata.creationTimestamp=null' \
  > resources/source-secret.yaml
```
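With the overlays in place, each cluster can then be brought up with a standard Kustomize apply (the namespace name below is illustrative):

```shell
# apply the overlay for this cluster (namespace name is illustrative)
kubectl apply -k . -n skupper_namespace
```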
