Skupper in Production? #487
Hi,

I've been using Skupper in a dev environment to connect services in multiple clusters. It works great and was super easy to set up. So I'm really interested in using this in a production environment. Is it safe to use Skupper in production? Can it handle production-volume traffic?

Also, does a Helm chart exist for Skupper? And is it possible to run multiple replicas of the Skupper router in a cluster for redundancy/high availability?

Thank you!

Comments
On Wed, May 19, 2021 at 5:21 PM RajKuni ***@***.***> wrote:

> I've been using Skupper in a dev environment to connect services in multiple clusters. It works great and was super easy to set up. So I'm really interested in using this in a production environment. Is it safe to use Skupper in production? Can it handle production-volume traffic?

I'm very glad to hear that you've had a good experience with Skupper so far. The router that is the foundation on which Skupper is built has been used in production environments for years. Skupper is, or will very soon be, supported commercially for production use by a large corporation (IBM/Red Hat) and a startup (nethopper.io).

> Also, does a Helm chart exist for Skupper? And is it possible to run multiple replicas of the Skupper router in a cluster for redundancy?

I'm not aware of there being a Helm chart for Skupper. Multiple router replicas are not presently supported by Skupper, but this is a roadmap feature that will be coming in the future.

-Ted
Do you think it would be possible to create my own Helm chart using the output that the skupper utility generates? (Does the utility have a dry-run flag that just prints out all the YAML?) Is it possible to generate my own TLS certs and have the Skupper instances use them (to more easily keep track of certs)? How do we upgrade existing Skupper instances? Will whatever configuration they have (connections to peers, currently used certs) persist across an upgrade?

Also, I did try creating a Helm chart with the YAML files that define a site controller: https://github.com/skupperproject/skupper/tree/0.6/cmd/site-controller. But any time I changed the config and rolled out the updated YAML, the changes did not get reflected (e.g. on first deploy I set console to "true"; on the second rollout I set console to "false", but the console service does not get deleted to reflect this change).
No, there is no dry-run mode. The primary issue is that when using a LoadBalancer service, skupper init waits for the IP/hostname to be written into the service status before generating the certs, as those certs need to be valid for that IP/hostname. You could, however, replace skupper init (or use of the site controller) with manually created YAML.

Not when using skupper init. If manually creating the appropriate YAML, then the certs would be part of that.

You can use the skupper update command (which will update the site to the CLI's version). If using the site controller, you can just update the version of the site controller and it will then try to update the sites.

At present, most options cannot be dynamically updated. That is on the roadmap, but we haven't got there yet. The only options that are dynamically updated at present are router-logging and router-debug-mode.
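(For reference, a declaratively created site is driven by a ConfigMap named skupper-site that the site controller watches. A minimal sketch, with key names taken from the skupper.io YAML reference linked later in this thread; the exact set of supported keys varies by release:)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: skupper-site
data:
  name: my-site          # site name shown to linked sites
  console: "false"       # the option discussed above; not all options apply dynamically
  router-logging: info   # one of the options that can be changed after deployment
```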
Okay, I see. Thank you.
RajKuni, would you be willing to share a little bit about your use case? What is it you wish to accomplish with a Helm chart? What is the scale and speed of your application? Thanks,
We have pipelines that deploy applications and related resources to GKE clusters via Helm charts, and it would be nice to fit Skupper under the same umbrella. Having everything defined in code as YAML also makes it easy to replicate the config in multiple clusters, and if we ever run into a disaster-recovery scenario, we just need to run the Helm chart to bring everything back up to the way it was before.

As for high availability, it would be nice to run at least two Skupper router instances per cluster so that if one pod goes down, the link is still maintained. Also, GKE does cluster auto-upgrades: the upgrade happens by introducing a new node into the cluster and then draining an old node. In this scenario, having multiple instances, combined with pod anti-affinity to ensure the pods are scheduled on different nodes, helps with maintaining uptime.

Regarding the scale and speed of our application/platform: we currently get around 30 requests/sec at peak and try to serve requests within ~200-300 ms. The amount of requests we serve is going up pretty fast as we grow.
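(For context, the kind of anti-affinity rule meant above is standard Kubernetes; it only becomes useful for Skupper once multiple router replicas are supported. A minimal sketch, assuming the router pods carry a skupper.io/component: router label; adjust the selector to whatever labels your pods actually carry:)

```yaml
# Hypothetical pod-template snippet: forbid two router replicas on the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          skupper.io/component: router   # assumed label; verify on your pods
      topologyKey: kubernetes.io/hostname
```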
We use Skupper in production as well. Is there any way Skupper could reconnect using a long-lived token, maybe configured via a ConfigMap instead of the command line, so that even if nodes get deleted and recreated by GKE (or any other managed K8s service), Skupper comes back online on its own?
Have you seen https://skupper.io/docs/declarative/tutorial.html? It may help in part. However, it depends on what exactly happened in your scenario. Links should be automatically re-established even when lost. If you are using a LoadBalancer service and its IP changes, that would prevent further reconnection without updating to the new address. Not sure if that might have been your issue or not?
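(As an illustration of the declarative approach: a long-lived token can be generated once and then applied as an ordinary secret on the linking side, which the site controller picks up. Command name and flags below are from the Skupper CLI docs; verify them against your version:)

```shell
# Sketch: generate a token valid for one year on the target site...
skupper token create hub-token.yaml --expiry 8760h0m0s

# ...then apply it on the other site; the site controller uses it
# to (re)establish the link.
kubectl apply -f hub-token.yaml
```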
Is there any news on this issue, especially the Helm chart? Most software is deployed with Helm charts nowadays, and having an official Skupper chart would really help, especially in those situations where Skupper is installed as part of a larger product/deployment and could be used as a subchart.
I'm also interested in a Helm chart, or in a dry-run option to create a Helm chart from. All our deployments, infra, and tools are installed on clusters via GitOps from git repositories with Helm charts; installing anything via the command line is strongly discouraged in production. Is there perhaps a manifest with all the CRDs etc. that could be used to install Skupper from a YAML file? I can go further with it from there.
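(For what it's worth, the closest thing to such a manifest appears to be the site-controller deployment YAML cited elsewhere in this thread; pin the release branch you need:)

```shell
# Install the site controller (watching the current namespace) straight from
# the repo YAML; the 1.8 path is the one referenced later in this thread.
kubectl apply -f https://raw.githubusercontent.com/skupperproject/skupper/1.8/cmd/site-controller/deploy-watch-current-ns.yaml
```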
I too would be interested in a Helm chart for Skupper. I also note that there is a skupper-operator, which may make it easier to create a Helm chart for Skupper by letting the operator handle the nitty-gritty details, ideally without having to interact with the OLM, which itself does not have a packaged Helm chart but does have https://github.com/operator-framework/operator-lifecycle-manager/tree/master/deploy/chart.
Deploying Skupper with Kustomize

Got this right after a few tries. Hope this helps others deploying Skupper with Kustomize!

Source / Central cluster:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/skupperproject/skupper/1.8/cmd/site-controller/deploy-watch-current-ns.yaml
# This configmap is based on https://skupper.io/docs/yaml/index.html#site-configmap-yaml-reference
# We have set edge: "false" over here as we use a hub-n-spoke structure. The value depends on your use case.
- resources/source-site-configmap.yaml
- resources/source-secret-request.yaml
```

resources/source-secret-request.yaml:

```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token-request
  name: source-secret
```

Destination / Edge cluster(s):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/skupperproject/skupper/1.8/cmd/site-controller/deploy-watch-current-ns.yaml
# This configmap is based on https://skupper.io/docs/yaml/index.html#site-configmap-yaml-reference
# We have set edge: "true" over here as we use a hub-n-spoke structure. The value depends on your use case.
- resources/skupper-site-configmap.yaml
- resources/source-secret.yaml
```

Copy the token secret generated on the source cluster into the edge cluster's resources:

```shell
kubectl get secret -n skupper_namespace -o json source-secret | \
  jq 'del(.metadata.namespace,.metadata.resourceVersion,.metadata.uid) | .metadata.creationTimestamp=null' > \
  resources/source-secret.yaml
```
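With those files in place, each cluster can then be brought up with the standard Kustomize entry point (assuming the kustomization.yaml sits in the current directory):

```shell
# Build and apply the kustomization in the current directory.
kubectl apply -k .
```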