
Sharing pack content: Bundle custom st2packs image vs Shared NFS storage? #18

Open · armab opened this issue Sep 21, 2018 · 3 comments

@armab commented Sep 21, 2018

Bundling custom st2 packs as an immutable Docker image

In the current implementation of this chart, using custom st2 packs relies on building a dedicated Docker image with the pack content and virtualenv pre-installed and bundled beforehand, see: https://docs.stackstorm.com/latest/install/ewc_ha.html#custom-st2-packs
The downside is that any writes, such as st2 pack install or saving a workflow from st2flow, won't work in the current HA environment.
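
As a rough illustration of what such an image boils down to (a minimal sketch only, not the official build process; the base image and paths are placeholders), the pack content and its pre-built virtualenv are baked into an immutable layer that the init containers can later copy from:

```dockerfile
# Minimal sketch only: bake pack content and pre-built virtualenvs into an
# immutable image. Base image and paths are placeholders; see the linked docs
# for the supported build process.
FROM busybox:1.36
# Pack source and virtualenvs prepared on the build host beforehand
COPY packs/        /opt/stackstorm/packs/
COPY virtualenvs/  /opt/stackstorm/virtualenvs/
```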

Share content via ReadWriteMany shared file system

There is an alternative approach: sharing pack content via a ReadWriteMany NFS (Network File System) volume, as the High Availability Deployment doc recommends.

Examples

For example, there is a stable Helm chart, nfs-server-provisioner, which codifies setting up an NFS server in an automated way.
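
A hedged example of installing it (Helm 2 syntax from that era; value names are taken from the chart's README and worth double-checking):

```shell
# Install the stable nfs-server-provisioner chart and expose an RWX-capable
# StorageClass named "nfs". Release name and volume size are arbitrary.
helm install stable/nfs-server-provisioner \
  --name nfs-provisioner \
  --set storageClass.name=nfs \
  --set persistence.enabled=true \
  --set persistence.size=10Gi
```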

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes lists the K8s-supported solutions for ReadWriteMany (RWX) volumes.
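
With any RWX-capable provisioner in place, the shared pack content could be claimed roughly like this (a sketch only; the storageClassName and size are placeholders that depend on the chosen provisioner):

```yaml
# Sketch of a ReadWriteMany claim for shared pack content; st2 pods would
# mount it at /opt/stackstorm/packs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: st2-packs-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs        # placeholder, depends on the provisioner
  resources:
    requests:
      storage: 5Gi
```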

From that list of volumes, apart from NFS, additional interest goes to CephFS and Rook (https://github.com/rook/rook), a CNCF-hosted project for storage orchestration.

Feedback Needed!

As the beta is in progress and both methods have their pros and cons, we'd like to hear your feedback, ideas, and experience running any of these shared file systems, and which way would work better for you.

@troshlyak commented Mar 13, 2019

I would be very interested to understand the best way to update/add/remove packs, or in general the pack lifecycle, given the approaches proposed above.

For the shared-content file system, I guess we can spin up a k8s Job that would mount the shared filesystem, update it, and then update mongodb through st2 pack.load packs=some_pack.
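
A minimal sketch of such a Job (the image, command, and claim name here are placeholders, not values provided by the chart):

```yaml
# Hypothetical pack-update Job: mounts the shared RWX volume and updates a pack
# in place, so the new content is visible to all st2 pods sharing that volume.
apiVersion: batch/v1
kind: Job
metadata:
  name: st2-pack-update
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pack-update
          image: stackstorm/st2actionrunner:latest       # placeholder image
          command: ["st2", "run", "packs.install", "packs=some_pack"]  # placeholder command
          volumeMounts:
            - name: packs-shared
              mountPath: /opt/stackstorm/packs
      volumes:
        - name: packs-shared
          persistentVolumeClaim:
            claimName: st2-packs-shared                  # placeholder claim name
```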

For the custom packs Docker image, it seems that we need to kill and recreate all containers (which generally should be fine for an immutable infrastructure) in order to rerun the initContainers that are responsible for copying the pack/virtualenv data into the container. But then what would happen with long-running actions (we have actions running from 30 min up to 2h+) inside st2actionrunner containers? Is there a way to ensure that the st2actionrunner container is idle before recreating it? And then comes the question of how we update mongodb - I guess we need to rerun the job-st2-register-content Job as part of the pack update process.

@shusugmt commented Mar 13, 2019

I was a bit away from k8s, but now it seems that providing RWX-capable PVs has become much easier than before. AKS natively supports RWX PVs via the AzureFile plugin. EKS and GKE both provide managed NFS services (Amazon Elastic File System / Google Cloud Filestore), and we can use nfs-client-provisioner. BUT the NFS-backed PV may suffer in terms of performance, especially when a pack has many dependent pip packages and/or a package that needs a native binary to be built.
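
For reference, a hedged example of pointing nfs-client-provisioner at such a managed NFS endpoint (the server address, export path, and release name are placeholders; value names per the chart's README):

```shell
# Back an RWX-capable StorageClass with an existing NFS export (e.g. EFS or
# Filestore). Server IP and export path below are placeholders.
helm install stable/nfs-client-provisioner \
  --name nfs-client \
  --set nfs.server=10.0.0.2 \
  --set nfs.path=/exports \
  --set storageClass.name=nfs-client
```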

IMO though, the current approach seems much more stable, since the only problem is the mongodb part.

> which generally should be fine for an immutable infrastructure

I feel this is the reasonable way for now. We can just put a load balancer in front, then switch traffic once the new st2 deployment becomes ready. But this is only possible if you can just throw away any data stored in mongodb, like execution logs. And of course you need some extra work, like exporting/importing st2kv data between the new/old clusters. This approach is exactly what the doc calls the "Content Roll-Over" method.

@troshlyak

This comment has been minimized.

Copy link

@troshlyak troshlyak commented Mar 13, 2019

Indeed "Roll-Over" deployments seem like the way to go and the mongodb "problem" can most probably be fixed, by externalising it, as discussed in #24. This would also eliminate the need to export/import st2kv. And having the RabbitMQ "within" the deployment should limit the communication between sensor/triggers, runners etc. inside the deployment as we don't want a runner from old deployment to pick up a job for a new action that is missing in the old deployment packs.

I've actually just quickly tested this, and I'm able to see the running tasks from both the old and new deployments, so I can switch on the fly and still maintain control over the "old" running tasks and interact with them (like cancelling them). Now I need to understand how to prevent the sensors from both deployments triggering simultaneously while the old deployment is "draining" its tasks.
