
Sharing pack content: Bundle custom st2packs image vs Shared NFS storage? #18

Closed
arm4b opened this issue Sep 21, 2018 · 7 comments · Fixed by #199

Comments

@arm4b
Member

arm4b commented Sep 21, 2018

Bundling custom st2 packs as immutable Docker image

In the current implementation of this chart, using custom st2 packs relies on building a dedicated Docker image with the pack content and virtualenvs pre-installed and bundled beforehand, see: https://docs.stackstorm.com/latest/install/ewc_ha.html#custom-st2-packs
The downside is that any writes, like st2 pack install or saving a workflow from st2flow, won't work in the current HA environment.
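
For reference, a minimal values.yaml sketch of pointing the chart at such a pre-built packs image could look roughly like this (the key layout and image reference below are assumptions for illustration only; check the chart's values.yaml for the actual schema):

st2:
  packs:
    # hypothetical pre-built image that bundles pack content + virtualenvs
    image:
      repository: example-org/st2packs
      tag: "1.0.0"
      pullPolicy: IfNotPresent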

Share content via ReadWriteMany shared file system

There is an alternative approach: sharing pack content via a ReadWriteMany NFS (Network File System) volume, as the High Availability Deployment doc recommends.

Examples

For example, there is a stable Helm chart, nfs-server-provisioner, which codifies running an NFS server in an automated way.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes lists the K8s-supported solutions for ReadWriteMany (RWX) volumes.

From that list of volumes, apart from NFS, additional interest goes to CephFS and Rook (https://github.com/rook/rook), a CNCF-hosted project for storage orchestration.
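
For illustration, requesting such an RWX volume from a provisioner might look roughly like this minimal sketch (the nfs StorageClass name and size are assumptions based on nfs-server-provisioner defaults):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: st2-packs
spec:
  accessModes:
    - ReadWriteMany          # RWX: mountable read-write by many pods/nodes
  storageClassName: nfs      # assumed StorageClass created by nfs-server-provisioner
  resources:
    requests:
      storage: 5Gi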

Feedback Needed!

As the beta is in progress and both methods have their pros and cons, we’d like to hear your feedback, ideas, and experience running any of these shared file systems, and which way would work better for you.

@troshlyak

I would be very interested to understand the best way to update/add/remove packs, or their lifecycle in general, given the approaches proposed above.

For the shared content file system, I guess we can spin up a k8s Job that would mount the shared filesystem, update it, and then update MongoDB through st2 pack.load packs=some_pack.
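
A rough sketch of such a Job, assuming the shared packs volume is exposed as a PVC named st2-packs and the pack content lives in a git repo (the repo URL is a placeholder); registering the updated pack in MongoDB (e.g. via packs.load as above) would be a separate step:

apiVersion: batch/v1
kind: Job
metadata:
  name: update-some-pack
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: update-pack
          image: alpine/git
          command: ["sh", "-c"]
          args:
            - |
              # refresh the pack content on the shared packs volume
              rm -rf /opt/stackstorm/packs/some_pack &&
              git clone --depth 1 https://example.com/some_pack.git /opt/stackstorm/packs/some_pack
          volumeMounts:
            - name: packs
              mountPath: /opt/stackstorm/packs
      volumes:
        - name: packs
          persistentVolumeClaim:
            claimName: st2-packs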

For the custom packs Docker image, it seems that we need to kill and recreate all containers (which generally should be fine for an immutable infrastructure) in order to rerun the initContainers that are responsible for copying the pack/virtualenv data into the container. But then what would happen with long-running actions (we have actions running from 30 mins up to 2h+) inside st2actionrunner containers? Is there a way to ensure that the st2actionrunner container is idle before recreating it? And then comes the question of how we update MongoDB - I guess we need to rerun the job-st2-register-content job as part of the pack update process.

@shusugmt

I was away from k8s for a bit, but it now seems that providing RWX-capable PVs has become much easier than before. AKS natively supports RWX PVs via the AzureFile plugin. EKS and GKE both provide managed NFS services (Amazon Elastic File System / Google Cloud Filestore) and we can use nfs-client-provisioner. BUT, an NFS-backed PV may suffer in terms of performance, especially when a pack has many dependent pip packages and/or packages that need native binaries to be built.
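
For illustration, a minimal sketch of a statically provisioned RWX PersistentVolume backed by such a managed NFS service (the EFS DNS name and path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: st2-packs-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder EFS mount target DNS name
    path: /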

IMO though, the current approach seems much more stable, since the only problem is the MongoDB part.

which generally should be fine for an immutable infrastructure

I feel this is the reasonable way for now. We can just put a load balancer in front, then switch traffic once the new st2 deployment becomes ready. But this is only possible if you can just throw away any data like execution logs stored in MongoDB. And of course you need some extra work, like exporting/importing st2kv data between the new/old clusters. This approach is exactly what the doc calls the "Content Roll-Over" method.

@troshlyak

troshlyak commented Mar 13, 2019

Indeed "Roll-Over" deployments seem like the way to go and the mongodb "problem" can most probably be fixed, by externalising it, as discussed in #24. This would also eliminate the need to export/import st2kv. And having the RabbitMQ "within" the deployment should limit the communication between sensor/triggers, runners etc. inside the deployment as we don't want a runner from old deployment to pick up a job for a new action that is missing in the old deployment packs.

I've actually just quickly tested this and I'm able to see the running tasks from both the old and new deployments, so I can switch on the fly and still maintain control over the "old" running tasks and interact with them (like cancelling them). Now I need to understand how to prevent the sensors from both deployments triggering simultaneously while the old deployment is "draining" its tasks.

@ericreeves
Contributor

ericreeves commented Feb 25, 2021

Commenting to provide some additional feedback given our current use case.

We are deployed to EKS, and have an EFS volume used for mounting "packs" and "virtualenvs". We are currently using the outstanding NFS pull request and it has been working like a champ. We have an in-house API/UI that allows the construction of user workflows. Those workflows are ultimately written to the shared EFS volume by a Lambda function so we can update things on the fly. For our use case, any sort of pack-sharing mechanism that requires "bake and deploy" is not going to do the trick.

And to make things a bit more fun, we do have a core set of internal packs that are essentially shared libraries utilized by the custom packs that we develop with our API/UI. We're considering a build job that takes our "shared library packs", assembles a package (maybe simply a tarball), and deploys it to the cluster using a Lambda that writes to the EFS volumes and issues the "register" and "setup_virtualenvs" API calls. We could use a custom pack image for this piece, but then I'd need to make some significant changes to the Helm chart to support both a custom pack image AND NFS. I truly do not want to do this, because we'd like to stay more in line with master for easier updates and the ability to contribute back.

Cheers!

@cognifloyd
Member

I am working on converting an old 1ppc k8s install to stackstorm-ha charts.

We use Ceph + rook to handle packs, configs, and virtualenvs with approximately this:

volumeMounts:
- name: st2-packs
  mountPath: /opt/stackstorm/packs
- name: st2-configs
  mountPath: /opt/stackstorm/configs
- name: st2-virtualenvs
  mountPath: /opt/stackstorm/virtualenvs

volumes:
- name: st2-packs
  flexVolume:
    driver: ceph.rook.io/rook
    options:
      fsName: fs1
      clusterNamespace: rook-ceph
      path: /st2/packs
- name: st2-configs
  flexVolume:
    driver: ceph.rook.io/rook
    options:
      fsName: fs1
      clusterNamespace: rook-ceph
      path: /st2/configs
- name: st2-virtualenvs
  flexVolume:
    driver: ceph.rook.io/rook
    options:
      fsName: fs1
      clusterNamespace: rook-ceph
      path: /st2/virtualenvs

This has been working quite well for some time. I'm happy to put together a PR to add support for this to stackstorm-ha.

@cognifloyd
Member

cognifloyd commented Jun 17, 2021

I started to add charts for the rook-ceph operator, but I think that setting up a storage backend should be out-of-scope for the StackStorm chart because:

  1. There are many different in-tree k8s volume type plugins, and even more out-of-tree volume plugins that can use flexVolume (the older standard) or CSI (the newer standard). rook-ceph + flexVolume happens to be the one that I need, but others are very likely to need NFS or some other storage solution in their cluster.
  2. Our current solution, copying packs from images into emptyDir volumes, makes for a good, robust, cluster-agnostic default.
  3. Setting up an operator like rook-ceph ideally uses namespaces separate from the st2 namespace. This becomes problematic when considering installations like helm install --namespace st2 stackstorm-ha . because that ends up putting all of that storage infrastructure in the st2 namespace.

So, I think we should allow something like this in values.yaml:

st2:
  packs:
    images: []
    use_volumes: true
    volumes:
      packs:
        # volume definition here (required when st2.packs.use_volumes = true)
      virtualenvs:
        # volume definition here (required when st2.packs.use_volumes = true)
      configs:
        # optional volume definition here

Translating the example from my previous comment into this context, I would put this in my values.yaml:

st2:
  packs:
    images: []
    use_volumes: true
    volumes:
      packs:
        flexVolume:
          driver: ceph.rook.io/rook
          options:
            fsName: fs1
            clusterNamespace: rook-ceph
            path: /st2/packs
      virtualenvs:
        flexVolume:
          driver: ceph.rook.io/rook
          options:
            fsName: fs1
            clusterNamespace: rook-ceph
            path: /st2/virtualenvs
      configs:
        flexVolume:
          driver: ceph.rook.io/rook
          options:
            fsName: fs1
            clusterNamespace: rook-ceph
            path: /st2/configs

Then we can translate that into volume definitions that get mounted to /opt/stackstorm/packs, /opt/stackstorm/virtualenvs (and maybe /opt/stackstorm/configs).
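
As a rough sketch (not actual chart code; template placement and volume names are assumptions), the rendering could look something like:

{{- if .Values.st2.packs.use_volumes }}
volumes:
  # the volume definition is taken verbatim from values.yaml
  - name: st2-packs-vol
    {{- toYaml .Values.st2.packs.volumes.packs | nindent 4 }}
  - name: st2-virtualenvs-vol
    {{- toYaml .Values.st2.packs.volumes.virtualenvs | nindent 4 }}
{{- end }}

with the corresponding container mounts:

volumeMounts:
  - name: st2-packs-vol
    mountPath: /opt/stackstorm/packs
  - name: st2-virtualenvs-vol
    mountPath: /opt/stackstorm/virtualenvs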

What does everyone think of this approach? Is this simple enough but flexible enough for most requirements?

@ericreeves
Contributor

I think that looks like a fantastic, flexible approach!
