
support for nomad #52

Open
shumin1027 opened this issue Dec 5, 2022 · 10 comments


shumin1027 commented Dec 5, 2022

I'm trying to use CVMFS through the CSI plugin in Nomad, but I run into problems when creating a volume. It seems the access_mode parameter configured in Nomad is not supported.

Here is my volume config file:

type = "csi"
id   = "cvmfs-volume"
name = "cvmfs-volume"

plugin_id = "cvmfs0"

capability {
  access_mode     = "multi-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  fs_type = "cvmfs2"
}

secrets {}

The error output:

root@ubuntu:~/nomad-jobs# nomad volume create cvmfs.volume.hcl 
Error creating volume: Unexpected response code: 500 (1 error occurred:
        * controller create volume: CSI.ControllerCreateVolume: volume "cvmfs-volume" snapshot source &{"" ""} is not compatible with these parameters: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported)

Can you provide an example of using cvmfs-csi in Nomad?
Ref: #51


gman0 commented Dec 5, 2022

Hi @shumin1027. It seems this error originates from having non-nil AccessibilityRequirements (i.e. topology) in CreateVolumeRequest rather than AccessMode.

if req.GetAccessibilityRequirements() != nil {
	return errors.New("volume accessibility requirements are not supported")
}

I don't have a Nomad environment at hand, so I cannot test this. Can you pass along logs from the controller plugin (running with the -v=5 verbosity level) so we can see what Nomad's CSI client is passing to the driver? Is there a way to pass nil topology requirements when creating a volume?


gman0 commented Dec 5, 2022

Another way to do this is to create the volume manually (similar to how you would manually create a PersistentVolume and its PersistentVolumeClaim in Kubernetes). Is this possible in Nomad? That way you would circumvent the provisioning stage.

You can see a Kubernetes example for this here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/example/volume-pv-pvc.yaml

shumin1027 (author) commented:

@gman0
The example given above already creates the volume manually:

nomad volume create cvmfs.volume.hcl 

This is the log output by the controller plugin when creating a volume manually:

I1205 13:47:16.773803       1 grpcserver.go:136] Call-ID 3443: Call: /csi.v1.Controller/CreateVolume
I1205 13:47:16.774096       1 grpcserver.go:137] Call-ID 3443: Request: {"accessibility_requirements":{},"name":"cvmfs-volume","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"cvmfs2"}},"access_mode":{"mode":3}}]}
E1205 13:47:16.774136       1 grpcserver.go:141] Call-ID 3443: Error: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported


gman0 commented Dec 5, 2022

What I meant is to just register an existing volume without actually triggering CreateVolume call on the driver. I'm not familiar with Nomad so I'm not sure if it's possible.

The volume itself is "virtual", it's just a reference for cvmfs-csi.


gman0 commented Dec 5, 2022

Relaxing validation and letting it pass on "accessibility_requirements":{} doesn't seem too unreasonable -- we may include this change in the next point release.


shumin1027 commented Dec 6, 2022

What I meant is to just register an existing volume without actually triggering CreateVolume call on the driver. I'm not familiar with Nomad so I'm not sure if it's possible.

The volume itself is "virtual", it's just a reference for cvmfs-csi.

@gman0
Executing the register command completes correctly:

nomad volume register cvmfs.volume.hcl 

The CSI node plugin can correctly access the content on CVMFS, but it seems it cannot be automatically mounted in the application container.

JuiceFS provides good support for this use case; we can use it as a reference:

https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/cookbook/csi-in-nomad.md

https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/introduction.md#mount-by-process-by-process


gman0 commented Dec 8, 2022

The CSI node plugin can correctly access the content on CVMFS, but it seems it cannot be automatically mounted in the application container.

Is there an error message we could troubleshoot? My first guess would be a missing rslave or rshared option on the container mount (HostToContainer mount propagation in Kubernetes terminology). See Example: Automounting CVMFS repositories and a Pod definition example.

https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/introduction.md#mount-by-process-by-process

I'm not sure I understood correctly, but cvmfs-csi doesn't distinguish between "mount-by-pod" and "mount-by-process". The cvmfs-csi node plugin needs to be already running on all nodes of the cluster that are expected to use CVMFS volumes (DaemonSet in Kubernetes terminology).


shumin1027 commented Dec 10, 2022

@gman0 Thank you for your help. Your guess might be right: Nomad does not currently seem to support mount propagation.

This is the mount information of the application container:

[screenshot: container mount table]


gman0 commented Dec 12, 2022

Thanks for following this up, @shumin1027. We can continue once this is resolved in Nomad.


shumin1027 commented Dec 15, 2022

@gman0
I fixed this issue: https://github.com/hashicorp/nomad/issues/15524. The mount-propagation option can now be set successfully when mounting the volume.

When I use a host volume, directly mount /cvmfs from the host into the container, and set mount propagation to rslave, everything works as expected.

But when I use the cvmfs-csi volume and set mount propagation to rslave, the repositories are still not automatically mounted in the application container.

[screenshot: container mount table]
