We have an EC2 instance in us-east-1, and we're trying to mount a volume that's in ca-central-1. We can successfully mount via IP / NFS (so communication is working fine), but we specifically need to use efs-utils on this instance because we're actually mounting an access point. The problem is that mount.efs wants to use us-east-1 as the region for the mount, since that's where the instance is. The only way I can make the mount work is by updating efs-utils.conf to hardcode the region to ca-central-1, but that doesn't work for our use case because this same instance also has many volumes mounted from us-east-1.
I browsed through the code of mount.efs, and it doesn't seem like defining a region in the mount options is possible, though I would argue that it should be. I tried overriding the region with a /root/.aws/config file and passing 'awsprofile=canada' in the mount options, but that didn't help.
I believe what I need to do should be possible based on the docs, but I cannot get it working, and I can't tell from browsing the mount.efs code how to make it happen.
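Roughly what I'm seeing in mount_efs/__init__.py, as a simplified sketch (not the exact source; section/key names and fallbacks may differ between efs-utils versions): a region set in efs-utils.conf applies globally, otherwise the instance's own region from instance metadata is used, and nothing reads a per-mount region from the -o options.

```python
# Simplified sketch of the region lookup, based on reading mount_efs/__init__.py.
# Not the exact efs-utils source; names may differ by version.
from configparser import ConfigParser, NoOptionError, NoSectionError
from urllib.request import urlopen

CONFIG_FILE = "/etc/amazon/efs/efs-utils.conf"
CONFIG_SECTION = "mount"


def get_region_from_instance_metadata():
    # Stand-in for the IMDS call efs-utils makes (the real code also handles
    # IMDSv2 tokens); returns e.g. "us-east-1" for this instance.
    with urlopen(
        "http://169.254.169.254/latest/meta-data/placement/region", timeout=1
    ) as resp:
        return resp.read().decode()


def get_target_region(config: ConfigParser) -> str:
    try:
        # Global override from efs-utils.conf -- affects *every* mount on the box.
        return config.get(CONFIG_SECTION, "region")
    except (NoOptionError, NoSectionError):
        pass
    # Fall back to the region the instance itself runs in (us-east-1 here),
    # which is the wrong region for a file system living in ca-central-1.
    return get_region_from_instance_metadata()
```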
Thanks for bringing this to our attention. We've seen a few use cases now where this would be beneficial. We can't make any guarantees as to if or when this will be implemented, but we'll post any updates on this thread as they come up.
I would also like to see this. My use case is multiple EFS volumes in different regions that I need to write to in the same CI pipeline, so I have to mount all of them.
Multiple mounts to different regions do in fact work, but to make that happen I have to programmatically change the region in the config file before every mount command. It's really fragile and leaves the config file in an uncertain state if anything goes wrong.
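For reference, here's roughly what that juggling looks like as a small sketch (the config path and the [mount]/region key are assumptions from my setup; adjust for your efs-utils version):

```python
#!/usr/bin/env python3
"""The fragile workaround described above: temporarily pin the efs-utils
region, mount, then restore the original config. Not a supported interface;
a real per-mount region option would make this go away."""
import shutil
import subprocess
from configparser import ConfigParser

CONF = "/etc/amazon/efs/efs-utils.conf"


def mount_in_region(region, fs_spec, mount_point, options="tls"):
    backup = CONF + ".bak"
    shutil.copy2(CONF, backup)  # keep the original so a failure can't corrupt it
    config = ConfigParser(interpolation=None)  # read values raw, no % interpolation
    config.read(CONF)
    if not config.has_section("mount"):
        config.add_section("mount")
    config.set("mount", "region", region)  # temporarily pin the region
    try:
        with open(CONF, "w") as f:
            config.write(f)
        subprocess.run(
            ["mount", "-t", "efs", "-o", options, fs_spec, mount_point],
            check=True,
        )
    finally:
        shutil.move(backup, CONF)  # always restore the original config


if __name__ == "__main__":
    # Placeholder IDs -- substitute your own file system / access point.
    mount_in_region(
        "ca-central-1",
        "fs-12345678:/",
        "/mnt/ca-efs",
        options="tls,accesspoint=fsap-0123456789abcdef0",
    )
```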
Adding a mount option for the region seems like the logical choice for maximum flexibility.
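Purely to illustrate the ask (this option does not exist in efs-utils today; the name and IDs below are hypothetical), the idea would be something like:

```python
# Hypothetical only: what a per-mount region override could look like if
# efs-utils ever adds a "region" mount option. This does NOT work today.
import subprocess

subprocess.run(
    [
        "mount", "-t", "efs",
        "-o", "tls,accesspoint=fsap-0123456789abcdef0,region=ca-central-1",
        "fs-12345678:/", "/mnt/ca-efs",
    ],
    check=True,
)
```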
A lot of similar use cases are also covered in this other issue: