Volume driver support #236
Comments
@euank what is the delta to minimally support these changes? I would assume that the ecs-agent would recognize that the host volume doesn't need a
Looking into making some changes |
Docker also has additional error conditions associated with volume drivers. Search your souls for if this is MVP. |
We're also very interested in this support |
Any news regarding the support of volume drivers? |
+1 would love to see this. Getting more and more requests. |
+1 this would be great |
+1 this is holding up a lot of use cases and google compute already makes this available |
@cbbarclay any update on when ECS will support Docker volume drivers? Thanks! |
I recently evaluated deploying some new infrastructure on ECS but this prevented me from doing so. |
+100500 |
It's a new record! 😉 |
👍 |
+1 !! |
+1 |
Yeah! It would be useful to make it possible to use EFS shares via the plugin http://netshare.containx.io/ |
+1 |
+1 |
+1. An EFS/NFS volume driver is a critical requirement for many use cases. Could be http://netshare.containx.io/ (which is not yet production ready but quite nice) or a new one designed with EFS in mind |
+1 EFS/NFS |
+1 |
+1. I want to use netshare.containx.io too |
+1 |
+1 |
+1. This will enable ECS to seamlessly support stateful applications, which is a big gap in my opinion. |
+1 |
+1 |
+1 |
+1 |
+1 |
At least wrap the nfs volume driver so we can use it with EFS! |
@euank @samuelkarp @aaithal @richardpen This feature request was opened on Oct 26, 2015. Almost 2 years have passed and nothing concrete has come of it. IMHO, this is a small feature that could easily be developed by the AWS team. Please, could someone from AWS give us an ETA, or at least tell us whether this feature is on the roadmap for this month or this year? |
Guys, I have a workaround for this. I'm using Docker Cloudstor as a plugin for persistent storage. To make it work in Amazon ECS, I did the following steps:
# Install the Cloudstor plugin (EBS mode here; EFS support disabled)
docker plugin install \
  --alias cloudstor:aws \
  --grant-all-permissions \
  docker4x/cloudstor:17.06.0-ce-aws2 \
  CLOUD_PLATFORM=AWS \
  AWS_REGION=$INSTANCE_REGION \
  EFS_SUPPORTED=0 \
  DEBUG=1

# Cloudstor expects /dev/mqueue to exist on the host
mkdir /dev/mqueue

# Create an EBS-backed, relocatable 1 GiB volume
docker volume create -d "cloudstor:aws" --opt ebstype=gp2 --opt size=1 --opt backing=relocatable myvol
|
@galindro While we don't generally comment on our future roadmap (including ETAs), I can shed a bit more light on the concerns here. We've generally been thinking about volume drivers in terms of application persistent storage. There are a few different models where persistent storage can be used by applications:
- Shared storage, where each copy of your application sees the same files. One of the more common technologies for this is NFS, and the Amazon EFS service provides a hosted version of NFS. In terms of scheduling, it doesn't matter how many copies you have as long as they can all see the same data.
- Pre-populated, independent storage. This encompasses situations where each copy of your application wants to see the same data set at start, but does not need to share mutations. Common use cases involve things like read-only views or copy-on-write; Amazon EBS provides a hosted block-storage service with volumes created from snapshots. In terms of scheduling, it doesn't matter how many copies you have as long as they can all start from the same data.
- Persistent, restartable storage. This is a use case akin to upgrading a traditional stateful application in place, but with a container wrinkle added. Amazon EBS provides a hosted block-storage service where volumes can be attached to and detached from instances. In terms of scheduling, you'd have only one copy here, with its own storage. If you were to have multiple copies of the application, each would see separate data.
- Persistent, sharded storage. This is a use case akin to the previous one, except that multiple replicas might each see different shards. In terms of scheduling, we'd have to maintain a mapping of shard to application copy to ensure that storage doesn't get lost or go unused; complications would occur during deployments or during scale-out/scale-in.
Volume drivers could be paths to enabling all of these use cases. However, there are some challenges with volume drivers specifically:
- Many of the clustered volume drivers require all of your instances to be able to talk to each other and be configured together. We'd need to figure out how to handle error conditions where this isn't the case, so that you have enough information to debug it.
- The volume drivers don't indicate to us what their scheduling requirements are: are we free to start as many copies as we want (desired count in a service), are we restricted to only running one, do we need to handle a persistent identity that gets passed in to a given container configuration, etc.
|
I just need "blocker" type of storage in the first place... with a few extra options, maybe. I can get around all of it with "magic" hints in the definitions / the ECS UI. But for how long do we have to keep doing it that way... :) |
@samuelkarp I understand your concerns but I don't think they are great enough to prevent this feature from being considered for work.
This is true, however for the simplest (and most obvious?) use-case of using the EBS volume driver, this is not a concern.
They do: they pass back an error (I think; it's been two years since I looked at this), which then indicates to ECS that the container is unschedulable and requires intervention by an administrator. This is how Kubernetes handles the problem. The solution is probably for the user not to use a storage provider with inflexible scheduling requirements; again, EBS does not have this problem (probably... is the 26-volumes-per-instance limit still in place?).
I understand the desire not to expose users to features that might break, as a general concept. Mobile phones, cable boxes, Echo devices... these are all things which should not come with sharp edges. AWS, however, is a smart platform for smart people. Infrastructure is hard. I think AWS should give its users the benefit of the doubt and allow them enough rope to hang themselves. By that I mean: this feature has caveats for sure, but the benefits outweigh the negatives, because right now the alternative is to move your workload to a scheduler which does support storage services... like... GKE :( |
My apologies, I did not intend to imply that these concerns are preventing this feature from being considered. I just wanted to give a little more information as to the type of things we're thinking about.
An error like that is a very late-binding failure, occurring after a placement decision has taken place. Failures are definitely a signal that can be taken into account by a scheduler, but I was talking about information that can be exposed prior to a placement decision being made. |
In the meantime, I think that IT admins/devops could use my workaround, considering that almost all storage drivers for Docker expose their volumes in a way that they can be used with docker run -v myvolume:/foo -w /foo -i -t ubuntu bash. The only problem with this approach, IMHO, is that you need to create the volume first with docker volume create. I've tested with Docker Cloudstor, but I think it would work with REX-Ray too. @frimik I think that blocker is a little old solution. Try Docker Cloudstor. It is a better solution. |
@samuelkarp I want to point out another use-case that I'm not sure is covered by your bullets above. In our applications, data is persisted elsewhere (S3) and updated regularly by separate systems. Our ECS containers need to start from the current state of that data, perform their processing, and write the output elsewhere.
In this case the most important missing features are accounting for disk space during scheduling, and limiting disk usage to some reserved amount. Limiting can be accomplished via volume driver options. Aside from the accounting issues, all we'd need for limiting would be the ability to pass arbitrary options through to the volume driver. |
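As a sketch of how such per-volume limits are passed today outside of ECS (assuming the Cloudstor plugin shown earlier in this thread; the option names `size` and `ebstype` are Cloudstor-specific, and other drivers use their own keys):

```shell
# Create a size-limited (20 GiB) EBS-backed volume; the driver, not
# Docker itself, interprets the --opt key/value pairs.
docker volume create -d "cloudstor:aws" \
  --opt ebstype=gp2 \
  --opt size=20 \
  myvol-limited

# Mount it in a container; writes are confined to the 20 GiB volume.
docker run --rm -v myvol-limited:/data ubuntu df -h /data
```

Passing arbitrary `--opt` pairs through from the task definition is exactly the kind of plumbing being asked for here.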
+1 |
Volume drivers are a pretty critical feature of Docker - it seems weird not to allow us to use them? |
@samuelkarp while I understand there is some appetite for broader and more complex volume driver support, does it seem likely that ECS would allow the mounting of an EBS volume and/or an EFS data store as part of the Task Definition at some point in the near future? My guess is that addresses the vast majority of the use cases right? |
@samuelkarp I know Amazon's position is not to disclose the roadmap and whatnot, but can you please give us an update on this much-needed functionality? I saw the announcement for ENI-to-container (awsvpc), which solves the networking equivalent of the volume driver ask in this issue. The ability to associate an EBS volume would go a long way. Glad to see ECS feature development is active. |
+1 I wrongly assumed this feature would already be present in the task definition features, and was banking on being able to use it. It would be a great addition! |
👍 this really puts a damper on our use of AWS Batch (which uses ECS). |
👍 |
This really makes working with EFS and ECS difficult. The fact that overlay2 is not supported on NFSv4 (due to d_type) in Docker means you can't simply change the Docker graph location for the Docker daemon. I had hoped the alternative would be using a log driver, but this is a mess if I can't make unique volumes per task. Alternatively, if I could even specify a dynamic value in my volume bind (so each task had its own "writer" volume), it would make EFS and ECS much more usable. |
We have launched support for Docker volume drivers: https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-ecs-now-supports-docker-volume-and-volume-plugins/
This feature is available through the amzn-ami-2018.03.d-amazon-ecs-optimized AMI, or from agent v1.20.1. You can find the developer guide here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Please let us know if you have any questions. |
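For readers landing here later, a minimal sketch of what using this looks like, assuming the `dockerVolumeConfiguration` task-definition field from the developer guide; the family name, image, and paths below are illustrative, not from the announcement:

```shell
# Hypothetical example: register a task definition whose volume is
# created by a Docker volume driver (the built-in "local" driver here).
cat > taskdef.json <<'EOF'
{
  "family": "volume-example",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "ubuntu",
      "memory": 128,
      "command": ["sh", "-c", "ls /data"],
      "mountPoints": [
        { "sourceVolume": "myvol", "containerPath": "/data" }
      ]
    }
  ],
  "volumes": [
    {
      "name": "myvol",
      "dockerVolumeConfiguration": {
        "scope": "task",
        "driver": "local"
      }
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json
```

Swapping `"driver": "local"` for a plugin such as `rexray/ebs` (installed on the container instance) is how the EBS/EFS use cases discussed above are reached.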
Does this not work with Cloudformation yet? |
CFN support for this feature isn't available yet, but we are working with CFN team to enable it. |
@yunhee-l any chance you could provide a task def example or tutorial for using the Docker volume driver support with EFS or EBS? The links you provided are just marketing links, and the developer guide only states that it's available, with no details on how. |
@analogrithems - I spent some time working with it this evening and was able to get a persistent EBS volume attached to a task launched from ECS. I'm no expert at this point, but here are the high points:
|
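A rough sketch of the kind of setup the comment above describes, assuming the REX-Ray EBS volume plugin (the region value is illustrative, and the plugin settings are REX-Ray's, not ECS's):

```shell
# On each container instance: install the REX-Ray EBS plugin so the
# Docker daemon can provision and attach EBS volumes on demand.
docker plugin install rexray/ebs \
  REXRAY_PREEMPT=true \
  EBS_REGION=us-east-1 \
  --grant-all-permissions

# Verify the plugin is enabled before referencing it from a task
# definition as the volume's driver (e.g. "driver": "rexray/ebs").
docker plugin ls
```

The instance's IAM role also needs permission to create and attach EBS volumes for this to work.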
The VolumeDriver parameter is now part of the Docker Remote API 1.21. It would be great to get this included in the ECS task definition. This would allow me to access more storage options for my Docker containers, for example, accessing EBS volumes through Flocker or using the Ceph RDB driver (http://www.sebastien-han.fr/blog/2015/08/17/getting-started-with-the-docker-rbd-volume-plugin/).