
Feature Request: Support for Docker Health Checks when bumping a task definition revision #534

Closed
CpuID opened this Issue Sep 17, 2016 · 58 comments

CpuID commented Sep 17, 2016

Currently, ECS uses ELB/ALB health checks to verify when a task is ready to accept traffic, and also when it is safe to terminate existing tasks as part of a rolling replacement/upgrade when bumping a task definition revision (in line with the service's deployment configuration parameters).

Would it be possible, when an ELB is not in use for an ECS service, to also look at the Docker health check status? There are some scenarios where you may not want an ELB in use, but still need to gracefully roll-replace the containers as part of an upgrade.

Details on the feature introduced in Docker 1.12.x:

https://docs.docker.com/engine/reference/builder/#/healthcheck
https://docs.docker.com/engine/reference/run/#/healthcheck
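
For reference, a Dockerfile HEALTHCHECK looks roughly like this (the command, port and timings here are only an illustrative sketch, not from any particular image):

  # runs inside the container; a non-zero exit marks the container unhealthy
  HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1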

CpuID commented Oct 19, 2016

@samuelkarp any feedback re this request?

samuelkarp (Member) commented Oct 20, 2016

@CpuID Thanks for the feature request! I'm interested in as much detail as you (or anyone else also interested in this) can provide, as that helps us both prioritize and make sure we're actually addressing the use-cases appropriately.

Some thoughts on what you've provided already:

  • Docker does not implement a health check policy yet (it was rejected in their initial discussions), but defining policies (what to do when the health check fails) is required for the behavior that you're looking for. Are the types of settings that ELB/ALB health checks already support (timeout, interval, healthy threshold, unhealthy threshold) sufficient, or are you looking for something different?
  • You're asking about services here, but ECS also supports one-off tasks. Would you also want a health check to be performed/enforced for those tasks?
  • For a service that does have a load balancer, would you want both the load balancer health check and the Docker health check to be enforced? If you want both and one health check fails but the other succeeds, how important is it to know which health check failed?
CpuID commented Oct 20, 2016

@samuelkarp thanks for the response.

Replies inline below:

Docker does not implement a health check policy yet (it was rejected in their initial discussions), but defining policies (what to do when the health check fails) is required for the behavior that you're looking for. Are the types of settings that ELB/ALB health checks already support (timeout, interval, healthy threshold, unhealthy threshold) sufficient, or are you looking for something different?

I think the thresholds that currently exist for an ALB/ELB are perfectly sufficient for most requirements for now.

You're asking about services here, but ECS also supports one-off tasks. Would you also want a health check to be performed/enforced for those tasks?

I think initially supporting this functionality on ECS services only would be a fair judgement call. When you run an ECS one-off task, there is no service supervising the desired count (to replace an unhealthy instance), so just allowing ECS one-off tasks to die off on their own, as they do currently, feels right. If you did do health checking on one-off tasks, it would come down to what action to take when one is unhealthy (do you respawn the task? at that point you are entering "service with a desired count of 1" territory).

With ECS services, when a task is detected as unhealthy it would be terminated, then the ECS scheduler would realise the desired count is not met, and attempt to replace it with a new task. That feels like the real win here.

For a service that does have a load balancer, would you want both the load balancer health check and the Docker health check to be enforced? If you want both and one health check fails but the other succeeds, how important is it to know which health check failed?

Good question... a few valid approaches below (with varying engineering complexity):

Option A

  • If an ECS service has an ELB attached, Docker health checks are ignored, and ELB checks are the source of truth
  • If an ECS service does not have an ELB attached, and Docker health checks exist for a container, these are used instead
  • Simpler, probably easier to understand in documentation from a user's perspective

Option B

  • If an ECS service does not have an ELB attached, and Docker health checks exist for a container, these are used
  • If an ECS service has an ELB attached, and Docker health checks do not exist for any containers, the ELB health checks are used
  • If an ECS service has an ELB attached, and Docker health checks exist for a container, either the ELB health check or the Docker health check failing can cause the container to be terminated
  • More complex, covers more scenarios, at the expense of duplicate health checks
  • Could add debugging confusion for users when implementing

Another consideration is how many container definitions in an ECS task definition need to have health checks attached before a given method is used. As there is that extra abstraction layer between ECS tasks and the underlying containers, that could get interesting, especially since HEALTHCHECK definitions in a Dockerfile effectively run a CMD, as opposed to hitting something on a TCP port (like ELB checks do now, either TCP/HTTP/HTTPS). One option would be to say that if at least one of the essential container definitions has a HEALTHCHECK attribute attached, the entire ECS task definition is considered covered by Docker health checks.

CpuID commented Oct 20, 2016

Another note, Docker does have the ability to handle check intervals, retries and timeouts natively now:

https://docs.docker.com/engine/reference/run/#/healthcheck

  --health-cmd            Command to run to check health
  --health-interval       Time between running the check
  --health-retries        Consecutive failures needed to report unhealthy
  --health-timeout        Maximum time to allow one check to run
  --no-healthcheck        Disable any container-specified HEALTHCHECK

There are equivalents of these in the Dockerfile HEALTHCHECK definition, at least for interval/retries/timeout. I think using these is probably the safest approach; --health-retries could replace both the healthy and unhealthy thresholds that exist separately on an ELB/ALB.
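
For illustration, an equivalent docker run invocation using those flags might look like this (the image name and health command are placeholders):

  # placeholder image; the --health-* values mirror the Dockerfile HEALTHCHECK options
  docker run -d \
    --health-cmd "curl -f http://localhost:8080/health || exit 1" \
    --health-interval 30s \
    --health-timeout 5s \
    --health-retries 3 \
    my-service-image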

jonleighton commented Nov 8, 2016

I'd like to see support for this, because I'd like to be able to monitor the memory usage of my container. If it climbs above a certain threshold (e.g. due to a memory leak), I'd like the system to gracefully restart my container. The Docker health check seems like a good way to check this.

IMO the Docker health check should take precedence over an LB health check, but they should both be used. The LB health check is useful for checking external indicators (the container accepts HTTP requests on port 80), but in the scenario where a memory leak has caused memory usage to jump, the container may still be working fine (for now...)

I'd say that if either health check fails, the container should be considered unhealthy and gracefully restarted.

bobziuchkovski commented Feb 7, 2017

This functionality would potentially be useful for us as well. We're using ECS to power a rails app. We have a service that runs an nginx container and a rails app container. The nginx container's port is registered on the load balancer and this container proxies requests to the rails container.

Unfortunately, the rails app is very slow to start. The ECS service tries to register the containers on the load balancer almost immediately, and while the nginx container is ready to service requests, the rails container is not. Hence, we start failing load balancer health checks immediately while the rails app is still starting. If enough load balancer checks fail, the service is deregistered, the task is stopped and relaunched again, lather, rinse, repeat.

We can somewhat mask the problem by increasing the load balancer health check interval. This helps give the rails container time to initialize before being marked unhealthy. However, this also increases the amount of time it takes the load balancer to detect and deregister genuinely unhealthy containers, which is undesirable.

If this feature were implemented in such a way that the docker health checks were the primary signal of health, and ECS only registered containers on the load balancer that pass the local docker health checks, then we'd be able to deterministically signal when our rails container is ready to service requests. We'd then be able to keep the load balancer health check interval and thresholds tight so that end-users are not exposed to failing containers for a long period of time.

That would be our potential use case for this feature, though we'd be happy with any means to deterministically signal when to register our containers with the load balancer.

gunzy83 commented Feb 20, 2017

This is a bit of a ditto but I want to share our use case.

Our backend services running on the JVM are often not yet healthy by the time ECS has done a full replacement, because we are not using ALBs for these backend services and ECS does not know they are unhealthy. We are using Consul for service discovery and client-side load balancing at each service, including health checks. Using ALBs for every backend service would overcomplicate things for us.

This feature would help immensely as it would allow us to perform a proper in place replacement of a service and not have to spin up a new service, check Consul for healthy services then remove the old service.

CpuID commented Feb 20, 2017

@samuelkarp bump: just wanted to check whether you have any further feedback on implementing this. Is it something on your roadmap based on what you know so far? Do you need any further info from others? There seems to be a bit of interest from the community in getting this functionality...

samuelkarp (Member) commented Feb 20, 2017

Thank you everyone who has provided feedback so far! The descriptions here are exactly the kind of thing that we look at when trying to figure out what the right UX for a feature would be.

Some of the things I'm understanding from the use-cases here:

  • Health check failures on essential containers should likely cause the task to be torn down.
  • Health status needs to be reported to the ECS backend:
    • From @bobziuchkovski's description, it would be nice if a Docker health check passed before registering the task into a load balancer. However, it also sounds like this use-case might be addressed by adding grace-period support to ECS's existing integration with ELB health checks.
    • From @gunzy83's description, old tasks should not be killed before new tasks are healthy during a service update.

From my own investigation:

  • Docker health checks have no grace period; they'll start executing one interval after the container starts.
  • Docker health checks have no healthy threshold or unhealthy threshold, so a single pass or failure will change the health status.
  • docker inspect will report the 5 most recent statuses and a FailingStreak indicating the number of consecutive failures (see the example after this list).
  • docker events does not provide any indication of a failing health check, but does report an exec_start event when the health check command starts.
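
For anyone wanting to see what the agent would have to read, the health state sits under .State.Health in the inspect output; a rough example (the container name is a placeholder):

  # prints Status, FailingStreak and the most recent check results as JSON
  docker inspect --format '{{json .State.Health}}' my-container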

I'd appreciate more feedback on the following areas:

  • The behavior for health check failures on non-essential containers is undefined; would we want to kill those containers, or let them continue to run since the rest of the containers in the task are unaffected?
  • I haven't heard whether configuration via the HEALTHCHECK command in a Dockerfile is sufficient, or whether this should be part of the task definition or service. Feedback on this would be greatly appreciated.
    • Should all containers with a HEALTHCHECK defined in the Dockerfile have this behavior? Should it be enabled in the task definition? Note that if you inherit (use FROM) from an image that has a HEALTHCHECK defined in its Dockerfile, your image will inherit that setting as well.
  • Because the Docker health check settings are missing thresholds and grace periods, I'd appreciate feedback on whether these settings are important for Docker health checks.

In terms of engineering work, it sounds like at a minimum the following things would need to be done:

  • The ECS agent will need to determine whether health checks are enabled for a given container and start collecting health check data. Since health check data is only exposed through docker inspect, this will increase the number and frequency of inspect calls that the agent will need to make; we'll need to balance this increase against whatever effect the increased load will have on the Docker daemon. Since there is no event provided for a health check, we'll either need to inspect on a predetermined interval or find the defined timeout (both in the inspect output) and trigger an inspect after seeing an exec_start event for that container (see the sketch after this list).
  • The ECS backend will need to accept a health check status and the ECS agent will need to report a health check status.
  • Either the ECS agent or the ECS backend will need to decide to stop a task where an essential health check is failing.
  • The ECS backend will need to consider the health status of newly-started tasks during a service update prior to stopping the old tasks.
  • The ECS backend will need to consider the health status of newly-started tasks before registering those tasks into a load balancer.
  • Tasks stopped due to failing health checks will need to have a reason describing the failure.
  • (Possibly) The task definition would need to be updated to enable/disable health check, and possibly have fields for the health check parameters.
  • (Possibly) Services would need to be updated to enable/disable health check.
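
As a very rough sketch of the event-triggered variant mentioned in the first bullet above (the container name is a placeholder, and a real implementation would use the Docker API rather than the CLI):

  # whenever Docker reports an exec_start (i.e. the health check command ran),
  # re-read the container's health status instead of polling on a fixed interval
  docker events --filter 'container=my-container' --format '{{.Action}}' \
    | grep --line-buffered '^exec_start' \
    | while read -r _; do
        docker inspect --format '{{.State.Health.Status}}' my-container
      done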

Unfortunately I don't have anything more to share at this time; as a general rule, we don't comment on our future plans. However, we will keep this issue updated as more information comes available.

CpuID commented Mar 1, 2017

Health check failures for non-essential containers is undefined; would we want to kill those containers or let them continue to run as the rest of the containers in the task are unaffected?

  • My thoughts are that if they are marked non-essential, they should probably be left to run. Essential containers, on the other hand, should force the entire task to be marked as bad or similar.

I haven't heard whether configuration via the HEALTHCHECK command in a Dockerfile is sufficient, or whether this should be part of the task definition or service. Feedback on this would be greatly appreciated.

I think just the HEALTHCHECK attribute in a Dockerfile would be sufficient. I guess one option would be a toggle/feature flag in the ECS task definition to say "honor Docker Health Check" which defaults to true. And if there are any edge cases to deal with, these could be feature flagged in the task definition/s also.

Should all containers with a HEALTHCHECK defined in the Dockerfile have this behavior? Should it be enabled in the task definition? Note that if you inherit (use FROM) from an image that has a HEALTHCHECK defined in its Dockerfile, your image will inherit that setting as well.

See above: feature flagged, and defaulting to on, feels like the right answer to me :) Allowing you to disable the behaviour covers the scenario of an upstream FROM image introducing unintended consequences.

Because the Docker health check settings are missing thresholds and grace periods, I'd appreciate feedback on whether these settings are important for Docker health checks.

Having a grace period might be useful for initial startup. The only real option there would be to let Docker run its checks natively, but have the ECS agent ignore the results for the duration of a grace period after container start, I suspect. Once the startup grace period has passed, the ECS agent would act on the results of the health check.

The retries logic feels sufficient in terms of thresholds, open to further feedback from others if that needs to be more detailed though.

bobziuchkovski commented Mar 1, 2017

At least for our use case, adding grace-period support to ECS's existing integration with ELB/ALB health checks would definitely work.

Failing that, the docker health checks would also work for us as long as containers aren't registered on the ELB/ALB until the docker health check succeeds. I agree with @CpuID that the retries logic would cover our use case as well in terms of thresholds/etc.

samuelkarp (Member) commented Mar 1, 2017

I guess one option would be a toggle/feature flag in the ECS task definition to say "honor Docker Health Check" which defaults to true.

The risk with defaulting to true here is that it would represent a behavioral change for anyone whose images have HEALTHCHECK defined today. We'd likely need to make this an opt-in change for that reason.

The retries logic feels sufficient in terms of thresholds, open to further feedback from others if that needs to be more detailed though.

I missed this, thanks! It looks like retries covers an unhealthy threshold, but not a healthy threshold; it'd be good to understand if that gap is meaningful to anyone.

bobziuchkovski commented Mar 1, 2017

At least for my team's use case, the healthy threshold is not very meaningful. The types of health checks we'd be using at a container level wouldn't flap. Our ALB/ELB checks, on the other hand, are where we consider other external service availability, and since those could possibly flap, the healthy threshold is more useful for us. So in short, retries would be sufficient for us, and even there we'd effectively be using it as an initial startup grace period instead. So maybe I should say a grace period is most meaningful for our use case, but we could achieve something similar with retries.

hrzbrg commented Mar 29, 2017

We are basically running the same setup as @gunzy83 and would also benefit from this feature.
My question is whether any of you have workarounds for the time being. We have Java services that take >60s until they are up and running. ECS reports them as steady as soon as the container runs.
We integrated https://github.com/knatsakis/tc-init-health-check-listener to get a clean Tomcat shutdown if the application doesn't start.

CpuID commented Apr 17, 2017

@samuelkarp responses inline below:

The risk with defaulting to true here is that it would represent a behavioral change for anyone whose images have HEALTHCHECK defined today. We'd likely need to make this an opt-in change for that reason.

That's fair enough; avoiding breakage for existing tasks is a good reason to make it an opt-in change.

I missed this, thanks! It looks like retries covers an unhealthy threshold, but not a healthy threshold; it'd be good to understand if that gap is meaningful to anyone.

Non-issue here, I think: waiting before marking an instance unhealthy (to avoid excessive flapping) feels more important than waiting an extended period to mark it healthy. Chances are the only time this would have an impact is if a service is unavailable more than it's available, and as such would get a healthy check through but then fail multiple checks thereafter, potentially causing degradation (depending on the timeouts).

CpuID commented Apr 17, 2017

So in short, retries would be sufficient for us and even there we'd effectively be using it as an initial startup grace period instead. So maybe I should say a grace period is most meaningful for our use case, but we could achieve something similar with retries.

I would agree having a startup grace period would be advantageous, either separately or in addition to startup health checks. I have a use case upcoming which may benefit from startup grace periods...

orby commented May 10, 2017

+1 for the startup grace period. Though we could also use an initial health check separate from the regular health check (a bit more confusing than the grace period).

I'm going to solve this for now by tweaking the health check values before a deploy, and putting them back after the deploy is done.
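
For anyone wanting to script that workaround, something along these lines should work against an ALB target group (the ARN variable and the values are placeholders):

  # loosen the health check before the deploy...
  aws elbv2 modify-target-group --target-group-arn "$TG_ARN" \
    --health-check-interval-seconds 60 --unhealthy-threshold-count 10
  # ...deploy the new task definition revision, then tighten it again
  aws elbv2 modify-target-group --target-group-arn "$TG_ARN" \
    --health-check-interval-seconds 15 --unhealthy-threshold-count 2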

CpuID commented May 15, 2017

@samuelkarp bump: any further feedback on the above? Hoping to keep the conversation moving so we get to a point where you have a complete enough spec to add it to your implementation roadmap, or someone else can tackle it and submit a PR :)

panga commented May 16, 2017

Docker 17.05 has already implemented health check grace periods:
moby/moby#28938
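
For reference, that shipped as a --start-period option on HEALTHCHECK (and a --health-start-period flag on docker run); the values below are illustrative:

  # failures during the start period don't count towards the retries threshold
  HEALTHCHECK --start-period=60s --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1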

d10i commented May 21, 2017

Our use case is ECS services using gRPC, which runs on HTTP/2 and so can't be attached to an ELB/ALB. I feel this functionality is essential and very important during deployments in a no-LB setup, to ensure that the new version is actually running and serving responses before turning the old version off. It's also generally important in order to maintain a healthy system, by killing unhealthy tasks that would otherwise keep running forever despite not serving any responses.

samuelkarp (Member) commented May 22, 2017

Our use case is ECS services using gRPC, which runs on HTTP/2 and so can't be attached to an ELB/ALB.

@dario-simonetti Since you specifically mentioned a lack of HTTP/2 support as being problematic, I think it would be useful to note that Application Load Balancers (ALB) do support HTTP/2. You can see the feature list and comparison between Application Load Balancers and Classic Load Balancers here.

d10i commented May 22, 2017

@samuelkarp they do, but they forward the request to the origin after converting it to HTTP/1.1:

You can send up to 128 requests in parallel using one HTTP/2 connection. The load balancer converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group using the round robin routing algorithm

(from http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html)

damiencarol commented Nov 3, 2017

We have the same problem here.

ozonni commented Nov 6, 2017

We also have the same problem. This is a really critical one; I don't understand why it's still not done after a year.

sunomie commented Nov 21, 2017

Really do need a grace period configuration, please. As a workaround I'm having to slow down the interval for our health checks and increase the unhealthy count so that the instances do not fail during startup... which is not ideal. What's even worse, however, is that if I tune this wrong and the instances do fail during startup, they keep failing during startup over and over again, and the stopped containers do not get removed, so the disk fills up and the whole instance breaks.

coldsoul commented Nov 22, 2017

+1 for this feature. We have Java apps which, on startup, bootstrap their caches from a RabbitMQ server, and it takes a while until the container is actually ready to serve traffic. A grace period will definitely help with this.

SaikiranDaripelli commented Nov 30, 2017

This issue has been open for a year, and an initial grace period for health checks is critical for us. Is there any other tracking issue for the initial grace period alone? Is a fix even planned, or should we look for alternatives if it's not even on the roadmap?
This is absolutely critical for Java-based services whose startup time is high.

ozonni commented Nov 30, 2017

Looks like it might never be resolved, now that AWS has announced Kubernetes support.
https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/

nmeyerhans (Contributor) commented Nov 30, 2017

@ozonni To set the record straight, EKS is by no means intended to replace ECS. In fact, the AWS Fargate service, also announced at re:Invent, is essentially a managed ECS. ECS will continue to see investment, and we do have this item on our backlog. Unfortunately, it's Amazon's policy to avoid discussing roadmap details publicly, so I can't provide a date for when we might implement it.

richardpen (Contributor) commented Dec 12, 2017

Thank you all for commenting on the details, we are actively working on this feature request in #1141.

ernoaapa commented Dec 18, 2017

Oh wow! 😱
So sad to find out that there's no way to define an initial grace period before a container is ready (aka readiness in Kubernetes). I came from the Kubernetes world to a customer who is using ECS and was shocked that this is not available. It's really common to run migrations, warm up caches, etc. at boot, and playing with long health checks is not an option (it's an ugly hack, please stop suggesting that!), and it also looks like #1141 doesn't cover this.
Sad day... saaad saad day for me 😢

samuelkarp (Member) commented Dec 28, 2017

It is now possible to set a health check grace period when using ELB health checks. Please see the announcement and documentation for further details.
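
For anyone looking for the syntax, the grace period is specified on the service and applies when a load balancer is configured; a sketch with placeholder names and values:

  # load balancer health check failures are ignored for the first 120 seconds
  aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task:1 \
    --desired-count 2 \
    --load-balancers "targetGroupArn=$TG_ARN,containerName=web,containerPort=8080" \
    --health-check-grace-period-seconds 120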

I'm keeping this issue open to track Docker health checks, which are currently in progress in #1141.

matelang commented Jan 12, 2018

Having Load Balancer health checks is far from being a good enough solution. It's just a band-aid.

It only partially solves the issue, in cases where you have a monolithic deployment or only use server-side load balancing.
We have microservices which are using service discovery and client side load balancing. This means that most of our containers do not register themselves into load balancers.
In fact only our API gateways are directly receiving traffic from ELBs.

How are we supposed to update our services without downtime if ECS itself does not check whether a process in a container has actually started, connected to downstream resources, and is fit to receive traffic?

I think ECS without this feature is good for nothing. Schedulers are supposed to ease deployments and make operating containers safe and easy. ECS in its current state only adds uncertainty to the ops mix (unresponsive/disconnecting agent, zombie tasks, no support for swap, no support for Docker Health API).

We are seriously considering migrating to something else for the lack of this feature. 😠

richardpen referenced this issue Jan 15, 2018: Enable container health check #1141 (Merged, 8 of 8 tasks complete)

aaithal added this to the 1.17.0 milestone Jan 17, 2018

aaithal removed this from the 1.17.0 milestone Jan 26, 2018

CpuID commented Feb 13, 2018

@richardpen now that #1141 is merged and released as of 1.17.0, can you confirm if the public documentation has been updated to reflect how to use the feature? Does this require any special configuration?

Thanks again for knocking this feature out by the way :) Much appreciated.

adnxn (Contributor) commented Feb 13, 2018

@richardpen now that #1141 is merged and released as of 1.17.0, can you confirm if the public documentation has been updated to reflect how to use the feature? Does this require any special configuration?

@CpuID, thanks for checking in on this feature. So far the ECS agent side changes are out with the 1.17.0 release, but we are still waiting on this feature to be supported on the ECS service side.

matelang commented Feb 15, 2018

@adnxn do we know a timeline on when this feature will be fully supported?

adnxn (Contributor) commented Feb 16, 2018

@matelang, sorry we don't have a publicly available timeline for this.

CpuID commented Feb 16, 2018

thanks for checking in on this feature. So far the ECS agent side changes are out with the 1.17.0 release, but we are still waiting on this feature to be supported on the ECS service side.

@adnxn ah ok thx, hopefully soon :)

adnxn (Contributor) commented Mar 9, 2018

@CpuID, We've added task def support for the health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the Create a container section of the Docker Remote API and the HEALTHCHECK parameter of docker run.
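
For anyone wiring this up, the container-level healthCheck block in a task definition looks roughly like this (image, endpoint and timings are placeholders):

  {
    "name": "web",
    "image": "my-service-image",
    "essential": true,
    "healthCheck": {
      "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
      "interval": 30,
      "timeout": 5,
      "retries": 3,
      "startPeriod": 60
    }
  }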

IngmarStein commented Mar 9, 2018

@adnxn does CloudFormation already support the HealthCheck property for container definitions? The documentation at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html has not been updated, yet.

hwatts commented Mar 9, 2018

I've had a look at this today and it doesn't look like ECS observes the health status during deployments. Is this by design?

For example: create a service with one healthy container and perform a deployment (min 100%, max 200%) that's broken and goes UNHEALTHY. The healthy container (old version) is stopped, the deployment completes, and the unhealthy container (new version) remains and is then terminated over and over because it's unhealthy.

dmazurek commented Mar 9, 2018

Furthermore (and I don't mean to go too far off topic), ECS doesn't even observe DRAINING status. It regularly shuts down active instances and leaves draining ones up when I resize clusters.

richardpen (Contributor) commented Mar 15, 2018

@hwatts Thanks for bringing this to us. We are currently working on fixing this and will let you know when there is an update.

richardpen (Contributor) commented Mar 15, 2018

Since the container health check feature has been released, I'm closing this feature request. For other related issues, I have created #1298 and #1297 for tracking purposes. Feel free to create a new issue in the future if something isn't tracked anywhere, thanks.

richardpen closed this Mar 15, 2018

lnkisi commented Mar 21, 2018

+1
