Nomad version
Nomad 0.6.3
Consul v0.8.1
Operating system and Environment details
Running a local development environment for our internal PaaS:
virtual machine: BusyBox v1.24.2 for docker-machine
docker-compose (1.7.1)
Nomad is running as a container with an alpine:latest base image.
Issue
Health status not updating even though service status in Consul is critical.
The container that the job brings up exposes an endpoint for changing its HTTP response code.
The initial response status is 200.
I change the response from 200 to 500 via a POST to that endpoint (sketched below).
I would then expect the health status to change to unhealthy.
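For illustration only, the flip is a plain POST; the path and port below are hypothetical placeholders, since the real endpoint is specific to our test container:

# Hypothetical endpoint: tells the test container to start answering with 500.
curl -X POST http://<container-ip>:<port>/set-status/500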
We are looking to add a feature where, when scaling down, unhealthy allocations get replaced first instead of healthy ones.
Reproduction steps
I run the job defined in the job file section below:
nomad run example.nomad
Check the status to ensure it is healthy:
/ # nomad status example
ID            = example
Name          = example
Submit Date   = 09/22/17 15:07:29 UTC
Type          = service
Priority      = 50
Datacenters   = dev
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         1        0       1         0

Latest Deployment
ID          = 90a455e5
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       1        1       1        0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created At
9d134111  c6bee743  cache       2        run      running  09/22/17 15:07:29 UTC
Check the service status in Consul; it is passing.
Change the response status to 500 via a POST to the endpoint.
Check Consul again to confirm the service status is now critical (for example, via the health API shown below).
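To check from the command line rather than the Consul UI, Consul's health API lists the check state per service. The address is Consul's default, and the example-cache service name is a placeholder (see the job file sketch below); substitute your actual values:

# Status should read "passing" before the flip and "critical" after it.
curl -s http://127.0.0.1:8500/v1/health/checks/example-cache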
Now that the status is critical, I would expect the deployment to be unhealthy:
/ # nomad status example
ID            = example
Name          = example
Submit Date   = 09/22/17 15:07:29 UTC
Type          = service
Priority      = 50
Datacenters   = dev
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         1        0       1         0

Latest Deployment
ID          = 90a455e5
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       1        1       1        0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created At
9d134111  c6bee743  cache       2        run      running  09/22/17 15:07:29 UTC

Check the allocation logs.

Job file
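The original job file was not captured here, so the block below is a reconstruction, not the job we actually ran: a minimal Nomad 0.6-style service job with a Docker task registering a Consul HTTP check. The image name, service name, port, and check path are all assumptions.

# Hypothetical reconstruction of example.nomad; image, ports, and paths are assumptions.
job "example" {
  datacenters = ["dev"]
  type        = "service"

  # An update stanza is needed for Nomad 0.6 to create a deployment.
  update {
    max_parallel     = 1
    min_healthy_time = "10s"
    healthy_deadline = "3m"
  }

  group "cache" {
    count = 1

    task "web" {
      driver = "docker"

      config {
        image = "internal/status-toggle:latest" # assumption: our test image
        port_map {
          http = 8080
        }
      }

      resources {
        cpu    = 100
        memory = 64
        network {
          mbits = 1
          port "http" {}
        }
      }

      # Registers the service in Consul with an HTTP check; this is the
      # check that turns critical once the endpoint starts returning 500.
      service {
        name = "example-cache"
        port = "http"

        check {
          type     = "http"
          path     = "/" # assumption: the path whose status code the POST flips
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}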
@Btlyons1 Hey, the deployment object tracks the initial health of the newly placed allocations and is only valid during a rolling update or canary process. The deployment has entered a terminal status and is no longer being tracked. This is the desired behavior, since the deployment is used to drive a rolling update, not to track the long-term health of the allocations.
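A quick way to see that the deployment itself is terminal is the deployment subcommand added in 0.6, using the deployment ID from the output above:

# Shows the deployment's recorded (initial) health; once Status is
# "successful" the object is terminal and no longer updated.
nomad deployment status 90a455e5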
Btlyons1 changed the title to "[Question] Health Check status not updating after service status turns critical" on Sep 25, 2017.