Attempting to update a service (scale down) after a few instances are deleted results in "500 Internal Server Error" #1996
Comments
@sangeethah you should be allowed to update a service that is in "updating-active" state. I'm going to look at what's going on here.
@sangeethah could you update your validation test test_services.py/check_service_activate_delete_instance_scale to print out service.name? If the line below fails, the random uuid gets printed instead of a real service name:
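For illustration, here is a hypothetical sketch of the kind of logging change being suggested. The names `client`, `service`, and `update_service_scale` are placeholders, not the actual code from the validation test or rancher/validation-tests#118:

```python
import logging

logger = logging.getLogger("test_services")

def update_service_scale(client, service, new_scale):
    # Log the human-readable name and id before the update, so a later
    # failure identifies the service instead of printing a random uuid.
    logger.info("Updating service %s (id=%s) to scale %s",
                service.name, service.id, new_scale)
    return client.update(service, scale=new_scale)
```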
@alena1108, added logging of the service name and id before the service update is done - addressed in rancher/validation-tests#118
@sangeethah awesome, please update this bug with the info once it happens again.
Here is the error message from the latest log file for the failure:

Exiting with code [RESOURCE_BUSY] : RESOURCE_BUSY io.cattl

It looks like the process failed to schedule, so the error was returned to the API immediately. Going to look at the code and update the bug with findings.
Verified by deleting an instance followed by a scale down. Able to update the service while it is in "updating-active" state.
This issue is seen again when testing the latest build on master (Oct 6). The same error is seen when the service was scaled up after one of the instances was stopped:
The following exception is seen in the logs:

Caused by: io.cattle.platform.lock.exception.FailedToAcquireLockException: Failed to acquire lock [schedule/service.42277.CHANGE]
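The exception above suggests the API took a per-service scheduling lock non-blockingly and surfaced a 500 when the lock was already held (e.g. by the in-flight update). A minimal Python sketch of that fail-fast behavior, assuming a non-blocking acquire; the class names are illustrative and are not Cattle's actual lock implementation:

```python
import threading

class FailedToAcquireLockException(Exception):
    """Raised when a lock cannot be taken immediately (no waiting/retry)."""

class LockManager:
    def __init__(self):
        self._locks = {}

    def try_acquire(self, name):
        # Non-blocking acquire: fail fast instead of waiting for the
        # holder, which bubbles up to the API caller as a server error.
        lock = self._locks.setdefault(name, threading.Lock())
        if not lock.acquire(blocking=False):
            raise FailedToAcquireLockException(
                "Failed to acquire lock [%s]" % name)
        return lock
```

Under this model, a second update arriving while the first still holds `schedule/service.<id>.CHANGE` fails immediately rather than queuing.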
Server version - v0.59.0-rc1. Attempting to update a service (scale up) after a few instances are stopped results in "Internal Server Error".
Same issue as #2493, closing as a duplicate |
Server version - build from master - Sep 8
Create a service with scale 4.
Delete one of the instances.
Update the service to scale down. The API throws an "Internal Server Error" message.
I do not see any exception logged in the server logs, and I am not able to locate a service/container with the id being returned by the API.
Note - In my test scripts, I do not wait for the service to get to "active" state before the service update is attempted.
Should I be allowed to update the service while it is still in "updating-active" state? Even if we do not allow this, the API should throw an "action not allowed" error message in this case.
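The reproduction steps and the expected behavior above can be sketched as a test against a generic Cattle-style API client. Everything here is hypothetical: `client`, `ApiError`, and the method names are placeholders, not the real client library:

```python
class ApiError(Exception):
    """Placeholder for an HTTP error returned by the API client."""
    def __init__(self, status, message=""):
        super().__init__(message)
        self.status = status

def check_scale_down_after_instance_delete(client, service):
    # Service was created with scale 4; delete one instance, then
    # scale down WITHOUT waiting for the service to return to "active".
    instance = client.list_instances(service)[0]
    client.delete(instance)
    try:
        client.update(service, scale=3)
    except ApiError as e:
        # A 4xx "action not allowed" response would be acceptable here;
        # a 500 Internal Server Error is the bug being reported.
        assert e.status != 500, "server returned 500 on scale down"
```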