
Bail if we fail to cleanup failed instance #77

Merged 2 commits into cloudbase:main on Feb 7, 2023

Conversation

gabriel-samfira (Member)

If we fail to clean up a failed instance, we return before retrying to recreate it.

Fixes: #76

Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>

if we fail to cleanup failed instance, we return before retrying to
recreate it.

Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
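
The fix is easier to see as code. Below is a minimal sketch of the behavior described above; the Provider interface, retryFailedInstance function, and method names are hypothetical placeholders for illustration, not garm's actual API (the real change lives in runner/pool/pool.go):

```go
package pool

import (
	"context"
	"fmt"
)

// Provider is a hypothetical interface used for illustration only.
type Provider interface {
	DeleteInstance(ctx context.Context, name string) error
	CreateInstance(ctx context.Context, name string) error
}

// retryFailedInstance cleans up the failed instance and only recreates it
// once the cleanup has succeeded. If cleanup fails, we bail out instead of
// retrying the create, so we never end up with two provider instances
// sharing the same name.
func retryFailedInstance(ctx context.Context, p Provider, name string) error {
	if err := p.DeleteInstance(ctx, name); err != nil {
		// Bail here; creating a new instance while the old one may still
		// exist would risk a duplicate with the same name.
		return fmt.Errorf("failed to clean up instance %s: %w", name, err)
	}
	return p.CreateInstance(ctx, name)
}
```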
maigl (Contributor) commented Feb 6, 2023

A related problem could be that the whole 'create new runners' and 'delete old runners' process takes longer than the ticker interval. That could cause work to pile up, but I haven't checked the code to see whether this is actually a problem.

gabriel-samfira (Member, Author)

A related problem could be that the whole 'create new runners' and 'delete old runners' process takes longer than the ticker interval. That could cause work to pile up, but I haven't checked the code to see whether this is actually a problem.

In theory, it should be fine. When we create a new instance we transition it from pending_create to creating, and the create loop ignores anything that is not in pending_create. The same applies to deletion: we transition from pending_delete to deleting.
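
A rough sketch of that guard is shown below. The status names, Instance struct, and createPending helper are hypothetical and chosen only to illustrate the state transitions described above, not garm's actual types:

```go
package pool

// InstanceStatus values are hypothetical illustrations of the states
// mentioned above.
type InstanceStatus string

const (
	PendingCreate InstanceStatus = "pending_create"
	Creating      InstanceStatus = "creating"
	PendingDelete InstanceStatus = "pending_delete"
	Deleting      InstanceStatus = "deleting"
)

type Instance struct {
	Name   string
	Status InstanceStatus
}

// createPending sketches the create loop: anything not in pending_create is
// skipped, and the status flips to creating before the (potentially slow)
// provider call, so an overlapping tick will not pick up the same instance.
func createPending(instances []*Instance, create func(*Instance) error) {
	for _, inst := range instances {
		if inst.Status != PendingCreate {
			continue
		}
		inst.Status = Creating
		if err := create(inst); err != nil {
			// Error handling elided in this sketch.
			continue
		}
	}
}
```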

Review comment on runner/pool/pool.go (outdated, resolved)
Co-authored-by: Michael Kuhnt <maigl@users.noreply.github.com>
gabriel-samfira merged commit 226536b into cloudbase:main on Feb 7, 2023

Successfully merging this pull request may close these issues.

Multiple instances with same name on provider