Missing pipeline status when pod completes too fast on k8s backend #3468

Closed
3 tasks done
eliasscosta opened this issue Mar 6, 2024 · 2 comments · Fixed by #3722

Labels
backend/kubernetes, bug (Something isn't working)

Comments

@eliasscosta
Contributor

Component

agent

Describe the bug

Sometimes the pod runs too fast and it's impossible for the woodpecker-agent to get the status and logs.

State:          Terminated
Reason:       Completed
Exit Code:    0
Started:      Wed, 06 Mar 2024 12:45:54 -0300
Finished:     Wed, 06 Mar 2024 12:45:56 -0300

When the pod does this, the pipeline/step keeps running forever until someone decides to cancel it.
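
For context, even a trivial step like the following is enough to hit this; the step name, image, and command here are just an illustration, not our real pipeline:

steps:
  fast-step:
    image: alpine
    commands:
      - echo "done"

The resulting pod starts and terminates within roughly two seconds, as in the state shown above.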

System Info

{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.3.0"}

Additional context

No response

Validations

  • Read the docs.
  • Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
  • Checked that the bug isn't fixed in the next version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
eliasscosta added the bug label on Mar 6, 2024
@eliasscosta
Contributor Author

After applying a workaround such as adding a sleep command, we realized that the issue is not that the pod terminates too quickly, but rather how the agent parses the logs. When we modified the output to be as plain text as possible, the step started to finish correctly.
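
To make the experiment concrete, the attempts looked roughly like this (step name, image, and commands are illustrative, not our actual pipeline):

steps:
  fast-step:
    image: alpine
    commands:
      - sleep 2               # first attempt: give the agent extra time (did not resolve it)
      - echo "plain output"   # keeping the step output as plain text as possible made the step finish correctly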

@eliasscosta
Contributor Author

We tested setting the agent to run a small number of workflows (2 workflows per agent) with limited CPU resources, and discovered that we were hitting the same issue described in #2253. After applying the fix for it, we no longer see this behavior.
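
For anyone trying to reproduce this, the agent setup for the test was along these lines; the manifest excerpt and values are illustrative, assuming the standard WOODPECKER_MAX_WORKFLOWS agent setting and ordinary Kubernetes resource limits:

# excerpt from the woodpecker-agent Deployment used for the test (values illustrative)
containers:
  - name: woodpecker-agent
    image: woodpeckerci/woodpecker-agent:v2.3.0
    env:
      - name: WOODPECKER_MAX_WORKFLOWS   # at most 2 parallel workflows per agent
        value: "2"
    resources:
      limits:
        cpu: 500m                        # deliberately constrained CPU
        memory: 256Mi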
