
Actions checkout gets stuck forever randomly #550

Open
yatima1460 opened this issue Jul 5, 2021 · 3 comments

Comments

@yatima1460

I noticed a very strange problem related to actions/checkout@v2.

Randomly it gets stuck forever at "C:\Program Files\Git\cmd\git.exe" checkout --detach when using a self-hosted runner on Windows.

At first I thought it was caused by a slow internet connection or a slow PC, so I left it running.

It turns out that even after 6 hours it was still stuck, and it was eventually cancelled automatically by GitHub Actions.

It's pretty random and I can't find any other info about this problem.

Self-hosted runner OS:
Microsoft Windows [Version 10.0.19042.1052]

Git version on the self-hosted runner:
git version 2.32.0.windows.1

Docker version on the self-hosted runner (shouldn't be relevant, but included anyway):
Docker version 20.10.7, build f0df350

ESET antivirus version:
8.0.2028.0

Other notes:
The self-hosted machine has multiple GitHub runners installed in different directories.

My only theory is that something locks some files inside the repo and git waits forever until they are unlocked.

It doesn't seem to happen on an AWS EC2 instance, only on a local office PC.

So even if the cause is the self-hosted runner, the installed antivirus or something else, I think it would be wise to add an internal timeout to actions/checkout 🤔, perhaps with an option to skip waiting on locked files instead of hanging?

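Until checkout gains an internal timeout, a workflow-level workaround is to cap the checkout step (and optionally the whole job) with timeout-minutes, so a hung git checkout --detach fails after a few minutes instead of consuming the full 6-hour limit. A minimal sketch, assuming a self-hosted Windows runner; the runner labels and timeout values are illustrative:

```yaml
# Sketch of a workflow-level workaround: cap the checkout step so a hung
# "git checkout --detach" fails fast instead of running for 6 hours.
# Runner labels and timeout values here are illustrative assumptions.
name: build

on: [push]

jobs:
  build:
    runs-on: [self-hosted, windows]
    timeout-minutes: 60           # second line of defence for the whole job
    steps:
      - uses: actions/checkout@v2
        timeout-minutes: 10       # fail fast if checkout hangs on a locked file
      - name: Build
        run: echo "build steps go here"
```

This doesn't fix the underlying hang, but it surfaces the failure quickly so the job can be retried instead of sitting stuck for hours.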

@abeerthakkar

We are facing the same problem. We have Linux runners.

@yatima1460
Author

yatima1460 commented Aug 12, 2021

> We are facing the same problem. We have Linux runners.

Well, then we can exclude ESET and the operating system.
It could be an internal logic error or poor tolerance of bad network conditions.

In fact, the office network is not the greatest, while the AWS one is top notch.

@jalmena

jalmena commented Jan 27, 2025

It's 2025 and this still happens randomly.
We have Linux runners too, running on a large new server.
The 6-hour timeout for this operation could be slightly adjusted.
