
fatal: sha1 file '<stdout>' write error: Broken Pipe #2428

Closed
canyou2014 opened this issue Jul 19, 2017 · 12 comments

Comments

@canyou2014

git push via SSH and returned this error:

$ git push origin master    
Git LFS: (8 of 8 files) 572.02MB/572.02MB
Write failed: Broken pipe
send-pack: protocol error: bad band #69
Counting objects: 15, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (15/15), done.
fatal: sha1 file '<stdout>' write error: Broken Pipe 
@technoweenie
Contributor

Can you run with GIT_TRACE=1? That looks more like a Git issue, since the Git LFS upload succeeded. It could also be a local SSH timeout issue, as LFS runs a short SSH command before the upload.
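For reference, GIT_TRACE=1 can be set on any git invocation; the command for the failing case would be `GIT_TRACE=1 git push origin master`. A quick local demonstration of what it emits (the temp repo and log path here are just for illustration):

```shell
# GIT_TRACE=1 makes git print setup and subcommand details ("trace: ...")
# to stderr; redirect stderr to a file to capture it for a bug report.
repo=$(mktemp -d)
git init -q "$repo"
GIT_TRACE=1 git -C "$repo" status >/dev/null 2>"$repo/trace.log"
grep 'trace:' "$repo/trace.log" | head -n 3
```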

@canyou2014
Author

@technoweenie The total upload time of the LFS objects was 22 minutes. Are there SSH connections open during the LFS upload, and could that be the cause of an SSH timeout? I ran with GIT_TRACE=1 and the error was the same; sorry, I did not record the output. I then ran git push --no-verify, and the push completed successfully with everything pushed.

@technoweenie
Contributor

Are there SSH connections open during the LFS upload, and could that be the cause of an SSH timeout?

Git LFS calls ssh git@your-host.com git-lfs-authenticate ... to get temporary auth for the LFS API calls. The ssh command runs and exits cleanly, so I think it's up to your local ssh config. If you use an HTTPS git remote, or configure remote.{name}.lfsurl, you won't have this issue.

For example, if you're using GitHub, you could set it up like this:

$ git remote add origin git@github.com:user/repo
$ git config remote.origin.lfsurl "https://github.com/user/repo.git/info/lfs"

This way Git will use SSH, while LFS uses HTTPS. It's a bit convoluted, but it's an option.

@canyou2014
Author

I did a test: I uninstalled LFS and added a 20-minute sleep to the pre-push hook, which resulted in an SSH timeout. As follows:

$ git push origin master 
1200/1200 > .......................................................... # sleep 20 minutes
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (7/7), 546 bytes | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
Write failed: Broken pipe
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly

Is the LFS case similar? I mean: when doing a git push over SSH, LFS spends a long time uploading during the push, and that causes the SSH timeout.
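The test above can be reproduced with a hook like this (the 20-minute sleep matches the comment; shorten it when experimenting):

```shell
# Set up a throwaway repo with a pre-push hook that just sleeps,
# holding the ssh connection idle for the duration.
repo=$(mktemp -d)
git init -q "$repo"
cat > "$repo/.git/hooks/pre-push" <<'EOF'
#!/bin/sh
sleep 1200
EOF
chmod +x "$repo/.git/hooks/pre-push"
```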

@ttaylorr
Contributor

so it causes an SSH timeout issue.

I agree with this. I think the idea is that the SSH connection, which is being opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded, ends up getting closed by your ssh-agent.

I think there are two things we could do here:

  1. Increase the keep-alive time of your ssh-agent (this may be out of your control if the remote end closes, which it appears to do in the comment that you posted above).
  2. Teach LFS to send a keep-alive byte on the SSH connection that Git opens, similar to git/git@8355868.

That commit only works during receive_pack() operations, but this is a 'push', so it's calling send_pack(). We'd need some way to get access to the SSH connection that Git is opening, or teach Git the same receive.keepAlive option for send_pack operations.
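For reference, the server-side knob that commit introduced is receive.keepAlive: how often receive-pack sends a sideband keepalive while it is quiet (default 5 seconds, 0 disables). As noted, it does not cover the send_pack/pre-push phase discussed here:

```shell
# Server-side setting on a bare repo (the temp repo is just for illustration):
repo=$(mktemp -d)
git init -q --bare "$repo"
git -C "$repo" config receive.keepAlive 5
git -C "$repo" config receive.keepAlive   # prints the configured value
```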

@peff what do you think?

@peff

peff commented Jul 24, 2017

I think the idea is that the SSH connection, which is being opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded, ends up getting closed by your ssh-agent.

Right, modulo s/LFS/Git/ in the first sentence (which I think then matches the rest of your comment). We have to do it that way because Git can't kick off the pre-push hook until it knows what's going to be pushed, and it doesn't know that until it makes the ssh session and gets the server's ref advertisement. So the server is waiting for the client to send the list of ref-update commands, during which the ssh connection is sitting idle.

It's not clear to me what is killing the ssh connection. It could be that something at the network level is unhappy with the idle TCP connection. This could be GitHub-side reverse proxies, or just some firewall in the path. Increasing the frequency of ssh keepalives could help here.
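Increasing the ssh keepalive frequency from the client side is a ~/.ssh/config change (the host name here is just an example; this helps only with network-level idle timeouts, not application-level ones):

```
# ~/.ssh/config -- send an ssh-level keepalive probe every 30s of silence,
# giving up after 5 unanswered probes.
Host github.com
    ServerAliveInterval 30
    ServerAliveCountMax 5
```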

But it could also be an application-level timeout above the ssh layer. Git by default doesn't have any timeouts waiting for the incoming packfile, but not all servers terminate directly at actual C Git. GitHub terminates at a custom proxy layer with its own timeouts, I'm not sure what JGit does, and I have no clue what other hosts like GitLab or Atlassian do. An ssh keep-alive won't help there; you'd need something to tell the application layer that we're still chugging.

The right solution is to have Git send application-level keepalives while the pre-push hook is running, to tell the other side that yes, we really are doing useful work and it should keep waiting. But implementing that is going to be hard. The existing keep-alives could be hacked into the protocol only because the sender in those cases was sending sideband-encoded data. So we can send empty sideband-0 packets.

But in the phase that would need keep-alives here, the next thing to happen is the client sending the ref-update pktlines. Those are in pktline format, but there's no sideband byte. And while technically a server can distinguish between a flush packet ("0000") and an empty pktline ("0004"), existing implementations don't (and wouldn't know what to do with an empty pktline at this stage anyway).
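For concreteness, pkt-line framing is a 4-hex-digit length (counting the 4 header bytes themselves) followed by the payload, so the flush packet and an empty pkt-line differ only in that length field. A minimal sketch:

```shell
# Encode a pkt-line: length header includes its own 4 bytes.
pkt_line() { printf '%04x%s' $(( ${#1} + 4 )) "$1"; }

pkt_line 'hello'   # -> "0009hello"
printf '0000'      # a flush-pkt; "0004" would be an empty pkt-line
```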

So you'd need a protocol extension to Git, that would work something like:

  1. The server's initial advertisement adds a new capability, client-keepalive.

  2. New clients recognize that, and when used with a capable server, mention client-keepalive to tell the server they will use it.

  3. While the pre-push hook runs, the Git client would then generate keepalive packets as part of the command-list, which the server would just throw away.

The only option I could come up with to hack a noop into the existing protocol was by sending meaningless delete refs/does/not/exist commands. But besides being a horrific hack in the first place, it also generates "warning: deleting a non-existent ref" messages. ;)

So I don't think there's really anything for LFS to do here. The issue is in Git, and would apply to other long-running pre-push hooks, too. It actually applies to sending the pack itself, too. If you have a large or badly packed repo, you could stall on pack-objects preparing the pack before it starts sending any bytes (this is pretty rare in practice, and is usually fixed by running git gc on the client side). Possibly a new keepalive capability should also imply that the client can send keepalives between the ref update and the start of the pack contents.

In the meantime, the obvious workarounds are:

  1. If you have a big LFS push, do it separately beforehand, which would make the pre-push step largely a noop.

  2. Use a protocol for the Git push that doesn't keep a connection open. Git-over-http is stateless, and there's no open connection while the hook runs.
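For workaround 2, switching an existing remote from SSH to HTTPS is a one-liner (user/repo is a placeholder, as earlier in the thread):

```shell
# Demonstrate the switch in a throwaway repo: add an SSH remote,
# then point it at the stateless HTTPS URL instead.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" remote add origin git@github.com:user/repo
git -C "$repo" remote set-url origin https://github.com/user/repo.git
git -C "$repo" remote get-url origin   # prints the new HTTPS URL
```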

@ttaylorr
Contributor

I think the idea is that the SSH connection, which is being opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded, ends up getting closed by your ssh-agent.

Right, modulo s/LFS/Git/ in the first sentence (which I think then matches the rest of your comment).

Ah, you caught my mistake! I originally thought this was an LFS problem, and started typing my comment with that assumption. It looks like I forgot to update part of it.

If you have a big LFS push, do it separately beforehand, which would make the pre-push step largely a noop.

@peff good idea. @canyou2014, this is possible via git lfs push.
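A sketch of that two-step workflow, with the remote and branch names from this thread (git lfs push uploads just the LFS objects ahead of time, so the later pre-push hook has nothing left to transfer):

```
$ git lfs push origin master
$ git push origin master
```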

@KavinduGayan

Doing a git push via ssh, I got this error:

Delta compression using up to 8 threads.
Compressing objects: 100% (563/563), done.
remote: fatal: Unable to create temporary file '/git_lab/app_lk.git/./objects/pack/tmp_pack_XXXXXX': Permission denied
fatal: sha1 file '' write error: Broken pipe
error: remote unpack failed: index-pack abnormal exit
error: failed to push some refs to 'ssh://XXXXX/git_lab/app_lk.git'

@peff

peff commented Jul 5, 2018

@KavinduGayan The "broken pipe" on the client side generally just means that the server hung up the connection in the middle of the push. In your case, I think the interesting error is the remote: one, which looks like a server-side configuration problem.
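If pushes arrive over ssh, that usually means the repository's objects directory isn't writable by the user the push runs as. A sketch of a server-side fix, using the path from the error message (the git user and group here are assumptions; match them to your setup):

```
$ chown -R git:git /git_lab/app_lk.git/objects
$ chmod -R ug+rwX /git_lab/app_lk.git/objects
```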

@ttaylorr
Contributor

ttaylorr commented Jul 5, 2018

Thanks, @peff. I agree with your comment above and thus think that this is an unrelated issue. I'm going to close it for now.

@ttaylorr ttaylorr closed this as completed Jul 5, 2018
@Satyam141026

fatal: the remote end hung up unexpectedly
fatal: sha1 file '' write error: Broken pipe
error: failed to push some refs to 'https://github.com/Satyam141026/Starapppppy.git'
How do I fix this error?

@bk2204
Member

bk2204 commented May 24, 2022

Hey,

This is a Git error, not a Git LFS error, and it indicates that the remote side hung up unexpectedly. You should look into why that would be: perhaps you have a bad connection, or something between Git and the remote side (e.g., a firewall, antivirus, or proxy) is causing problems.
