error "BUG: missing headers nil" #81
I don't think it is a regression on our side. I haven't gotten this error in a while (though users with huge repositories have), but today I have gotten it a few times. I think this is a symptom of something having gone wrong on the server, but I might be wrong. Edit: As noted below by @vermiculus, another possibility is that we get a valid response but a bug in …
I am hitting the same issue after upgrading to the latest version.
I'm also hitting this issue, as of 3-4 days ago, when I use Magithub and it attempts to get the GitHub status header, issues section, or pull requests section.
I'm hitting this issue also with …
Exactly the same issue happens to me. Any workarounds?
I've run into this problem before during development. Google (or whatever) search …
https://emacs.stackexchange.com/a/32952/2264 I don't believe I ever actually submitted a bug report, but I may just be forgetting. If I did, I don't think I heard anything back.
@vermiculus thanks for the info! However, I haven't been able to write a workaround, if there is any. Was anyone able to make it work? I'd appreciate it if you shared the solution here. Thanks!
Changed the title because this can also happen with GitLab.
@humitos I don't believe I was able to find a solution; I was just sharing my investigation :-)
I've run into this problem also.
@vermiculus I would very much appreciate it if you could investigate this further.
I found a possible work-around:

```diff
diff --git a/ghub.el b/ghub.el
index 5a1cfc8..b7eef54 100644
--- a/ghub.el
+++ b/ghub.el
@@ -738,7 +738,8 @@ (defun ghub--basic-auth-errorback (url &optional prompt _overwrite _realm _args)
   (if (assoc "X-GitHub-OTP" (ghub--handle-response-headers nil nil))
       (progn
         (setq url-http-extra-headers
-              `(("Content-Type" . "application/json")
+              `(("Pragma" . "no-cache")
+                ("Content-Type" . "application/json")
                 ("X-GitHub-OTP" . ,(ghub--read-2fa-code))
                 ;; Without "Content-Type" and "Authorization".
                 ;; The latter gets re-added from the return value.
```

Please try that (making sure to recompile …
@tarsius still getting the same error after applying your fix.
@fkhodkov Let's find out if we actually get a 304. Please add a debug statement to …
No, no line with …
What did … ?
Hmm, I don't see anything like this in …
@tarsius, I made a couple of observations: …
How else could we help debug that?
I'm also having this problem using Magithub; I'll try to find some time this weekend to help debug it.
I'm also now receiving this error after updating. Here's the debug information it looks like you added:
Am I to understand that the buffer provided by url.el is effectively empty?
FWIW, I can consistently make it work by doing …
Can this issue be reopened, since it's not resolved?
If someone provides new information beyond "it happens for me too, sometimes", then I will investigate; but since I think the issue is either on the server side or in …
I'm seeing this in master
Uninstalling forge, magit and ghub, then reinstalling them, did not fix it. This fixed it: …
I'm seeing this with the following config:
The failure is triggered by innocuous projects' issues on a private GitLab Enterprise server (v13.12.2) or on GitHub.
Same reproduced here with Doom Emacs with native compilation enabled, and: `error in process filter: ghub--handle-response-headers: BUG: missing headers`. But only for one repository; the others work well.
I'm on Emacs 27.1 on Debian Linux (Bullseye), and I'm getting this error. I am using the latest ghub (20220403.1248), magit (20220406.1950) and forge (20220407.1932). I initially tried to uninstall ghub, forge and magit, exit Emacs and then start Emacs again, but that did not solve the problem. This issue is definitely not fixed in version 26.3 of Emacs. I'm not on macOS. To test this, I do M-x forge-reset-database and then M-x forge-pull. I then have to wait a long time, because there are almost 2000 issues. The problem occurs at the same issue every single time (issue #1789). I attempted to add …
I'm afraid it's still a mystery bug that I don't know how to fix.
I upgraded to Emacs 28.1, same problem. When I added the …
Is there a way that ghub can just skip an issue when ghub--handle-response-headers hits a missing header? It's this one issue that prevents me from using forge, an issue that I'm never going to need to access via forge, so it being skipped would not be a problem for me. Interestingly, the one issue that this was getting stuck on was an issue that was imported from Redmine to GitLab, and it had in its body only this: …
I believe this was a tag added by a kanban add-on at one point. I replaced that with some dummy text and re-ran forge-pull, and it then got stopped by a different issue, which also had this tag. I've now removed three of these, and each time it stops somewhere else with that same tag. I'm unsure how many issues I have with this tag.
Please keep such an issue so that we can use it for debugging purposes.
I also have these in the body of issues that were imported from Redmine: …
Looking at …
Unfortunately, if I create an empty project and add two issues, one with …
There were over 200 of these issues. I went in and removed these tags from all of them, and then …
That's too bad. Now I won't be able to debug it: …
Unfortunately, there is something else going on, because I added those tags back, dropped the forge database, and can still do a …
I'm hitting this as well, rather often, when downloading pullreqs from a GitLab instance.
Forge is able to download some pullreqs and then the process stops mid-flight: with this repo it stopped after processing 33 pullreqs. Looking at … (generated by grepping for …). I'm surprised it crashes when processing …
Let me know what else I can provide to help you debug this.
I've added some debug code myself to `ghub--handle-response-headers`:

```elisp
(defun ghub--handle-response-headers (status req)
  (goto-char (point-min))
  (forward-line 1)
  (message "%s: %d" (url-recreate-url (ghub--req-url req)) (buffer-size))
  (let (headers)
    (when (memq url-http-end-of-headers '(nil 0))
      ...
```

... and indeed it's … which is delivered for processing with only 2 chars. I doubt the response comes corrupted from the server, so something must be broken down the line in `url.el`.
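To see why a buffer holding just two stray bytes produces "BUG: missing headers", here is a minimal Python sketch (illustrative only, not url.el's actual code) of the kind of check a client makes for a status line; a response prefixed by a leftover CRLF no longer looks like an HTTP response at all:

```python
import re

def looks_like_http_response(buf: bytes) -> bool:
    # A client decides the buffer contains headers by checking that it
    # begins with a status line such as "HTTP/1.1 200 OK".
    return re.match(rb"HTTP/[0-9.]+ [0-9]{3}", buf) is not None

resp = b"HTTP/1.1 200 OK\r\nETag: \"abc\"\r\n\r\n{}"
assert looks_like_http_response(resp)

# Two unconsumed bytes from the previous chunked response shift everything,
# so the next response appears "headerless"/malformed to the client:
assert not looks_like_http_response(b"\r\n" + resp)
```

The function name and regex here are assumptions for illustration; url-http performs an analogous (but more involved) check when deciding whether a response has headers.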
I think I found the root cause, at least of my problem. Emacs' `url-http-chunked-encoding-after-change-function` ends like this:

```elisp
(defun url-http-chunked-encoding-after-change-function (st nd length)
  ...
  (if (= 0 url-http-chunked-length)
      (progn
        ;; Found the end of the document! Wheee!
        (url-http-debug "Saw end of stream chunk!")
        (setq read-next-chunk nil)
        (url-display-percentage nil nil)
        ;; Every chunk, even the last 0-length one, is
        ;; terminated by CRLF. Skip it.
        (when (looking-at "\r?\n")
          (url-http-debug "Removing terminator of last chunk")
          (delete-region (match-beginning 0) (match-end 0)))
        (if (re-search-forward "^\r?\n" nil t)
            (url-http-debug "Saw end of trailers..."))
        (if (url-http-parse-headers)
            (url-http-activate-callback))))))))))
```

The problem is that if the last CRLF (the one terminating the 0-length chunk) has not yet arrived in the buffer when this code runs, the `looking-at` check silently fails and the callback is activated anyway, leaving those two bytes to be consumed as part of the next response on the same connection.

This is rather difficult to trigger, as "normally" the whole response has already arrived at the client when the last chunk is processed.

```diff
@@ -36,6 +36,7 @@
 (defvar url-current-object)
 (defvar url-http-after-change-function)
 (defvar url-http-chunked-counter)
+(defvar url-http-chunked-last-crlf-missing nil)
 (defvar url-http-chunked-length)
 (defvar url-http-chunked-start)
 (defvar url-http-connection-opened)
@@ -1068,7 +1069,15 @@ the callback to be triggered."
 Cannot give a sophisticated percentage, but we need a different
 function to look for the special 0-length chunk that signifies
 the end of the document."
-  (save-excursion
+  (if url-http-chunked-last-crlf-missing
+      (progn
+        (goto-char url-http-chunked-last-crlf-missing)
+        (when (looking-at "\r\n")
+          (url-http-debug "Saw the last CRLF.")
+          (delete-region (match-beginning 0) (match-end 0))
+          (if (url-http-parse-headers)
+              (url-http-activate-callback))))
+    (save-excursion
       (goto-char st)
       (let ((read-next-chunk t)
             (case-fold-search t)
@@ -1145,13 +1154,14 @@ the end of the document."
             (url-display-percentage nil nil)
             ;; Every chunk, even the last 0-length one, is
             ;; terminated by CRLF. Skip it.
-            (when (looking-at "\r?\n")
+            (if (not (looking-at "\r?\n"))
+                (setq-local url-http-chunked-last-crlf-missing (point))
               (url-http-debug "Removing terminator of last chunk")
-              (delete-region (match-beginning 0) (match-end 0)))
-            (if (re-search-forward "^\r?\n" nil t)
-                (url-http-debug "Saw end of trailers..."))
-            (if (url-http-parse-headers)
-                (url-http-activate-callback))))))))))
+              (delete-region (match-beginning 0) (match-end 0))
+              (if (re-search-forward "^\r?\n" nil t)
+                  (url-http-debug "Saw end of trailers..."))
+              (if (url-http-parse-headers)
+                  (url-http-activate-callback))))))))))))
 (defun url-http-wait-for-headers-change-function (_st nd _length)
   ;; This will wait for the headers to arrive and then splice in the
```

In the example list of requests of my post above, the request that's actually breaking things is … I can consistently trigger the bug w/o the patch and, once it's applied, I can no longer reproduce it.

Probably worth a bug report for Emacs.
As per [0], the last chunk of 0 bytes is always accompanied by a last CRLF that signals the end of the message:

```
chunked-body   = *chunk
                 last-chunk
                 trailer-part
                 CRLF             ; <- this one

chunk          = chunk-size [ chunk-ext ] CRLF
                 chunk-data CRLF
chunk-size     = 1*HEXDIG
last-chunk     = 1*("0") [ chunk-ext ] CRLF
chunk-data     = 1*OCTET         ; a sequence of chunk-size octets
```

`url-http-chunked-encoding-after-change-function' is able to process (and remove) that terminator IF AVAILABLE in the buffer when processing the response; however, it won't wait for it if it's not yet there. In other words:

```
| Bottom of the response buffer | Bottom of the full response |
| (visible to url-http)         | (to be delivered to Emacs)  |
|-------------------------------+-----------------------------|
| 0\r\n                         | 0\r\n                       |
|                               | \r\n                        |
```

If the last chunk is processed when the bottom of the response buffer is as above (note that the whole response has not yet been delivered to Emacs), url-http will call the user callback without waiting for the final terminator to be read from the socket.

This is normally not an issue when doing one-shot requests, but it's problematic when the connection is reused immediately. As there are 2 bytes from request N that have not been dealt with, they'll be considered part of the response to request N+1. On top of that, it turns out that when processing the headers of request N+1, `url-http-wait-for-headers-change-function' will consider the request a "headerless malformed response", delivering it broken to the caller.

The proposed fix implements a state in which `url-http-chunked-encoding-after-change-function' properly waits for the very last element of the message, preventing the problem explained above from happening.

For additional context, this bug was found when debugging magit/ghub (see [1] for details).

[0] https://datatracker.ietf.org/doc/html/rfc7230#section-4.1
[1] magit/ghub#81
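The grammar above can be exercised with a short Python sketch (not url.el's implementation; names are illustrative) showing why a naive decoder cannot distinguish a complete message from one whose final CRLF is still in flight:

```python
def parse_chunked(buf: bytes):
    """Decode an RFC 7230 chunked body; return (body, unconsumed bytes).

    This is a minimal sketch that ignores trailers and chunk extensions.
    """
    body = b""
    while True:
        size_line, _, rest = buf.partition(b"\r\n")
        size = int(size_line.split(b";")[0], 16)  # hex chunk-size
        if size == 0:
            # last-chunk: it must be followed by one more CRLF; if that
            # CRLF has not arrived yet, a decoder that stops here leaves
            # 2 bytes behind for the next response on a reused connection.
            if rest.startswith(b"\r\n"):
                rest = rest[2:]
            return body, rest
        body += rest[:size]
        buf = rest[size + 2:]  # skip chunk-data and its trailing CRLF

complete  = b"4\r\nWiki\r\n0\r\n\r\n"  # full message, final CRLF included
truncated = b"4\r\nWiki\r\n0\r\n"      # final CRLF still in flight

assert parse_chunked(complete) == (b"Wiki", b"")
assert parse_chunked(truncated) == (b"Wiki", b"")  # indistinguishable!
```

Both buffers decode identically, which is the crux of the bug: only by *waiting* for the terminator (as the fix does) can the client know the stray `\r\n` belongs to request N rather than to response N+1.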
The patch has been merged into Emacs' master and should already be available in 29's snapshots. If you know a release manager to bribe so the patch is backported to 28.x, it'll surely be money well spent (I've tried already). In the meantime, users of Emacs 28 or older can always monkey-patch the offending code in their configuration or build Emacs themselves. Maybe it's worth patching …
Thanks so much for this, @nbarrientos! I have asked in the debbugs thread for the fix to be included in 28.2. I am also going to change the error message and provide instructions on how to use the fixed version of the faulty function. We have to do that anyway, even if the fix ends up being included in 28.2.
Cool, although I might be looking in the wrong place, I can't see your message :/
Nice, thanks. |
Hm, I cannot find it locally either. But I now think I won't push for the fix to be included in 28.2. We have to add a monkey patch anyway, for existing releases. And I don't feel like arguing against the policy of only fixing regressions. IMO it would make sense to make an exception here, but everyone feels like that about the bugs that affect them.
I have done that in 5eed205. I've already pushed that to …
As 26faa2b943675107e1664b2fea7174137c473475 is not included, I believe the variable has to be buffer-local'ed before assignment. Relates to magit#81.
Yep, I tend to concur 😉
I believe there's something missing; I've sent a pull request. I haven't tried the hack itself; I was going to, but I saw the missing …
Thanks again, @tarsius. The overridden function seems to work (Emacs 28.1). My reproducer is unable to trigger the bug with … However, even though I'm very excited to see this sorted out, I see this approach as a dangerous bet (especially if enabled by default if …
Indeed, not having the fix in 28.x is a bummer, but to be honest I dunno if This Is The Way. Of course you have the last word as maintainer; just my 2cts.
I had looked at all the commits that changed this function since 25.1. One fixes indentation, another removes some XEmacs-only text properties, and 4f1df40db36b221e7842bd75d6281922dcb268ee seems to try to fix the same bug as your change but without fully succeeding (or maybe it's just a related bug). Regardless, assuming that including your bugfix is the right thing to do, then including this is also appropriate. I have extended the comment before the advice to add a note about this.
It's probably not the Right Thing to Do™, but I am getting really sick of these …
I used Magithub for some time, and today it stopped working. After some investigation I found that it seems to be ghub-related.
For example, I get an error after doing this:
M-x auth-source-forget-all-cached
M-x ghub-create-token
After entering Host, Username, Package and not modifying Scopes, I get the following backtrace, even though in GitHub settings I see the newly-created token. I can regenerate the token and manually store it in
.authinfo.gpg
, but then I'll get a similar error after every attempt to do anything magithub-related. For example, this is what I get after trying to magithub-clone the ghub repo: