Log "Spam": "Could not do a head request #715

Closed
fkrauthan opened this issue Dec 20, 2020 · 55 comments · Fixed by #716
Closed

Log "Spam": "Could not do a head request #715

fkrauthan opened this issue Dec 20, 2020 · 55 comments · Fixed by #716

Comments

@fkrauthan

fkrauthan commented Dec 20, 2020

Describe the bug
I recently updated to the latest watchtower version. But now I seem to get spammed with the error message:

Could not do a head request, falling back to regular pull.

which is written to the log and also triggers an email notification every time.

To Reproduce
Steps to reproduce the behavior:

  1. Update watchtower to the latest image
  2. Deploy a container from a gitlab hosted registry
  3. Let it run
  4. Monitor logs and emails

Expected behavior
Either a way to disable email notifications for this error, or a fix for why the HEAD request is failing.

Environment

  • Platform: Ubuntu (Linux)
  • Architecture: x86_64
  • Docker version: 19.03.13, build 4484c46d9d
Logs from running watchtower with the --debug option
2020-12-20T22:04:01Z [I] Running a one time update.
2020-12-20T22:04:01Z [D] Checking containers for updated images
2020-12-20T22:04:01Z [D] Retrieving running containers
2020-12-20T22:04:01Z [D] Trying to load authentication credentials.
                         container: /play-for-today-website
                         image: registry.gitlab.com/***/website:latest
2020-12-20T22:04:01Z [D] Loaded auth credentials for user gitlab+deploy-token-32212, on registry registry.gitlab.com/***/website:latest, from file /config.json
2020-12-20T22:04:01Z [D] Got image name: registry.gitlab.com/***/website:latest
2020-12-20T22:04:01Z [D] Credentials loaded
2020-12-20T22:04:01Z [D] Checking if pull is needed
                         container: /play-for-today-website
                         image: registry.gitlab.com/***/website:latest
2020-12-20T22:04:01Z [D] Building challenge URL
                         URL: https://registry.gitlab.com/v2/
2020-12-20T22:04:02Z [D] Got response to challenge request
                         header: Bearer realm="https://gitlab.com/jwt/auth",service="container_registry"
                         status: 401 Unauthorized
2020-12-20T22:04:02Z [D] Checking challenge header content
                         realm: https://gitlab.com/jwt/auth
                         service: container_registry
2020-12-20T22:04:02Z [D] Setting scope for auth token
                         image: registry.gitlab.com/***/website
                         scope: repository:registry.gitlab.com/***/website:pull
2020-12-20T22:04:02Z [D] Credentials found.
                         credentials: ******
2020-12-20T22:04:02Z [D] Parsing image ref
                         host: registry.gitlab.com
                         image: cogindo/***/website
                         normalized: registry.gitlab.com/***/website:latest
                         tag: latest
2020-12-20T22:04:02Z [D] Doing a HEAD request to fetch a digest
                         url: https://registry.gitlab.com/v2/***/website/manifests/latest
2020-12-20T22:04:02Z [I] Could not do a head request, falling back to regular pull.
2020-12-20T22:04:02Z [D] Reason: registry responded to head request with 401
2020-12-20T22:04:02Z [D] Pulling image
                         container: /play-for-today-website
                         image: registry.gitlab.com/***/website:latest
2020-12-20T22:04:04Z [D] No new images found for /play-for-today-website

Additional context

The issue seems to happen only with the GitLab registry. Maybe that helps with fixing it.
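
For reference, the digest check that fails above corresponds roughly to the standard registry token flow. The sketch below is illustrative only: the group, user, and deploy token are placeholders, and this is not watchtower's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Step 1: request a pull-scoped token from the realm returned by the
	// 401 challenge (see "Checking challenge header content" above).
	// <group>, <user> and <token> are placeholders.
	authURL := "https://gitlab.com/jwt/auth" +
		"?service=container_registry" +
		"&scope=repository:<group>/website:pull"
	req, err := http.NewRequest("GET", authURL, nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("<user>", "<token>")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	var auth struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(res.Body).Decode(&auth); err != nil {
		panic(err)
	}

	// Step 2: HEAD the manifest and read the digest header instead of
	// pulling the whole image. A 401 here is what triggers the
	// "Could not do a head request" message.
	head, err := http.NewRequest("HEAD",
		"https://registry.gitlab.com/v2/<group>/website/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	head.Header.Set("Authorization", "Bearer "+auth.Token)
	head.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")

	hres, err := http.DefaultClient.Do(head)
	if err != nil {
		panic(err)
	}
	defer hres.Body.Close()

	fmt.Println(hres.Status, hres.Header.Get("Docker-Content-Digest"))
}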

@github-actions

Hi there! 👋🏼 As you're new to this repo, we'd like to suggest that you read our code of conduct as well as our contribution guidelines. Thanks a bunch for opening your first issue! 🙏

@eilandert

I can confirm this; I upgraded a few hours back and am getting spammed.

@piksel
Member

piksel commented Dec 20, 2020

Yeah, this is because GitHub Packages (not GitHub Container Registry) uses non-standard URLs for their images, so it's not possible to normalize the names.
This shouldn't matter, as neither the rate limits nor the HEAD requests apply to GitHub Packages.
There definitely should be a way to opt out of HEAD-based checking for repos that do not conform to the OCI Distribution spec.

Edit: Saw that this was about GitLab and not GitHub. But the same thing applies. They also use a three-part URL, but it seems like you could omit the image name?
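
For illustration, Docker's standard reference normalization (the github.com/docker/distribution/reference package) keeps multi-segment registry paths intact and only fills in docker.io/library/ for bare names. Whether watchtower follows exactly this code path is an assumption; this is just a sketch of the normalization behaviour being discussed.

package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	for _, raw := range []string{
		"nginx",                                  // bare official image
		"eilandert/nginx-modsecurity3-pagespeed", // user/repo on Docker Hub
		"registry.gitlab.com/somegroup/website",  // hosted registry, multi-segment path
	} {
		named, err := reference.ParseNormalizedNamed(raw)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-45s -> domain=%s path=%s\n",
			raw, reference.Domain(named), reference.Path(named))
	}
}

Only the bare name gains the implicit docker.io/library/ namespace; the GitLab-style path keeps all of its segments.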

@eilandert

> Yeah, this is because GitHub Packages (not GitHub Container Registry) uses non-standard URLs for their images, so it's not possible to normalize the names.
> This shouldn't matter, as neither the rate limits nor the HEAD requests apply to GitHub Packages.
> There definitely should be a way to opt out of HEAD-based checking for repos that do not conform to the OCI Distribution spec.
>
> Edit: Saw that this was about GitLab and not GitHub. But the same thing applies. They also use a three-part URL, but it seems like you could omit the image name?

I have the issue on Docker Hub, not GitHub or GitLab. Or am I missing something?

I have no limits since I have a pro account on Docker Hub, so I am good either way.
If it makes no difference, maybe it's a good idea to get rid of the spam message and move it to debug?

@jpayne-procella

I'm not sure I understand this last comment. Watchtower upgraded itself on me today, and ever since then I've been getting this log spam (as Teams notifications in my case). What am I supposed to be doing differently?

@fkrauthan
Author

@piksel Yeah, my problem is less about rate limiting (you are correct that, at least so far, they haven't introduced anything like Docker Hub's limits) than about the amount of noise this (by the sounds of it, expected) behaviour creates.

@piksel
Member

piksel commented Dec 21, 2020

The reason it is displayed is that it warns that the check will count towards the rate limit (i.e. that a rate-limit-friendly check could not be done).

@eilandert That's odd, could you open a ticket with debug logs? That should not be happening for Docker Hub.

Just moving it to debug is not ideal: for those who need HEAD-based checking, it's relevant to know if it stops working, since that may lead to them getting rate limited. But it shouldn't be too difficult to add some way to turn it off when it's not relevant.

@fkrauthan
Author

Maybe this can be either an opt-in feature (e.g. specifying which registries we want it for), or at least an opt-out feature (again, this should be configurable on a per-registry level).
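
Purely as an illustration of what such a per-registry opt-out could look like; nothing like this existed in watchtower at the time, and the environment variable name here is made up.

package registry

import (
	"os"
	"strings"
)

// headCheckDisabled sketches a per-registry opt-out. The variable name
// WATCHTOWER_NO_HEAD_CHECK_REGISTRIES is hypothetical and does not exist
// in watchtower; it only illustrates a comma-separated per-registry setting.
func headCheckDisabled(registryHost string) bool {
	for _, host := range strings.Split(os.Getenv("WATCHTOWER_NO_HEAD_CHECK_REGISTRIES"), ",") {
		if strings.EqualFold(strings.TrimSpace(host), registryHost) {
			return true
		}
	}
	return false
}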

@eilandert

eilandert commented Dec 21, 2020

@piksel

I see I got an authentication failure, that's strange. I replaced my full credentials with a token last week, ran it with debug, and everything was fine.

Output for one container (I removed the credentials):
2020-12-21T04:27:56+01 [D] Trying to load authentication credentials.
                           container: /nginx
                           image: index.docker.io/eilandert/nginx-modsecurity3-pagespeed:latest
2020-12-21T04:27:56+01 [D] Loaded auth credentials for user eilandert, on registry index.docker.io/eilandert/nginx-modsecurity3-pagespeed:latest, from file /config.json
2020-12-21T04:27:56+01 [D] Got image name: index.docker.io/eilandert/nginx-modsecurity3-pagespeed:latest
2020-12-21T04:27:56+01 [D] Credentials loaded
2020-12-21T04:27:56+01 [D] Checking if pull is needed
                           container: /nginx
                           image: index.docker.io/eilandert/nginx-modsecurity3-pagespeed:latest
2020-12-21T04:27:56+01 [D] Building challenge URL
                           URL: https://index.docker.io/v2/
2020-12-21T04:27:57+01 [D] Got response to challenge request
                           header: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
                           status: 401 Unauthorized
2020-12-21T04:27:57+01 [D] Checking challenge header content
                           realm: https://auth.docker.io/token
                           service: registry.docker.io
2020-12-21T04:27:57+01 [D] Setting scope for auth token
                           image: index.docker.io/eilandert/nginx-modsecurity3-pagespeed
                           scope: repository:index.docker.io/eilandert/nginx-modsecurity3-pagespeed:pull
2020-12-21T04:27:57+01 [D] Credentials found.
                           credentials: 
2020-12-21T04:27:57+01 [D] Parsing image ref
                           host: index.docker.io
                           image: eilandert/nginx-modsecurity3-pagespeed
                           normalized: docker.io/eilandert/nginx-modsecurity3-pagespeed:latest
                           tag: latest
2020-12-21T04:27:57+01 [D] Doing a HEAD request to fetch a digest
                           url: https://index.docker.io/v2/eilandert/nginx-modsecurity3-pagespeed/manifests/latest
2020-12-21T04:27:57+01 [I] Could not do a head request, falling back to regular pull.
2020-12-21T04:27:57+01 [D] Reason: registry responded to head request with 401
2020-12-21T04:27:57+01 [D] Pulling image
                           container: /nginx
                           image: index.docker.io/eilandert/nginx-modsecurity3-pagespeed:latest
2020-12-21T04:27:58+01 [D] No new images found for /nginx

@simskij
Member

simskij commented Dec 21, 2020

@eilandert Does it fail the head request for all containers/images or just some? Any difference depending on whether it's a public or private image, for instance? Would you mind showing us your docker run command or docker-compose.yml?

@eilandert

eilandert commented Dec 21, 2020

> Does it fail the head request for all containers/images or just some? Any difference depending on whether it's a public or private image, for instance? Would you mind showing us your docker run command or docker-compose.yml?

@simskij It fails for every image.

Here is part of my docker-compose file: the container I pasted before, and watchtower.
version: '2.4'
services:
  nginx:
    container_name: nginx
    image: index.docker.io/eilandert/nginx-modsecurity3-pagespeed:latest
    stop_grace_period: 3s
    depends_on:
-removed-
    ports:
    - 80:80
    - 443:443
    restart: always
    volumes:
-removed-
    environment:
      - TZ=Europe/Amsterdam
    networks:
-removed-
    labels:
      - "com.centurylinklabs.watchtower.scope=myguard"
  
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./watchtower/config.json:/config.json
    networks:
-removed-
    environment:
      - TZ=Europe/Amsterdam
      - WATCHTOWER_DEBUG=true
      - WATCHTOWER_TRACE=true
    command: --interval 300 --scope myguard

@simskij
Member

simskij commented Dec 21, 2020

Sorry to bother you with a lot of questions, @eilandert, but it really sounds like you've still got some issues with your credentials. 🤔

Would you mind diffing your docker config to the one you use for watchtower? Something along the lines of diff ~/.docker/config.json ./watchtower/config.json should probably do.

@piksel
Member

piksel commented Dec 21, 2020

Also, @fkrauthan, you have masked out a key part of this issue: the image URL. If you don't want to post it here, but sending it privately is acceptable, would you mind sending it to info at containrrr dot dev?

@eilandert

@simskij
The only difference is the credentials:
in ~/.docker/config.json are my full credentials,
in ./watchtower/config.json I've put a Docker Hub token, which worked fine until yesterday.

I linked back to ~/.docker/config.json (with the full credentials) but the issue remains the same. I can push builds to Docker Hub, so the credentials must be working.

@eilandert

@simskij Same issue when I run watchtower anonymously.

@deividaspetraitis

deividaspetraitis commented Dec 21, 2020

Not sure if it's related, but I get the same errors, only with reason: registry responded to head request with 404

time="2020-12-21T14:03:51+02:00" level=debug msg="Got response to challenge request" header="Basic realm=\"Registry realm\"" status="401 Unauthorized"
time="2020-12-21T14:03:51+02:00" level=debug msg="Parsing image ref" host=docker.**** image=admin-api normalized="docker.****/admin-api:e03ce2e106ad37493d6eb8a1519ee0975e4e42ab" tag=e03ce2e106ad37493d6eb8a1519ee0975e4e42ab
time="2020-12-21T14:03:51+02:00" level=debug msg="Doing a HEAD request to fetch a digest" url="https://docker.****/v2/library/admin-api/manifests/e03ce2e106ad37493d6eb8a1519ee0975e4e42ab"
time="2020-12-21T14:03:51+02:00" level=info msg="Could not do a head request, falling back to regular pull."
time="2020-12-21T14:03:52+02:00" level=debug msg="Reason: registry responded to head request with 404"
time="2020-12-21T14:03:52+02:00" level=debug msg="Pulling image" container=/admin-api_1 image="docker.****/admin-api:e03ce2e106ad37493d6eb8a1519ee0975e4e42ab"
time="2020-12-21T14:03:52+02:00" level=debug msg="No new images found for /admin-api_1"

It seems the request URL is not correct. According to the reference it should be
https://docker.****/v2/admin-api/manifests/e03ce2e106ad37493d6eb8a1519ee0975e4e42ab and not
https://docker.****/v2/library/admin-api/manifests/e03ce2e106ad37493d6eb8a1519ee0975e4e42ab


Edit 1: We're using a self-hosted Docker registry based on https://docs.docker.com/registry/

@piksel
Member

piksel commented Dec 21, 2020

@deividaspetraitis what type of container registry is it that you use?
It's kind of hard to tell when the relevant parts of the log are censored out, but I think it might be the same problem: a three-segment path to the image...

@deividaspetraitis

deividaspetraitis commented Dec 21, 2020

@piksel sorry for not providing this information in my original comment. We're using a self-hosted Docker registry based on https://docs.docker.com/registry/

@piksel
Member

piksel commented Dec 21, 2020

@deividaspetraitis is "library" part of your image ref?

@deividaspetraitis

deividaspetraitis commented Dec 21, 2020

@piksel Not really; I'm able to pull the image manually by issuing the following command: docker pull docker.registry.local/admin-api:e03ce2e106ad37493d6eb8a1519ee0975e4e42ab, and it works just fine. I might be wrong, but it seems like Watchtower appends this prefix to the URL when sending the HEAD request to obtain resource information.

@simskij
Member

simskij commented Dec 21, 2020

@deividaspetraitis:

> I might be wrong, but it seems like Watchtower appends this prefix to the URL when sending the HEAD request to obtain resource information.

You're totally correct in that observation! It does this for all images not stored under an org or user, as this is how "official images" are resolved. We could probably hotfix this so it only does it for Docker Hub images. In the meantime, if you push the image as someorg/somerepo, it should work with head requests as well.

And yes, as someone said previously, being able to opt out of head requests should likely be an option. Initially globally, but down the line even per repo or registry.

@piksel
Member

piksel commented Dec 21, 2020

Okay, yeah, it's the opposite problem: because you only have a single segment, it adds the library/ prefix as the first segment, since that is the namespace the docker-library (official) images use.
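
A hypothetical sketch of the hotfix discussed above: only apply the implicit library/ namespace when the registry actually is Docker Hub. The function and package names are illustrative, not watchtower's actual code.

// BuildManifestURL sketches the fix discussed above: the implicit library/
// prefix is only applied for Docker Hub, never for private registries.
// Names and structure are illustrative, not watchtower's actual code.
package manifest

import (
	"fmt"
	"strings"
)

func BuildManifestURL(host, imagePath, tag string) string {
	isDockerHub := host == "index.docker.io" || host == "docker.io" || host == "registry-1.docker.io"
	if isDockerHub && !strings.Contains(imagePath, "/") {
		// "nginx" -> "library/nginx", the namespace used by official images.
		imagePath = "library/" + imagePath
	}
	return fmt.Sprintf("https://%s/v2/%s/manifests/%s", host, imagePath, tag)
}

With a guard like that, a single-segment image on a private registry would resolve to https://docker.registry.local/v2/admin-api/manifests/<tag>, matching the URL @deividaspetraitis expected, while bare official images on Docker Hub still get the library/ segment.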

@jpayne-procella

time="2020-12-21T14:23:48Z" level=debug msg="Credentials found." credentials=XXXX
time="2020-12-21T14:23:48Z" level=debug msg="Parsing image ref" host=index.docker.io image=XXX/XXX normalized="docker.io/XXX/XXX:release" tag=release
time="2020-12-21T14:23:48Z" level=debug msg="Doing a HEAD request to fetch a digest" url="https://index.docker.io/v2/XXX/XXX/manifests/release"
time="2020-12-21T14:23:48Z" level=info msg="Could not do a head request, falling back to regular pull."
time="2020-12-21T14:23:48Z" level=debug msg="Reason: registry responded to head request with 401"

My credentials are correct (token based) and I'd rather bump into rate limits than get an alert every time this runs and gets a HEAD error.

@simskij
Member

simskij commented Dec 21, 2020

v1.1.1 will begin to roll out quite soon. Could you please have a look again once it's updated?

@jpayne-procella: I get your point, but at the same time, we want watchtower to always use head requests for Docker Hub, both because of the rate limit and to spare their servers some unnecessary requests.

@jpayne-procella

Sure... but there's a difference between always trying HEAD on Docker Hub and spamming notifications every time it fails, which is every time in my case.

@tkalus
Contributor

tkalus commented Dec 21, 2020

What about splitting the difference? Keep the HEAD check for all containers/registries but selectively log.Debug() or log.Warning() based on the registry hostname?
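
A minimal sketch of that idea using logrus (which the level=... output above suggests is in use); the helper name and the exact hostname check are assumptions, not watchtower's actual code.

package registry

import (
	log "github.com/sirupsen/logrus"
)

// logHeadFallback sketches the suggestion above: a failed HEAD check is only
// worth a visible warning where it actually costs something (Docker Hub rate
// limits); everywhere else it drops to debug.
func logHeadFallback(registryHost string, reason error) {
	fields := log.Fields{"registry": registryHost, "reason": reason}
	if registryHost == "index.docker.io" || registryHost == "docker.io" {
		log.WithFields(fields).Warning("Could not do a head request, falling back to regular pull.")
		return
	}
	log.WithFields(fields).Debug("Could not do a head request, falling back to regular pull.")
}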

@tkalus
Contributor

tkalus commented Dec 21, 2020

Hey @simskij

I can confirm that I'm still seeing the non-actionable-because-HEAD-unsupported notification for Could not do a head request, falling back to regular pull while running v1.1.2.

Here are the lightly-redacted log lines I'm seeing:

watchtower    | time="2020-12-21T23:37:29Z" level=info msg="Could not do a head request, falling back to regular pull."
watchtower    | time="2020-12-21T23:37:29Z" level=debug msg="Reason: registry responded to head request with &{401 Unauthorized 401 HTTP/1.1 1 1 map[Content-Length:[160] Content-Type:[application/json] Date:[Mon, 21 Dec 2020 23:37:29 GMT] Docker-Distribution-Api-Version:[registry/2.0] Www-Authenticate:[Bearer realm=\"https://gitlab.com/jwt/auth\",service=\"container_registry\",scope=\"repository:<snip:gitlab_org>/<snip:gitlab_repo>:pull\",error=\"insufficient_scope\"] X-Content-Type-Options:[nosniff]] {} 160 [] false false map[] <snip>}"
watchtower    | time="2020-12-21T23:37:29Z" level=debug msg="Pulling image" container=/<snip:gitlab_repo> image="registry.gitlab.com/<snip:gitlab_org>/<snip:gitlab_repo>:latest"
watchtower    | time="2020-12-21T23:37:30Z" level=debug msg="No new images found for /<snip:gitlab_repo>"

@simskij
Member

simskij commented Dec 21, 2020

Alright! Let's proceed with your suggestion then. 👍🏻 I'll have a look tomorrow. Thanks for letting me know!

@jpayne-procella

> @fkrauthan and @jpayne-procella: did v1.1.2 improve the situation for y'all as well?

Yes, the noise has completely disappeared from my notifications. Thanks!

@fkrauthan
Author

Looks good now. Thank you very much for the fast fix.

@jwillmer

I still get this message with the latest (10h old - 0ecb05bef26f) watchtower image. I'm using a private GitHub registry.

@simskij
Member

simskij commented Dec 22, 2020

> I still get this message with the latest (10h old - 0ecb05bef26f) watchtower image. I'm using a private GitHub registry.

Are you using GitHub Packages or ghcr.io?

@jwillmer

@simskij sorry, I meant to write GitLab registry. Nowadays every service has a registry, so I got confused. Let me know if I can post any log output that might help you.

@yonisolo

Hello, I have the same problem. Until yesterday I was running 1.0.3; after upgrading to 1.1.3, I got the same log message:
Could not do a head request, falling back to regular pull.
Downgrading solved the issue.

@simskij
Member

simskij commented Dec 22, 2020

> Hello, I have the same problem. Until yesterday I was running 1.0.3; after upgrading to 1.1.3, I got the same log message:
> Could not do a head request, falling back to regular pull.
> Downgrading solved the issue.

@yonisolo, I assume you're also on either GitHub Packages or GitLab? 😅

@yonisolo

yonisolo commented Dec 22, 2020

> Hello, I have the same problem. Until yesterday I was running 1.0.3; after upgrading to 1.1.3, I got the same log message:
> Could not do a head request, falling back to regular pull.
> Downgrading solved the issue.
>
> @yonisolo, I assume you're also on either GitHub Packages or GitLab? 😅

My bad, sorry. I'm pulling from a private repository, same as @deividaspetraitis,
and I'm using the Docker image containrrr/watchtower.

Thank you for your reply.

@fkrauthan
Author

I just noticed that since today I am getting the log messages emailed again. Was there another release that reverted the fix?

@simskij
Member

simskij commented Dec 22, 2020

Hey @fkrauthan!

v1.1.3, which is the current live version, should have all the fixes. The PR provided by @tkalus will further improve things, as it will push all failed HEAD request logs to debug except when the registry is Docker Hub.

@yonisolo and @deividaspetraitis, this should improve things for you as well, given that the GitLab Registry hasn't implemented the HEAD option.

@fkrauthan
Author

OK, so it looks like version 1.1.2 fixed the issue (I was not getting any emails about the HEAD request) and version 1.1.3 started triggering the emails again.

@ghost

ghost commented Dec 22, 2020

To avoid important communication getting lost in a closed issue that no one monitors, I'll go ahead and lock this issue. If you want to continue the discussion, please open a new issue. Thank you! 🙏🏼

@ghost ghost locked as resolved and limited conversation to collaborators Dec 22, 2020
@containrrr containrrr unlocked this conversation Dec 23, 2020
@simskij simskij reopened this Dec 23, 2020
@simskij
Member

simskij commented Dec 23, 2020

Let's keep this open for a while longer 😅

@simskij
Member

simskij commented Dec 23, 2020

v1.1.4 is in CI now. It should be with you in an hour or so. Please keep the updates coming regarding whether you get spammed or not, as well as whether HEAD requests work (if you're on quay.io, dockerhub.com, or ghcr.io).

@yonisolo

I did many tests; it seems that the latest tag doesn't work, but when specifying 1.1.4, the HEAD request works...
I'm using this registry: https://hub.docker.com/_/registry
Many thanks for your work!
Regards,

@simskij
Member

simskij commented Dec 23, 2020

latest and v1.1.4 both point at the same hash, so there shouldn't be any difference. However, if you already had v1.1.3 as latest, it would require a self-update before you'd end up at v1.1.4.

Worst case, running docker pull containrrr/watchtower:latest and then restarting watchtower should solve the issue. 👍🏼

Thanks for letting us know! 🙏🏼

@jwillmer

I don't know if this is related to the latest changes or just a server problem. It's talking about awaiting headers:

time="2020-12-23T09:20:43Z" level=info msg="Unable to update container "...": Error response from daemon: Get https://registry.gitlab.com/v2/..../latest: Get https://gitlab.com/.../auth?account=...&service=container_registry: net/http: request canceled (Client.Timeout exceeded while awaiting headers). Proceeding to next.

@simskij
Member

simskij commented Dec 23, 2020

@jwillmer

Yeah. This is not the same, from what I can tell. Based on the log entry, it's the GET request that fails for you.

I'd make sure it works outside watchtower, and if it does, raise a separate ticket, as nothing has changed in the regular pull flow.

I'd also investigate whether it's the same for all containers or just some.

@deividaspetraitis

deividaspetraitis commented Dec 23, 2020

Well done, thank you very much! I'm no longer receiving Could not do a head request, falling back to regular pull. messages as notifications for my private registry based on https://docs.docker.com/registry/ 🙏🏼

@jwillmer

jwillmer commented Dec 23, 2020

@simskij it works via docker pull [image]. I suspect that GitLab might block too many calls at once, since I have 12 different images hosted on GitLab that all get checked by watchtower. I will play around with it and open a new issue if I can confirm my suspicion. In the meantime, I can confirm that I don't get any log entries about Could not do a head request anymore.

@fkrauthan
Author

I can confirm it works for me without an issue with version 1.1.4. No more notifications about head request failures for my GitLab repo.

@simskij
Member

simskij commented Dec 24, 2020

Yay! Let's close this. In case you have any lingering problems, feel free to open a new issue.

Thanks all for assisting in getting this shipped! ❤️

@simskij simskij closed this as completed Dec 24, 2020
@containrrr containrrr locked and limited conversation to collaborators Dec 24, 2020