Private registry push reports "blob upload unknown" in client even though data gets pushed correctly #2225

kskalski opened this issue Mar 26, 2017 · 29 comments


@kskalski

My setup is as follows:

  • registry from upstream 2.6 image
    • single replica running on kubernetes
    • storage is backed by NFS
  • access to registry is handled by haproxy loadbalancer with https termination (talks to registry through plain http on port 5000)
  • docker on Debian 17.03.0-ce, build 60ccb22

Previously this set-up worked when I didn't use https: I had the loadbalancer expose the registry over plain http on port 5000, and I could use it with the "localhost:5000" address (on each machine where the loadbalancer runs). In that scenario pushing and using the images works fine.

Now I'm trying to push an image to the registry through the https endpoint:
$ docker push images.bigkuber.inside.datax.pl/kskalski-dataflows
but I'm getting:

65723c989499: Pushing  2.56 kB
0ab03bb42090: Layer already exists
7a44d82d0fdd: Pushing [==================================================>] 1.086 MB/1.086 MB
b447dea7bad2: Pushing [==================================================>] 3.584 kB
a8d49715960c: Pushing [==================================================>] 4.096 kB
07d1c5d3b264: Pushing [==================================================>] 6.656 kB
0ef630c8efc9: Retrying in 5 seconds
910397601113: Waiting
316d0eee43a0: Waiting
c32193acdde5: Waiting
34787f338616: Waiting
35c20f26d188: Waiting
c3fe59dd9556: Waiting
6ed1a81ba5b6: Waiting
a3483ce177ce: Waiting
ce6c8756685b: Waiting
30339f20ced0: Waiting
0eb22bfb707d: Waiting
a2ae92ffcd29: Waiting
blob upload unknown

However, even though the push fails, the image is actually uploaded correctly; I can get its status and use it in new containers:

$ docker pull images.bigkuber.inside.datax.pl/kskalski-dataflows
Using default tag: latest
latest: Pulling from kskalski-dataflows
Digest: sha256:9cfae8e7c4bda509c5d16acbf2ae66c62fe8acb5709579a6e18f18923bd88635
Status: Image is up to date for images.bigkuber.inside.datax.pl/kskalski-dataflows:latest

I enabled debug on the docker client; here is an excerpt from the relevant time interval:

Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.030831365+02:00" level=debug msg="Pushing layer: sha256:b447dea7bad20f7b52b36d7337a6c83cb30cbad1a29aa8c797c37384b17e4531"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.061256021+02:00" level=debug msg="Pushing layer: sha256:65723c989499242e11b29c1295aa3b547f1467d615c029e00a3cb0923a0c29e6"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.061303530+02:00" level=debug msg="Pushing layer: sha256:a8d49715960cb92ff168733c04cd426fb674b542096411f7d1e1db28d9bcf609"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.061954618+02:00" level=debug msg="Pushing layer: sha256:07d1c5d3b26422751e3c464265eb13148248b70f892ea0d404fd240425e0bb55"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.063894654+02:00" level=debug msg="Pushing layer: sha256:7a44d82d0fdde5954e177b58898ddfeae9c60c7db0c871c20b191ef857bf22ba"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.076830463+02:00" level=debug msg="Assembling tar data for e6893eaa2260964bee7d8bb9da7c4188f48606e0d5fbafba2b55838085917992"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.081559957+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.081840809+02:00" level=debug msg="Pushing layer: sha256:0ef630c8efc92b60505b62e7d8c942defd43bbd9d1ac869f6b48246149e24ab1"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.108774427+02:00" level=debug msg="Assembling tar data for d673eff3ea3a4a9ae6034dd7e922d3b13ea471b7c480342d35f58d0800088a3b"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.110283019+02:00" level=debug msg="Assembling tar data for 6a0340f028c1fe428c324236ff32d551c3edfccc614c18861e99554be127aa6a"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.110364733+02:00" level=debug msg="Assembling tar data for 22e24cc59be1ad28fcc8e58beb3853808e23f546cbdf10c7f2d70b6a6b06f567"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.111267528+02:00" level=debug msg="Assembling tar data for 8380d1e921c9fe78d71a3a6901ede37f356520dfb1d0ebd705e4d0dcbc998802"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.111713175+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.111906935+02:00" level=debug msg="Checking for presence of layer sha256:910397601113062c71d20ce1a50fc3ce0b4573e3eb5e37d9d777cded43961f6b (sha256:2b7dde8c38ea6846165bf7c01af325d72d1d4af8eafe58e7b57d09c0a7b77d3c) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.112535988+02:00" level=debug msg="Assembling tar data for ddc858a799ed50a534a4fed59ac72d1525c2bb10c3befe51622c87cb124ab19e"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135253097+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135309486+02:00" level=error msg="Upload failed, retrying: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135363366+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135545460+02:00" level=debug msg="Pushing layer: sha256:316d0eee43a0f6b0b5bd2145b4b94e8e29607110d32acf255e7dc6e1ae3f6f28"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135670578+02:00" level=debug msg="Checking for presence of layer sha256:c32193acdde589a51f40676750f007a76539cd4cb5500bfd2556c0673c165a90 (sha256:28b75cbefef624f1c57d7a2e061b2e6de129eaf44da264563b209147f19734a1) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135548087+02:00" level=error msg="Attempting next endpoint for push after error: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135839434+02:00" level=debug msg="Skipping v1 endpoint https://images.bigkuber.inside.datax.pl because v2 registry was detected"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135894206+02:00" level=debug msg="Pushing layer: sha256:34787f33861626ac4a649170b17d4f81e25ab1aeb300d9455f96b8e5402d229f"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.137523719+02:00" level=debug msg="Pushing layer: sha256:35c20f26d18852b74cc90afc4fb1995f1af45537a857eef042a227bd8d0822a3"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.150771828+02:00" level=debug msg="Checking for presence of layer sha256:c3fe59dd955634c3fa1808b8053353f03f4399d9d071be015fdfb98b3e105709 (sha256:81cf5426393a4ac116dac26d8e0f95ea3ba85afcc09bc6eafdbd2efc598aa180) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.157682152+02:00" level=debug msg="Pushing layer: sha256:6ed1a81ba5b6811a62563b80ea12a405ed442a297574de7440beeafe8512a00a"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.166616697+02:00" level=debug msg="Pushing layer: sha256:a3483ce177ce1278dd26f992b7c0cfe8b8175dd45bc28fee2628ff2cf063604c"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.178014534+02:00" level=debug msg="Assembling tar data for 47fb797a1dcf74556ef32502e03e7b0ac004cee062557980cbc926e06f51bd4b"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.178823622+02:00" level=debug msg="Assembling tar data for c4e08667b54798acaa89006674f1fab79651c4ec635e18d4e1ad6adea129aaf8"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.179815379+02:00" level=debug msg="Pushing layer: sha256:ce6c8756685b2bff514e0b28f78eedb671380084555af2b3833e54bb191b262a"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.180345018+02:00" level=debug msg="Checking for presence of layer sha256:30339f20ced009fc394410ac3360f387351641ed40d6b2a44b0d39098e2e2c40 (sha256:3318dd58ae6084d70d299efb50bcdf63e861f2dc3d787e03a751581e606442d9) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.188727340+02:00" level=debug msg="Assembling tar data for a928c5db828f3fd176e7eaf494da8b78e2617a32559c790f6af47e9f41e75a5b"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.188856382+02:00" level=debug msg="Assembling tar data for 26deb052b00c3f52d7f83ad2cb741fb489c51329a264ac1b05c8fe779953770d"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.189596714+02:00" level=debug msg="Checking for presence of layer sha256:0eb22bfb707db44a8e5ba46a21b2ac59c83dfa946228f04be511aba313bdc090 (sha256:8d9ed335b7dbe095ecfbbfe0857d07971283db0119f7a4aa490f9cbe06187335) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.189888533+02:00" level=debug msg="Checking for presence of layer sha256:a2ae92ffcd29f7ededa0320f4a4fd709a723beae9a4e681696874932db7aee2c (sha256:e12c678537aee9a1a1be8197da115e7c4d01f2652344f492a50ca8def9993d1e) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.191143301+02:00" level=debug msg="Assembling tar data for ad37e77046b99529724637bc995cd2ae9e125ac930063259f39c916ccd8975e1"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.219138614+02:00" level=debug msg="Assembling tar data for 48e2bf3066643a77760f3a53b7ac4bbf137eb1887c4c15ccd78ffd748b2b7f86"

As might be expected, the registry server does not report any problems:

2017-03-26T04:14:09.400200681Z 10.38.0.0 - - [26/Mar/2017:04:14:09 +0000] "HEAD /v2/kskalski-dataflows/blobs/sha256:8d9ed335b7dbe095ecfbbfe0857d07971283db0119f7a4aa490f9cbe06187335 HTTP/1.1" 200 0 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 
2017-03-26T04:14:09.410077768Z time="2017-03-26T04:14:09Z" level=info msg="response completed" go.version=go1.7.3 http.request.host=images.bigkuber.inside.datax.pl http.request.id=e726db90-591c-465b-9a5d-ecdf980ae24c http.request.method=POST http.request.remoteaddr=192.168.168.114 http.request.uri="/v2/kskalski-dataflows/blobs/uploads/" http.request.useragent="docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" http.response.duration=19.552371ms http.response.status=202 http.response.written=0 instance.id=b985b0b1-c857-474d-b112-ad6ede7c62c9 version=v2.6.0  
2017-03-26T04:14:09.410104571Z 10.38.0.0 - - [26/Mar/2017:04:14:09 +0000] "POST /v2/kskalski-dataflows/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 
2017-03-26T04:26:22.159163623Z time="2017-03-26T04:26:22Z" level=info msg="response completed" go.version=go1.7.3 http.request.host=images.bigkuber.inside.datax.pl http.request.id=51dbde7d-8785-446b-8403-fb47c684b0eb http.request.method=GET http.request.remoteaddr=192.168.168.114 http.request.uri="/v2/" http.request.useragent="docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=2.710846ms http.response.status=200 http.response.written=2 instance.id=b985b0b1-c857-474d-b112-ad6ede7c62c9 version=v2.6.0  
2017-03-26T04:26:22.159399084Z 10.38.0.0 - - [26/Mar/2017:04:26:22 +0000] "GET /v2/ HTTP/1.1" 200 2 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 
2017-03-26T04:26:22.188904478Z time="2017-03-26T04:26:22Z" level=info msg="response completed" go.version=go1.7.3 http.request.host=images.bigkuber.inside.datax.pl http.request.id=373d6cbd-94b9-40c2-aa45-25c8aa3ad944 http.request.method=GET http.request.remoteaddr=192.168.168.114 http.request.uri="/v2/kskalski-dataflows/manifests/latest" http.request.useragent="docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=18.19002ms http.response.status=200 http.response.written=4304 instance.id=b985b0b1-c857-474d-b112-ad6ede7c62c9 version=v2.6.0  
2017-03-26T04:26:22.188938918Z 10.38.0.0 - - [26/Mar/2017:04:26:22 +0000] "GET /v2/kskalski-dataflows/manifests/latest HTTP/1.1" 200 4304 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 

I suspect something is configured incorrectly in the loadbalancer doing https termination, since going through localhost:5000 works well. There must be some specific interaction between client and server that happens only during push, and not during any other operation, which the loadbalancer's https handling breaks.

@ghost

ghost commented Aug 25, 2017

Any news on this?

@jsumners

Try adding http-request set-header X-Forwarded-Proto https if { ssl_fc } to the backend section of your HAProxy configuration.
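For context, a minimal HAProxy sketch showing where that header line goes. The frontend/backend names, certificate path, and server address are illustrative placeholders, not taken from this thread:

```
frontend registry_https
    bind *:443 ssl crt /etc/haproxy/certs/registry.pem
    default_backend registry

backend registry
    # Tell the registry the original request was HTTPS, so the Location
    # URLs it generates for chunked blob uploads use the right scheme.
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    server registry1 127.0.0.1:5000 check
```

The `ssl_fc` fetch is true only when the frontend connection itself was TLS, so plain-http traffic is left untouched.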

@AndreaGiardini

I had the same problem with an Nginx reverse proxy behind an Amazon ELB (doing SSL termination). Forcing the protocol to https fixes the problem.
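A sketch of what "forcing the protocol" looks like in an nginx proxy block; the upstream name and port are placeholders:

```nginx
location /v2/ {
    proxy_pass http://registry:5000;
    proxy_set_header Host $http_host;
    # Hard-code https instead of using $scheme: nginx itself receives
    # plain http from the ELB, so $scheme would report the wrong protocol
    # and the registry would sign upload URLs for the wrong scheme.
    proxy_set_header X-Forwarded-Proto https;
}
```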

@ghost

ghost commented Jan 5, 2018

@AndreaGiardini you just saved my life! I spent a whole morning on this issue (with the exact same configuration... ELB -> Nginx -> Registry)! ❤️

@Nick-Harvey

@AndreaGiardini I think I might be running into the same thing, but I'm not an nginx expert; would you mind posting your config, or at least the portion you added?

Here's what I've done:

    # This file is largely based on the one written by @Djelibeybi in:
    # https://github.com/Djelibeybi/Portus-On-OracleLinux7/

    events {
      worker_connections 1024;
    }

    http {
      default_type  application/octet-stream;
      charset       UTF-8;

      # Some basic config.
      server_tokens off;
      sendfile      on;
      tcp_nopush    on;
      tcp_nodelay   on;

      # On timeouts.
      keepalive_timeout     65;
      client_header_timeout 240;
      client_body_timeout   240;
      fastcgi_read_timeout  249;
      reset_timedout_connection on;

      ## Set a variable to help us decide if we need to add the
      ## 'Docker-Distribution-Api-Version' header.
      ## The registry always sets this header.
      ## In the case of nginx performing auth, the header will be unset
      ## since nginx is auth-ing before proxying.
      map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        '' 'registry/2.0';
      }

      upstream {{ template "portus.fullname" . }} {
        least_conn;
        server {{ template "portus.fullname" . }}:{{ .Values.portus.service.port }} max_fails=3 fail_timeout=15s;
      }

      upstream {{ template "registry.fullname" . }}:{{ .Values.registry.service.port }} {
        least_conn;
        server {{ template "registry.fullname" . }}:{{ .Values.registry.service.port }} max_fails=3 fail_timeout=15s;
      }

      server {
        server_name {{ template "nginx.fullname" . }};

        {{- if .Values.portus.tls.enabled }}
        listen {{ .Values.nginx.service.port }} ssl http2;

        ##
        # SSL

        ssl on;

        # Certificates
        ssl_certificate /certificates/portus.crt;
        ssl_certificate_key /certificates/portus.key;

        # Enable session resumption to improve https performance
        #
        # http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html
        #ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        # Enables server-side protection from BEAST attacks
        # http://blog.ivanristic.com/2013/09/is-beast-still-a-threat.html
        ssl_prefer_server_ciphers on;

        # Disable SSLv3 (enabled by default since nginx 0.8.19)
        # since it's less secure than TLS
        # http://en.wikipedia.org/wiki/Secure_Sockets_Layer#SSL_3.0
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

        # Ciphers chosen for forward secrecy and compatibility.
        #
        # http://blog.ivanristic.com/2013/08/configuring-apache-nginx-and-openssl-for-forward-secrecy.html
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        {{- else }}
        listen {{ .Values.nginx.service.port }} http2;
        {{- end }}

        ##
        # Docker-specific stuff.

        proxy_set_header Host $http_host;   # required for Docker client sake
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;

        # disable any limits to avoid HTTP 413 for large image uploads
        client_max_body_size 0;

        # required to avoid HTTP 411: see Issue #1486
        # (https://github.com/docker/docker/issues/1486)
        chunked_transfer_encoding on;

        ##
        # Custom headers.

        # Adding HSTS[1] (HTTP Strict Transport Security) to avoid SSL stripping[2].
        #
        # [1] https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
        # [2] https://en.wikipedia.org/wiki/SSL_stripping#SSL_stripping
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

        # Don't allow the browser to render the page inside a frame or iframe
        # and avoid Clickjacking. More in the following link:
        #
        # https://developer.mozilla.org/en-US/docs/HTTP/X-Frame-Options
        add_header X-Frame-Options DENY;

        # Disable content-type sniffing on some browsers.
        add_header X-Content-Type-Options nosniff;

        # This header enables the Cross-site scripting (XSS) filter built into
        # most recent web browsers. It's usually enabled by default anyway, so the
        # role of this header is to re-enable the filter for this particular
        # website if it was disabled by the user.
        add_header X-XSS-Protection "1; mode=block";

        # Add header for IE in compatibility mode.
        add_header X-UA-Compatible "IE=edge";

        # Redirect (most) requests to /v2/* to the Docker Registry
        location /v2/ {
          # Do not allow connections from docker 1.5 and earlier
          # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
          if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
          }

          ## If $docker_distribution_api_version is empty, the header will not be added.
          ## See the map directive above where this variable is defined.
          add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

          {{- if .Values.portus.tls.enabled }}
          proxy_pass https://{{ template "registry.fullname" . }}:{{ .Values.registry.service.port }};
          {{- else }}
          proxy_pass http://{{ template "registry.fullname" . }}:{{ .Values.registry.service.port }};
          {{- end }}

          proxy_set_header Host $http_host;   # required for docker client's sake
          proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 900;
          proxy_buffering on;
        }

        # Portus needs to handle /v2/token for authentication
        location = /v2/token {
          {{- if .Values.portus.tls.enabled }}
          proxy_pass https://{{ template "portus.fullname" . }};
          {{- else }}
          proxy_pass http://{{ template "portus.fullname" . }};
          {{- end }}

          proxy_set_header Host $http_host;   # required for docker client's sake
          proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 900;
          proxy_buffering on;
        }

        # Portus needs to handle /v2/webhooks/events for notifications
        location = /v2/webhooks/events {
          {{- if .Values.portus.tls.enabled }}
          proxy_pass https://{{ template "portus.fullname" . }};
          {{- else }}
          proxy_pass http://{{ template "portus.fullname" . }};
          {{- end }}

          proxy_set_header Host $http_host;   # required for docker client's sake
          proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 900;
          proxy_buffering on;
        }

        # Portus handles everything else for the UI
        location / {
          try_files $uri/index.html $uri.html $uri @{{ template "portus.fullname" . }};
        }

        location @{{ template "portus.fullname" . }} {
          {{- if .Values.portus.tls.enabled }}
          proxy_pass https://{{ template "portus.fullname" . }};
          {{- else }}
          proxy_pass http://{{ template "portus.fullname" . }};
          {{- end }}

          proxy_set_header Host $http_host;   # required for docker client's sake
          proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 900;
          proxy_buffering on;
        }
      }
    }

@AndreaGiardini

@Nick-Harvey I think you should modify
proxy_set_header X-Forwarded-Proto $scheme
to
proxy_set_header X-Forwarded-Proto https

@bitva77

bitva77 commented Feb 15, 2018

@AndreaGiardini thank you! This was the fix for me as well going from an F5 which terminates SSL for the client, on to nginx via http and then the docker registry.

@twall

twall commented Feb 17, 2018

I'm seeing a similar issue with a docker registry (nexus3) behind AWS CloudFront. I've set X-Forwarded-Proto, but the push still comes back with "unknown blob". After some minutes of delay (with all components already pushed), the command completes successfully.

@dignajar

dignajar commented Jan 8, 2019

I have the same issue, but I'm running Artifactory behind Nginx.

Nginx (SSL)(https) -> Artifactory (http)

server {
        server_name example.com;
        listen 443 ssl;

        # Logs
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        # SSL
        ssl_certificate      /etc/nginx/ssl/cert.pem;
        ssl_certificate_key  /etc/nginx/ssl/privkey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_prefer_server_ciphers   on;

        if ($http_x_forwarded_proto = '') {
                set $http_x_forwarded_proto  $scheme;
        }

        # Disable checking of client request body size
        client_max_body_size 0;
        chunked_transfer_encoding on;

        rewrite ^/v2/(.*) /artifactory/v2/docker/$1;

        location /artifactory/ {
                proxy_read_timeout  2400s;
                proxy_pass_header   Server;
                proxy_cookie_path   ~*^/.* /;

                proxy_set_header    X-Artifactory-Override-Base-Url                                $http_x_forwarded_proto://$host:$server_port/artifactory;
                proxy_set_header    X-Forwarded-Port  $server_port;
                proxy_set_header    X-Forwarded-Proto $scheme;
                proxy_set_header    Host              $http_host;
                proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;

                proxy_pass          http://localhost:8081/artifactory/;
        }
}

With this configuration, the push fails:

The push refers to repository [example.com/test/python]
b483c36d97f7: Pushing [==================================================>]  6.643MB
a3c65ba3a94d: Pushing [==================================================>]  4.608kB
a6ba437d95f4: Retrying in 6 seconds
6b68dfad3e66: Pushing [==================================================>]  837.6kB
cd7100a72410: Pushing [==================================================>]  4.403MB
blob upload unknown

@markgalpin

@dignajar if you contact JFrog support, we'll be happy to work with you on this one. That said, this issue feels like a confusing client response to a network-layer glitch. If there's an issue beyond network-layer config, it seems client-side rather than server-side (and obviously an edge case, possibly just bad feedback). The NGINX config you provided looks like the default one we generate (although I haven't checked it line by line), and that generally works (if it's not the default NGINX config we generate, use the auto-generated one), but additional network-layer issues can complicate things depending on your specific configuration.

@RichardFoo

RichardFoo commented Feb 18, 2019

[Opened a new issue #2862 and moved my comments there because it seems like this might be common to several open issues.]

@ritarya

ritarya commented Aug 9, 2019

I am also facing this issue behind the nginx ingress. Push works fine with a single-replica registry, but when I increase the number of replicas, it gives the unknown blob upload message after waiting and retrying. My setup is: external docker client -> nginx-ingress -> registry (2 replicas).

@ericsuhong

I also faced the same issue @ritarya mentioned. When I ran a private repo with multiple replicas, docker image push kept retrying and failing. When I reduced the number of replicas to 1, the issue immediately went away...

@AntonSmuit

I also faced the same issue @ritarya mentioned. When I ran a private repo with multiple replicas, docker image push kept retrying and failing. When I reduced the number of replicas to 1, the issue immediately went away...

Maybe it's another issue: REGISTRY_HTTP_SECRET not being the same for all replicas.

@ericsuhong

We were experimenting with an unauthenticated registry for testing (no secrets). We were using the local file system, and I think that is what was causing the issue. The issue went away after we moved to Azure Storage as the backing store.

@BenoitDuffez

In case anyone lands on this with an Apache reverse proxy, here's what fixed the issue in my setup:

RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
RequestHeader set "X-Forwarded-SSL" expr=%{HTTPS}
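A minimal sketch of where those directives could sit in an Apache vhost doing SSL termination in front of the registry; the server name, certificate paths, and backend address are placeholders, not taken from this thread:

```apache
<VirtualHost *:443>
    ServerName registry.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/registry.crt
    SSLCertificateKeyFile /etc/ssl/private/registry.key

    # Preserve the original scheme for the registry behind the proxy
    # (requires mod_headers; expr= needs Apache 2.4).
    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
    RequestHeader set "X-Forwarded-SSL" expr=%{HTTPS}

    # Requires mod_proxy and mod_proxy_http.
    ProxyPass        /v2/ http://127.0.0.1:5000/v2/
    ProxyPassReverse /v2/ http://127.0.0.1:5000/v2/
</VirtualHost>
```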

@yoandypv

@Nick-Harvey I think you should modify
proxy_set_header X-Forwarded-Proto $scheme
to
proxy_set_header X-Forwarded-Proto https

Genius, what a legend.

@phuctranhoang

Try adding http-request set-header X-Forwarded-Proto https if { ssl_fc } to your backend configuration in your HAProxy configuration.

Thanks @jsumners. You saved my life.

@peeyush-es

@Nick-Harvey I think you should modify
proxy_set_header X-Forwarded-Proto $scheme
to
proxy_set_header X-Forwarded-Proto https

Genius, what a legend.

worked! :D

@duncanhkc

I also faced the same issue as @ritarya mentioned. When I run private repo with multiple replicas, docker image push was keep retrying and failing. When I reduced # of replicas to 1, issue immediately went away...

Have you set service.spec.sessionAffinity to ClientIP? https://kubernetes.io/docs/concepts/services-networking/service/
It looks like you are connecting to different pods during the image push.
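For reference, sessionAffinity is set on the Service spec; a minimal sketch, with the service name, selector, and port as placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  selector:
    app: docker-registry
  ports:
    - port: 5000
  # Route all requests from a given client IP to the same pod, so the
  # multi-request blob upload is not split across registry replicas.
  sessionAffinity: ClientIP
```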

@joaodrp
Copy link
Collaborator

joaodrp commented Feb 25, 2021

If you're using multiple instances behind a load balancer, please make sure to have the same http.secret configured for all instances, otherwise uploads will fail.

See https://docs.docker.com/registry/configuration/#http for secret:

A random piece of data used to sign state that may be stored with the client to protect against tampering. For production environments you should generate a random piece of data using a cryptographically secure random generator. If you omit the secret, the registry will automatically generate a secret when it starts. If you are building a cluster of registries behind a load balancer, you MUST ensure the secret is the same for all registries.
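A sketch of the relevant fragment of the registry's config.yml; the secret value is a placeholder and should be generated with a cryptographically secure random generator:

```yaml
http:
  addr: :5000
  # Must be identical on every replica behind the load balancer;
  # otherwise upload state signed by one instance is rejected by another,
  # which surfaces as "blob upload unknown" on the client.
  secret: replace-with-a-long-random-string
```

The same value can also be supplied via the REGISTRY_HTTP_SECRET environment variable.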

@natew

natew commented May 27, 2021

Just for posterity: I was getting this while using Cloudflare; moving to our own traefik router fixed it.

@QuentinouLeTwist

QuentinouLeTwist commented Oct 7, 2021

I'm running HAProxy with nginx under a docker swarm cluster, and I had to define the following in the configs:

In nginx conf :

proxy_set_header Host              $http_host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;

And the following in haproxy.cfg:

http-request set-header X-Forwarded-Proto https if { ssl_fc }

And now, no more "unknown blob" issues. Thanks guys!

@maartenschalekamp

Just in case anyone else runs into this: if you have confirmed that X-Forwarded-Proto is set correctly and you have more than one registry instance/pod running, check the following two things.

  • The http.secret config value is set to the same value on all instances
  • Your underlying storage is shared via NFS or S3. If using NFS, also set the no_wdelay flag on the export.
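For NFS-backed storage, no_wdelay is an export option set on the NFS server, e.g. in /etc/exports; the export path and client network below are placeholders:

```
# Disable write-gathering delays so registry writes from one replica
# become visible to the others promptly (no_wdelay only applies with sync).
/srv/registry  10.0.0.0/24(rw,sync,no_wdelay)
```

Run `exportfs -ra` after editing to re-export with the new options.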

@cgperalt-harness

cgperalt-harness commented Mar 7, 2022

@ritarya did you find a solution? If so, could you please post it here? I'm running into the exact same issue. When I have more than two replicas of my private docker registry, it gives the unknown blob upload message; with one replica everything works fine. I'm mounting a Filestore from GCP to store the docker images.

@cgperalt-harness

@maartenschalekamp if I'm using Filestore from GCP, which is a managed service, do you know how I can enable the no_wdelay flag?

@goors

goors commented Jun 30, 2022

I have the same problem with an NFS share when running multiple instances (ingress controller). Scaling down to 1 replica solved it. Having multiple replicas would be awesome, but I've kind of made peace with just one.

@kbzowski

kbzowski commented Nov 19, 2022

I had the same problem when I ran multiple instances of the registry:
docker compose up -d --scale registry=3
Returning to 1 instance solved it. I don't know where the problem was.

@zcourts

zcourts commented Oct 31, 2023

We recently had this issue with 2.8.3 (the most recent registry image at the time of writing) and found that many issues can trigger it.
One unexpected cause is the proxy rewriting the URL and dropping the trailing slash from requests that were originally sent with one. Check your nginx config and preserve the trailing slash if the docker client originally provided it.
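A sketch of the trailing-slash pitfall in nginx; the upstream name and rewrite pattern are illustrative. The docker client starts an upload with POST /v2/<name>/blobs/uploads/ (trailing slash), and a rewrite can silently strip it:

```nginx
# Problematic: this kind of "normalize away trailing slashes" rewrite
# turns /v2/foo/blobs/uploads/ into /v2/foo/blobs/uploads,
# which the registry does not recognize.
# rewrite ^/(.*)/$ /$1 permanent;

# Safer: pass the request URI through unmodified. With no URI part on
# proxy_pass, nginx forwards the original path exactly as received.
location /v2/ {
    proxy_pass http://registry:5000;
}
```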
