"unknown blob" error while pushing images to harbor #174

Closed
karimbzu opened this issue Mar 4, 2019 · 15 comments · Fixed by #271

@karimbzu

karimbzu commented Mar 4, 2019

I have successfully deployed harbor-helm on my OpenShift cluster and I am able to log in to Harbor. However, when I try to push an image to a Harbor repository, an "unknown blob" error appears after the command completes.
Here is the output of my command:
[root@master harbor-helm]# docker push harbor.192.168.80.130.nip.io/library/alpine:v1.0
The push refers to a repository [harbor.192.168.80.130.nip.io/library/alpine]
503e53e365f3: Pushing [==================================================>] 5.529 MB/5.529 MB
unknown blob
Kindly help me in this regard.
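
For anyone triaging the same symptom, one quick check (a hypothetical diagnostic, not part of the original report) is to query the registry API through the same host used for the push:

```bash
# Probe the Docker registry v2 endpoint through the ingress/proxy layer.
# A healthy setup answers 401 with a Www-Authenticate header that points
# at Harbor's token service; anything else suggests /v2/ is not being
# routed to harbor-core.
curl -ik https://harbor.192.168.80.130.nip.io/v2/
```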

@ywk253100
Collaborator

Could you please upload the logs of Harbor components?

@karimbzu
Author

karimbzu commented Mar 6, 2019

Thanks for your reply.
Please find the logs below. I gathered them with `oc logs {pod} > {podname}.txt`; please let me know if there is a better way to get logs.
Example: oc logs harbor-harbor-core-6db4559bf8-tjh4q > core.log
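
If it helps, a loop along these lines (a convenience sketch, assuming the pods run in the currently selected project) collects the logs of every pod at once:

```bash
# Dump the logs of every pod in the current OpenShift project, one file
# per pod. "oc get pods -o name" yields entries like "pod/<name>".
for pod in $(oc get pods -o name); do
  oc logs "$pod" > "$(basename "$pod").log"
done
```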
notary-signer.log
portal.log
radis.log
core.log
database.log
jobservice.log
notary-server.log
docker_command_output.txt

Thanks and Regards

@karimbzu
Author

Any update?

@dtucker-csatf

@karimbzu Did you figure out what was causing the "unknown blob" issue?

@karimbzu
Author

@dtucker-csatf No, I have not solved the problem yet. Are you facing the same issue? What is your configuration?

@dtucker-csatf

dtucker-csatf commented Apr 16, 2019

Helm Config:

expose:
  # Set the way how to expose the service. Set the type as "ingress", 
  # "clusterIP" or "nodePort" and fill the information in the corresponding 
  # section
  type: ingress
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls 
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291 
    # for the detail.
    enabled: true
    # Fill the name of secret if you want to use your own TLS certificate
    # and private key. The secret must contain keys named tls.crt and 
    # tls.key that contain the certificate and private key to use for TLS
    # The certificate and private key will be generated automatically if 
    # it is not set
    secretName: ""
    # By default, the Notary service will use the same cert and key as
    # described above. Fill the name of secret if you want to use a 
    # separated one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate; it's necessary
    # when the type is "clusterIP" or "nodePort" and "secretName" is null
    commonName: ""
  ingress:
    hosts:
      core: core-harbor.app.domain
      notary: notary-harbor.domain
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    ports:
      # The service port Harbor listens on when serving with HTTP
      httpPort: 80
      # The service port Harbor listens on when serving with HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled 
      # is set to true
      notaryPort: 4443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving with HTTP
        port: 80
        # The node port Harbor listens on when serving with HTTP
        nodePort: 30002
      https: 
        # The service port Harbor listens on when serving with HTTPS
        port: 443
        # The node port Harbor listens on when serving with HTTPS
        nodePort: 30003
      # Only needed when notary.enabled is set to true
      notary: 
        # The service port Notary listens on
        port: 443
        # The node port Notary listens on
        nodePort: 30004

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands shown on the portal
# 2) populate the token service URL returned to docker/notary client
# 
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be 
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node 
# 
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://core-harbor.app.domain

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you have already existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3", 
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Set it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart is deleted
  resourcePolicy: ""
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before being bound
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume; otherwise the
      # default StorageClass will be used.
      # Set it to "-" to disable dynamic provisioning
      storageClass: "vsphere-standard"
      subPath: "registry"
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: ""
      storageClass: "vsphere-standard"
      subPath: "chartmuseum"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: "vsphere-standard"
      subPath: "jobservice"
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external database is used, the following settings for database will 
    # be ignored
    database:
      existingClaim: ""
      storageClass: "vsphere-standard"
      subPath: "database"
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external Redis is used, the following settings for Redis will 
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "vsphere-standard"
      subPath: "redis"
      accessMode: ReadWriteOnce
      size: 1Gi
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to 
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage 
  # for the detail.
  imageChartStorage:
    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift", 
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # TODO: support the keyfile of gcs
      #keyfile: /path/to/keyfile
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

logLevel: debug
# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "Harbor12345"
# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "n9wCoMo35arp8me1"

# If the service is exposed via "ingress", Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.7.0
  replicas: 1
# resources:
#  requests:
#    memory: 256Mi
#    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

core:
  image:
    repository: goharbor/harbor-core
    tag: v1.7.0
  replicas: 1
# resources:
#  requests:
#    memory: 256Mi
#    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

adminserver:
  image:
    repository: goharbor/harbor-adminserver
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.7.0
  replicas: 1
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLogger: file
# resources:
#   requests:
#     memory: 256Mi
#     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2-v1.7.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

chartmuseum:
  enabled: true
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.7.1-v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.7-v1.7.0
  replicas: 1
  # The http(s) proxy used to update vulnerabilities database from internet
  httpProxy:
  httpsProxy:
  # The interval of the Clair updaters in hours; set to 0 to
  # disable the updaters
  updatersInterval: 12
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

notary:
  enabled: false
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

database:
  # if external database is used, set "type" to "external"
  # and fill the connection information in the "external" section
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.7.0
    # The initial superuser password for internal database
    password: "Pu@Nro9tsel4u"
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    clairDatabase: "clair"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    sslmode: "disable"
  ## Additional deployment annotations
  podAnnotations: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection information in the "external" section
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.7.0
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.2"
    port: "6379"
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # used doesn't support configuring it
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    password: ""
  ## Additional deployment annotations
  podAnnotations: {}

Ingress YAML:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  annotations:
    ingress.kubernetes.io/proxy-body-size: '0'
    ingress.kubernetes.io/ssl-passthrough: 'true'
    ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
  name:  harbor-ingress
  namespace: harbor
  labels:
    app: harbor
    chart: harbor
    heritage: Tiller
spec:
  tls:
    - hosts:
        - core-harbor.app.csatf.domain
      secretName: harbor-ingress
  rules:
    - host: core-harbor.app.csatf.domain
      http:
        paths:
          - path: /
            backend:
              serviceName: harbor-portal
              servicePort: 80
          - path: /api/
            backend:
              serviceName: harbor-core
              servicePort: 8080
          - path: /service/
            backend:
              serviceName: harbor-core
              servicePort: 8080
          - path: /v2/
            backend:
              serviceName: harbor-core
              servicePort: 8080
          - path: /chartrepo/
            backend:
              serviceName: harbor-core
              servicePort: 8080
          - path: /c/
            backend:
              serviceName: harbor-core
              servicePort: 8080
status:
  loadBalancer: {}

CLI:

The push refers to repository [core-harbor.app.domain/library/php]
76b2edb7396e: Pushing [==================================================>]  4.608kB
b045c7b3ddd4: Pushing [==================================================>]  11.78kB
f1524345d8d5: Retrying in 5 seconds
e2479faa6f32: Pushing [==================================================>]  4.096kB
7267e9787061: Retrying in 5 seconds
fd40ac5106c0: Waiting
819c60c22f5f: Waiting
5dd24cee12f7: Waiting
5dacd731af1b: Waiting
unknown blob

Logs:
harbor-chartmuseum.log
harbor-clairlog.log
harbor-core.log
harbor-database-0-database.log
harbor-jobservice.log
harbor-portal.log
harbor-redis-0-redis.log
harbor-registry.log
redis-master-0.log
redis-slave.log

@karimbzu
Author

To my understanding, the "unknown blob" error requires adjusting the proxy configuration, as in:
https://stackoverflow.com/questions/51508146/blob-unknown-when-pushing-to-custom-registry-through-apache-proxy
However, I am installing harbor-helm on OpenShift OKD 3.11, which also uses a built-in HAProxy configuration with settings such as:
Header add X-Forwarded-Proto "https"
RequestHeader add X-Forwarded-Proto "https"

Apart from that, harbor-helm uses the same setting for its nginx proxy configuration, with the parameters mentioned above.
So I am confused about where I need to adjust these settings, in the OpenShift HAProxy or in the nginx proxy for Harbor, and maybe this is causing the "unknown blob" issue.
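
For what it's worth, the stanza being discussed looks roughly like this in nginx terms (a minimal sketch; the upstream name and port are assumptions, not the chart's actual template):

```nginx
location /v2/ {
    # Assumed upstream: in the chart, the registry API is routed via harbor-core.
    proxy_pass http://harbor-core:8080;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    # Without this line, a registry behind a TLS-terminating proxy generates
    # http:// blob-upload URLs, and the docker client's push then fails.
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Whichever hop terminates TLS in front of Harbor is the one that must set the header; hops further downstream should pass it through unchanged.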
Can someone assist me with this issue?
Thanks in advance.

@dtucker-csatf

I was able to resolve the issue by following the instructions in this link.

@karimbzu
Author

Still, I am unable to solve the issue.

@NanXuejiao

Have you solved this problem? I am getting the same error as you.

@cdchris12

Manually adding the changes from #271 got this working for me.

@ywk253100
Collaborator

@cdchris12 How did you get the "blob unknown" error? Is Harbor behind a proxy in your deployment? How do you expose the Harbor service: ingress, node port, or something else?

@kevinsingapore

I also got the same error, with an nginx proxy in front of the Harbor service.
How can this be solved? Thank you!

@problame

problame commented Jun 3, 2020

#174 (comment) solves the problem for me, but the changes made to the helm chart in #271 need to be back-ported to the component that generates the docker-compose.yml
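
A hypothetical way to check this on a docker-compose based deployment (the install path is a placeholder) is to grep the generated configuration for the header:

```bash
# Search the generated Harbor config for X-Forwarded-Proto handling.
# /path/to/harbor is wherever the installer bundle was unpacked.
grep -rn "X-Forwarded-Proto" /path/to/harbor/common/config/
```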

Requesting to re-open this issue.

@abdialeh

abdialeh commented Jul 14, 2021

> To my understanding, the "unknown blob" error requires adjusting the proxy configuration, as in:
> https://stackoverflow.com/questions/51508146/blob-unknown-when-pushing-to-custom-registry-through-apache-proxy
> However, I am installing harbor-helm on OpenShift OKD 3.11, which also uses a built-in HAProxy configuration with settings such as:
> Header add X-Forwarded-Proto "https" RequestHeader add X-Forwarded-Proto "https"
> Apart from that, harbor-helm uses the same setting for its nginx proxy configuration, with the parameters mentioned above.
> So I am confused about where I need to adjust these settings, in the OpenShift HAProxy or in the nginx proxy for Harbor, and maybe this is causing the "unknown blob" issue.
> Can someone assist me with this issue?
> Thanks in advance.

I just added these directives to my Apache vhost file:
Header add X-Forwarded-Proto "https"
RequestHeader add X-Forwarded-Proto "https"

My problem is solved.
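
For context, a minimal sketch of where such directives sit in an Apache reverse-proxy vhost in front of Harbor (the host name, ports, and certificate paths are assumptions for illustration; mod_ssl, mod_headers, and mod_proxy_http must be enabled):

```apache
<VirtualHost *:443>
    ServerName harbor.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/harbor.crt
    SSLCertificateKeyFile /etc/ssl/private/harbor.key

    # Tell Harbor the original client scheme so it generates https:// URLs.
    Header add X-Forwarded-Proto "https"
    RequestHeader add X-Forwarded-Proto "https"

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```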
