'docker buildx build' failed #6111

Closed
shixinlishixinli opened this issue Sep 15, 2023 · 11 comments
Labels
area/vertical-pod-autoscaler kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@shixinlishixinli

Hi,

I tried to build the image with this command: sudo ALL_ARCHITECTURES="amd64" REGISTRY=shixinlishixinli TAG=new make release
But the 'docker buildx build' step fails. I have a proxy in my environment. Do you have any suggestions for solving this problem?

this is the output

sudo ALL_ARCHITECTURES="amd64" REGISTRY=shixinlishixinli TAG=new make release
rm -f admission-controller-amd64
docker build --build-arg http_proxy='http://child-prc.intel.com:913' --build-arg https_proxy='http://child-prc.intel.com:913' -t vpa-autoscaling-builder ../../builder
Sending build context to Docker daemon 3.584kB
Step 1/10 : FROM golang:1.20.5
---> d5a118f29bfa
Step 2/10 : LABEL maintainer="Beata Skiba bskiba@google.com"
---> Using cache
---> 9f75a8030889
Step 3/10 : ENV GOPATH /gopath/
---> Using cache
---> 3a9fc243912a
Step 4/10 : ENV PATH $GOPATH/bin:$PATH
---> Using cache
---> f254394f8536
Step 5/10 : ARG GOARCH
---> Using cache
---> b446ab9c7fc0
Step 6/10 : ARG LDFLAGS
---> Using cache
---> 2bd85c426a91
Step 7/10 : RUN go version
---> Using cache
---> dd3c51467258
Step 8/10 : RUN go install github.com/tools/godep@latest
---> Using cache
---> b7e5622eae59
Step 9/10 : RUN godep version
---> Using cache
---> c552baf1035f
Step 10/10 : CMD ["/bin/bash"]
---> Using cache
---> 8e6d07eb04db
Successfully built 8e6d07eb04db
Successfully tagged vpa-autoscaling-builder:latest
docker run -v `pwd`/../..:/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler vpa-autoscaling-builder:latest bash -c 'cd /gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler && make build-binary-with-vendor-amd64 -C pkg/admission-controller'
make: Entering directory '/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/admission-controller'
CGO_ENABLED=0 LD_FLAGS=-s GO111MODULE=on GOARCH=amd64 GOOS=linux go build -mod vendor -o admission-controller-amd64
make: Leaving directory '/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/admission-controller'
unknown flag: --driver
See 'docker --help'.

Usage: docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
--config string Location of client config files (default
"/root/.docker")
-c, --context string Name of the context to use to connect to
the daemon (overrides DOCKER_HOST env
var and default context set with "docker
context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level
("debug"|"info"|"warn"|"error"|"fatal")
(default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA
(default "/root/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default
"/root/.docker/cert.pem")
--tlskey string Path to TLS key file (default
"/root/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit

Management Commands:
builder Manage builds
config Manage Docker configs
container Manage containers
context Manage contexts
image Manage images
manifest Manage Docker image manifests and manifest lists
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
scan* Docker Scan (Docker Inc., v0.23.0)
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes

Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

To get more help with docker, check out our guides at https://docs.docker.com/go/guides/

BUILDER=
docker buildx build --pull --load --platform linux/amd64 -t shixinlishixinli/vpa-admission-controller-amd64:new --build-arg ARCH=amd64 .
unknown flag: --pull
See 'docker --help'.

[... same 'docker --help' usage output as above ...]

@shixinlishixinli shixinlishixinli added the kind/bug Categorizes issue or PR as related to a bug. label Sep 15, 2023
@jbartosik
Collaborator

make: Entering directory '/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/admission-controller'
CGO_ENABLED=0 LD_FLAGS=-s GO111MODULE=on GOARCH=amd64 GOOS=linux go build -mod vendor -o admission-controller-amd64
make: Leaving directory '/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/admission-controller'
unknown flag: --driver
See 'docker --help'.

It seems the problem is with this line, added by #5867. @kgolab @voelzmo, if you have ideas and time, please chime in.

A quick search for "unknown flag: --driver" turned up IQSS/dataverse#9771, which suggests a newer docker version might address the problem.

I don't have other ideas right now.

@pnasrat

pnasrat commented Sep 22, 2023

I came across this while looking for possible first-contributor issues; apologies if this isn't a helpful comment.

@shixinlishixinli, can you run docker buildx version and docker version?

I'm not sure how you have buildx installed; it isn't listed in the Management Commands section of your output above, in contrast with my local setup (Ubuntu 22.04 with docker-ce 24.0.6 and docker-buildx-plugin 0.11.2 installed via apt). For other installation methods, see https://github.com/docker/buildx#installing, and the CLI plugin design in docker/cli#1534.

If buildx is required (and it is the newer default), one option might be to add a target, or adjust the docker-build-% target in the Makefile, to check for it, e.g. by calling docker buildx version. My local Management Commands output for comparison:

Management Commands:
  builder     Manage builds
  buildx*     Docker Buildx (Docker Inc., v0.11.2)
  compose*    Docker Compose (Docker Inc., v2.21.0)
  container   Manage containers
  context     Manage contexts
  image       Manage images
  manifest    Manage Docker image manifests and manifest lists
  network     Manage networks
  plugin      Manage plugins
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes
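A pre-flight check along those lines could look like the following. This is only a sketch: the helper name `require_cmd` and the messages are my own, not anything in the current Makefile.

```shell
#!/bin/sh
# Hypothetical pre-flight check: probe for a working subcommand before the
# build, so a missing buildx plugin produces a clear message instead of the
# confusing "unknown flag: --driver" from the plain docker CLI.
require_cmd() {
  # Run the probe command, discarding all output; succeed only if it does.
  "$@" >/dev/null 2>&1
}

if require_cmd docker buildx version; then
  echo "buildx available"
else
  echo "docker buildx not found; see https://github.com/docker/buildx#installing" >&2
fi
```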

@pnasrat

pnasrat commented Sep 22, 2023

Note I get the same error about flags if I uninstall buildx and run

cd vertical-pod-autoscaler/pkg/admission-controller
make docker-build

It looks as if the docker CLI is evaluating the flags itself and rejecting them as invalid, rather than reporting that buildx is not installed.

@shixinlishixinli
Author

Thanks for your reply. You are right: buildx is not installed in my PC environment.

But I still have a problem with buildx: docker buildx fails to pull the image, although I can get the same image with docker pull. I have a proxy in my PC environment. Do you have any suggestions on how to make buildx pull images from behind a proxy?

docker buildx build .
[+] Building 30.1s (2/2) FINISHED                    docker-container:cool_cray
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 862B 0.0s
=> ERROR [internal] load metadata for gcr.io/distroless/static:latest 30.0s

[internal] load metadata for gcr.io/distroless/static:latest:


WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
Dockerfile:16

14 |
15 |
16 | >>> FROM gcr.io/distroless/static:latest
17 | MAINTAINER Marcin Wielgus "mwielgus@google.com"
18 |

ERROR: failed to solve: gcr.io/distroless/static:latest: failed to do request: Head "https://gcr.io/v2/distroless/static/manifests/latest": dial tcp: lookup gcr.io: i/o timeout

docker pull gcr.io/distroless/static:latest
latest: Pulling from distroless/static
Digest: sha256:e7e79fb2947f38ce0fab6061733f7e1959c12b843079042fe13f56ca7b9d178c
Status: Image is up to date for gcr.io/distroless/static:latest
gcr.io/distroless/static:latest

cat /etc/docker/daemon.json
{
"features":{"buildkit":false},
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://registry.docker-cn.com","https://hub-mirror.c.163.com","https://registry.aliyuncs.com","https://docker.mirrors.ustc.edu.cn"],
"dns": ["172.18.0.1","74.125.204.82","142.250.102.82","10.248.2.5", "10.239.27.236", "172.17.6.9", "10.0.0.2", "8.8.8.8","127.0.0.53"],
"insecure-registries": [
"nunu-pc.bj.intel.com:5000",
"nvbox.bj.intel.com:5000"
]
}

docker buildx create --driver-opt env.https_proxy=http://child-prc.intel.com:913 --driver-opt env.http_proxy=http://child-prc.intel.com:913


@pnasrat

pnasrat commented Oct 2, 2023

See the following docs on proxies and docker buildx: https://docs.docker.com/network/proxy/

I tested with a local setup: one virtual machine running squid, and a second virtual machine running docker with a docker config.json setting the proxies. Tailing the squid logs, I validated that buildx requests go via the proxy. While the Makefile could be modified to support passing in build environment config, setting your defaults is probably the best option.

Can you try adding your proxies to your ~/.docker/config.json? I've used your command-line args from above:

{
    "proxies": {
      "default": {
        "httpProxy": "http://child-prc.intel.com:913",
        "httpsProxy": "http://child-prc.intel.com:913",
        "noProxy": "127.0.0.0/8"
      }
    }
}

You can test this with a modified example that checks the proxies inside the builder. The following is based on the build commands in the Makefiles in the various VPA dirs and the docker proxy docs. It creates a builder and tells docker buildx to use it, then runs a build, passing the Dockerfile via stdin with a heredoc. The build prints the configured proxy environment variables, tries to add a package (which would fail without the proxy), and runs a curl command to validate; finally it removes the builder.

BUILDER=$(docker buildx create --driver=docker-container --use)
docker buildx build \
  --no-cache \
  --progress=plain \
  - <<EOF
FROM alpine
RUN env | grep -i _PROXY
RUN apk add curl
RUN curl -I http://google.com
EOF
docker buildx rm ${BUILDER}

@anirudh-hegde

Try setting the proxy ENV variables (key to value) in the Dockerfile.
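A sketch of that suggestion (the file name is illustrative, and the proxy URL is the one from the commands earlier in this thread). Note that unlike --build-arg or the config.json proxies, ENV values persist into the final image, which is usually undesirable for proxy settings:

```shell
# Write a demo Dockerfile (hypothetical name) that bakes the proxy into the
# build environment with ENV. Every RUN step will then see these variables.
cat > Dockerfile.proxy-demo <<'EOF'
FROM alpine
ENV http_proxy=http://child-prc.intel.com:913
ENV https_proxy=http://child-prc.intel.com:913
RUN env | grep -i _proxy
EOF
echo "wrote Dockerfile.proxy-demo"
```

Building it with `docker buildx build -f Dockerfile.proxy-demo .` would show the proxy variables in the `RUN env` step.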

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2024
@Shubham82
Contributor

@shixinlishixinli if your concern is resolved, can we close this issue?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Mar 30, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

[... quoted triage-bot /close comment above ...]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
