
Possible to use custom images? #3

Open
jwm opened this issue Dec 1, 2020 · 6 comments

Comments

@jwm

jwm commented Dec 1, 2020

We use a locally built image for our Circle workflows, since there are a large number of dependencies that we don't want to spend time installing at build time.

Is it (or will it) be possible to use custom aarch64 images on ARM executors?

Thanks!

@appplemac
Contributor

Hey @jwm, thanks for the question. It’s definitely possible to build Docker images and run aarch64 images on the Arm resources. However, as your job runs in a dedicated VM, you’d need to explicitly docker pull and docker run in your config — would that address your need here?
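
For illustration, a rough sketch of that pattern could look like the following; the resource class, machine image tag, registry path, and build command are placeholders rather than a tested recommendation:

    jobs:
      build-arm:
        machine:
          image: ubuntu-2004:202101-01   # placeholder Arm machine image tag
        resource_class: arm.medium       # placeholder Arm resource class
        steps:
          - checkout
          - run:
              name: Pull custom aarch64 image
              command: docker pull your-registry/your-aarch64-builder:latest
          - run:
              name: Build inside the custom container
              command: |
                docker run --rm \
                  -v "$PWD":/workspace -w /workspace \
                  your-registry/your-aarch64-builder:latest \
                  ./build.sh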

@jwm
Author

jwm commented Dec 2, 2020

Hey @appplemac! Our goal is to build an aarch64 binary for our primary app, which we currently build using the Docker executor and a custom container image that we specify with the image attribute. Ideally, we would do the same thing here.
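
To make that concrete, the current job is along these lines (the image name and build step are just placeholders):

    jobs:
      build:
        docker:
          - image: our-registry/our-build-image:latest   # placeholder for the locally built image
        steps:
          - checkout
          - run: make build   # placeholder for the real build steps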

It's somewhat more work than we're willing to do for a trial like this to modify our build config to bring up a container manually after the VM is up, then execute all our build code in it. The way our workflows are structured means it would be more refactoring than docker pull && docker run a-smallish-command to have all the code execute in the container.

Beyond that, we're focused on build time. We'd be happy to test with a VM executor if it launched with a custom image, but the spinup time for a VM is longer than we'd like to add in the long term.

Does that make sense? Happy to provide more detail or specific config sections if it's useful.

@lizthegrey

Agreed with @jwm. I was expecting some degree of a drop-in "cimg/go:1.15.4" replacement, since we run a number of commands that must execute within that environment. I suppose what we can do is manually install the specific Go version we need and then proceed with our build steps, but that feels a little more fragile than our previous setup.

@glenjamin

glenjamin commented Dec 4, 2020

One of the things that makes this a little complicated is that the docker executor runs on large multi-tenant machines, which don't have access to a docker daemon. When remote_docker is used, that creates a standalone VM and connects it up.
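
(For context, a typical remote-docker job looks roughly like this; the base image and build command are only illustrative.)

    jobs:
      build-image:
        docker:
          - image: cimg/base:stable   # illustrative image; runs on the multi-tenant docker executor
        steps:
          - checkout
          # setup_remote_docker provisions a separate VM and points the docker CLI at it
          - setup_remote_docker
          - run: docker build -t example/app:latest .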

Given that we expect many people using Arm to want to build docker containers, we elected to focus on the machine executor setup, as that gets a whole VM and can safely interact with the docker daemon. As you noted, this means you do need to manage dependencies yourselves instead - although we do pre-install some things for convenience (currently go 1.15.2 is preinstalled).

Here's an example of how we install Go on an Arm machine in one of our projects:

      - run:
          name: install go
          working_directory: go
          command: |
            go_tar=$(mktemp)
            echo "Downlading Go"
            curl -o "${go_tar}" -sSL https://dl.google.com/go/go<< pipeline.parameters.go-version >>.linux-arm64.tar.gz
            mkdir go/
            echo "Installing Go"
            tar -C "${PWD}" -xvzf "${go_tar}"
            rm -rf "${go_tar}"
            echo "Adding Go to PATH"
            echo "export PATH=\"${PWD}/go/bin:$PATH\"" >> "$BASH_ENV"
            . "$BASH_ENV"
            go version
            echo "Adding GOPATH bin to PATH"
            echo 'export PATH="$PATH:$(go env GOPATH)/bin"' >> "$BASH_ENV"
            echo "Path is now $PATH"

Supporting the docker: executor syntax but executing on isolated VMs is possible, and is something we've thought about. This wouldn't get you the same speed benefits as our multi-tenant system, but it would make usage more convenient.

The other aspect is availability of Arm docker images. The team which manages convenience images has started looking into this, but as you can imagine there's a lot of compatibility work involved in producing arm64 variants of all of our existing x86_64 images.

@appplemac
Contributor

@jwm Makes sense, thanks for elaborating! If you're comfortable sharing part of your current config, that would be great, but we'd understand if you'd prefer not to.

Let me discuss your use case with the team, and I’ll follow up if I have more questions.

@appplemac
Contributor

Hi @lizthegrey, thanks for your input! That's correct: installing the Go version that fits your environment would be an option here. Agreed that installing dependencies during the job can be more fragile, but we do see many customers doing this successfully for various types of dependencies. I'll let you know if there's a better option we can suggest.
