
Update Makefile and image tags #23

Merged 1 commit into projectcalico:master on Feb 5, 2018

Conversation

@deitch (Contributor) commented Jan 1, 2018

Does the following:

  • Update Makefile so a single make target "does the right thing" on any platform (but can be overridden)
  • Add make target for pushing
  • Add a short README.md describing how it can be built and pushed
  • Add architecture-based image tags to make it easier to deal with a single repo that spans multiple architectures; this preps for multi-architecture manifests
  • Default to amd64

With the above, you end up with a single repo, calico/go-build, with tags per architecture: for example, calico/go-build:latest-amd64 and calico/go-build:latest-arm64.
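In sketch form, the tagging scheme looks roughly like this (illustrative only, not the exact Makefile in this PR; the variable names and sed mappings are just for the example):

    # Detect the host architecture, but allow override: `make image ARCH=arm64`.
    ARCH ?= $(shell uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')
    IMAGE ?= calico/go-build

    image:
        docker build -t $(IMAGE):latest-$(ARCH) .

    push: image
        docker push $(IMAGE):latest-$(ARCH)

So `make image` on an arm64 host produces calico/go-build:latest-arm64 with no extra flags.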

The above is the simplest way to manage it, and is how many of the official library images are built (as well as just about everything built by Docker themselves, including linuxkit images).

More importantly, it removes "repository bloat", and makes it easy to move to a single multi-architecture manifest.

It also adds a default for amd64, so that anything that relies on the default continues to work. Of course, with multi-architecture manifests (if/when adopted by Project Calico), all of that goes away, since you can just do docker pull calico/go-build on any architecture and it automatically pulls the right image.
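For concreteness, that follow-on manifest step could look something like this sketch, assuming the (at the time experimental) docker manifest subcommand is available; this PR itself stops at per-arch tags:

    ARCHES ?= amd64 arm64 ppc64le

    # Stitch the per-arch tags into one manifest list under the plain tag.
    manifest:
        docker manifest create calico/go-build:latest \
            $(addprefix calico/go-build:latest-,$(ARCHES))
        docker manifest push calico/go-build:latest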

@deitch (Contributor, Author) commented Jan 3, 2018

/cc @djlwilder

@deitch (Contributor, Author) commented Jan 29, 2018

Following up from the conversation with @caseydavenport on projectcalico/bird#52:

This is restructured to build on amd64, making it easy to add arm64 and ppc64le. We will have the same questions, though: where to build and where to run CI.

We could try to do cross-build here too, but it is slightly more complex, since it isn't just one build, but rather a whole series of apk installs. The go commands, on the other hand, are easy, since they can use GOARCH.
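For example, the go side reduces to setting an environment variable; a minimal sketch (target and output names are illustrative):

    # Cross-compiling go needs no emulation, just GOARCH.
    build-arm64:
        GOARCH=arm64 CGO_ENABLED=0 go build -o bin/app-arm64 .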

Thoughts?

@deitch (Contributor, Author) commented Jan 29, 2018

/cc @tomdee at the request of @caseydavenport

@deitch (Contributor, Author) commented Feb 2, 2018

There was an error in the Makefile; nice to have semaphoreci catch that. Updated.

@caseydavenport and @tomdee, what can I do next to move this along?

@tomdee (Contributor) left a comment


The changes themselves look fine.

@tomdee (Contributor) commented Feb 2, 2018

Since most developers and CI systems run amd64, it would be nice if the whole build/release process could run from an amd64 host. For the apk commands, using qemu-static works fine (as I did with the flannel builds). This doesn't seem to work for the go commands, though; running them under qemu leads to an unhandled CPU exception. Maybe the go gets could be done with GOARCH, but that means go would need to run outside the container, so that's not going to work for the go install -v std line.
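Roughly, that approach mounts the static qemu binary into a foreign-arch container so its binaries run under emulation via binfmt_misc; a sketch (image tag and qemu path are illustrative):

    apk-cross:
        docker run --rm \
            -v /usr/bin/qemu-aarch64-static:/usr/bin/qemu-aarch64-static \
            arm64v8/alpine:3.7 apk add --no-cache build-base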

There are also going to be problems with the patched version of goveralls and the glibc install (which isn't multiarch).

@deitch (Contributor, Author) commented Feb 3, 2018

Since most developers and CI systems run amd64

Yeah, it is a problem. I know some CI providers are looking seriously at arm64, but nothing there yet.

it would be nice if the whole build/release process can run from an amd64 host

That was the approach we took with bird multi-arch. It was hard to do the gcc part and get it right, but it did work in the end.

For the apk commands, using qemu-static works fine (as I did with the flannel builds).

Can you point me at where you did that? I have played around with qemu-static, but if you have a working, consistent build process in another project, a link here would be helpful.

Maybe the go gets could be done with GOARCH but that means go would need to run outside the container

GOARCH=arm64 go build . inside the container doesn't work?
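That is, something like this sketch (image tag, mount paths, and output name are illustrative):

    go-cross:
        docker run --rm -e GOARCH=arm64 -e CGO_ENABLED=0 \
            -v $(CURDIR):/src -w /src calico/go-build:latest-amd64 \
            go build -o bin/app-arm64 .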

@tomdee, since this looks good to you and is approved, can we merge this in as "able to build for different platforms, each on its own hardware," and then I can open a new PR for building everything from amd64?

@fasaxc (Member) commented Feb 5, 2018

@deitch I think this was the build that @tomdee was talking about: https://github.com/coreos/flannel/search?utf8=%E2%9C%93&q=qemu&type=

Looks like he mounts in the qemu binary from the host. I've tried doing cross-arch builds for Raspberry Pi that way on my own project and it seems to work great. Of course, if you hit an instruction that's not supported (as it sounds like @tomdee did above), then it'll suddenly go from "great" to "broken"!

@fasaxc merged commit d167736 into projectcalico:master on Feb 5, 2018
@deitch deleted the multi-makefile branch on Feb 5, 2018
@deitch (Contributor, Author) commented Feb 6, 2018

As best I can tell, there are two distinct stages: cross-build (or cross-compile, if you prefer) and cross-image.

Cross-compile appears to do what you said: mount qemu into the container when building, using:

		-v /usr/bin/qemu-$(ARCH)-static:/usr/bin/qemu-$(ARCH)-static \

Does it then rely on binfmt_misc being properly registered in the kernel, which I think depends on the kernel version?
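For reference, one common way to register the qemu interpreters with binfmt_misc on the host is the multiarch helper image; a sketch (requires binfmt_misc support in the kernel and a privileged container):

    register-qemu:
        docker run --rm --privileged multiarch/qemu-user-static:register --reset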

The cross-image variant actually copies qemu into the image? See here

Well, one step at a time, I guess.

@deitch (Contributor, Author) commented Feb 6, 2018

I am going to open a tracking issue for cross-build.
