
*: Use go modules for SDK dependency management #1566

Merged
merged 30 commits on Jul 10, 2019

Conversation

joelanford (Member)

Description of the change:

  • Adds go.mod and go.sum
  • Removes Gopkg.toml and Gopkg.lock
  • Removes /vendor
  • Updates relevant developer documentation to refer to go modules instead of dep.
  • Updates Makefile, .travis.yml, and hack/image/ to use go modules (and the related go mod commands) in CI; see the sketch below.

Motivation for the change:
Go modules is the official dependency manager of the Go community and has been incorporated into the go toolchain. By default, new operator-sdk projects now use go modules, and our users will expect operator-sdk to use them as well.
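
For reference, the go mod commands that the Makefile and CI scripts switch to (in place of dep ensure) typically boil down to something like the following. This is a minimal sketch of the usual workflow, not the literal contents of the updated scripts:

    # Download every module listed in go.mod into the local module cache
    # (replaces `dep ensure -vendor-only`, since /vendor is gone).
    go mod download

    # Check that downloaded modules match the checksums recorded in go.sum.
    go mod verify

    # Keep go.mod/go.sum tidy, and fail CI if they would change.
    go mod tidy
    git diff --exit-code go.mod go.sum

    # Build against the module cache; no vendor directory is needed.
    go build ./...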

@openshift-ci-robot openshift-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jun 17, 2019
@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 18, 2019
@openshift-ci-robot openshift-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 18, 2019
@lilic (Member) commented Jun 19, 2019

@joelanford Guessing this is on hold until the above PR is merged?

@hasbro17 (Contributor) left a comment

LGTM after CI turns green.

@joelanford (Member, Author)

/test sanity
/test unit

@joelanford (Member, Author)

I'm pretty worried that this will result in more frequent flakes, since we'll be relying on several hundred dependencies (totaling multiple GB of data) being downloaded successfully for every individual test run.

Right now, there are 5 tests that each fetch the entire set of modules (unit, sanity, go e2e, helm e2e, and ansible e2e), so the problem is exacerbated fivefold.

Any thoughts on alternatives? I'm more concerned about a solution that will work for OpenShift CI than for Travis, since we're migrating away from Travis. Does OpenShift CI offer a better caching solution (e.g. mounting a volume, using a cached local base image, etc.)?

Would it help to rebuild a base image every once in a while (e.g. every time we bump our Kubernetes dependencies) that has the vast majority of the modules pre-fetched?
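
For illustration, such a pre-fetched base image could be as small as the following hypothetical Dockerfile, where the go mod download layer only needs rebuilding when go.mod or go.sum changes (e.g. on a Kubernetes bump). This is a sketch of the idea, not something included in this PR:

    FROM golang:1.12
    ENV GO111MODULE=on
    WORKDIR /workspace
    # Copy only the module files so the expensive download layer stays
    # cached until the dependency set actually changes.
    COPY go.mod go.sum ./
    RUN go mod download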

@AlexNPavel (Contributor)

@joelanford We could set up OpenShift CI to use a Dockerfile as a root image and then do a go build ./... on the project, which would download all the deps. That single image would then be used as the root for all tests and image builds, so all of them would have the cached deps. The main risk is that using a Dockerfile as the image root could break all the tests, because it uses the main repo's Dockerfile instead of the PR's. Of course, we could put in some safeguards to make sure we don't accidentally break it. I think it would still be a bit better than maintaining a separate image (and it shouldn't increase test time much, since OpenShift CI's network is very fast).

@joelanford (Member, Author)

@AlexNPavel Ah okay that's what we were talking about the other day. That definitely sounds like an improvement.

I've got a bunch of comments & questions on that:

  1. Would that root image persist somehow to allow re-running tests on the same commit? For example, say the root image is built and then used for both the sanity and unit tests, but the unit tests fail on some flake. If we then re-run the unit tests without changing the repo, does that use the root image we already built, or does it have to rebuild? It seems like the node that the next test job gets scheduled on would play into this as well.

  2. I don't think the main repo vs PR Dockerfile is a major issue (though it is bothersome). Do you know if that's an intentional feature or if it can be configured somehow? Like we talked about, we could add a sanity test to make sure we don't change that Dockerfile and other files at the same time. Plus, if our code is injected into that root image, could the Dockerfile be as simple as the following, such that we'd very rarely need to change it?

    FROM golang:1.12
    RUN /path/to/our/repo/hack/ci/setup-build-dependencies.sh
  3. I also wonder if it would be possible/beneficial to embed the Dockerfile in the Prow config. If it's possible to embed in openshift/release, would the PR Dockerfile get picked up and used in that scenario?

  4. WRT OpenShift CI speed, it looks like something about the go modules download is still pretty slow. This most recent unit test spent 16+ minutes running the ./hack/ci/setup-build-dependencies.sh script for the test-bin image, which is what's doing the module downloading (see the note below).

    See https://storage.googleapis.com/origin-ci-test/pr-logs/pull/operator-framework_operator-sdk/1566/pull-ci-operator-framework-operator-sdk-master-unit/1314/build-log.txt
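
(The contents of hack/ci/setup-build-dependencies.sh aren't shown in this thread, but a dependency pre-fetch step usually amounts to something like the hypothetical sketch below; pointing GOPROXY at a module proxy or internal mirror is the usual lever for cutting this download time.)

    #!/usr/bin/env bash
    set -euo pipefail

    # Hypothetical dependency pre-fetch: pull every module into the local
    # cache so later build/test steps don't hit the network. Using a module
    # proxy avoids slow direct VCS checkouts.
    export GO111MODULE=on
    export GOPROXY="${GOPROXY:-https://proxy.golang.org}"

    go mod download
    go build ./...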

@AlexNPavel (Contributor)

@joelanford

  1. The root image should persist. I think /retest would rebuild the images (not 100% sure though), but I don't think individual test reruns would trigger a rebuild.

  2. It's not an intentional feature; it's a limitation in the design of ci-operator. I asked testplatform about it, and while they agree it would be better to use a Dockerfile from the PR, that would actually be quite difficult to do due to the way ci-operator is designed.

  3. I don't think it's possible to embed Dockerfiles in openshift/release, only in our repo.

  4. That's odd. I would need to see the pod's logs to see what's going on there. I think it should be faster, but maybe it's not?

@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 24, 2019
@estroz (Member) left a comment

/lgtm

@openshift-ci-robot openshift-ci-robot added lgtm Indicates that a PR is ready to be merged. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jun 27, 2019
@openshift-ci-robot openshift-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed lgtm Indicates that a PR is ready to be merged. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jul 1, 2019
@joelanford (Member, Author)

/test e2e-aws-ansible

1 similar comment
@joelanford (Member, Author)

/test e2e-aws-ansible

@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 2, 2019
@openshift-ci-robot openshift-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 9, 2019
@AlexNPavel (Contributor) left a comment

LGTM

@hasbro17 (Contributor) left a comment

LGTM again.

@lilic (Member) left a comment

/lgtm
