
Support installation through homebrew on mac #129

Closed
pwittrock opened this issue Sep 1, 2017 · 15 comments
Labels:
- help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pwittrock
Contributor

No description provided.

@metmajer
Contributor

metmajer commented Sep 6, 2017

@pwittrock: I've recently come across this fantastic project. If contributions are welcome, feel free to assign it to me and I'll look into it!

@pwittrock
Contributor Author

Consider it assigned. I just invited you as a collaborator, which allows me to actually assign it to you.

@metmajer
Contributor

metmajer commented Sep 6, 2017

Cool, thanks for bringing me on board!

@metmajer
Contributor

metmajer commented Sep 8, 2017

@pwittrock, let's have a discussion about the scope of this issue after I've done some research.

Supporting installation via Homebrew involves creating a formula. A formula points to a specific source code archive and defines methods for installation and testing. A formula can be part of the official homebrew-core project, which allows convenient installation via, e.g., brew install kubernetes-apiserver-builder. However, I am not sure this is a good fit for a project that is still at a very early stage and has frequent updates.

The alternative is to provide our formula in a tap, a third-party repo for formulae. Since we own the tap, we can update the formulae at any time (e.g. for version bumps). In this case, an installation via brew install kubernetes-incubator/apiserver-builder/apiserver-builder@0.1 would point to a formula defined in github.com/kubernetes-incubator/homebrew-apiserver-builder/**/apiserver-builder@0.1.rb. This means that, with this approach, we'd need to create an extra repo in the kubernetes-incubator GitHub organization, and I am not sure whether that would be accepted.
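
For illustration, the dedicated-tap flow would look roughly like this (a sketch only; the homebrew-apiserver-builder repo does not exist yet):

# brew tap user/name resolves to github.com/user/homebrew-name by convention
brew tap kubernetes-incubator/apiserver-builder
# the fully qualified formula name points at the formula inside that tap
brew install kubernetes-incubator/apiserver-builder/apiserver-builder@0.1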

A consolidated approach would store the tap in this repository, by convention under a directory named HomebrewFormula or Formula. Here, the installation would look as follows:

# clones our repo into a local tap named kubernetes-incubator/apiserver-builder
brew tap kubernetes-incubator/apiserver-builder https://github.com/kubernetes-incubator/apiserver-builder.git
# now, users can install our formulae as if they were part of Homebrew's canonical repo
brew install kubernetes-apiserver-builder@0.1
# users can also apply updates via
brew update

Preferences? Thoughts?

@pwittrock
Contributor Author

@metmajer

I like the third approach.

The project is just in its infancy, so minimizing overhead and making it easy to push lots of releases makes sense to me. It is probably not worth creating a separate repo for the formula at this stage.

@pwittrock pwittrock added the help wanted label Sep 14, 2017
@metmajer
Contributor

metmajer commented Sep 24, 2017

While debugging the homebrew formula, I came across the following issue:

bash-3.2$ make build
rm -rf *.deb *.rpm *.tar.gz ./release
go run ./cmd/apiserver-builder-release/main.go vendor --version 0.1-alpha.15
cmd/apiserver-builder-release/main.go:30:2: cannot find package "github.com/spf13/cobra" in any of:
	/usr/local/Cellar/go/1.9/libexec/src/github.com/spf13/cobra (from $GOROOT)
	/private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17/src/github.com/spf13/cobra (from $GOPATH)
make: *** [build] Error 1

I am wondering why go does not find the dependency package in the vendor folder where it resides. After all, isn't that the point of this directory? I am still quite new to Go; is anyone here able to help out with an explanation? Thanks!

bash-3.2$ go version
go version go1.9 darwin/amd64

Also, the vendor directory is inside my GOPATH:

bash-3.2$ echo $GOPATH
/private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17

bash-3.2$ pwd
/private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17

bash-3.2$ ls -la
total 104
drwxr-xr-x  19 martin  staff    646 Sep 24 17:34 .
drwx------   3 martin  staff    102 Sep 24 17:34 ..
drwxr-xr-x   3 martin  staff    102 Sep 24 17:34 .brew_home
-rw-r--r--   1 martin  staff    259 Sep 24 00:23 .travis.yml
-rw-r--r--   1 martin  staff  11357 Sep 24 00:23 LICENSE
-rw-r--r--   1 martin  staff   2495 Sep 24 00:23 Makefile
-rw-r--r--   1 martin  staff    219 Sep 24 00:23 OWNERS
-rw-r--r--   1 martin  staff   2949 Sep 24 00:23 README.md
-rw-r--r--   1 martin  staff     13 Sep 24 00:23 VERSION
drwxr-xr-x   6 martin  staff    204 Sep 24 00:23 cmd
drwxr-xr-x  18 martin  staff    612 Sep 24 00:23 docs
drwxr-xr-x   9 martin  staff    306 Sep 24 00:23 example
-rw-r--r--   1 martin  staff  15851 Sep 24 00:23 glide.lock
-rw-r--r--   1 martin  staff   1825 Sep 24 00:23 glide.yaml
drwxr-xr-x   8 martin  staff    272 Sep 24 00:23 pkg
drwxr-xr-x   3 martin  staff    102 Sep 24 00:23 scripts
drwxr-xr-x   3 martin  staff    102 Sep 24 17:34 src
drwxr-xr-x   6 martin  staff    204 Sep 24 00:23 test
drwxr-xr-x   9 martin  staff    306 Sep 24 00:23 vendor

I've also checked that github.com/spf13/cobra is in the right place under vendor:

/private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17/vendor/github.com/spf13/cobra

@pwittrock
Contributor Author

@metmajer

I think your directory structure may be off. Go expects the code to be under $GOPATH/src/<go-package-name>, so the package github.com/kubernetes-incubator/apiserver-boot should live under $GOPATH/src/github.com/kubernetes-incubator/apiserver-boot.

If your GOPATH is /private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17, then the apiserver-builder repo should be checked out at /private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17/src/github.com/kubernetes-incubator/apiserver-builder, and you should run make from there.

I am not sure why it cannot find the vendored package, but it may be that it is looking for the vendor directory under /private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17/src/github.com/kubernetes-incubator/apiserver-builder/vendor and not finding it.
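
In shell terms, that layout would look roughly like this (a sketch only: the GOPATH value is taken from the log above, and the git clone could just as well be a move of the already unpacked sources):

# Sketch: put the sources at the import path Go expects (GOPATH taken from the log above).
export GOPATH=/private/tmp/apiserver-builder-20170924-4536-bsljtb/apiserver-builder-0.1-alpha.17
mkdir -p "$GOPATH/src/github.com/kubernetes-incubator"

# Check the repo out under its import path (or move the unpacked sources there).
git clone https://github.com/kubernetes-incubator/apiserver-builder.git \
  "$GOPATH/src/github.com/kubernetes-incubator/apiserver-builder"

# Run the build from inside the import path so go resolves vendor/ relative to it.
cd "$GOPATH/src/github.com/kubernetes-incubator/apiserver-builder"
make build

With Go 1.9, vendor/ directories are only honored for packages that live under $GOPATH/src, which would explain why the flat layout above does not pick them up.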

@metmajer
Contributor

Thanks for your feedback @pwittrock, I'll look into your suggestions!

@pwittrock
Contributor Author

@metmajer FYI, I have started doing some work to integrate Bazel support for faster builds. I also changed how the glide.tar.gz is created, so it is now done locally instead of through glide. I've tested the changes quite a bit, but there is still a chance I broke something.

@pwittrock
Contributor Author

pwittrock commented Sep 29, 2017

FYI, I introduced building with Bazel, which is much faster once you have Bazel installed.

go run cmd/apiserver-builder-release/main.go vendor --version 0.1-alpha.19  && \
go run cmd/apiserver-builder-release/main.go build --version 0.1-alpha.19 --bazel

@metmajer
Contributor

metmajer commented Oct 5, 2017

Awesome, I'll make sure this is also reflected in the Makefile. I'm quite busy at the moment, but will follow up very soon.
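
For illustration, a minimal sketch of how that could look in the Makefile (the target name and the VERSION default are assumptions, not taken from the repo):

# Hypothetical Makefile target wrapping the Bazel-based release build shown above.
VERSION ?= 0.1-alpha.19

.PHONY: build-bazel
build-bazel:
	go run cmd/apiserver-builder-release/main.go vendor --version $(VERSION)
	go run cmd/apiserver-builder-release/main.go build --version $(VERSION) --bazel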

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Apr 21, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 21, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
