This repository has been archived by the owner on Jan 22, 2024. It is now read-only.

Installation tries to write to /go/bin #49

Closed
grisaitis opened this issue Feb 10, 2016 · 6 comments

Comments

@grisaitis

I'm getting the following error when I run sudo make install:

...
go install nvidia-docker-plugin: open /go/bin/nvidia-docker-plugin: permission denied
go install nvidia-docker: open /go/bin/nvidia-docker: permission denied
make[1]: *** [build] Error 1
make[1]: Leaving directory `<path to nvidia-docker>/nvidia-docker/tools'
make: *** [install] Error 2

It seems like it's trying to write to a directory that doesn't exist...

I eventually fixed the issue by editing the Makefile in tools/ and removing :/go/bin from line 27 (as of commit e7b7922).

Am I doing something wrong? Am I the only person who experienced this?

Running on Ubuntu 14.04.
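
For context, the Makefile line that edit refers to is a docker run volume flag along these lines (illustrative only; the variable names and paths are not the verbatim repo content):

    # before: the host bin directory is bind-mounted over /go/bin in the build container
    -v $(CURDIR)/bin:/go/bin
    # after removing :/go/bin, Docker treats the argument as a bare volume path
    # inside the container rather than a host-to-container bind mount
    -v $(CURDIR)/bin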

@flx42
Member

flx42 commented Feb 10, 2016

This is happening inside a container, and in this container /go/bin is a volume:
https://github.com/NVIDIA/nvidia-docker/blob/master/tools/Dockerfile.build#L23

This volume maps to [...]/nvidia-docker/tools/bin on the host. After the build container runs, this directory contains the nvidia-docker and nvidia-docker-plugin binaries. These binaries are then installed the usual way with install(1) to the $PREFIX path (the default is /usr/bin).
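
Concretely, the flow is roughly the following; this is a sketch of the mechanism described above, with an illustrative image tag and the default $PREFIX, not the exact Makefile commands:

    # the build container writes the Go binaries into /go/bin,
    # which is bind-mounted from tools/bin on the host
    docker run --rm -v "$(pwd)/tools/bin:/go/bin" nvidia-docker:build
    # the host-side binaries are then copied to $PREFIX (default /usr/bin) with install(1)
    sudo install -m 755 tools/bin/nvidia-docker tools/bin/nvidia-docker-plugin /usr/bin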

I'm not sure how you encountered this issue; the nvidia-docker/tools/bin folder could have weird permissions on the host. Try removing it.
Also, are you running Docker the usual way with your account being part of the docker group?

@grisaitis
Author

grisaitis commented Feb 10, 2016 via email

@flx42
Member

flx42 commented Feb 11, 2016

I was able to recreate a similar error message by running sudo make and then make install. Of course this is backward: the first command creates tools/bin owned by root, and the second command then fails when trying to use the mounted volume.

From your initial post, it doesn't seem to be what you did, but it confirms there is something fishy with the permissions. The permissions you mentioned look fine, but you can simply erase tools/bin; it's not part of the git repo.
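
A minimal sequence reproducing that mixup, with the cleanup step (safe because tools/bin is untracked):

    sudo make              # creates tools/bin owned by root
    make install           # then fails with a permission error on the mounted volume
    sudo rm -rf tools/bin  # remove the stale, root-owned directory and start over
    sudo make install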

@3XX0
Member

3XX0 commented Feb 11, 2016

It would be interesting to know what's going on in your particular case. As a workaround you can always use the release tarball/package: https://github.com/NVIDIA/nvidia-docker/releases

@grisaitis
Author

@flx42 I bet that's what I did, actually - I was trying several permutations of make / make install, with and without sudo. My bad for not thinking about permissions. Thanks for identifying that.

@3XX0 Good to know. I didn't install from the deb because it was built from an older commit of the repo... But now I see that that commit only affected the cuDNN Dockerfile.

Thanks again for your help! This was definitely user error on my part.

@clnperez
Contributor

Sorry to resurrect a dead thread, but I thought this was worth mentioning in case someone else runs across this.

I hit the same error as issue #49 and was able to get around it by disabling SELinux (setenforce 0). I was going to open a new issue and started playing around with some other things (like the permissions of the tools dir) to see if there was a workaround, but after I did the build successfully once and then set SELinux back to enforcing, every build I've done since has worked. I even tried deleting and re-cloning the repo to recreate it, but I can't. That bugs me, but what can you do? 😉
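
For anyone hitting the same SELinux denial, the workaround described above boils down to the following (remember to restore enforcing mode afterwards):

    sudo setenforce 0    # put SELinux into permissive mode
    sudo make install    # the build now succeeds
    sudo setenforce 1    # re-enable enforcing mode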
