Can't execute sudo commands on machine that requires tty to run sudo #1482
Comments
Ptys are really really really nasty. It is easiest to just use images that don't require a tty with sudo. Vagrant itself assumes that sudo requires no password (it is one of the requirements for the Vagrant user on the remote machine).
I was able to get around the issue by commenting out the requiretty line in the /etc/sudoers file (based on http://unix.stackexchange.com/questions/49077/why-does-cron-silently-fail-to-run-sudo-stuff-in-my-script). Doing that will allow Vagrant to run on an Amazon Linux AMI. Given that I need to create my own AMI with Chef already installed for testing anyway, I will just make sure this change is also included on that AMI and it will work.
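The edit described above can also be scripted. A minimal Ruby sketch, with the caveats that `disable_requiretty` is a hypothetical helper name (not part of Vagrant) and the regex assumes the stock `Defaults    requiretty` line found on RHEL-family images:

```ruby
# Sketch: comment out a "Defaults requiretty" line in a sudoers-style file.
# Hypothetical helper; on a real guest you would run it as root against /etc/sudoers.
def disable_requiretty(path)
  text = File.read(path)
  # In Ruby, ^ anchors at each line start, so only the requiretty line is touched.
  File.write(path, text.gsub(/^(Defaults\s+requiretty)/, '# \1'))
end
```

On a live system, editing through `visudo` is safer than rewriting the file directly, since a syntax error in /etc/sudoers can lock you out of sudo entirely.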
I think this should be reopened; the error doesn't give the user any information about what actually failed. And while it may be easier for you to require that the target host's sudo configuration complies with what Vagrant expects, it limits the usefulness of Vagrant. Yes, I just ran into this problem with a 3rd-party Fedora 18 AMI. :-)
Work-around, assuming your AMI supports cloud-init: use the following for `#cloud-config`:

```yaml
write_files:
  - path: /etc/sudoers.d/999-vagrant-cloud-init-requiretty
    permissions: 440
    content: |
      Defaults:ec2-user !requiretty
```
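The same drop-in can be produced without cloud-init, e.g. from a provisioning script that runs as root. A sketch mirroring the `write_files` stanza above (`write_requiretty_override` is a hypothetical helper; the directory would normally be /etc/sudoers.d):

```ruby
# Sketch: write a sudoers.d drop-in that disables requiretty for one user.
# Mirrors the cloud-init write_files stanza above.
def write_requiretty_override(dir, user)
  path = File.join(dir, "999-vagrant-cloud-init-requiretty")
  File.write(path, "Defaults:#{user} !requiretty\n")
  File.chmod(0440, path) # sudo refuses sudoers files with loose permissions
  path
end
```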
I still think Vagrant should do its best to honor the principle of least surprise.
Have you tried doing `ssh -t host sudo command`?
I agree with @blalor; this took me a couple of hours to debug (until I found this page, thanks to you guys for a workaround!). A better error message would have gone a long way toward reducing that. And just saying "use a distro that allows this" isn't great, because there are a lot of folks who want to use the Amazon distro; not supporting it will make an excellent project look broken from a user's perspective (at least on the AWS side). Also, for the latest Amazon AMI (Amazon Linux AMI 2013.03.1) I could not get the `write_files` cloud-init directive to work. Not sure why. So in my Vagrantfile I have this:
And that file looks like:
@briandoconnor The Amazon Linux AMI doesn't support `write_files` in its version of cloud-init. You can see the list of supported cloud-init actions here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonLinuxAMIBasics.html#CloudInit I also had to force Vagrant to close its initial connection for this to work on Amazon Linux!
+1 for reopen. Amazon Linux AMI is too popular to be ignored/not supported out of the box |
`ssh -t 'sudo service ntpd restart; exit'` // this works
+1 for reopen. Maybe you could consider adding a configuration variable, like
+1 for reopen. Looks like you can add `request_pty` at line 280: https://github.com/mitchellh/vagrant/blob/master/plugins/communicators/ssh/communicator.rb So it would end up looking like: However, when I added it to my local install, it appears Vagrant has problems with other steps that use SSH.
I'm assuming there's something going on with threading, or with how stdout is captured, when using the sudo command on line 277.
Actually, instead of adding `request_pty` to `open_channel`, add a new line below it:
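Putting the two suggestions together, the patched `open_channel` block would presumably take roughly this shape. This is a sketch of the proposed change, not the actual Vagrant source; `request_pty` is Net::SSH's call for requesting a pseudo-terminal on a channel:

```ruby
# Sketch (not the actual Vagrant patch): request a PTY inside the channel
# before exec'ing, so hosts with "Defaults requiretty" will accept sudo.
connection.open_channel do |channel|
  channel.request_pty do |ch, success|
    raise "could not request a pty" unless success
  end
  channel.exec(command) do |ch, success|
    # ... existing on_data / on_extended_data handling ...
  end
end
```

As several commenters note below, unconditionally requesting a PTY breaks other SSH steps, which is why the eventual fix made it opt-in rather than the default.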
nictrix: I came up with the same solution. However, to get around the fact that `request_pty` breaks everything else, I created a new SSH option and enabled the new functionality in the Vagrantfile: `config.vm.provider :aws do |aws, override|` I have forked the master branch and committed my changes here: https://github.com/davent/vagrant Changed files are: plugins/communicators/ssh/communicator.rb NOTE: I am not saying this is perfect :) and I am new to Vagrant, so I am sure this is not technically the correct way to do it, but it worked for me :)
+1 to reopen. There are now forks and fix proposals that haven't been reviewed. Another approach is to not use sudo at all when the root user/key is supplied to Vagrant.
+1 to reopen. We are hitting this issue with the default CentOS cloud image. On the AWS EC2 instance, commenting out this line in
👍 I'd like to see this reopened due to issues with the CentOS images on Rackspace Cloud as well. |
+1 to reopen |
+1 CentOS, primary images. |
+1 for standard Amazon Linux AMI support |
+1 also for making this work in Vagrant core. The AWS provider is effectively broken out of the box for the standard Amazon Linux images without it. @briandoconnor's workaround sort of works, but I'm finding there's a race during the first `vagrant up` that means provisioning will usually fail the first time around (but works after that).
You can now do
Awesome, appreciate it! |
That's fantastic news, thanks! |
|
`config.ssh.pty` is just a workaround for the problem. Can't we hint Vagrant not to use sudo in remote commands by introducing some config like This can come in handy when using root (or any other privileged user) to configure/provision the guest machine.
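For readers landing here, the workaround mentioned above goes in the Vagrantfile. A minimal sketch (the box name is a placeholder; provider-specific settings are omitted):

```ruby
# Vagrantfile sketch: ask Vagrant to request a PTY for its SSH sessions,
# which satisfies sudo's requiretty on stock Amazon Linux / CentOS images.
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"   # placeholder box name (assumption)
  config.ssh.pty = true     # the workaround discussed in this thread
end
```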
@atulatri That would be nice. @mitchellh Can you explain why PTYs are so nasty?
Seems like
I fixed it by commenting out the line of Here is the detailed explanation for this option
I suppose you think I used that option out of pure spite for all Vagrant developers?
The whole requiretty thing was meant as a security feature after all, wasn't it? As far as I know there's a difference between a pty and a tty. requiretty suggests to me that it's required to have a real tty (e.g. physically attached) instead of a pseudo-device (e.g. via SSH). Can someone clarify this for me?
It is a bug and was fixed by Red Hat in March 2014.
The default requiretty is problematic and breaks valid usage, as confirmed by Red Hat.
Yeah. I have a realllly old vagrantfile lol. |
Looks like the sudo version on my CentOS 7 box is older than the one in which the bug was fixed:
This is my box: https://atlas.hashicorp.com/oculushut/boxes/vagrant-centOS64-oculushut Looks like I may need to add that !requiretty line back in then... |
Thanks to you, @blalor, a two-hour debugging session on a Friday night ended with an actual result.
For anyone coming across this issue on CentOS 7, you can fix it by doing
Forced Vagrant and RSpec to request a pseudo-terminal for boxen that require a TTY for remote access (should probably patch the machine image, but this works). hashicorp/vagrant#1482
I ran into an issue with vagrant-aws that was documented at mitchellh/vagrant-aws#10
However, as I dug into it and found the root of the issue, it appears to be with Vagrant itself. The short version is that Vagrant's SSH code doesn't allow calling sudo commands against a remote machine that requires a tty to execute sudo commands. The error message returned is `sorry, you must have a tty to run sudo`. I assume this security feature is only enabled on CentOS, Amazon Linux (and probably other RHEL-based AMIs) but not Ubuntu, though I haven't tested against an Ubuntu instance to verify. I expect I haven't hit this before with Vagrant 1.0.x because all of the .box files I used allowed sudo commands to be run without requiring a tty, while the AMIs I am using now with the vagrant-aws plugin do have this set up. I tried to change https://github.com/mitchellh/vagrant/blob/master/plugins/communicators/ssh/communicator.rb to use Net::SSH's request_pty method (http://net-ssh.rubyforge.org/ssh/v2/api/index.html), but I wasn't able to get it to work. I suspect that is mostly down to my limited experience with Ruby.