added support for ssh tunneling using ssh's ProxyCommand option #2970

Closed
@rodlogic

This commit introduces a new variable named ansible_ssh_proxy_cmd that can be associated with individual hosts and used to tunnel task executions through a proxy/tunnel host using the ssh transport.

I have a scenario where I need to run ansible tasks against LXC containers created in an Ubuntu VirtualBox machine from an OS X notebook, so ansible needs to ssh into each LXC container through the Ubuntu VM hosting it.

With this commit, it is possible to specify a host such as:

[webservers]
app ansible_ssh_user=ubuntu ansible_connection=ssh ansible_ssh_proxy_cmd='ssh vagrant@localhost -p 2222 nc %h %p 2>/dev/null'

This adds a new option to the ssh and scp exec commands in ssh.py:

-o 'ProxyCommand ssh vagrant@localhost -p 2222 nc %h %p 2>/dev/null'

I have also added the option to the sftp command, but I have no idea whether it will work there.
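For readers skimming, a minimal sketch of the idea behind the patch (not the actual diff; self.common_args and the host_vars lookup are assumptions about the connection plugin's internals):

# Hypothetical sketch of the ssh.py change, not the real diff.
# Assumes the ssh connection plugin collects shared CLI arguments in
# self.common_args and exposes per-host inventory vars as host_vars.
proxy_cmd = host_vars.get('ansible_ssh_proxy_cmd')
if proxy_cmd:
    # ssh itself expands %h and %p to the target host and port,
    # so the inventory value can be passed through verbatim
    self.common_args += ["-o", "ProxyCommand " + proxy_cmd]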
Contributor

mpdehaan commented May 22, 2013

This can already be set in .ssh/config (and then of course use -c ssh)?
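For context, the .ssh/config equivalent (reusing the values from the inventory example above; a sketch, not part of the PR) would be:

Host app
    User ubuntu
    ProxyCommand ssh vagrant@localhost -p 2222 nc %h %p 2>/dev/null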

@mpdehaan mpdehaan closed this May 22, 2013

rodlogic commented May 22, 2013

Can I set this for a subset of hosts in my hosts file?

Sure, I can set this in .ssh/config but what a PITA if I have to distribute my playbook to N developers and everyone has to remember to set this.

Member

bcoca commented May 22, 2013

You can do this per host or with wildcards.

Brian Coca

rodlogic commented May 22, 2013

Could you point me to an example or documentation on how to do this without resorting to .ssh/config or changing it globally in ansible.cfg?

Member

bcoca commented May 22, 2013

No, I was specifically referring to .ssh/config

Brian Coca

rodlogic commented May 22, 2013

That is a no-go for me. I have to distribute these playbooks to anywhere between 1 and 100 developers, and I would like zero friction (git clone and run) instead of an extra step where everyone appends a set of .ssh/config entries for the hosts that require tunneling.

As it is now, I understand this is possible with .ssh/config, but it could be much better with a few lines of code.

phil777 commented Aug 4, 2013

Please accept this pull request.

There are many cases where adding this to .ssh/config is not possible.

Contributor

mpdehaan commented Aug 4, 2013

What are those?

phil777 commented Aug 5, 2013

The two categories that come to mind are:

  • when .ssh/config is something you have no control over (like the case described by @rodlogic)
  • when the inventory is dynamically generated

There are many use cases, but the ones I'm thinking about fall into those two categories. In most of them it is technically possible to make things work, e.g. by autogenerating part of your .ssh/config, or by using an ANSIBLE_SSH_ARGS env variable to point to a different config file, at the cost of supplying again all the SSH parameters ansible normally sets. There are probably hundreds of workarounds, like using an ssh wrapper that automatically goes through the jump host.

I think none of them will be as convenient as describing how to connect to the host inside the inventory. And I think ansible is about doing complicated things conveniently, so I hope the need to connect to managed hosts through complicated means will be fully taken into account.

Thinking more about it, there are different levels of customization of the ssh connection: the most generic is providing an option of our choice to ssh (from the inventory, and without losing the benefit of all the other options sensibly set by ansible); the most specific is directly fulfilling the need to go through a jump host.

Hence I actually suggest keeping this pull request rejected, but considering one that would provide:

  • ansible_ssh_parameter, which would allow passing arbitrary parameters to the ssh program
  • ansible_ssh_jump_host, which would directly create the correct ssh options to go through a jump host

Things I'm not sure about:

  • how to correctly parse an ssh parameter like "-o ProxyCommand=ssh xxx yyy" so that it is split correctly on the command line. I don't know inventory parsing well enough
  • how to provide all connection information for the jump host (like the port or identity file to use)

Another suggestion would be an ansible_ssh_config variable that would add the -F xxx option, enabling an ansible-local ssh config without hacking with ANSIBLE_SSH_ARGS. The philosophy would be that information about the transport (ssh ports, identity files, etc.) goes into the ansible ssh config file, while the inventory focuses on groups and variables for installation.
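To make the ansible_ssh_config idea concrete, the project-local file passed via -F might look like this (an illustrative sketch; the host pattern and jump host are made up):

# ./ansible_ssh_config, committed next to the inventory
Host 10.0.3.*
    # all containers reachable only through the jump host
    User ubuntu
    ProxyCommand ssh vagrant@jumphost -p 2222 nc %h %p 2>/dev/null

Any host matching the pattern then gets the ProxyCommand automatically, whether reached as plain ssh -F ./ansible_ssh_config 10.0.3.15 or through ansible.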

phil777 commented Aug 5, 2013

ansible_ssh_config implemented here: #3756
Largely inspired by @rodlogic's pull request.

Contributor

mpdehaan commented Aug 5, 2013

I'll take a look.

There already is ANSIBLE_SSH_ARGS, where you can specify all of the SSH flags and override any that Ansible may set by default, FWIW.

See also ansible.cfg

phil777 commented Aug 5, 2013

An environment variable is not used the same way as something in a configuration file: it cannot be committed into a repository. Regarding ANSIBLE_SSH_ARGS, AFAIU it overrides all arguments, including those we don't want to change, and it is global. I guess it will be the same for ansible.cfg (I've seen no doc about it).

I think information on how to connect to specific hosts belongs in the inventory (and, here, in specific ssh config files), not in user-wide or system-wide settings.

Contributor

echohead commented Aug 6, 2013

I came across this thread while searching for how to get ansible to play nice with a weird bastion host in my environment.

FWIW, upon first reading this thread I found myself agreeing strongly with @phil777 on the following points:

  • everything must be version-controlled
  • system-wide settings are unacceptable - the repo (playbooks + inventory) must be self-contained

After some experimentation I found $ANSIBLE_SSH_ARGS to be sufficient for me in this way (see the sketch after this comment):

  • add an ssh_config file alongside the inventory file in my VCS
  • add a wrapper script in my repo that sets ANSIBLE_SSH_ARGS="-F $custom_ssh_config" before calling e.g. ansible-playbook

This solution gives me version-controlled custom ssh settings per host, without touching system-wide or per-user settings, and without changing the existing inventory file interface.

The obvious objection to this approach is the requirement for a wrapper script which sets $ANSIBLE_SSH_ARGS, but in my case this was a no-op, as I was already using a wrapper script for other reasons.

The benefit of this approach is that it does not further increase the surface area of the ansible interface, whose primary appeal to me was its simplicity - every one-off addition to ansible's interface brings it one step closer to chef. :)
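A wrapper of the kind described above could be as small as the following sketch (filenames are hypothetical; the ControlMaster/ControlPersist flags are repeated because overriding ANSIBLE_SSH_ARGS replaces ansible's defaults):

#!/bin/sh
# run-playbook.sh, checked into the repo next to inventory and ssh_config
repo_dir=$(cd "$(dirname "$0")" && pwd)
ANSIBLE_SSH_ARGS="-F $repo_dir/ssh_config -o ControlMaster=auto -o ControlPersist=60s"
export ANSIBLE_SSH_ARGS
exec ansible-playbook -i "$repo_dir/inventory" "$@"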

t2d commented Nov 11, 2013

This option would be really nice. We have the inventory in git, but not local .ssh files. This would allow us to keep the complete config in git. For now I have to adjust .ssh/config by hand for every bastion host.

Contributor

tomster commented Nov 25, 2013

I would like to +1 @phil777 and @echohead here for accepting this feature.

For my use case, the motivation is that my playbook configures multiple jails on one host, and it needs to reach them 'from outside'. From an administrative perspective the IP addresses and sshd ports of these jails are purely implementation details (which might even go away altogether once we have an ssh+chroot or ssh+jail transport), and IMHO it would be great if those details needn't leak outside the playbook, since leaking them means the local ssh config either has to be manually kept in sync with the IP/port values from the playbook or be automatically generated before each run.

Member

bcoca commented Nov 25, 2013

You can pass an ssh config file in ansible_ssh_args and do this, if you don't want to just add it to your .ssh/config (which I do just for ssh logins themselves and reuse in ansible).

Contributor

tomster commented Nov 26, 2013

@bcoca yes, thanks, I already gathered as much from the thread here :) but that's specifically what I want to avoid: keeping the host/port information in two different locations is simply not optimal.

Or do you have an idea how to avoid that using local ssh configs?

Member

bcoca commented Nov 26, 2013

.ssh/config is the default (and local) if you don't pass any extra args.

As for the shared config that you can pass to ansible, it doesn't require the config in 2 places, as only 1 is useful; it's also easy to generate the one from the other (pre play).

Contributor

tomster commented Nov 26, 2013

@bcoca yes, I realize this. The duplication/redundancy I'm referring to is not between the local and global ssh config but between the ansible hosts file and the local config. The fact that it's easy to generate one from the other is irrelevant, as it's precisely this kind of extra step outside of ansible that we want to avoid here :)

Member

bcoca commented Nov 26, 2013

I meant 'inside' ansible, as in a pre-play (connection: local; hosts: localhost) => template: src=ssh_config dest=~/tmp_config, then just use the file in ssh_args.
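Spelled out, that suggestion might look like this (a sketch in the playbook syntax of the time; the template name and paths are illustrative):

# pre-play on the control machine: render a project-local ssh config
- hosts: localhost
  connection: local
  tasks:
    - template: src=ssh_config.j2 dest=~/tmp_config

# subsequent plays then reach the real hosts through that file, e.g. by
# running: ANSIBLE_SSH_ARGS="-F ~/tmp_config" ansible-playbook site.yml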

Contributor

tomster commented Nov 26, 2013

@bcoca ok, now I get it :-) Right, of course. I guess what you're saying is basically 'how is generating a local ssh config file any different from, say, generating an sshd_config file on the server?'

I'll try this approach ASAP, thanks for the hint!

Contributor

mattsoftware commented Jan 9, 2014

Can I get some clarification on this issue? I'm a newbie with ansible, so please point me in the right direction if I have this incorrect...

I have a number of hosts on different networks, with a different way of accessing each of them. I can put them in groups in my ansible_hosts file. Servers in the group 'internal' are accessed with IP addresses like 192.168.0.10. These work great.

Servers in the group 'aws' are accessed via SSH with a ProxyCommand option that I need to pass to ssh, to go through the public IP address of our aws network and then reach the internal hosts of that network, which have IP addresses like 192.168.0.10.

There is talk in this thread about setting an ANSIBLE_SSH_ARGS environment variable, but this won't work as it will break my internal connections. There seems to be an ansible_ssh_args config setting, but I can't seem to use that in my [aws:args] section in my ansible_hosts file (unless I am doing it incorrectly). This patch seems to supply the solution I am looking for. Is there an answer for my scenario already built into ansible, or should I experiment with this patch to see if it solves my problem?

Thank you.

Member

bcoca commented Jan 9, 2014

You can do this with your .ssh/config or by creating different ssh configs which you can pass to the aws group. I think you want [aws:vars], not [aws:args] (not a thing).

Contributor

mattsoftware commented Jan 9, 2014

Thank you @bcoca.
You are correct, it's [aws:vars] (not args). I just checked my config and I am using the correct term; I just typed it incorrectly in the comment.

You say I can just pass the ssh config to the aws group. That is what I am trying to do, without any success at the moment. My ansible config is this...

[aws]
weba ansible_ssh_host=192.168.13.85
[aws:vars]
ansible_ssh_private_key_file=/home/aws/.ssh/aws-webserver.pem
ansible_ssh_user=ec2-user
ansible_ssh_args="-F /home/aws/ansible_ssh_config"

Running ansible with -vvv shows this command being used for ssh...

<192.168.13.85> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/aws/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'IdentityFile=/home/aws/.ssh/aws-webserver.pem', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=ec2-user', '-o', 'ConnectTimeout=10', '192.168.13.85', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1389308828.91-121994705079805 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1389308828.91-121994705079805 && echo $HOME/.ansible/tmp/ansible-tmp-1389308828.91-121994705079805'"]

No mention of the -F anywhere.

Is this the correct way to pass the config file through to the aws group? If you could clarify, that would be awesome.

Thank you again.

(I realise this is probably not the correct forum, but I also feel it's useful to provide a clarification for other people reading this bug report.)

fschulze commented Jan 10, 2014

This would be really useful with Dynamic Inventory scripts. Is there another per host variable that could be used instead? Otherwise I would also +1 this.

Member

bcoca commented Jan 10, 2014

So: there is no inventory-set ansible_ssh_args or ansible_ssh_config; currently you can set ansible_ssh_args only in ansible.cfg or via the environment variable.

In either one you can easily point to a common, version-controlled ssh config (like the default ~/.ssh/config). In this config file you can have more than one set of configs (or general defaults) by host/hostpattern.

If this is not enough, it might make sense to enable an inventory-based ansible_ssh_args.
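Applied to mattsoftware's scenario above, that translates to something like the following sketch (ssh_args under [ssh_connection] is ansible.cfg's documented setting; the paths, host pattern, and jump-host address are illustrative):

# ansible.cfg, committed alongside the inventory
[ssh_connection]
ssh_args = -F ./ssh_config -o ControlMaster=auto -o ControlPersist=60s

# ./ssh_config: the ProxyCommand applies only to the aws addresses,
# so hosts in the 'internal' group are unaffected
Host 192.168.13.*
    User ec2-user
    IdentityFile /home/aws/.ssh/aws-webserver.pem
    ProxyCommand ssh ec2-user@<aws-public-ip> nc %h %p 2>/dev/null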

Contributor

mpdehaan commented Jan 26, 2014

Please note this is not a discussion forum and comments on closed tickets are not reviewed by the project. Use the mailing list for discussion -- thanks!

jimi-c pushed a commit that referenced this pull request Dec 6, 2016

robinro pushed a commit to robinro/ansible that referenced this pull request Dec 9, 2016
