ssh-agent forwarding and `sudo: yes` #7235

Closed
kolypto opened this Issue Apr 30, 2014 · 27 comments

kolypto commented Apr 30, 2014

Issue Type:

Bug Report

Ansible Version:

ansible 1.5.4

Environment:

Ubuntu 14.04 64-bit on both ends

Summary:

The Ansible documentation promotes the use of ssh-agent, which is a great tool, but there are issues when using it with ForwardAgent: a sudo: yes statement discards the environment variables, so agent forwarding stops working.

This can be worked around with sudo_flags=-HE in the config, but when a subsequent sudo: yes occurs (e.g. in a contributed role), the environment is still discarded.

I believe Ansible should really take care of ssh-agent forwarding so users don't have to resort to dark hackery, shouldn't it?
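For reference, the workaround mentioned above is an ansible.cfg setting (a sketch; per the sudo manual, -H sets HOME to the target user's home and -E preserves the invoking environment):

```ini
[defaults]
# -E keeps the caller's environment (including SSH_AUTH_SOCK) across sudo;
# -H sets HOME to the target user's home directory
sudo_flags=-HE
```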

Steps To Reproduce:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa

$ ssh -T git@github.com
Hi kolypto! You've successfully authenticated, 
but GitHub does not provide shell access.

$ ansible-playbook -K site.yml

site.yml:

---

- hosts: all
  sudo: yes
  tasks:
    - command: >
        ssh -o UserKnownHostsFile=/dev/null
        -o StrictHostKeyChecking=no
        -T git@github.com

Expected Results:

stderr: Hi kolypto! You've successfully authenticated,
but GitHub does not provide shell access.

Actual Results:

stderr: Permission denied (publickey).

kolypto commented Apr 30, 2014

I can add that sudo_flags=-HE solves the problem only for sudo: yes, but not for sudo: yes with sudo_user: www-data: the environment gets lost.

Contributor

mpdehaan commented Apr 30, 2014

So executing ssh from Ansible is pretty unusual and definitely an anti-pattern. Playbooks are also not meant to set up tunnels; you may have pre-existing tunnels and use them, and people set up ProxyCommand for bastion-host usage, but what you are doing looks a little unusual to me.

Can I ask why you aren't setting up SSH forwarding in ansible.cfg? Also what do you expect to accomplish with that last command?

Thanks!

kolypto commented Apr 30, 2014

@mpdehaan, right, using ssh is an anti-pattern, but that's just an example. Actually, I'm having problems checking out a git repository: git also does not have access to the environment when I try to switch users.

ForwardAgent=yes is set in my ~/.ssh/config instead

kolypto commented Apr 30, 2014

I've done some investigating and noticed the following:

  1. By default, Ansible resets the environment when doing sudo. This can easily be fixed with sudo_flags=-HE

  2. By playing around manually, I noticed that sudo -HE ssh can use agent keys, but sudo -HE -u www-data ssh can not.
    In fact, SSH places the forwarded agent socket in a 0700 directory, so other users can't actually access it:

    $ echo $SSH_AUTH_SOCK 
    /tmp/ssh-KUnPop8plG/agent.27638
    $ ls -lah $SSH_AUTH_SOCK 
    drwx------  2 kolypto kolypto 4.0K Apr 30 22:38 .
    drwxrwxrwt 10 root    root    4.0K Apr 30 22:38 ..
    srwxrwxr-x  1 kolypto kolypto    0 Apr 30 22:38 agent.27638

    That's why sudo to another user fails to use the agent keys: the socket file is not accessible. The only solution is to ssh into the system as that other user, so the agent socket is accessible to them.

    There is a solution to this problem on ServerFault: ssh-agent forwarding and sudo to another user

Therefore, that's not an Ansible bug, and not a bug at all. Still, what about adding -HE options to sudo by default?

Contributor

mpdehaan commented May 1, 2014

Hi @kolypto, this option is not something users can always enable, so we don't want to add it by default (and some users may not want it). One option for you is to remove env_reset from your sudoers config on the remotes.

If this becomes a frequent request, we could also consider making something like sudo_flags configurable in ansible.cfg

Until then I would look at changing your sudoers configuration.

mpdehaan closed this May 1, 2014

makmanalp commented Jun 11, 2014

I just had 2 hours of hell trying to narrow down a playbook to this problem. Here's a playbook I use to test forwarding: https://gist.github.com/makmanalp/a95aa39f4b3171baeb5b

Just leaving a few keywords here for those in posterity googling this problem to find it: "ansible ssh agent forwarding doesn't work" and "ansible SSH_AUTH_SOCK".

#4331 would be really nice rather than doing it in ansible.cfg since this really is a per-environment situation. Some environments might use ssh agent forwarding and others might just use keys. The main use case seems to be git checkouts. Another hack is to do the checkout as the login user, and then use sudo to just set permissions / move to the right spot.



kolypto commented Jun 12, 2014

Ok I will share my solution :)

The problem happens because of the following:

  • You sign into the system as user root, so the ssh-agent socket file belongs to that user and sits behind 0700 permissions
  • You sudo to user B, who can no longer access the file owned by root

First, put this into ansible.cfg to make sure sudo does not lose environment variables, namely, $SSH_AUTH_SOCK:

[defaults]
# Required so `sudo: yes` does not lose the environment variables, which hold the ssh-agent socket
sudo_flags=-HE

Then, a hack in the playbook that grants the target user access to the ssh-agent socket file:

- name: "(ssh-agent hack: grant access to {{ my_username }})"
  # SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
  # See: https://github.com/ansible/ansible/issues/7235
  # See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
  acl: name={{ item }} etype=user entity={{ my_username }} permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

(the playbook task can be simplified by chmodding the whole folder to 0777, but that's less secure)

Voila! Enjoy

jamesmoriarty commented Jun 12, 2014

Thanks @kolypto, I tracked my issue down to Vagrant: hashicorp/vagrant#3900

johndgiese commented Nov 11, 2014

Also got stuck on this for a couple of hours.

makmanalp commented Dec 18, 2014

PSA: Sometimes you need to run ssh-add -K ~/.ssh/id_rsa on OS X, because OS X won't set up forwarding EVEN IF your ssh agent is running and ssh-add -l correctly lists your keys.


rommsen commented Jan 26, 2015

This was driving me nuts.

For the record: @kolypto's solution works (thanks!), BUT only if you do not have the following flag set in your ansible.cfg:

[ssh_connection]
ssh_args = -o ForwardAgent=yes

If this is enabled, the ssh connection seems to be renewed with every task, so SSH_AUTH_SOCK is different in each task.

@mpdehaan, my use case for this:
We deploy our applications with a (GitHub or Bitbucket) deploy key. This key is encrypted with Ansible Vault. My developers should be able to build a production version of the app on a local Vagrant VM, but without knowing the vault password. So in this case we use agent forwarding, and they check out the repo with their own keys.

tundrax commented Feb 25, 2015

Was having the same problem as @rommsen. Confirmed that removing ssh_args = -o ForwardAgent=yes and using @kolypto's solution fixes the problem ONLY IF you have specified ForwardAgent in your ~/.ssh/config.


KELiON commented Aug 19, 2015

Could someone tell me what I'm doing wrong? I'm working with an Amazon EC2 instance and want the agent forwarded to the user rails, but when I run the following task:

- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true

I see failed result:

(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!

When I try it manually, without Ansible, it works:

setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com  # Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.

I even tried launching a new instance and running a test playbook:

#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true
    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true
    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails

ansible.cfg is:

[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s

[defaults]
sudo_flags=-HE
hostfile=staging

But the same result. Any ideas?

stephencheng commented Aug 23, 2015

@KELiON I am having the exact same issue, and I have given up on hacking it this way.

wojtek-oledzki commented Aug 27, 2015

@KELiON and @stephencheng, this config works for me with the SSH_AUTH_SOCK hack: you have to set the ControlPath.

[defaults]
sudo_flags=-HE

[ssh_connection]
ssh_args = -F ssh/bastion.config -o ForwardAgent=yes -o ControlPath=/tmp/ssh-%r@%h:%p -o ControlMaster=auto -o ControlPersist=60s

my ssh/bastion.config contains some ec2 magic.

rommsen commented Jan 18, 2016

I am not sure whether to open a new bug report, but after upgrading to Ansible 2, combining

[defaults]
# Required so `sudo: yes` does not lose the environment variables, which hold the ssh-agent socket
sudo_flags=-HE

and

[ssh_connection]
pipelining=True

always gives me: sudo: no tty present and no askpass program specified

If I remove any of them it works, but I really need both. Any suggestions anyone?

wojtek-oledzki commented Jan 18, 2016

@rommsen, pipelining=True messes up some ssh settings. Try this (not tested):

[defaults]
# Required so `sudo: yes` does not lose the environment variables, which hold the ssh-agent socket
sudo_flags=-HE

[ssh_connection]
pipelining=True
ssh_args = -o ForwardAgent=yes -o ControlPath=/tmp/ssh-%r@%h:%p -o ControlMaster=auto -o ControlPersist=60s

rommsen commented Jan 18, 2016

Thanks for getting back to me @wojtek-oledzki, but it is not working for me :(

rommsen commented Jan 18, 2016

The moment I set any sudo flags and have pipelining enabled, I get "no tty present and no askpass program specified" (even with Defaults !requiretty in /etc/sudoers). I have to choose either pipelining or sudo flags; the two do not work together.

ianheggie commented Mar 25, 2016

I did NOT have to set ForwardAgent in ~/.ssh/config (which surprised me), but neither did it cause a problem when it was set (and I double-checked that the socket was not created on a normal ssh connection unless ForwardAgent was set in ~/.ssh/config). I have pipelining enabled.

chroche commented May 26, 2016

Note that using SSH connection multiplexing (ControlMaster=auto) with @kolypto and @KELiON's solution above introduces a race condition when a time limit is set on the connection (e.g. ControlPersist=60s), as the SSH authentication socket will be recreated with new permissions once the delay expires. A way around this is to use an SSH rc file instead, which is run each time an Ansible task executes:

- name: "ssh-agent hack: grant access for user"
  copy:
    src: rc
    dest: .ssh/rc

with file "rc" containing:

[ -S "$SSH_AUTH_SOCK" ] && setfacl -R -m user:<myuser>:rwx $(dirname "$SSH_AUTH_SOCK")

yeago commented Jun 9, 2016

I can confirm none of this works as of 2.0, with the extra fun of SSH_AUTH_SOCK not existing at all in some cases. In every case I've had to either A) run Ansible until it fails, use some other means to attach the key to the user in question, then run again, or B) rewrite my roles with no expectation of a functioning ForwardAgent in any tasks that use become_user, sudo, etc.

brutto commented Oct 28, 2016

I have the same issue with the git module and a passphrase-protected private key, on Ansible v2.1.1.

Thanks to @bdowling @kolypto @wojtek-oledzki.
Here is my working workaround: ansible/ansible-modules-core#5419 (comment)

terox commented Mar 25, 2017

After hours of trying to get agent forwarding working, I gave up in favor of uploading a deployment key. I hope this helps:

# task.yml
- name: directory
  file:
    path: ~/.ssh
    state: directory
    mode: 0700

- name: config
  template:
    src: templates/ssh.config.j2
    dest: ~/.ssh/config
    mode: 0600

- name: keys
  copy:
    src: "{{ ssh_key.value }}"
    dest: "~/.ssh/{{ ssh_key.key }}"
    mode: 0600
  with_dict: "{{ ssh_keys }}"
  loop_control:
    loop_var: ssh_key

- name: Add and load private key to ssh-agent
  shell: "eval `ssh-agent -s` && ssh-add ~/.ssh/{{ ssh_key.key }}"
  with_dict: "{{ ssh_keys }}"
  loop_control:
    loop_var: ssh_key

# templates/ssh.config.j2
{% for (entry,cfg) in ssh_config.iteritems() %}
Host {{entry}}
  {% for (k,v) in cfg.iteritems() %}
  {{k}} {{v}}
  {% endfor %}
{% endfor %}

Note: use these in the same role/task.

thuandt commented May 16, 2017

I also gave up on become: yes with SSH agent forwarding in Ansible :(

nueces commented Aug 24, 2017

I have this task that used to run perfectly; now, after updating Ansible to version 2.3.2.0, it doesn't work:

- name: "ssh-agent hack: grant access to the appuser"
  acl: name={{ item }} etype=user entity=remoteuser permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

Using epdb in the acl module, I found that the path in ansible_env.SSH_AUTH_SOCK doesn't exist and is different from the value returned by os.getenv('SSH_AUTH_SOCK'):

$ python -c "import epdb; epdb.connect()"  
> /tmp/ansible_Cotyma/ansible_module_acl.py(306)main()
-> if not os.path.exists(path):
(Epdb) path
'/tmp/ssh-gxQQrkaImb'
(Epdb) import os
(Epdb) os.getenv("SSH_AUTH_SOCK")
'/tmp/ssh-B45j4r4Ei8/agent.5038'
(Epdb) module.params
{'use_nfsv4_acls': False, 'name': '/tmp/ssh-gxQQrkaImb', 'default': False, 'recursive': False, 'state': 'present', 'entry': None, 'etype': 'user', 'follow': True, 'path': '/tmp/ssh-gxQQrkaImb', 'entity': 'snoopy', 'permissions': 'rwx'}
(Epdb) c

{"msg": "Path not found or not accessible.", "failed": true, "invocation": {"module_args": {"recursive": false, "default": false, "name": "/tmp/ssh-gxQQrkaImb", "state": "present", "follow": true, "etype": "user", "entry": null, "path": "/tmp/ssh-gxQQrkaImb", "entity": "remoteuser", "permissions": "rwx", "use_nfsv4_acls": false}}}
Serving on port 8080
*** Connection closed by remote host ***

I think ansible_env.SSH_AUTH_SOCK is captured on an earlier connection, so by the time a new connection is established for the acl module, the value of SSH_AUTH_SOCK has changed.

Any ideas how to fix it?
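One possible workaround, sketched here and untested: read the socket path within the play itself, so the value comes from the same connection the acl task will use, rather than from the ansible_env fact gathered earlier (the task names and the remoteuser entity simply mirror the snippet above):

```yaml
- name: Read the agent socket seen by the current connection
  shell: echo "$SSH_AUTH_SOCK"
  register: auth_sock
  changed_when: false

- name: "ssh-agent hack: grant access to the appuser"
  acl: name={{ item }} etype=user entity=remoteuser permissions="rwx" state=present
  with_items:
    - "{{ auth_sock.stdout|dirname }}"
    - "{{ auth_sock.stdout }}"
```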

kkarolis commented Oct 27, 2017

Posting this here, maybe it will help someone.

I personally gave up on agent forwarding as well, since it doesn't work (unless the socket permission hack is applied) when you connect as one user and then become another non-root account. As an alternative quasi-hacky solution, one can define a GitHub credential helper that takes the GitHub username and password from environment variables, which can be set from the calling side with ssh's SendEnv. I.e.:

on ansible.cfg, set something like this:

[ssh_connection]
ssh_args=-o SendEnv=LC_GITHUB_USERNAME -o SendEnv=LC_GITHUB_PASSWORD -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r

create a git credential helper, git-credential-env-vars, like this:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys

ENV_GITHUB_USERNAME = 'GITHUB_USERNAME'
ENV_GITHUB_PASSWORD = 'GITHUB_PASSWORD'


def fetch_credentials_from_env():
    """Tries to fetch github credentials from env

    Assumes a use case scenario that LC variables are usually retained and accepted while
    using ssh and sudo and set as a way to pass in credentials.
    """
    username = (os.environ.get(ENV_GITHUB_USERNAME) or
                os.environ.get('LC_' + ENV_GITHUB_USERNAME))
    password = (os.environ.get(ENV_GITHUB_PASSWORD) or
                os.environ.get('LC_' + ENV_GITHUB_PASSWORD))
    return username, password


def print_credential(**kwargs):
    for key, value in kwargs.items():  # items() works on both Python 2 and 3
        print('{}={}'.format(key, value))


def main():
    username, password = fetch_credentials_from_env()
    if not username or not password:
        sys.exit(0)
    print_credential(username=username)
    print_credential(password=password)


if __name__ == '__main__':
    main()
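
For context: git invokes a configured helper as `git credential-<name> get`, feeding it `key=value` lines on stdin, and reads `username=`/`password=` lines back on stdout. The minimal sketch below (self-contained, with made-up values) shows the same env-var lookup order the helper uses, without touching the real environment:

```python
def fetch_credentials_from_env(environ):
    """Same lookup order as the helper above: plain name first, LC_-prefixed fallback."""
    username = environ.get('GITHUB_USERNAME') or environ.get('LC_GITHUB_USERNAME')
    password = environ.get('GITHUB_PASSWORD') or environ.get('LC_GITHUB_PASSWORD')
    return username, password

# Simulate what sshd leaves in the remote environment after SendEnv of the LC_ variables
remote_env = {'LC_GITHUB_USERNAME': 'alice', 'LC_GITHUB_PASSWORD': 'wonderland'}
user, password = fetch_credentials_from_env(remote_env)

# Emit the lines git expects from a credential helper's `get` action
print('username={}'.format(user))       # username=alice
print('password={}'.format(password))   # password=wonderland
```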

before calling any git code, deploy the credential helper with a git_credentials_helper.yml along these lines:

- name: Ensure git is installed
  apt: name=git state=present

- name: Check if credential helper is already configured
  command: 'git config --system credential.helper'
  register: credential_helper
  failed_when: false
  changed_when: false

- name: Copy credential helper 
  copy:
    src: "git-credential-env-vars"
    dest: '/usr/local/bin/git-credential-env-vars'
    mode: 0755

- name: Install new credential_helper 
  command: 'git config --system credential.helper "env-vars"'
  when: "not credential_helper.stdout_lines"

and when calling ansible-playbook, ensure that LC_GITHUB_USERNAME and LC_GITHUB_PASSWORD are set on the control machine, and add the repos using the https protocol. If you don't want to use the LC_ hack, you need to modify AcceptEnv in /etc/ssh/sshd_config as noted in the Stack Overflow question, which can be done with the same credential-helper tasks.
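
The LC_ prefix works out of the box because stock sshd configs typically ship with `AcceptEnv LANG LC_*`. If you'd rather send plainly named variables instead, the server has to whitelist them explicitly (a sketch; the two variable names are this example's, not defaults):

```
# /etc/ssh/sshd_config
AcceptEnv LANG LC_* GITHUB_USERNAME GITHUB_PASSWORD
```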

It should be moderately safe: it works through any number of sudos, doesn't store credentials in the repo, and covers other use cases as well, e.g. a fabric CD setup that does a rolling upgrade and needs to pull repos too.

