
apt upgrade full raises a "Could not get lock /var/lib/dpkg/lock-frontend" #51663

Open
antonioribeiro opened this issue Feb 3, 2019 · 49 comments
Labels
affects_2.7 · bug · has_pr · module · P3 · python3 · support:core

Comments

@antonioribeiro

antonioribeiro commented Feb 3, 2019

SUMMARY

When running apt: upgrade: full, it frequently errors with "Could not get lock /var/lib/dpkg/lock-frontend". I already tried deleting the lock file before running the upgrade, but it still raises the error. To fix it I just have to run ansible-playbook again, and it usually works.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

apt

ANSIBLE VERSION
ansible 2.7.6
  config file = None
  configured module search path = ['/Users/antoniocarlos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.7.2 (default, Feb  2 2019, 18:43:53) [Clang 10.0.0 (clang-1000.11.45.5)]
CONFIGURATION
<nothing is printed with this command>
OS / ENVIRONMENT

Ansible environment
macOS 10.14

Target OS
Ubuntu 18.04
It's a new host, clean 18.04 install, and Ansible is the only thing installing packages on it.

STEPS TO REPRODUCE
Delete the lock file

I usually get a green on this one, meaning the file was not present:

- name: Remove apt lock file
  file:
    state: absent
    path: "/var/lib/dpkg/lock-frontend"
  become: true
  tags: apt
Upgrade all packages

The one that fails:

- name: Update all packages to the latest version
  apt:
    upgrade: full
  become: true
  tags: apt


TASK [apt : Remove apt lock file] *******************************************************************************************************************************************
task path: /Users/antoniocarlos/code/xxxxxxx/xxxxxxxx/roles/apt/tasks/main.yml:61
ok: [xx.xxx.xxx.xx] => {"changed": false, "path": "/var/lib/dpkg/lock-frontend", "state": "absent"}

TASK [apt : Update all packages to the latest version] **********************************************************************************************************************
task path: /Users/antoniocarlos/code/xxxxxxx/xxxxxxxx/roles/apt/tasks/main.yml:67
fatal: [xx.xxx.xxx.xx]: FAILED! => {"changed": false, "msg": "'/usr/bin/aptitude full-upgrade' failed: E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\nE: Could not regain the system lock!  (Perhaps another apt or dpkg is running?)\nE: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\nW: Could not lock the cache file; this usually means that dpkg or another apt tool is already installing packages.  Opening in read-only mode; any changes you make to the states of packages will NOT be preserved!\n", "rc": 255, "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nReading extended state information...\nInitializing package states...\nWriting extended state information...\nBuilding tag database...\nNo packages will be installed, upgraded, or removed.\n0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.\nNeed to get 0 B of archives. After unpacking 0 B will be used.\nWriting extended state information...\nReading package lists...\nBuilding dependency tree...\nReading state information...\nReading extended state information...\nInitializing package states...\nBuilding tag database...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Reading extended state information...", "Initializing package states...", "Writing extended state information...", "Building tag database...", "No packages will be installed, upgraded, or removed.", "0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.", "Need to get 0 B of archives. After unpacking 0 B will be used.", "Writing extended state information...", "Reading package lists...", "Building dependency tree...", "Reading state information...", "Reading extended state information...", "Initializing package states...", "Building tag database..."]}
	to retry, use: --limit @/Users/antoniocarlos/code/xxxxxxx/xxxxxxxx/playbook.retry
@ansibot added the affects_2.7, bug, needs_triage, python3 and support:core labels on Feb 3, 2019
@antonioribeiro
Author

Here's the next run, where deleting the lock file and the full upgrade both succeed:


@s-hertel added the P3 label and removed the needs_triage label on Feb 12, 2019
@AHassanSOS
Contributor

I just got the same error while trying to install Ansible remotely.

@Dejan992

Dejan992 commented Mar 7, 2019

I have the same error. I've seen this error before, and just killing the EC2 instance fixed it, but with ansible-playbook I can't do that.

@ludydoo

ludydoo commented Jun 20, 2019

I found that this happens when I set different hostnames for the same IP.

in hosts:
[mygroup]
node1 ansible_ssh_host=10.10.10.10
node2 ansible_ssh_host=10.10.10.10


- hosts: mygroup
  tasks:
    - apt:
        upgrade: full

@eff917

eff917 commented Sep 13, 2019

Just a hunch: update_cache: true runs apt update first, and the task fails on the upgrade step. So my guess is the module does not check for the lock again after the update when update_cache is true.

@jasongitmail

The bug seems related to using update_cache: true like @eff917 said.

I had to run sudo mv /var/lib/dpkg/lock /var/lib/dpkg/lock.bak to remove the lock, then Ansible proceeded successfully and apt-get also worked again on the server. Of course, you could also use: sudo rm /var/lib/dpkg/lock.

I had success with this, but still TBD whether this solves it consistently:

- name: Update APT Cache
  apt:
    update_cache: yes
    force_apt_get: yes

- name: Remove apt lock file
  file:
    state: absent
    path: "/var/lib/dpkg/lock"

- name: Upgrade all packages to the latest version
  apt:
    name: "*"
    state: latest
    force_apt_get: yes

On Ubuntu 16. Ansible 2.8.5.

@JoshuaEdwards1991

JoshuaEdwards1991 commented Nov 15, 2019

You could try

- name: Wait for sudo
  become: yes
  shell:  while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1; do sleep 5; done;

@eff917

eff917 commented Nov 21, 2019

@JoshuaEdwards1991 our playbooks are full of different lockfile waits (dpkg.lock, dpkg-frontend.lock, etc.). This problem is inside the apt task: when update_cache is true, it doesn't check the locks halfway through the task. The current workaround is to split the single task into apt update, wait for lockfile, apt upgrade...

@juneeighteen

Just ran into this issue today.
This is broken:

- name: Update apt Cache
  apt:
    update_cache: yes
    name: "{{ apt_packages }}"

Looking at comments above, this is a viable workaround for us:

  • Breaking the task into three pieces (apt cache, wait for lock, install)
  • Adding force_apt_get: yes
  • Setting become: true at the host level.
- name: Update apt Cache
  apt:
    update_cache: yes
    force_apt_get: yes

- name: Wait for APT Lock
  shell: while fuser /var/lib/dpkg/lock >/dev/null 2>&1; do sleep 5; done;

- name: Install Apt Packages
  apt:
    name: "{{ apt_packages }}"
    state: present
    force_apt_get: yes

@bcampoli

I am blocked by this; all my builds on fresh instances using the apt module are failing.

@juneeighteen

@bcampoli I found out, further to our issue, that it's only happening when we build on AWS instances. Our Docker instances don't have the same issue. I hate that I did this, but we added a pause statement before the APT code above. Even waiting for the lock using while fuser didn't work for us. Two minutes later, I can run all the code and see no issues.

- name: Wait 2 minutes for APT to complete on AWS instances
  pause:
    minutes: 2

@ansibot added the has_pr label on May 14, 2020
@richard-viney

richard-viney commented Jun 27, 2020

We had a similar issue with Ansible package installs randomly and non-deterministically failing on Ubuntu 18.04 running on EC2 instances. We tried explicitly uninstalling the unattended-upgrades packages prior to Ansible being run, but that didn't seem to help.

Ended up adding the following workaround which improved things quite a lot:

- name: install packages
  apt:
    name:
      - ...
      - ...
  # There has been an intermittent issue with this task where it would fail and print the error:
  # 
  #     Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process
  #     using it?
  #
  # The reason for this is unclear. It's not from unattended-upgrades as that has already been
  # uninstalled when creating the base image. The workaround for now is to simply retry this task
  # several times in the event that it fails, with a small delay between each attempt.
  register: result
  until: result is not failed
  retries: 5
  delay: 5

A proper fix would be great.

@zsxing99

zsxing99 commented Aug 3, 2020

    - name: Upgrade all apt packages
      become: true
      become_method: sudo
      apt:
        name: "*"
        state: latest

I used this command and it worked on GCE. Hope this helps.

@Alex2357

Alex2357 commented Sep 11, 2020

Hi everyone! This is a very annoying issue with Ubuntu; playbooks that use apt are simply very fragile. The problem is with unattended upgrades in Ubuntu. See details here: https://itsfoss.com/could-not-get-lock-error/
My initial idea was to turn the upgrades off. But then I would need to manage when to do the upgrades myself, so I did not like the idea of turning them off. It seems to me that it is better to kill the current upgrade session than to turn it off completely.

I was also thinking of adding sudo killall apt apt-get before executing any apt operations, but I don't like that either.
Another option is to add a retry. What I have noticed is that in many cases the playbook works after 5-10 minutes, once an upgrade is completed.
So I will give a retry a try.

Ideally, Canonical would provide the ability to stop upgrades on request in a proper manner. I'd like to have a command something like StopUpgradesFor 42 that stops upgrades for 42 minutes and then starts them again, unless somebody calls StopUpgradesFor again.

@tonycpsu

Here is what I ended up coming up with that seems to handle all of the edge cases. Some of this is borrowed from others in this issue, some from other attempts to solve this problem I found in my travels:

- name: Disable periodic updates
  block:
    - name: Set all periodic update options to 0
      replace:
        path: /etc/apt/apt.conf.d/10periodic
        regexp: "1"
        replace: "0"
    - name: Set all auto update options to 0
      replace:
        path: /etc/apt/apt.conf.d/20auto-upgrades
        regexp: "1"
        replace: "0"
    - name: Disable unattended upgrades
      lineinfile:
        path: /etc/apt/apt.conf.d/10periodic
        regexp: "^APT::Periodic::Unattended-Upgrade"
        line: 'APT::Periodic::Unattended-Upgrade "0";'
        create: yes
    - name: Stop apt-daily.* systemd services
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - unattended-upgrades
        - apt-daily
        - apt-daily.timer
        - apt-daily-upgrade
        - apt-daily-upgrade.timer
    - name: Disable apt-daily.* systemd services
      systemd:
        name: "{{service}}"
        enabled: no
        masked: yes
      with_items:
        - apt-daily.service
        - apt-daily.timer
        - apt-daily-upgrade.service
        - apt-daily-upgrade.timer
      loop_control:
        loop_var: service
    - name: Uninstall unattended upgrades
      apt:
        name: unattended-upgrades
        state: absent
    - name: Prevent unattended upgrades from being installed
      dpkg_selections:
        name: unattended-upgrades
        selection: hold

@Alex2357

@tonycpsu how do you handle upgrades then, if you have them disabled?

My problem is that I use Ansible to configure Ubuntu machines at home. Everything works fine except for this annoying issue, and I don't want to turn unattended upgrades off.
I also have problems with VirtualBox VMs that I run playbooks against. What happens is that I start a VM based on some old snapshot and it immediately starts updating, so you have to wait a long time before the playbook can run.

@tonyawad88

tonyawad88 commented Sep 15, 2020

Same issue here as described by the author.
Using the following in my playbook against a bare-bones Ubuntu 18.04:

  tasks:

    - name: Step 1 - Update Ubuntu package list
      apt:
        update_cache: yes

    - name: Step 2 - Update all packages to the latest version
      apt:
        upgrade: dist

It crashes at "Step 2..." with the following error:
FAILED! => {"changed": false, "msg": "'/usr/bin/apt-get dist-upgrade ' failed: E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n", "rc": 100, "stdout": "", "stdout_lines": []}
If we wait a couple of minutes and then retry, it works fine...

Something worth noting: Ubuntu seems to have scheduled activities which trigger on first boot:
sudo systemctl list-timers
Waiting for those to complete, and then including the wait for both lock and lock-frontend, seems to be a nice workaround.
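Spelled out as a task, that extended lock wait might look like this (same fuser pattern as earlier in the thread, just covering both files; an untested sketch):

- name: Wait for apt/dpkg locks (lock and lock-frontend)
  become: true
  shell: >
    while fuser /var/lib/dpkg/lock /var/lib/dpkg/lock-frontend >/dev/null 2>&1;
    do sleep 5; done
  changed_when: false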

JaavLex added a commit to JaavLex/ansible-alex that referenced this issue Sep 24, 2020
- Is now functionnal
- For it to work, we had to remove the 'lock' file in the path '/var/lib/dpkg' SOURCE: ansible/ansible#51663
- Now updates ansible machines
@jayantkaushalapp

Just to be sure, check if cloud-init is running on your instance. For me, there was a cloud-init run in progress which was locking the package manager for any other operation.

@ansibot
Contributor

ansibot commented Oct 11, 2020

@antonioribeiro: Greetings! Thanks for taking the time to open this issue. In order for the community to handle your issue effectively, we need a bit more information.

Here are the items we could not find in your description:

  • component name

Please set the description of this issue with an appropriate template from:
https://github.com/ansible/ansible/tree/devel/.github/ISSUE_TEMPLATE

click here for bot help


@bcoca
Member

bcoca commented Apr 1, 2021

related #74095

@ansible ansible deleted a comment from themar7777 Apr 1, 2021
@ansible ansible deleted a comment from themar7777 Apr 1, 2021
@JonTheNiceGuy
Contributor

@ravulakiran I'm afraid I can't help you in your specific case - you're posting on issues that aren't relevant and without full detail. This issue relates specifically to where the apt process has a lock state because another process is running an apt process. Your initial question did relate to this, but you're now layering more complexity and non-relevant content over the top.

To compound issues, where you have posted problems that you are actually having, you're posting just the output of the failed task, you're not providing a simple, testable case that someone can guide you on. In addition, dropping into an existing issue (like this one) and asking unrelated questions makes it difficult to find context that the actual project owners, employees, volunteers or interested parties can use to resolve the initial issue.

I would suggest one of the following:

  1. Raising a new issue, providing a simple example of what worked (and what didn't), including the full playbook you've used, the output, and explaining what you wanted to happen
  2. Posing the question on a Q&A site like StackOverflow
  3. Posting the playbook and output on a pastebin somewhere (gist.github.com is a good option, if you don't have your own preferred site) and then asking for help in the IRC channel or the Google Group.

I should note, I'm not involved in the project at all, aside from being interested enough in issues to comment on them, and provide code or documentation to resolve a few small issues I can see being helpful. I certainly don't have any sway with the project nor can I moderate conversations, other than dropping a response like this in.

@smacz42

smacz42 commented Jul 12, 2021

It looks like, per the PR linked above, this is getting fixed in 2.12. Is there any chance of backporting it as a bugfix?

@TristisOris

awx 19, ansible 2.9.13
hosts: ubuntu 20

This code works for CentOS, but not for Ubuntu.
I can't run apt update without adding this to /etc/sudoers:
LOGIN ALL=(ALL) NOPASSWD:ALL

"msg": "'/usr/bin/apt-get upgrade --with-new-pkgs ' failed: E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\n",

#become: yes
become_user: root
become_method: sudo
tasks:

  - name: Debian | Update
    when: ansible_facts['os_family'] == "Debian"
    apt:
      update_cache: yes
      name: '*'
      state: latest
      cache_valid_time: 86400

  - name: Debian | Upgrade
    when: ansible_facts['os_family'] == "Debian"
    apt:
      upgrade: full
      autoclean: yes

@JonTheNiceGuy
Contributor

@TristisOris You must set become: yes either here, in your inventory, or somewhere similar to become root... or you need to be accessing the root account on your host.

@TristisOris

@JonTheNiceGuy I tried all the options, but it didn't work on my hosts without editing sudoers:
"msg": "Missing sudo password",

Some of my CentOS 7 hosts work only with become: yes, others only with become_user: root. I have no idea what the difference is.

@JonTheNiceGuy
Contributor

@TristisOris This is not linked to the issue above; however, you need to supply one of:

  • The variable ansible_become_pass, stored as a variable (vaulted or not - but it's insecure without) OR
  • The switch --ask-become-pass (which has the short version, -K)

The first option allows you to use this password in an automated manner, however, it may be stored in an insecure way (e.g. not using the vault). The second option requires you to provide the sudo password each time you run the script, but is more secure.
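For example (the file path and the value are just illustrative; vault-encrypt the file if you go this route):

# host_vars/myhost.yml  (insecure unless this file is encrypted with ansible-vault)
ansible_become_pass: mySudoPassword

or run the playbook with ansible-playbook site.yml -K and type the sudo password when prompted.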

Again, this is NOT linked to the issue above, so if you have any further issues with your playbook, can I suggest using one of the support methods (e.g. IRC or mailing list) to get more interactive help.

@wibru

wibru commented Sep 20, 2021

None of the solutions proposed above convinced me, because I wanted to keep unattended-upgrades and I didn't want to build a dirty workaround into every apt task, which would mean rewriting all my Ansible roles/playbooks.
After digging a while, I discovered that the default systemd apt timer from unattended upgrades has a setting that triggers the update as soon as possible (at boot) if the timer's start time was missed.

I solved the problem by building an image with the proper systemd setting disabled:

$ systemctl cat apt-daily-upgrade.timer
# /etc/systemd/system/apt-daily-upgrade.timer
[Unit]
Description=Daily apt upgrade and clean activities
After=apt-daily.timer

[Timer]
OnCalendar=*-*-* 6:00
RandomizedDelaySec=60m
Persistent=false

[Install]
WantedBy=timers.target

I disabled the Persistent feature, which is only useful on a desktop/laptop setup, not on a server that is kept online.

   Persistent=
       Takes a boolean argument. If true, the time when the service
       unit was last triggered is stored on disk. When the timer is
       activated, the service unit is triggered immediately if it
       would have been triggered at least once during the time when
       the timer was inactive. Such triggering is nonetheless
       subject to the delay imposed by RandomizedDelaySec=. This is
       useful to catch up on missed runs of the service when the
       system was powered down. Note that this setting only has an
       effect on timers configured with OnCalendar=. Defaults to
       false.

       Use systemctl clean --what=state ...  on the timer unit to
       remove the timestamp file maintained by this option from
       disk. In particular, use this command before uninstalling a
       timer unit. See systemctl(1) for details.

from man 5 systemd.timer
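If you would rather apply the same setting with Ansible instead of baking it into an image, a drop-in override (the same kind of file systemctl edit would create) could be managed like this (untested sketch):

- name: Create drop-in directory for apt-daily-upgrade.timer
  become: true
  file:
    path: /etc/systemd/system/apt-daily-upgrade.timer.d
    state: directory
    mode: '0755'

- name: Disable Persistent for apt-daily-upgrade.timer
  become: true
  copy:
    dest: /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf
    content: |
      [Timer]
      Persistent=false
    mode: '0644'

- name: Reload systemd so the override takes effect
  become: true
  systemd:
    daemon_reload: yes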

@smacz42

smacz42 commented Oct 28, 2021

Has anyone tried this workaround yet?

-o DPkg::Lock::Timeout=3

Found here: https://blog.sinjakli.co.uk/2021/10/25/waiting-for-apt-locks-without-the-hacky-bash-scripts/
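Until the apt module exposes a lock timeout, one way to try this from Ansible is via the command module. This is only a sketch, not a drop-in replacement for the apt module, and it needs a reasonably recent apt (roughly 1.9.11+, so Ubuntu 20.04 and later):

- name: Dist-upgrade, waiting up to 120 seconds for the dpkg locks
  become: true
  command: apt-get -y -o DPkg::Lock::Timeout=120 dist-upgrade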

@smacz42

smacz42 commented Nov 3, 2021

Like @bcoca said, #74095 has been merged into devel, so we'll be getting this in 2.12, I believe.
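Assuming the parameter from that PR lands on the apt module as lock_timeout, usage should look something like this (untested until 2.12 is out):

- name: Upgrade all packages, waiting up to 2 minutes for the apt/dpkg locks
  apt:
    upgrade: full
    lock_timeout: 120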

For the time being, I also found that my DigitalOcean droplets were kicking off cloud-init, which was making calls to apt in the background and causing a lock. I was able to wait for it by doing this:

    - name: Wait for cloud-init to complete
      shell: journalctl --boot _COMM=cloud-init | grep 'Cloud-init.*finished at'
      register: cloud_init_install
      retries: 60
      delay: 5
      until: cloud_init_install is success

Which looked like this:

Nov 03 04:52:47 oc2118e-ourcompose-com--2021-11-03-00-46 cloud-init[1131]: Cloud-init v. 21.2-3-g899bfaa9-0ubuntu2~18.04.1 finished at Wed, 03 Nov 2021 04:52:47 +0000. Datasource DataSourceDigitalOcean.  Up 341.84 seconds

Hope that helps someone!

@wibru

wibru commented Feb 14, 2022

In public cloud automation, I ended up with this playbook running first; no need to parse logs:

---
- hosts: all
  gather_facts: false
  tasks:
    - name: Wait for system to become reachable
      wait_for_connection:
        timeout: 600
  
    - name: Wait for cloud-init / user-data to finish
      command: cloud-init status --wait
      changed_when: false

But it may not be sufficient on systems with unattended-upgrades, like Ubuntu, which can fire an apt upgrade at boot.
The only way I found to work around that is my comment above: #51663 (comment)

@sebastianmacarescu

I run the following bash script with Packer on first boot, then run Ansible for provisioning:
fix_apt_upgrades.sh

#!/bin/bash

# Fix for https://github.com/ansible/ansible/issues/51663
UNIT='apt-daily-upgrade.timer'
DIR="/etc/systemd/system/${UNIT}.d"
mkdir -p $DIR
echo -e "[Timer]\nPersistent=false" > ${DIR}/override.conf
systemctl daemon-reload

# Wait for cloud-init
cloud-init status --wait

# Wait for ubuntu system wide updates for at least 15 seconds; cloud-init may not wait for these
i="0"
while [ $i -lt 15 ] 
do 
if [ $(fuser /var/lib/dpkg/lock) ]; then 
  # Reset timer if dpkg is locked
  i="0" 
fi 
sleep 1 
i=$[$i+1] 
done

Packer code:

provisioner "shell" {
    script = "${path.root}/fix_apt_upgrades.sh"
    execute_command = "sudo sh -c '{{ .Vars }} {{ .Path }}'"
}
// Run ansible provisioner here

Thanks @wibru for the solution

@chunkingz

@JonTheNiceGuy I tried all the options, but it didn't work on my hosts without editing sudoers. "msg": "Missing sudo password",

Some of my CentOS 7 hosts work only with become: yes, others only with become_user: root. I have no idea what the difference is.

I used yours and it finally worked for me.

I am connecting to a remote Ubuntu VM hosted on Azure. I use -u azureuser on the command line, and in the playbook I have become: true.

All I needed to do was add become_user: root and it finally passed.

@wibru

wibru commented Apr 17, 2022

@JonTheNiceGuy I tried all the options, but it didn't work on my hosts without editing sudoers. "msg": "Missing sudo password",
Some of my CentOS 7 hosts work only with become: yes, others only with become_user: root. I have no idea what the difference is.

I used yours and it finally worked for me.

I am connecting to a remote Ubuntu VM hosted on Azure. I use -u azureuser on the command line, and in the playbook I have become: true.

All I needed to do was add become_user: root and it finally passed.

  1. become: true asks for privilege escalation (like calling sudo mycommand)
  2. become_user: root on a task or play executes it as root

With 1, you run the task as the ansible_user but ask for privilege escalation, depending on the become_method (sudo by default on Ubuntu Linux).
With 2, the task is executed as root.

See the Ansible documentation on privilege escalation.

@NedkoHristov

Reopening this issue.

@walterrowe

walterrowe commented Jun 27, 2023

Also experiencing this issue on new AWS Ubuntu 20 instances.

task ...

- name: Update package cache so Docker packages install will succeed
  apt:
    update_cache: yes
    pkg:
      - docker-ce
      - docker-ce-cli
      - containerd.io

ansible tower error ...

{
    "stderr_lines": [
        "E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7690 (apt-get)",
        "E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?"
    ],
    "changed": false,
    "_ansible_no_log": false,
    "cache_updated": true,
    "stdout": "",
    "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\"      install 'docker-ce' 'docker-ce-cli' 'containerd.io'' failed: E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7690 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n",
    "stderr": "E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7690 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n",
    "rc": 100,
    "invocation": {
        "module_args": {
            "autoremove": false,
            "force": false,
            "force_apt_get": false,
            "update_cache": true,
            "pkg": [
                "docker-ce",
                "docker-ce-cli",
                "containerd.io"
            ],
            "only_upgrade": false,
            "default_release": null,
            "cache_valid_time": 0,
            "dpkg_options": "force-confdef,force-confold",
            "upgrade": null,
            "policy_rc_d": null,
            "package": [
                "docker-ce",
                "docker-ce-cli",
                "containerd.io"
            ],
            "autoclean": false,
            "purge": false,
            "allow_unauthenticated": false,
            "state": "present",
            "deb": null,
            "install_recommends": null
        }
    },
    "stdout_lines": [],
    "cache_update_time": 1687877445
}

will try adding a cloud-init status --wait to the playbook ...

- name: Wait for cloud-init / user-data to finish
  command: cloud-init status --wait
  changed_when: false

- name: Update package cache so Docker packages install will succeed
  apt:
    update_cache: yes
    pkg:
      - docker-ce
      - docker-ce-cli
      - containerd.io

@cesarjorgemartinez

Hi,

I tested multiple ways of disabling it, and the form that I think works best is:
sudo systemctl disable unattended-upgrades.service
sudo systemctl stop unattended-upgrades.service
sudo systemctl mask unattended-upgrades.service
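
The same thing expressed as an Ansible task might look like this (untested sketch using the systemd module):

- name: Stop, disable and mask unattended-upgrades
  become: true
  systemd:
    name: unattended-upgrades.service
    state: stopped
    enabled: no
    masked: yes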

Regards

@modem7

modem7 commented Nov 6, 2023

will try adding a cloud-init status --wait to the playbook ...

- name: Wait for cloud-init / user-data to finish
  command: cloud-init status --wait
  changed_when: false

- name: Update package cache so Docker packages install will succeed
  apt:
    update_cache: yes
    pkg:
      - docker-ce
      - docker-ce-cli
      - containerd.io

I've attempted the cloud-init wait, but unfortunately that only waits for cloud init, which isn't always the cause.

Another block I've added which has been useful is:

    - name: Wait for cloud-init / user-data to finish
      command: cloud-init status --wait
      when: "'lxc' not in group_names"
      changed_when: false
      no_log: true

    - name: Wait for cloud init to finish
      community.general.cloud_init_data_facts:
        filter: status
      register: res
      until: "res.cloud_init_data_facts.status.v1.stage is defined and not res.cloud_init_data_facts.status.v1.stage"
      when: "'lxc' not in group_names"
      retries: 50
      delay: 5
