
Skip running not working #28

Closed · sandergroenen opened this issue Feb 13, 2020 · 2 comments

sandergroenen commented Feb 13, 2020

Behaviour

When a cron service is added with `swarm.cronjob.skip-running=true`, the next scheduled iteration of the service starts even though the previous run is still in progress.

Steps to reproduce this issue

1. Add a service with a cron job to the stack's YAML file (the `<<: *php` merge key is sketched just after the log excerpt below):

```
autoscript-cronjob:
  <<: *php
  command: /usr/local/bin/php /var/www/html/nmb_importtool/v2.00/autoScript/autoScript.php date=20200212 task=full_minus_autotext maintenance_setting=none
  deploy:
    mode: replicated
    replicas: 0
    labels:
      - "swarm.cronjob.enable=true"
      - "swarm.cronjob.schedule=* * * * *"
      - "swarm.cronjob.skip-running=true"
      - "swarm.cronjob.replicas=1"
    restart_policy:
      condition: none
    placement:
      constraints:
        - node.labels.app == 1
        - node.role == manager
```
2. Deploy the stack through `docker stack deploy`.
3. Observe the swarm-cronjob log; it will state something like this:
```
Thu, 13 Feb 2020 15:25:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0
Thu, 13 Feb 2020 15:26:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0
Thu, 13 Feb 2020 15:27:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0
Thu, 13 Feb 2020 15:28:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0
Thu, 13 Feb 2020 15:29:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0
```
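For context, the `<<: *php` merge key copies an anchored base service definition into the cron service, health check included. A hypothetical sketch of what such an anchor could look like (the report does not show the real `php` definition; the image and health check below are assumptions for illustration only):

```
# Hypothetical anchor; not part of the original report.
x-php: &php
  image: php:7.3-apache            # assumed base image
  healthcheck:                     # merged into autoscript-cronjob as well
    test: ["CMD-SHELL", "curl -fsS http://localhost/ || exit 1"]
    interval: 30s
```

This inheritance turns out to matter for the resolution further down.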

Expected behaviour

The cron service should not be started again while it is still running.

Actual behaviour

The service starts again, killing the old task that was still running and starting a new one.

Configuration

• Target Docker version (the host/cluster you manage): 19.03.2, build 6a30dfc

• Platform (windows/linux): Linux, Red Hat 7.7

• System info (type uname -a): 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

• Target Swarm version:

Docker info

```
Server:
 Containers: 19
  Running: 6
  Paused: 0
  Stopped: 13
 Images: 25
 Server Version: 19.03.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: owd30wzrmpy14f8r5a95rjwmh
  Is Manager: true
  ClusterID: owy8kklxey6kv9f3pmcr1ikg2
  Managers: 1
  Nodes: 7
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 10.200.221.79
  Manager Addresses:
   10.200.221.79:10.200.221.79
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.el7.x86_64
 Operating System: Red Hat Enterprise Linux Server 7.7 (Maipo)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 23.37GiB
 Name: NMB-DC-CRG-V004
 ID: GALW:KDQW:UREM:54O4:OBIY:3YQH:XMQW:WHCD:XWJS:WBI6:ZBDY:GQI4
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  10.200.221.79:5000
  127.0.0.0/8
 Live Restore Enabled: false
```

Logs


```
Thu, 13 Feb 2020 15:38:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=l5543ugudsrh1eqdt4aw9070k
Thu, 13 Feb 2020 15:38:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=kcwwj7geik91w8xy4p58idvpf
Thu, 13 Feb 2020 15:38:00 EAT INF Start job service=webdev_autoscript-cronjob status= tasks_active=0
Thu, 13 Feb 2020 15:38:00 EAT DBG Event triggered newstate=completed oldstate=updating service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:38:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:38:00 EAT DBG Number of cronjob tasks: 1
Thu, 13 Feb 2020 15:38:00 EAT DBG Event triggered newstate=updating oldstate=updating service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:38:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:38:00 EAT DBG Number of cronjob tasks: 1
Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=starting task_id=953fdagojn0beocz402y9zr62
Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=8tk4h8mbq4dgz75u8erz1yq21
Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=l5543ugudsrh1eqdt4aw9070k
Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=kcwwj7geik91w8xy4p58idvpf
Thu, 13 Feb 2020 15:39:00 EAT INF Start job service=webdev_autoscript-cronjob status=updating tasks_active=0
Thu, 13 Feb 2020 15:39:00 EAT DBG Event triggered newstate=updating oldstate=updating service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:39:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:39:00 EAT DBG Number of cronjob tasks: 1
Thu, 13 Feb 2020 15:39:00 EAT DBG Event triggered newstate=updating oldstate=updating service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:39:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob
Thu, 13 Feb 2020 15:39:00 EAT DBG Number of cronjob tasks: 1
```
sandergroenen (author) commented:

I was too quick in writing this report. It turned out that the cron service inherited a health check from the php service through the YAML merge feature, and since the command was changed from starting an Apache process (in the actual php service) to a PHP CLI command (in the cron job), the health check simply determined that the container was not active...
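A minimal sketch of one way to address this, assuming Compose file format 3.4+ and the service layout from the report: override the inherited health check in the cron service so the merged-in Apache probe no longer applies (`disable: true` is standard Compose healthcheck syntax).

```
autoscript-cronjob:
  <<: *php
  command: /usr/local/bin/php /var/www/html/nmb_importtool/v2.00/autoScript/autoScript.php date=20200212 task=full_minus_autotext maintenance_setting=none
  # The merge key above also pulls in the php service's health check,
  # which probes Apache; this one-shot CLI task runs no Apache, so its
  # tasks keep being marked as failed. Disable the inherited check:
  healthcheck:
    disable: true
  # deploy section unchanged from the example above
```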

dberardo-com commented:

Maybe it is because I am not a native English speaker, but what is the skip-running flag even intended to do? It is totally unclear to me from the doc: "Do not start a job if the service is currently running."

Which service is meant here?
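One plausible reading, judging from the doc line quoted above and the `tasks_active` field in the logs, is that "the service" means the labeled cron service itself; a sketch under that assumption, annotating the label from the example earlier in the thread:

```
autoscript-cronjob:
  deploy:
    labels:
      - "swarm.cronjob.enable=true"
      - "swarm.cronjob.schedule=* * * * *"
      # With skip-running=true: when the schedule fires and any task
      # from this service's previous run is still active
      # (tasks_active > 0), the new run is skipped for that tick.
      - "swarm.cronjob.skip-running=true"
```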
