Issue with disable schedule job on cluster #2073

Merged
merged 6 commits into rundeck:master Nov 30, 2016

3 participants

@jtobard
Contributor
jtobard commented Sep 14, 2016

Fixes a problem in cluster mode: when you disable the schedule of a job from a different cluster node, the job still runs as scheduled on the owner node. Now, when the job is executed, we detect whether it was disabled, prevent the execution, and unschedule it.

jtobard added some commits Aug 14, 2016
@gschueler gschueler commented on an outdated diff Sep 14, 2016
...rails-app/jobs/rundeck/quartzjobs/ExecutionJob.groovy
@@ -281,6 +281,13 @@ class ExecutionJob implements InterruptableJob {
}
context.getScheduler().deleteJob(context.jobDetail.key)
return initMap
+ }else{
+ //verify run on this node but scheduled disabled
+ if(!initMap.scheduledExecution.scheduleEnabled){
@gschueler
gschueler Sep 14, 2016 Contributor

perhaps it should test scheduledExecution.shouldScheduleExecution(); this will also fix it if the schedule is removed or the execution is disabled
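
For illustration, the guard with that suggestion applied might look roughly like this (a sketch only; initMap, context, and the deleteJob call are taken from the diff above, and shouldScheduleExecution() is assumed to cover all three disable cases):

    //verify the job should still run on this node; skip and unschedule otherwise
    if (!initMap.scheduledExecution.shouldScheduleExecution()) {
        //covers scheduleEnabled == false, executionEnabled == false,
        //and the schedule having been removed entirely
        context.getScheduler().deleteJob(context.jobDetail.key)
        return initMap
    }

One call then handles all three reasons the job should no longer fire, instead of checking scheduleEnabled alone.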

@gschueler
Contributor

fyi: the Travis build failure is something I'm trying to fix; I will restart it and it should work. Or you can try rebasing on the master branch; I tried to add a fix to the docker tests for that Travis issue.

+ [adhocRemoteString: 'test buddy', argString: '-delay 12 -monkey cheese -particle']
+ )]
+ ),
+ scheduled: false,
@gschueler
gschueler Sep 14, 2016 Contributor

You can parameterize the scheduled, executionEnabled, and scheduleEnabled values to test all of them being set to false.

+
+ then:
+ 1 * quartzScheduler.deleteJob(ajobKey)
+
@gschueler
gschueler Sep 14, 2016 Contributor

Here you could use:

where:
isscheduled | isexecenabled | isscheduleenabled
false       | true          | true
true        | false         | true
true        | true          | false
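
Assembled into a full test, that data table might be used roughly as follows (a sketch under assumptions: quartzScheduler and ajobKey come from the snippets above, while service.rescheduleJob is a hypothetical stand-in for the code path under test, not necessarily the method this PR exercises):

    @Unroll
    def "job stopped by another cluster node is deleted from quartz"() {
        given:
        def se = new ScheduledExecution(
                scheduled: isscheduled,
                executionEnabled: isexecenabled,
                scheduleEnabled: isscheduleenabled
        )

        when:
        //hypothetical stand-in for the code path that checks the flags
        service.rescheduleJob(se)

        then:
        1 * quartzScheduler.deleteJob(ajobKey)

        where:
        isscheduled | isexecenabled | isscheduleenabled
        false       | true          | true
        true        | false         | true
        true        | true          | false
    }

With @Unroll, each row of the where: table is reported as its own test case.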
@@ -1518,6 +1518,7 @@ class ScheduledExecutionService implements ApplicationContextAware, Initializing
return [success: false, scheduledExecution: scheduledExecution,
errorCode: 'api.error.job.toggleSchedule.notScheduled' ]
}
+ scheduledExecution.serverNodeUUID = frameworkService.isClusterModeEnabled()?frameworkService.serverUUID:null
@gschueler
gschueler Sep 14, 2016 Contributor

this change should also be tested, if we expect the "update execution flags" action to change the schedule ownership.

e.g., if it was owned by another node, enabling the schedule would now reschedule the job on the current node, correct? I think that is the correct behavior, but it should be tested.
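
Such a test might be sketched like this (the method name updateScheduleFlags and the mock wiring are assumptions, not the service's actual API; the point is the assertion that serverNodeUUID moves to the current node when the schedule is re-enabled in cluster mode):

    def "enabling the schedule takes over ownership on a cluster"() {
        given:
        def se = new ScheduledExecution(scheduled: true, scheduleEnabled: false,
                serverNodeUUID: 'other-node-uuid')
        service.frameworkService = Mock(FrameworkService) {
            isClusterModeEnabled() >> true
            getServerUUID() >> 'this-node-uuid'
        }

        when:
        //hypothetical call standing in for the "update execution flags" path
        service.updateScheduleFlags(se, [scheduleEnabled: true])

        then:
        se.serverNodeUUID == 'this-node-uuid'
    }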

@jtobard jtobard changes requested, added test for ScheduledExecutionService and parametrized "scheduled job was stopped by another cluster node, so should be deleted from quartz scheduler" test
01a0c32
@philippevidal80

We have found the same behaviour on our 2.6.9-1 Rundeck cluster with 2 nodes, through the pop-up shortcut edit of a job.

The "disable execution" and "disable schedule" boxes are checked on both nodes, but the job continues to be executed at the scheduled time on the owner node.

Moreover, on the non-owner node the jobs list still shows the schedule and the node in charge of the job, instead of "Never".

@gschueler gschueler added this to the 2.7.0 milestone Nov 18, 2016
@gschueler gschueler merged commit 01a0c32 into rundeck:master Nov 30, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed