
Custom backoffLimit & concurrencyPolicy for SCDF tasks are not passed to pods when executing in an OpenShift environment #4155

Closed · Srkanna opened this issue Sep 19, 2020 · 2 comments

Srkanna commented Sep 19, 2020

I'm trying to set a backoffLimit and a concurrencyPolicy for batch jobs that run in an OpenShift environment via SCDF. Currently I set both at the global server config level. The resource limits and imagePullPolicy configurations are passed through to the CronJob, but backoffLimit and concurrencyPolicy are not.

I'm seeing this in 2.6.1 and in earlier versions as well. Below is the server-config.yaml:

  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              dev:
                limits:
                  memory: 1024Mi
                  cpu: 1
                entry-point-style: exec
                image-pull-policy: always
                backoffLimit: 1
                maxCrashLoopBackOffRestarts: 1
                concurrencyPolicy: forbid
  datasource:
    url: ${oracle-root-url}
    username: ${oracle-root-username}
    password: ${oracle-root-password}
    driver-class-name: oracle.jdbc.OracleDriver
    testOnBorrow: true
    validationQuery: "SELECT 1"
  flyway:
    enabled: false
  jpa:
    hibernate:
      use-new-id-generator-mappings: true
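
As a sanity check on the property names: the other keys in this block use kebab-case (entry-point-style, image-pull-policy) while the three settings in question are camelCase. Spring Boot's relaxed binding should treat the two spellings as equivalent, so this is shown only to rule out a binding mismatch; the kebab-case forms below are mechanical conversions of the keys above, not names verified against the deployer's property list:

                backoff-limit: 1
                max-crash-loop-back-off-restarts: 1
                concurrency-policy: forbid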

Neither backoffLimit nor maxCrashLoopBackOffRestarts makes it into the pod configuration: pods are still restarted 6 times instead of once after a failure. Below is the CronJob YAML, extracted from the OpenShift cluster console after creating the schedule for a batch job in SCDF.

kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: batchjob1
  namespace: dev-batch
  selfLink: /apis/batch/v1beta1/namespaces/dev-batch/cronjobs/batchjob1
  uid: bef709dc-fa3a-11ea-933e-001a4a1a0116
  resourceVersion: '144552724'
  creationTimestamp: '2020-09-19T05:41:20Z'
  labels:
    spring-cronjob-id: batchjob1
spec:
  schedule: '*/10 * * * *'
  concurrencyPolicy: Allow
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
            - name: batchjob1
              image: >-
                docker-registry.default.svc:5000/batch/batch-job:0.0.4
              args:
                - '--spring.datasource.username=BATCH_APP'
                - '--spring.cloud.task.name=batchjob1'
                - >-
                  --spring.datasource.url=jdbc:oracle:thin:@URL
                - '--spring.datasource.driverClassName=oracle.jdbc.OracleDriver'
                - '--spring.datasource.password=password'
                - '--spring.batch.job.names=Job1'
              env:
                - name: SPRING_CLOUD_APPLICATION_GUID
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.uid
              resources:
                limits:
                  cpu: '1'
                  memory: 1Gi
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: Never
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          serviceAccountName: default
          serviceAccount: default
          securityContext: {}
          schedulerName: default-scheduler
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
status: {}
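
For comparison, if the deployer honored these settings, the two values would land at different levels of the CronJob spec: concurrencyPolicy is a field of spec itself, while backoffLimit belongs to the Job template under spec.jobTemplate.spec. (maxCrashLoopBackOffRestarts appears to be a deployer-side status-check setting with no direct Kubernetes counterpart, so it would not show up in the manifest at all.) A minimal sketch of the expected manifest, per the batch/v1beta1 API; note the Kubernetes defaults are concurrencyPolicy: Allow and backoffLimit: 6, which is consistent with the six restarts observed above:

kind: CronJob
apiVersion: batch/v1beta1
spec:
  schedule: '*/10 * * * *'
  concurrencyPolicy: Forbid   # CronJob-level field; default is Allow
  jobTemplate:
    spec:
      backoffLimit: 1         # Job-level field; default is 6
      template:
        spec:
          restartPolicy: Never

Until the deployer passes these through, a generated CronJob can be patched by hand as an interim workaround (my own sketch, not something SCDF provides):

kubectl patch cronjob batchjob1 -n dev-batch --type merge \
  -p '{"spec":{"concurrencyPolicy":"Forbid","jobTemplate":{"spec":{"backoffLimit":1}}}}'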

Kindly share your thoughts. @ilayaperumalg @sabbyanandan

ilayaperumalg (Contributor) commented

Hi @Srkanna,

This looks like a bug. Moving this to Spring Cloud Deployer Kubernetes. Thanks for reporting.
