Hi everyone,
I have a backend Node application using 1 queue. Every time a REST request comes in, I create 200 jobs and do some processing on each job. After the execution is done, the results of all jobs are logged into an Excel file.
I deployed this application to OpenShift (Kubernetes under the hood). It works perfectly with 1 pod. However, if I increase the number of pods beyond 1, Kubernetes starts distributing the jobs not only to the first pod but also to the other pods. I am wondering if anyone has faced the same issue and knows how to force all of the jobs to be executed in only 1 pod using Bull? Or, if the execution runs in multiple pods, what is the proper way to catch the completed event once all jobs are done across multiple pods?
I have attached my sample code below.
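For context on the second question: since Bull workers in every pod consume from the same Redis-backed queue, fan-out across pods is the default behaviour. One common pattern is to have each pod listen for the `global:completed` event and atomically increment a shared counter in Redis, so that whichever pod records the final completion writes the Excel file exactly once. Below is a minimal sketch of that counting logic only; the names (`FakeRedis`, `onJobCompleted`, `batchId`) are hypothetical, and a real deployment would replace `FakeRedis` with Redis `INCR` on the shared connection:

```javascript
// Tiny in-memory stand-in for Redis's atomic INCR, for illustration only.
class FakeRedis {
  constructor() { this.counters = new Map(); }
  // Mimics Redis INCR: increments the key and returns the new value.
  incr(key) {
    const next = (this.counters.get(key) || 0) + 1;
    this.counters.set(key, next);
    return next;
  }
}

const BATCH_SIZE = 200; // jobs created per REST request

// Called from each pod's 'global:completed' handler.
// Returns true only for the pod that completes the last job of the
// batch, so exactly one pod goes on to write the Excel file.
function onJobCompleted(redis, batchId) {
  const done = redis.incr(`batch:${batchId}:done`);
  return done === BATCH_SIZE;
}

// Simulate 200 completions spread across pods.
const redis = new FakeRedis();
let excelWrites = 0;
for (let i = 0; i < BATCH_SIZE; i++) {
  if (onJobCompleted(redis, 'req-42')) excelWrites++;
}
console.log(excelWrites); // 1
```

Because Redis `INCR` is atomic, this works no matter how the scheduler spreads the 200 jobs across pods; no pod needs to know which other pods exist.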
Looking forward to some help.
Chan.