Maximum of 100 jobs can be in processing state #242

Open
ViRaL95 opened this issue Dec 17, 2019 · 0 comments

ViRaL95 commented Dec 17, 2019

I get the following error when queueing an async stats job. I make the API call below a few thousand times; each call contains a chunk of 20 line items and the metrics (engagement, billing, video, media, web_conversion, mobile_conversion, life_time_value_mobile_conversion). This is done for 350 accounts.

I have never encountered this error before; previously I was able to queue the same number of jobs successfully. So my question is: why is this occurring, and what are the solutions?

Error:

twitter_ads.error.BadRequest: <BadRequest object at 0x7fcd4ab284c8 code=400 details=[{'code': 'TOO_MANY_JOBS', 'message': 'A maximum of 100 jobs can be in processing state'}]>

Code:

queued_job = self.entity_type.queue_async_stats_job(
    account, chunk, self.metrics,
    start_time=self.start_date,
    end_time=self.end_date,
    granularity=GRANULARITY.DAY)

My current workaround is adding the 400 status code to the retry_on_status option, like this:

    client = Client(
                consumer_key=params["CONSUMER_KEY"],
                consumer_secret=params["CONSUMER_KEY_SECRET"],
                access_token=params["ACCESS_TOKEN"],
                access_token_secret=params["ACCESS_TOKEN_SECRET"],
                options={
                    'handle_rate_limit': True,
                    'retry_max': 3,
                    'retry_delay': 5000,
                    'retry_on_status': [400, 500, 503, 504]
                })
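Retrying on every 400 is a blunt instrument, since it will also retry genuinely malformed requests. A more targeted alternative is to inspect the error details and retry only when the API reports the `TOO_MANY_JOBS` code. Below is a minimal sketch of that idea; the `BadRequest` class here is a stand-in for `twitter_ads.error.BadRequest` (so the snippet is self-contained), and `queue_with_backoff` / `is_too_many_jobs` are hypothetical helper names, not part of the SDK.

```python
import time


class BadRequest(Exception):
    """Stand-in for twitter_ads.error.BadRequest; carries the API error details."""
    def __init__(self, details):
        super().__init__(str(details))
        self.details = details


def is_too_many_jobs(exc):
    # The API reports the job cap as code 'TOO_MANY_JOBS' in the error details.
    return any(d.get('code') == 'TOO_MANY_JOBS'
               for d in (getattr(exc, 'details', None) or []))


def queue_with_backoff(queue_fn, retries=5, delay=5.0):
    """Call queue_fn, retrying with a fixed delay only when the job cap is hit.

    Any other BadRequest (or the cap error on the final attempt) is re-raised.
    """
    for attempt in range(retries):
        try:
            return queue_fn()
        except BadRequest as exc:
            if not is_too_many_jobs(exc) or attempt == retries - 1:
                raise
            time.sleep(delay)
```

In real code, `queue_fn` would be a closure around the `queue_async_stats_job` call shown above.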

I assume the issue is occurring because there are too many jobs on the queue. Perhaps this error didn't occur before because, while new jobs were being queued, most of the previously queued jobs had already finished? If that is the case, is there any way to check the number of jobs currently on the queue? I couldn't find anything in the source code.
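I couldn't find a queue-depth counter in the SDK either, but since the client submits every job itself, it can track how many are in flight and stay under the 100-job cap. Here is a minimal sketch of that approach; `submit` and `check_done` are hypothetical stand-ins for the SDK's queue-and-poll calls (e.g. `queue_async_stats_job` and checking `async_stats_job_result`), and all names are my own:

```python
import collections
import time

MAX_IN_FLIGHT = 100  # the API's processing-state cap


def drain_completed(in_flight, check_done):
    """Remove finished job ids from the in-flight set."""
    for job_id in [j for j in in_flight if check_done(j)]:
        in_flight.discard(job_id)


def queue_all(chunks, submit, check_done, poll_interval=5.0):
    """Submit every chunk while keeping at most MAX_IN_FLIGHT jobs processing.

    submit(chunk) -> job_id; check_done(job_id) -> bool.
    Returns the job ids in submission order.
    """
    in_flight = set()
    job_ids = []
    pending = collections.deque(chunks)
    while pending:
        drain_completed(in_flight, check_done)
        if len(in_flight) < MAX_IN_FLIGHT:
            job_id = submit(pending.popleft())
            in_flight.add(job_id)
            job_ids.append(job_id)
        else:
            # Cap reached: wait before polling for completions again.
            time.sleep(poll_interval)
    return job_ids
```

This never has more than 100 jobs outstanding, so the `TOO_MANY_JOBS` error should not occur in the first place, at the cost of the client doing its own polling.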
