'job.data' passed in to processor callback is sometimes empty. #2461
Comments
What's your maxmemory-policy configuration in Redis?
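For context on why the maintainer asks this: the Bull/BullMQ production guide (linked later in this thread) recommends `noeviction`, so Redis returns errors under memory pressure instead of silently evicting job keys. A sketch of the relevant `redis.conf` settings (the `maxmemory` value is an illustrative placeholder):

```
# redis.conf (sketch)
# noeviction makes Redis reject writes when memory is full,
# instead of evicting keys that may hold Bull job data
maxmemory 2gb
maxmemory-policy noeviction
```

The same values can be inspected or changed at runtime with `CONFIG GET maxmemory-policy` / `CONFIG SET maxmemory-policy noeviction`.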
I have never heard of this issue before, and I cannot see how this would be possible if you really are sending data in every event. Have you tried putting a log right before adding the job to BullMQ, to verify that the data really is always an object and not undefined?
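The logging check the maintainer suggests can be made strict with a small guard that rejects anything that isn't a plain, non-empty object before it reaches the queue. This is a hypothetical helper, not part of Bull's API; the name `assertNonEmptyData` and the usage line are illustrative.

```javascript
// Hypothetical guard: verify the payload really is a plain, non-empty
// object before handing it to Bull's queue.add.
function assertNonEmptyData(data) {
  if (data === null || typeof data !== 'object' || Array.isArray(data)) {
    throw new TypeError(`job data must be a plain object, got ${typeof data}`);
  }
  if (Object.keys(data).length === 0) {
    throw new TypeError('job data must not be empty');
  }
  return data;
}

// Usage sketch (commented out because it needs a live Redis and a Bull queue):
// await queue.add(assertNonEmptyData(await toBullJob(evt)));
```

If `job.data` still arrives empty after a guard like this passes on the producer side, the corruption is happening between `add` and the processor, which points at Redis rather than application code.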
Looking at our code, there seems to be no room where we could pass a non-object value to Bull's
@manast
I also have additional information that I thought was irrelevant to the issue, but I'll show it just in case. I didn't include it at first because I thought one queue instance could never be affected by other queue instances.
I don't see a reason why job.data should be empty, unless whatever comes back from toBullJob(evt) has nothing within its
NOTE: toBullJob is actually an async function (I've updated the original description)
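The note that `toBullJob` is async is worth dwelling on, because one common way to get an empty `{}` as job data is forgetting to `await` an async producer: Bull serializes job data with `JSON.stringify`, and a Promise has no enumerable own properties, so it stringifies as `"{}"`. This is speculation about a possible cause, not a confirmed diagnosis of the reporter's bug; `toBullJobExample` below is a hypothetical stand-in for their `toBullJob`.

```javascript
// Hypothetical stand-in for the reporter's async toBullJob.
async function toBullJobExample(evt) {
  return { id: evt.id, payload: evt.payload };
}

const evt = { id: 1, payload: 'x' };

// Forgetting await: a Promise is passed as data, and JSON.stringify
// of a Promise yields "{}", so the stored job data is an empty object.
console.log(JSON.stringify(toBullJobExample(evt))); // {}

// With await (or .then), the data round-trips intact.
toBullJobExample(evt).then((data) => {
  console.log(JSON.stringify(data)); // {"id":1,"payload":"x"}
});
```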
If that's the code, then I agree: it looks as if it should always be defined. But this is also quite straightforward in the Bull internals, so I don't know why it is happening to you. As mentioned, this is the first time we have heard about data being empty, so there must be something special in your setup.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm also getting this. I can confirm that data is being added after I add a job, but when it is picked up and processed, the data object is empty |
@hgeldenhuys it is quite unlikely that this is an issue with Bull. |
Hey @manast, where else do you think this could be? I'm using NestJS and processing bureau files. It all worked well for about 40k files, until I started processing 2GB files; inside Redis, even the data attribute is gone. The missing-lock error isn't accurate either, I think it masks the real issue, and in the queue interface it looks like 53 years elapsed on some jobs:

```typescript
// config
const settings = {
  lockDuration: 600_000 * 4,
  lockRenewTime: 600_000 * 2,
  stalledInterval: 0,
  maxStalledCount: 0,
};

const redis = {
  host: constants().REDIS_HOST,
  port: constants().REDIS_PORT,
  password: constants().REDIS_PASSWORD,
  username: constants().REDIS_USERNAME,
};

@Module({
  imports: [
    BullModule.forRoot({ redis, settings }),
    BullModule.registerQueue({ name: 'level0-monthly', redis, settings }),
    // ...

const lineJobs: Level0Jobs = {
  id: summary.id,
  batch,
  srn: summary.srn,
  test: summary.test,
  total: summary.lines,
  filename: summary.filename,
  frequency: summary.frequency,
  type: 'data',
  lines,
};
const job = await queue.add(lineJobs, { removeOnComplete: this.BULL_REMOVE_ON_COMPLETE });
```
This is speculative, but it might be related to Redis: at some point it becomes unavailable for a moment while swapping memory to disk, and all hell breaks loose. Bull doesn't get the jobs back properly, but overlays the failure attributes on top of an empty object. Maybe Bull should check that the timestamp is present before assuming the job's object is okay?
@hgeldenhuys it looks like the job was removed while still being processed. Data can be removed by Redis if you do not have the correct maxmemory policy configured... I am not sure what you mean by 2 GB files, but I hope you are not sending 2 GB as job data; that is not a good idea for many reasons. https://docs.bullmq.io/guide/going-to-production#max-memory-policy
No, it was a 2 GB file split into lines of data; the implication was just that I was pushing a lot more data through. I'll check out the max-memory policy, but the missing timestamp results in a 1970 default date when you do the stall calculation, and then a false missing-lock error is raised, which is why many people think increasing the lock duration variables will fix it when it doesn't. This should be handled differently, as it puts people on the wrong troubleshooting path, unless it is documented. I don't think the jobs are being deleted, because there are still a lot of intact jobs waiting, but as soon as they get processed, the missing-lock logic kicks in and hides the real issue (which is probably due to Redis's memory swap).
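The guard suggested above can be sketched as follows. This is not Bull's actual internal logic, just an illustration of the idea: a job whose timestamp is missing or zero (deserializing to the 1970 epoch) has most likely lost its data in Redis, so it should be flagged as corrupt instead of feeding a nonsensical elapsed time into the stall/lock calculation. The names `looksCorrupt` and `elapsedMs` are hypothetical.

```javascript
// Hypothetical check: a missing or zero timestamp means the job hash was
// probably evicted by Redis, so stall math would yield a ~53-year elapsed time.
function looksCorrupt(job) {
  const ts = Number(job.timestamp);
  return !Number.isFinite(ts) || ts <= 0;
}

// Hypothetical elapsed-time helper that surfaces the real problem
// instead of raising a misleading missing-lock error.
function elapsedMs(job, now = Date.now()) {
  if (looksCorrupt(job)) {
    throw new Error(`job ${job.id}: missing timestamp, data likely evicted by Redis`);
  }
  return now - job.timestamp;
}
```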
It was Redis. I've set maxmemory higher and changed the maxmemory-policy, and I no longer get the missing-lock issue.
Description
As in the title, `job.data` passed in to the processor callback is sometimes an empty object, even though our code ALWAYS passes in an object with values. This happens occasionally in the production environment, not in the local environment. Here's a high-level overview of the code:

Since we always pass a non-empty object with values to `queue.add`, we expect that `processorCb` is always passed THAT non-empty object. Does anyone have an idea of the cause of this problem?

Minimal, Working Test code to reproduce the issue
(An easy-to-reproduce test case will dramatically decrease the resolution time.)
Bull version
3.22.12
Additional information
We have 3 Node instances in the production environment. All of them access a single Redis instance.