
reusing redisClient fails to create jobs #119

Closed
pbreah opened this issue May 20, 2018 · 10 comments

Comments

@pbreah

pbreah commented May 20, 2018

Before anything: thanks for such amazing queue lib.

I have this issue: when I reference an existing redisClient to the queue redis configuration object like so:

const Queue = require('bee-queue');
const redis = require('redis');
const redisOptions = {
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: process.env.REDIS_PORT || 6379,
  password: 'my-password',
  db: 0,
  options: {},
  retry_strategy(options) {
    return Math.min(options.attempt * 100, 3000);
  },
};

const redisClient = redis.createClient(redisOptions);
const queue = new Queue('my-queue', {
    redis: redisClient,
    isWorker: false,
    activateDelayedJobs: true,
    removeOnSuccess: true,
  }).on('ready', () => {
    console.log('ready');
  });

queue.createJob({
  'data': { 'here': 1 }
  })
  .backoff('fixed', 300000)
  .retries(3)
  .setId('my-id')
  .save();
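(Side note on the retry_strategy above: it implements linear backoff capped at 3 seconds. A standalone sketch of the delays it produces, assuming node_redis's behavior of passing an options object whose `attempt` field counts reconnection attempts:)

```javascript
// Standalone sketch of the retry_strategy from the snippet above.
// node_redis calls it with an options object; `attempt` counts reconnection
// attempts, and the returned number is the delay in ms before the next try.
function retryStrategy(options) {
  // Linear backoff: 100 ms per attempt, capped at 3000 ms.
  return Math.min(options.attempt * 100, 3000);
}

console.log(retryStrategy({ attempt: 1 }));  // 100
console.log(retryStrategy({ attempt: 10 })); // 1000
console.log(retryStrategy({ attempt: 50 })); // capped at 3000
```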

The queue only saves jobs to Redis when I create a brand-new connection from the queue's own connection parameters; passing in my existing redisClient doesn't work. The client itself is fine, since I can read data from Redis with it normally. It's only the queue that fails to save jobs when given my own redisClient.

Using:

{
  "dependencies": {
    "bee-queue": "^1.2.2",
    "redis": "^2.8.0"
  }
}
@LewisJEllis
Member

LewisJEllis commented May 22, 2018

Thanks for the report and snippet. I investigated a bit and I am not able to reproduce. Can you try the below snippet and let me know how it behaves:

const Queue = require('bee-queue');
const redis = require('redis');

const redisClient = redis.createClient();

const producer1 = new Queue('my-queue', {
  redis: redisClient,
  isWorker: false,
});

const producer2 = new Queue('my-queue', {
  redis: redisClient,
  isWorker: false,
});

setInterval(() => {
  producer1.createJob({ hello: 'world1' }).save();
  producer2.createJob({ hello: 'world2' }).save();
}, 1000);

const worker = new Queue('my-queue', { redis: redisClient });

worker.process(async (job) => {
  console.log('processed job', job.id, job.data);
});

This works for me as I would expect and after 2.01 seconds has printed:

processed job 1 { hello: 'world1' }
processed job 2 { hello: 'world2' }
processed job 3 { hello: 'world1' }
processed job 4 { hello: 'world2' }

I know in your sample you were using various options on both the Redis client and the queue/job, but those should be largely inconsequential here so I pared it down to just the core of sharing connections and enqueuing/processing jobs. If you find that some particular option or pattern causes trouble, let me know and I can take a closer look.

@skeggse
Member

skeggse commented Jul 10, 2018

Closing due to inactivity. Feel free to reopen.

@alexisohayon

Hello @skeggse! First of all, thanks for the good work <3
I'm having a similar issue: the server lost its connection to Redis, which caused a crash and reboot, and in the meantime some events that should have been pushed into the queue were lost. I tried the retry_strategy, and when I simulate a disconnect/reconnect to Redis, I get
"ReplyError: NOSCRIPT No matching script. Please use EVAL." when new events are pushed to the queue. I read about this in other issues and it should have been fixed, so I'm not sure why I'm still getting it?

thanks!

@skeggse
Member

skeggse commented Aug 13, 2018

Sounds like you have the makings of a test-case? Would you be open to making a PR with a test that demonstrates this functionality? I can take a look, but it'd be much easier if I can just npm test and see the test failure :)

@alexisohayon

@skeggse Yeah, I understand; it's not really practical for me either, since you have to manually restart your local Redis to reproduce the error...
But I think I found a workaround (or maybe I should have done it this way from the beginning...):

(from an external module)

const Queue = require('bee-queue');
// redisOptions is defined elsewhere in the module

module.exports = {
  createJob: async (data) => {
    // Note: a new queue (and a new Redis connection) is created on every call
    const jobQueue = new Queue('queue-name', {
      redis: redisOptions,
    });
    await jobQueue.ready();
    return jobQueue.createJob(data).timeout(10000).retries(2).save();
  },
};

(from the endpoint that needs to add the job)

const { createJob } = require('modules/queues')

await createJob(data)

So like this, it seems to be working but I'm not sure if it's a good idea to "recreate" the queue at each call. Wdyt?

Thanks again :)

@skeggse
Member

skeggse commented Aug 13, 2018

You'll definitely want to avoid re-creating the queue. This:

you have to manually restart your redis in local environment to reproduce the error...

Is new information, and makes sense: SCRIPT LOAD does not put scripts into any persistent storage on Redis' side. When Redis restarts, the scripts won't be loaded anymore, and bee-queue doesn't know that the server restarted.

@LewisJEllis: I know we're moving to ioredis; would it be easy to listen to a reconnect event, check the Server/run_id value (or check for scripts being loaded), and reload the scripts if necessary? (I'm mostly not familiar with ioredis' events)

@alexisohayon

I see :/ Any way to reload the script directly in the retry_strategy, when redis reconnects?

@skeggse
Member

skeggse commented Aug 13, 2018

You could maybe do something like

const bqScripts = require('bee-queue/lib/lua');

// inside some event handler
await bqScripts.buildCache(redisClient);

This will check whether each script exists, and if it doesn't, it'll load it.
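(The check-then-reload behavior can be sketched without a live server. Below, `ensureScripts` mimics what `buildCache` does conceptually, and `makeFakeClient` is a stub in place of a real node_redis client; both names are illustrative, not bee-queue's actual API:)

```javascript
// Illustrative sketch of check-then-reload. For each script, ask the server
// (SCRIPT EXISTS) whether it still has the cached copy, and re-register it
// (SCRIPT LOAD) if not. `makeFakeClient` stubs the server so this runs
// without Redis; neither function is bee-queue's real API.
async function ensureScripts(client, scripts) {
  for (const [name, source] of Object.entries(scripts)) {
    const sha = client.shas[name];
    const [exists] = sha ? await client.scriptExists(sha) : [0];
    if (!exists) {
      client.shas[name] = await client.scriptLoad(source);
    }
  }
}

function makeFakeClient() {
  const loaded = new Set(); // the server-side script cache
  return {
    shas: {}, // client-side record of loaded script SHAs
    async scriptExists(sha) { return [loaded.has(sha) ? 1 : 0]; },
    async scriptLoad(source) {
      const sha = 'sha:' + source; // stand-in for a real SHA-1 digest
      loaded.add(sha);
      return sha;
    },
    restart() { loaded.clear(); }, // simulates a Redis restart
  };
}
```

After a simulated restart, a second `ensureScripts` call finds the script missing and reloads it, which is exactly the situation behind the NOSCRIPT error above.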

@alexisohayon

Nice! Gonna try this right now :)

@alexisohayon

Alright, so it's now working well after a reconnect, but I'm still getting errors for jobs that were added while the connection was down... I'm not sure what to do in that case.
