
This is the schema of how Kickq stores data in Redis. Each header represents a key family (table). The default namespace for Kickq is kickq; you can change it using the redisNamespace option.

kickq:id

Type: String

A single key used for generating unique job ids.
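
A minimal redis-cli sketch of how this key could be used to hand out ids; the INCR-based approach is an assumption for illustration, not something this page documents:

redis > INCR "kickq:id"
(integer) 100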

kickq:job:[ job id ]

Type: Hash

The job item. Each item contains the following hash keys:

  • id string The job id.
  • name string The name of the job.
  • createTime number JS timestamp.
  • updateTime number JS timestamp.
  • state string The current state of the job.
  • itemData string The JSON-serialized job item data object (as defined in the JobItem constructor).
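
As an illustration, a job hash could be inspected with HGETALL. The values below, including the state name and itemData payload, are made up for the example:

redis > HGETALL "kickq:job:46"
 1) "id"
 2) "46"
 3) "name"
 4) "send emails"
 5) "createTime"
 6) "1368082800000"
 7) "updateTime"
 8) "1368082800000"
 9) "state"
10) "queued"
11) "itemData"
12) "{\"to\":\"user@example.com\"}"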

kickq:queue:[ job name ]

Type: List. Member: [ job id ].

Each Job Name has its own queue. Here is a sample data dump using redis-cli:

redis > LRANGE "kickq:queue:send emails" 0 -1
1) "46"
2) "68"
3) "79"
4) "99"

redis > LRANGE "kickq:queue:convert videos" 0 -1
1) "12"
2) "65"

These queues maintain FIFO processing of jobs on a per Job Name basis. The queues are job-state agnostic: a job may eventually get canceled or removed, and if that state-changing event occurs, no attempt is made to remove the job id from the queue. The state of each job in the queue is determined when the job is popped, as sketched below.
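
Because the queues are state agnostic, a consumer checks the job's state after popping an id. A redis-cli sketch, assuming jobs are pushed with RPUSH and popped with LPOP (the actual commands Kickq uses are not shown on this page):

redis > LPOP "kickq:queue:send emails"
"46"
redis > HGET "kickq:job:46" state
"canceled"

If the popped job turns out to be canceled or removed, the consumer simply discards the id and pops the next one.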

kickq:time-index

Type: Sorted Set. Score: creation timestamp. Member: [ job id ].

An index of job items by creation time.
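
For example, jobs created within a given time window could be fetched with ZRANGEBYSCORE; the timestamps below are placeholders:

redis > ZRANGEBYSCORE "kickq:time-index" 1368000000000 1368086400000 WITHSCORES
1) "46"
2) "1368082800000"
3) "68"
4) "1368082900000"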

kickq:state:[state]

Type: Set. Member: [ job id ].

An index of job items by state. Each state contains a set of job ids.
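
For example, all job ids in a given state could be listed with SMEMBERS, or a single job checked with SISMEMBER. The state name "processing" is a hypothetical example, not a list of the states Kickq actually defines:

redis > SMEMBERS "kickq:state:processing"
1) "46"
2) "79"
redis > SISMEMBER "kickq:state:processing" "99"
(integer) 0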

kickq:scheduled

Type: Sorted Set. Score: scheduled timestamp. Member: [ job id ].

All delayed, ghost and scheduled jobs are stored in this key. The score of each member is a JS timestamp representing the time at which the job has to move to the process queue.
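
A scheduler could find the jobs that are due by querying the sorted set up to the current timestamp; a redis-cli sketch of such a query, with a placeholder timestamp:

redis > ZRANGEBYSCORE "kickq:scheduled" -inf 1368086400000
1) "12"
2) "65"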

kickq:scheduled-purge

Type: Sorted Set. Score: scheduled timestamp. Member: [ job id ].

Where jobs go to die. All finished jobs line up here for purging.
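
Purging could then be a matter of collecting the ids whose purge time has passed and removing them; a redis-cli sketch assuming the job hash itself is deleted as part of the purge (this page does not specify exactly what gets cleaned up):

redis > ZRANGEBYSCORE "kickq:scheduled-purge" -inf 1368086400000
1) "99"
redis > ZREM "kickq:scheduled-purge" "99"
(integer) 1
redis > DEL "kickq:job:99"
(integer) 1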