Version: 1.2-current
CUPS.org User: jlovell

NextJobId is incorrect after a cupsd crash.

In the case where the cache is out of date maybe NextJobId should come from load_request_root() rather than load_next_job_id()...

Steps to reproduce:

Starting with a clean system:
killall cupsd
rm -f /var/spool/cups/{c,d}* /var/tmp/cups/job.cache
cupsd

Queue job 1:
$ lp -H hold main.c
request id is Lexmark_Z53-1 (1 file(s))

Cause the job cache to be written:
killall cupsd
cupsd

Queue job 2:
$ lp -H hold job.c
request id is Lexmark_Z53-2 (1 file(s))

SIGKILL cupsd & restart it (with an out-of-date cache):
killall -9 cupsd
cupsd

Queue what should be job 3:
$ lp -H hold printers.c
request id is Lexmark_Z53-2 (1 file(s))

See two job-2 entries:
$ lpq
Lexmark_Z53 is not ready
Rank    Owner   Job     File(s)         Total Size
1st     jlovell 1       main.c          57344 bytes
2nd     jlovell 2       job.c           89088 bytes
3rd     jlovell 2       printers.c      88064 bytes

Confirm the error in the spool dir:
ls -l /var/spool/cups/
total 304
-rw-------   1 root  lp    646 Apr 21 14:16 c00001
-rw-------   1 root  lp    654 Apr 21 14:20 c00002
-rw-r-----   1 root  lp  56944 Apr 21 14:16 d00001-001
-rw-r-----   1 root  lp  87196 Apr 21 14:20 d00002-001
drwxrwx--T   2 root  lp     68 Apr 21 13:07 tmp

FWIW this comes from:
rdar://problem/4501801 scheduled jobs are always given job ID "job 1"

Thanks!
CUPS.org User: mike
How about the dates on the spool directory and cache file?
ls -ld /var/spool/cups
ls -l /var/cache/cups/job.cache
CUPS.org User: jlovell
ls shows them as equal but gdb reveals the dir is newer:

drwx--x---  11 root  lp  374 Apr 21 22:07 /var/spool/cups
-rw-r--r--   1 root  lp  229 Apr 21 22:07 /var/tmp/cups/job.cache

(gdb) p dirinfo.st_mtimespec.tv_sec - fileinfo.st_mtimespec.tv_sec
$4 = 12
(gdb) bt
#0  cupsdLoadAllJobs () at job.c:812
#1  0x0002674c in cupsdReadConfiguration () at conf.c:1045
#2  0x000031d8 in main (argc=2, argv=0xbffff88c) at main.c:402
OK, I found the problem - we were using NextJobId from the job.cache file, even if it was less than the highest job ID already seen.
I've added a check in load_next_job_id() for this...