
Status of remote IPP jobs cannot be discovered #2732

Closed

michaelrsweet opened this issue Mar 3, 2008 · 10 comments
@michaelrsweet michaelrsweet commented Mar 3, 2008

Version: -feature
CUPS.org User: twaugh.redhat

Currently it is impossible to know whether a job sent to a remote IPP queue is actually still in progress, or if it failed.

The solution is for the IPP backend to act as a proxy for remote IPP jobs. It needs to maintain a small database of local job IDs to remote job URIs, and to add the following actions:

  1. While watching the remote job, if the remote job stops, fetch the remote job-state-reasons and set them locally using an 'ATTR:' stderr output line, and then exit with CUPS_BACKEND_STOP.
  2. When the backend is started, if the job ID is already in the database, try to restart the remote job (and re-submit the job if that fails) and then continue to watch the remote job as normal.

This way, it will actually be possible to monitor the state of a remote job.

(See http://cyberelk.net/tim/2008/02/29/print-job-failure-alerts/ for one example of why this is needed: but otherwise just look at http://localhost:631/jobs/ when a remote job has failed.)

@michaelrsweet michaelrsweet commented Mar 3, 2008

CUPS.org User: mike

Moving to -feature. This will not be implemented in 1.3.x, and probably not in 1.4.x. Furthermore, the IPP backend is used for real printers, too, so we may need to revisit whether we continue using the same backend for both IPP printers and shared printing.

@michaelrsweet michaelrsweet commented Mar 3, 2008

CUPS.org User: twaugh.redhat

I'm not sure why you are making the distinction between remote CUPS queues and remote IPP queues on real printers -- the same problem could be exhibited on either, and fixed in the same way for both.

@michaelrsweet michaelrsweet commented Mar 3, 2008

CUPS.org User: mike

No, actually most IPP printers do not keep track of old jobs, and particularly if the printer crashes the old job ID is lost.

In addition, we have a lot of "hacks" in place to work around bugs in various vendors' IPP implementations. Changing the IPP backend so significantly will require a lot of testing (== lots of time) and careful design, not just bolting on a job ID cache and a quick exit to stop the queue, which would then require a manual restart...

@michaelrsweet michaelrsweet commented Mar 4, 2008

CUPS.org User: jsmeix.suse

If I understand it correctly, it means that a running backend is needed to get the state of a remote IPP job?

Does this also mean that the backend must keep on running until the remote job is finally processed (to notice errors while processing)?

How should this work if there are many other jobs in the remote queue?

Additionally, the backend should exit by default as soon as possible to allow maximum throughput.

I think it is not the backend but only the cupsd which should maintain a database of sent IPP jobs so that "lpstat -W completed" can show job info.

Currently "lpstat -W completed" shows only the info regarding the local system.

What is actually missing is something like a "traceroute" functionality (i.e. a "tracejob" functionality) which queries the remote system about the remote job info.

Therefore cupsd must also keep the remote job IDs and the remote host so that it can generate a matching query for the remote host.

By default no "tracejob" should be done, because it could take arbitrary time (think about several subsequent remote cupsds which forward the job until it reaches its final destination) and arbitrary stuff like network timeouts could delay the response.

@michaelrsweet michaelrsweet commented Mar 4, 2008

CUPS.org User: mike

> If I understand it correctly, it means that a running backend is needed to get the state of a remote IPP job?

Yes

> Does this also mean that the backend must keep on running until the remote job is finally processed (to notice errors while processing)?

Yes

> How should this work if there are many other jobs in the remote queue?

Each client only sends one job at a time to a queue.

> Additionally, the backend should exit by default as soon as possible to allow maximum throughput.

No! Doing so would mean either that a user cannot move a pending job to another queue, or that we have to add a lot of complicated and potentially unreliable caching code, with a backend that then has to support cancel functionality without queuing.

The current implementation prevents a client from hogging a printer for more than a single job and also supports traditional DNS load balancing (where a single hostname maps to multiple IPs).

> I think it is not the backend but only the cupsd which should maintain a database of sent IPP jobs so that "lpstat -W completed" can show job info.

Adding this to the scheduler might be possible, but at this point I am not prepared to add that layer of complexity to the scheduler when there will be little apparent benefit and a significant change in behavior.

> Currently "lpstat -W completed" shows only the info regarding the local system.

Right, this is by design.

> What is actually missing is something like a "traceroute" functionality (i.e. a "tracejob" functionality) which queries the remote system about the remote job info.

"lpstat -p" actually does this.

> Therefore cupsd must also keep the remote job IDs and the remote host so that it can generate a matching query for the remote host.

No. cupsd shouldn't be doing the remote lookups for the client - that introduces a lot of performance issues as well as adding complexity to cupsd.

cupsd is meant as a lightweight server - all long-term operations are farmed out to helper programs. This helps to keep cupsd small and simple, and makes the implementation of the long-term operations simpler as well.

> By default no "tracejob" should be done, because it could take arbitrary time (think about several subsequent remote cupsds which forward the job until it reaches its final destination) and arbitrary stuff like network timeouts could delay the response.

Right, which is why we don't do this in the scheduler - it would be a recipe for an easy denial-of-service attack. Even helper programs can be problematic (too many helper apps running...), which is why I say any "tracing" functionality needs to happen in the client apps (lpstat, lpq) where any DoS is limited to that program and not the whole printing system.

@michaelrsweet michaelrsweet commented Mar 4, 2008

CUPS.org User: jsmeix.suse

Sorry for causing confusion.

I fully agree that "tracing" functionality needs to happen in the client apps (lpstat, lpq).

The "it" in "so that it can generate a matching query for the remote host" was wrong.

Actually I didn't mean that the cupsd itself generates the query, but that "lpstat" can generate the query based upon the info which is stored by the cupsd (usually in a /var/spool/cups/c* file).

@michaelrsweet michaelrsweet commented Jul 28, 2009

CUPS.org User: twaugh.redhat

There is no way for clients to discover the job-id or job-uri of the remote job corresponding to a given local job-id or job-uri.

This means it is impossible for a client to do what you suggest.

The original problem remains: if the remote job's state is IPP_JOB_STOPPED, this is not reflected in the job-state of the local job, so the user cannot know that some action needs to be taken.

@michaelrsweet michaelrsweet commented Jul 28, 2009

CUPS.org User: mike

Tim - the job-uuid attribute can be used to track jobs across servers. Currently there is no way to do a Get-Job-Attributes request with job-uuid, but you can use Get-Jobs and then look for the corresponding job-uuid value.

Tentatively assigning to 1.5 milestone, but that may change...

@michaelrsweet michaelrsweet commented May 6, 2011

CUPS.org User: mike

Not for 1.5.

@michaelrsweet michaelrsweet commented Jan 13, 2012

CUPS.org User: mike

Sorry, we have decided this issue will not be addressed in a CUPS release.
