Batch API #261

Merged
merged 104 commits from batch-api into master on Jan 26, 2016
Conversation

dgaubert
Contributor

Daniel García Aubert and others added 30 commits December 3, 2015 15:00
…job queue. Limited one job running at the same time per queue instead of using a job's counter to limit it.
…pen with any job. Implemented a test without a silly timeout
}

console.log('Exit gracefully');
process.exit(0);
Contributor

I just realised this. Not a big fan of delegating the control of SIGTERM to the batch entity.

I would prefer to have app.js know about SIGTERM and call batch.drain(), waiting for the callback before calling process.exit. Batch shouldn't know about process, IMO.

What do you think?

Contributor Author

Agree! I'm going to move it to app.js right away!
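
A minimal sketch of the agreed approach, assuming batch is the instance exposing drain(callback) to the main app module (the handler placement and the error branch are illustrative, not the exact diff):

    // app.js: keep process-level concerns out of the batch entity
    process.on('SIGTERM', function () {
        batch.drain(function (err) {
            if (err) {
                console.log('Exit with error while draining batch queue');
                return process.exit(1);
            }
            console.log('Exit gracefully');
            process.exit(0);
        });
    });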

…ble from main app module to expose drain mechanism

var queue = self.jobQueuePool.getQueue(host);

this.jobCanceller.drain(job_id, function (err) {
Contributor

One thing that worries me about this approach is having uninterruptible queries.

I mean, I think the logic here is good: we try to cancel the query:

  1. If we succeed in cancelling it, we re-enqueue it, mark it as pending, and wait until it is consumed again.
  2. If we fail to cancel it, we keep it running and we don't change the status.

However, in the second case the query might end at some point but the job will be kept in running (?) status.

Contributor Author

We'll need one more status: unknown, to indicate to the user that it wasn't possible to know what happened with the job because cancelling it failed while draining. In this scenario, the user will have to check whether the job finished successfully or not.

Do you have a better approach?

Contributor

I don't have a better approach. I agree with you on having unknown for these scenarios as the best alternative for now 👍
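
To make the agreed flow concrete, here is a sketch of the drain path under discussion. Only jobCanceller.drain appears in the diff; jobBackend.setStatus, jobQueue.enqueue, and the status names are hypothetical placeholders based on this thread:

    // Hypothetical sketch of draining one running job.
    Batch.prototype.drainJob = function (job_id, callback) {
        var self = this;

        this.jobCanceller.drain(job_id, function (err) {
            if (err) {
                // Cancel failed: the query keeps running unattended, so we
                // cannot tell how it ends. Mark the job as unknown so the
                // user knows to review whether it actually completed.
                return self.jobBackend.setStatus(job_id, 'unknown', callback);
            }

            // Cancel succeeded: mark the job as pending and re-enqueue it
            // so it is consumed again once the service is back.
            self.jobBackend.setStatus(job_id, 'pending', function (err) {
                if (err) {
                    return callback(err);
                }
                self.jobQueue.enqueue(job_id, callback);
            });
        });
    };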

@rochoa
Contributor

rochoa commented Jan 26, 2016

:shipit: !

dgaubert added a commit that referenced this pull request Jan 26, 2016
dgaubert merged commit f6e3cdf into master Jan 26, 2016
rochoa deleted the batch-api branch January 26, 2016 13:43
rochoa added a commit that referenced this pull request Jan 29, 2016
New features:

 * Set `Last-Modified` header based on affected tables (#101)
 * Batch API (#261):
   - New endpoint to create, read, update and delete long-running queries (jobs).
   - Batch service to process jobs.
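
As a usage illustration of the new endpoint, a hedged sketch follows; the /api/v2/sql/job path, the api_key parameter, the port, and the response shape are assumptions, not details confirmed in this thread:

    // Hypothetical sketch: create a long-running job via the Batch API.
    var http = require('http');

    var payload = JSON.stringify({ query: 'UPDATE big_table SET processed = true' });

    var req = http.request({
        host: 'localhost',
        port: 8080,                                   // assumed port
        method: 'POST',
        path: '/api/v2/sql/job?api_key=YOUR_API_KEY', // assumed endpoint
        headers: { 'Content-Type': 'application/json' }
    }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            // The response is expected to include the job_id and its status.
            console.log(body);
        });
    });

    req.write(payload);
    req.end();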