Hangfire 2.0 roadmap #929

odinserj opened this Issue Jul 6, 2017 · 0 comments

odinserj commented Jul 6, 2017

Programming Model

In short: a generalization of background jobs, with more Task-like semantics.

  • Pending, Completed, Faulted, Canceled states for regular background jobs.
  • Result of an antecedent job is available in continuations.
  • Continuations can be run synchronously, i.e. on the same worker without any storage call.
  • Recurring jobs (and batches; the name will be changed) are regular jobs with special processing.
  • Continuously-running processes are also exposed as regular jobs.
  • Support for Job.WaitAny and Job.WaitAll background job types.
  • Queues can be used to isolate different code bases without creating additional storage.
  • Internals know nothing about user types, to allow shared monitoring.
  • Full async/await programming model support for background jobs via dedicated threads.
  • IoC container scope starts much earlier in the processing pipeline.
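The Task-like semantics in the first three bullets can be illustrated with a small conceptual sketch. This is a toy model only; the names `Job` and `JobState` are illustrative and are not the proposed Hangfire 2.0 API:

```python
from enum import Enum

class JobState(Enum):
    # The four Task-like states proposed for regular background jobs
    PENDING = "Pending"
    COMPLETED = "Completed"
    FAULTED = "Faulted"
    CANCELED = "Canceled"

class Job:
    """Toy model of the proposed semantics, not Hangfire's actual API."""
    def __init__(self, func):
        self.func = func
        self.state = JobState.PENDING
        self.result = None
        self.continuations = []  # (continuation job, run synchronously?)

    def continue_with(self, func, synchronous=False):
        # The continuation receives the antecedent's result as its argument.
        job = Job(func)
        self.continuations.append((job, synchronous))
        return job

    def run(self, *args):
        try:
            self.result = self.func(*args)
            self.state = JobState.COMPLETED
        except Exception as exc:
            self.result = exc
            self.state = JobState.FAULTED
            return
        for job, synchronous in self.continuations:
            if synchronous:
                # Synchronous continuation: same worker, no storage call.
                job.run(self.result)
            # An asynchronous continuation would instead be persisted
            # and picked up later by any worker.

fetch = Job(lambda: 21)
double = fetch.continue_with(lambda r: r * 2, synchronous=True)
fetch.run()
print(double.state, double.result)  # JobState.COMPLETED 42
```

The key differences from Hangfire 1.x are visible here: jobs carry a result, continuations can consume that result, and a continuation may run on the same worker without a storage round-trip.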

Benefits

  • Even simpler understanding of background jobs due to well-known Task-like states.
  • Better composition for jobs with support for results, ability to use map-reduce model.
  • Simplified continuously-running processes, without the need for distributed locks.
  • Support for microservices architecture by queue-based isolation.
  • Better performance for recurring jobs, without any polling iterations.
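The map-reduce composition mentioned above follows from the WaitAll-style job type: fan out one job per input, then run a reduce continuation once all antecedents complete. A minimal sketch, using a thread pool to stand in for Hangfire workers (the `map_reduce` helper is hypothetical, not a proposed API):

```python
from concurrent.futures import ThreadPoolExecutor

def map_reduce(items, map_job, reduce_job, workers=4):
    # Fan out one "map" job per item; the "reduce" continuation fires
    # only after a WaitAll-style barrier over all antecedent jobs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(map_job, items))  # blocks until all complete
    return reduce_job(mapped)

total = map_reduce([1, 2, 3, 4], lambda x: x * x, sum)
print(total)  # 30
```

This composition is only possible once jobs have results that continuations can consume, which is why the two features appear together in the roadmap.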

Storage Model

In short: support for coarse-grained operations, idempotent writes to support eventually-consistent storages.

  1. The two core storage abstractions are message queues and background jobs; all other types are removed.
  2. Storage transactions are exposed directly to user code to support atomic and batched operations.
  3. All storage operations are coarse-grained to support batch processing.
  4. Each command in a storage transaction is idempotent to support eventually-consistent storages.
  5. Storages that support automatic sharding can use it by implementing a custom transaction log.
  6. Built-in support for time-based scheduling in message queues for faster delayed jobs.
  7. Storage abstractions know nothing about user types; data is serialized before being passed to the storage API.
  8. Full support for asynchronous I/O for storages in every query.
  9. Binary payloads with variable-length fields to reduce payload size.
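Point 4 can be made concrete: an idempotent command produces the same state whether the transaction log is applied once or replayed, which is what lets an eventually-consistent storage redeliver writes safely. A hypothetical sketch (the command names and key layout are illustrative, not Hangfire's storage API):

```python
# Hypothetical storage commands; names are illustrative, not Hangfire's API.
def apply(state, command):
    op, key, value = command
    if op == "set":           # idempotent: replaying converges to the same state
        state[key] = value
    elif op == "add-to-set":  # idempotent: set union absorbs duplicates
        state.setdefault(key, set()).add(value)
    elif op == "increment":   # NOT idempotent: each replay changes the state
        state[key] = state.get(key, 0) + value
    return state

log = [("set", "job:1:state", "Completed"), ("add-to-set", "completed", "job:1")]
once, twice = {}, {}
for cmd in log:
    apply(once, cmd)
for cmd in log + log:         # simulate an at-least-once redelivery
    apply(twice, cmd)
print(once == twice)          # True: idempotent commands tolerate replays
```

This is why commands like "increment" have no place in such a transaction log: a storage that may redeliver a batch cannot distinguish a retry from a new write unless every command is a no-op on replay.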

Benefits

  • Better network and storage utilization by using batched operations.
  • Bottlenecks are removed to support nearly linear scaling of background processing.
  • Simplified implementations for eventually-consistent storages, without implementing rollbacks via the two-phase commit protocol.
  • Better performance for scheduled (time-based) processing.
  • Redis Cluster and Cassandra can be used with any number of nodes, with almost linear processing scale.
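The time-based scheduling of point 6, and the related benefit for scheduled processing above, can be pictured as a queue whose internal heap is ordered by due time: delayed jobs simply stay invisible until they are due, with no separate polling scheduler. A minimal sketch under that assumption (the `DelayQueue` class is illustrative, not a proposed API):

```python
import heapq
import time

class DelayQueue:
    """Minimal sketch of a message queue with built-in time-based delivery:
    delayed jobs become visible only once their scheduled time passes."""
    def __init__(self):
        self._heap = []  # (due time, job) pairs ordered by due time

    def enqueue(self, job, delay=0.0):
        heapq.heappush(self._heap, (time.monotonic() + delay, job))

    def dequeue(self):
        # Return the next due job, or None if nothing is due yet.
        if self._heap and self._heap[0][0] <= time.monotonic():
            return heapq.heappop(self._heap)[1]
        return None

q = DelayQueue()
q.enqueue("send-email", delay=0.05)
q.enqueue("log-metrics")   # due immediately
print(q.dequeue())         # log-metrics
print(q.dequeue())         # None: "send-email" is not due yet
time.sleep(0.06)
print(q.dequeue())         # send-email
```

With delivery time built into the queue itself, delayed and recurring jobs need no background process that repeatedly scans a scheduled set, which is where the performance benefit comes from.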