
Documentation updates

1 parent 456980e commit a756a70119d38b21802cafab7bc5f8b70a4365da @pitr-ch pitr-ch committed Dec 27, 2016
Showing with 489 additions and 180 deletions.
  1. +48 −31 doc/
  2. +180 −42 doc/
  3. +0 −2 doc/promises.init.rb
  4. +258 −91 doc/
  5. +3 −14 lib/concurrent/edge/promises.rb
@@ -1,44 +1,61 @@
-# Description
-Promises is a new framework unifying former `Concurrent::Future`,
+Promises is a new framework unifying former tools `Concurrent::Future`,
`Concurrent::Promise`, `Concurrent::IVar`, `Concurrent::Event`,
-`Concurrent.dataflow`, `Delay`, `TimerTask` . It extensively uses the new
-synchronization layer to make all the methods *lock-free* (with the exception
-of obviously blocking operations like `#wait`, `#value`, etc.). As a result it
-lowers a danger of deadlocking and offers better performance.
-It provides tools as other promise libraries, users coming from other languages
-and other promise libraries will find the same tools here (probably named
-differently though). The naming convention borrows heavily from JS promises.
+`Concurrent.dataflow`, `Delay`, and `TimerTask` of concurrent-ruby. It
+extensively uses the new synchronization layer to make all the methods
+*lock-free* (with the exception of obviously blocking operations like `#wait`,
+`#value`, etc.). As a result it lowers the danger of deadlocking and offers
+better performance.
+It provides similar tools as other promise libraries do; users coming from
+other languages and other promise libraries will find the same tools here
+(probably named differently though). The naming conventions were borrowed
+heavily from JS promises.
-This framework however is not just a re-implementation of other promise
-library, it takes inspiration from many other promise libraries, adds new
-ideas, and integrates with other abstractions like actors and channels.
-Therefore it is much more likely that user fill find a suitable solution for
-his problem in this library, or if needed he will be able to combine parts
-which were designed to work together well (rather than having to combine
-fragilely independent tools).
-> *Note:* The channel and actor integration is younger and will stay in edge for
-> a little longer than core promises.
-> *TODO*
-> - What is it?
-> - What is it for?
-> - Main classes {Future}, {Event}
-> - Explain pool usage :io vs :fast, and `_on` `_using` suffixes.
-> - Why is this better than other solutions, integration actors and channels
+This framework, however, is not just a re-implementation of another promise
+library; it draws inspiration from many other promise libraries, adds new
+ideas, and is integrated with other abstractions like actors and channels.
+Therefore it is likely that users will find a suitable solution for a problem
+in this framework. If the problem is simple, the user can pick one suitable
+abstraction, e.g. just promises or actors. If the problem is complex, the user
+can combine the parts (promises, channels, actors), which were designed to
+work well together, into a solution, rather than having to fragilely combine
+independent tools.
+This framework allows its users to:
+- Process tasks asynchronously
+- Chain, branch, and zip the asynchronous tasks together
+  - Therefore, to create a directed acyclic graph (hereafter DAG) of tasks
+- Create delayed tasks (or delayed DAG of tasks)
+- Create scheduled tasks (or scheduled DAG of tasks)
+- Deal with errors through rejections
+- Reduce danger of deadlocking
+- Control the concurrency level of tasks
+- Simulate thread-like processing without occupying threads
+  - It allows creating tens of thousands of simulations on one thread
+    pool
+ - It works well on all Ruby implementations
+- Use actors to maintain isolated states and to seamlessly combine them
+  with promises
+- Build a parallel-processing stream system with back
+  pressure (parts which are not keeping up signal to the other parts of the
+  system to slow down).
+**The guide is the best place to start with promises, see**
# Main classes
The main public user-facing classes are {Concurrent::Promises::Event} and
{Concurrent::Promises::Future}, which share the common ancestor
{Concurrent::Promises::AbstractEventFuture}.
+> {include:Concurrent::Promises::AbstractEventFuture}
> {include:Concurrent::Promises::Event}
> {include:Concurrent::Promises::Future}
@@ -7,7 +7,7 @@ FactoryMethods. They are not designed for inheritance but rather for
-Concurrent::Promises::FactoryMethods.instance_methods false
The module can be included or extended where needed.
@@ -438,6 +438,27 @@ future.fulfill 1 rescue $!
future.fulfill 2, false
+## How are promises executed?
+Promises use global pools to execute the tasks. Therefore each task may run on
+a different thread, which implies that users have to be careful not to depend
+on thread-local variables (or they have to be set at the beginning of the task
+and cleaned up at the end of the task).
+Since the tasks are running on many different threads of the thread pool, it
+is better to follow these rules:
+- Use only data passed in through arguments or values of parent futures, to
+  have better control over what the futures are accessing.
+- The data passed in and out of futures is easier to deal with if it is
+  immutable or at least treated as such.
+- Any mutable and mutated object accessed by more than one thread or future
+  must be thread-safe, see {Concurrent::Array}, {Concurrent::Hash}, and
+  {Concurrent::Map}. (The value of a future may be consumed by many futures.)
+- Futures can access outside objects, but those objects have to be
+  thread-safe.
+> *TODO: This part to be extended*
# Advanced
## Callbacks
@@ -470,6 +491,25 @@ Promises.future_on(:fast) { 2 }.
+## Run (simulated process)
+Similar to flatting is running. When `run` is called on a future it will flat
+indefinitely as long as the future fulfills into a `Future` value. It can be
+used to simulate thread-like processing without actually occupying the thread.
+count = lambda do |v|
+  v += 1
+  v < 5 ? Promises.future_on(:fast, v, &count) : v
+end
+# The number of simulated processes was elided in the original; any count
+# larger than the pool size demonstrates the point.
+100.times.
+  map { Promises.future_on(:fast, 0, &count).run.value! }.
+  all? { |v| v == 5 }
+Therefore the above example finishes fine on the `:fast` thread pool even
+though it has far fewer threads than there are simulated processes.
# Interoperability
## Actors
@@ -500,10 +540,47 @@ The `ask` method returns a future.
+## ProcessingActor
+> *TODO: Documentation to be added in a few days*
+## Channel
+There is an implementation of a channel as well. Let's start by creating a
+channel with a capacity of 2 messages.
+ch1 = Concurrent::Promises::Channel.new 2
-## Channels
+We push 3 messages; it can be observed that the last future representing the
+push is not fulfilled, since the capacity prevents it. When the work which
+fills the channel depends on the futures created by push, it can be used to
+create back pressure – the filling work is delayed until the channel has
+space for more messages.
-> *TODO: To be added*
+pushes = 3.times.map { |i| ch1.push i }
+A selection over channels can be created with the `select_channel` factory
+method. It will be fulfilled with the first message available in any of the
+channels. It returns a pair, to be able to find out which channel had the
+message available.
+ch2 = Concurrent::Promises::Channel.new 2
+result = Concurrent::Promises.select_channel(ch1, ch2)
+Promises.future { 1+1 }.then_push_channel(ch1)
+result = (
+  Concurrent::Promises.fulfilled_future('%02d') &
+  Concurrent::Promises.select_channel(ch1, ch2)).
+  then { |format, (channel, value)| format format, value }
# Use-cases
@@ -573,7 +650,7 @@ results = { computer.ask [:run, -> { sleep 0.1; :result }] }
-## Too many threads / fibers
+## Solving the thread count limit by thread simulation
Sometimes an application needs to process a lot of tasks concurrently. If
the number of concurrent tasks is high enough, then it is not possible to create
@@ -606,7 +683,7 @@ Promises.future(0, &body).run.value! # => 5
This solution works well on any Ruby implementation.
-> TODO add more complete example
+> *TODO: More examples to be added.*
## Cancellation
@@ -771,55 +848,116 @@ end #!)
-## Long stream of tasks
+## Long stream of tasks, applying back pressure
+Let's assume that we are querying an API for data and the queries can be
+faster than we are able to process them. This example shows how to use a
+channel as a buffer and how to apply back pressure to slow down the queries.
+require 'json'
+channel = Concurrent::Promises::Channel.new 6
+source, token = Concurrent::Cancellation.create
+def query_random_text(token, channel)
+  Promises.future do
+    # for simplicity the query is omitted
+    # url = 'some api'
+    # Net::HTTP.get(URI(url))
+    sleep 0.1
+    { 'message' =>
+        'Lorem ipsum rhoncus scelerisque vulputate diam inceptos'
+    }.to_json
+  end.then(token) do |value, token|
+    # The push to the channel is fulfilled only after the message is
+    # successfully published to the channel, therefore it will not continue
+    # querying until the current message is pushed.
+    channel.push(value) |
+      # It could wait on the push indefinitely if the token is not checked
+      # here with `or` (the pipe).
+      token.to_future
+  end.flat_future.then(token) do |_, token|
+    # query again after the message is pushed to the buffer
+    query_random_text(token, channel) unless token.canceled?
+  end
+end
+words = []
+words_throttle = Concurrent::Throttle.new 1
+def count_words_in_random_text(token, channel, words, words_throttle)
+  channel.pop.then do |response|
+    string = JSON.load(response)['message']
+    # processing is slower than querying
+    sleep 0.2
+    words_count = string.scan(/\w+/).size
+  end.then_throttled_by(words_throttle, words) do |words_count, words|
+    # safe since throttled to only 1 task at a time
+    words << words_count
+  end.then(token) do |_, token|
+    # count words in the next message
+    unless token.canceled?
+      count_words_in_random_text(token, channel, words, words_throttle)
+    end
+  end
+end
+# The number of concurrent processes was elided in the original;
+# 2 is an illustrative placeholder.
+query_processes = Array.new(2) do
+  Promises.future(token, channel, &method(:query_random_text)).run
+end
+word_counter_processes = Array.new(2) do
+  Promises.future(token, channel, words, words_throttle,
+                  &method(:count_words_in_random_text)).run
+end
+sleep 0.5
+Let it run for a while, then cancel it and ensure that the runs all fulfilled
+(and therefore ended) after the cancellation. Finally print the result.
-> TODO Channel
-## Parallel enumerable ?
+Compared to using threads directly, this is highly configurable and
+composable.
## Periodic task
-> TODO revisit, use cancellation, add to library
+By combining `schedule`, `run` and `Cancellation`, a periodically executed
+task can be easily created.
-def schedule_job(interval, &job)
- # schedule the first execution and chain restart of the job
- Promises.schedule(interval, &job).chain do |fulfilled, continue, reason|
- if fulfilled
- schedule_job(interval, &job) if continue
- else
- # handle error
- reason
- # retry sooner
- schedule_job(interval, &job)
- end
- end
+repeating_scheduled_task = -> interval, token, task do
+  Promises.
+    # Schedule the task.
+    schedule(interval, token, &task).
+    # If successful, schedule again.
+    # Alternatively use chain to schedule always.
+    then { repeating_scheduled_task.call(interval, token, task) }
+end
-queue = Queue.new
-count = 0
-interval = 0.05 # small just not to delay execution of this example
-schedule_job interval do
- queue.push count
- count += 1
- # to continue scheduling return true, false will end the task
- if count < 4
- # to continue scheduling return true
- true
- else
- # close the queue with nil to simplify reading it
- queue.push nil
- # to end the task return false
- false
+cancellation, token = Concurrent::Cancellation.create
+task = -> token do
+  5.times do
+    token.raise_if_canceled
+    # do stuff
+    print '.'
+    sleep 0.01
+  end
+end
- # read the queue
-arr, v = [], nil; arr << v while (v = queue.pop) #
- # arr has the results from the executed scheduled tasks
+result = Promises.future(0.1, token, task, &repeating_scheduled_task).run
+sleep 0.2
@@ -3,5 +3,3 @@
def do_stuff
-# Concurrent.use_stdlib_logger Logger::DEBUG