
Updated README and example to mention BufferedDrainer.

nvie committed May 25, 2010
1 parent c44c4ee commit 1d65916c8720ba238ea934d9049fe72405c73238
Showing with 9 additions and 21 deletions.
  1. +2 −5 README.rst
  2. +7 −16 examples/buffer_results.py
@@ -66,11 +66,8 @@ the value for the `read_event_cb` parameter::
my_drainer.start()
The granularity currently is a single line. If you want to read predefined
-chunks of data, please fork this repo and implement a `Drainer` subclass
-yourself. If you want a callback that isn't invoked after each line read, but
-after an arbitrary time or amount of lines, you have to implement this
-yourself. (It shouldn't be too hard, though. See the `examples` directory
-for inspiration.)
+chunks (lines) of data, use `BufferedDrainer` instead. See
+examples/buffer_results.py for an example.
Aborting processes
------------------
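
The README hunk above points readers at BufferedDrainer for chunked reads. As a standalone illustration, here is a minimal sketch, assuming the same constructor arguments the updated example below uses (a command list, a read_event_cb that receives a list of (line, is_err) tuples, and a chunk_size):

    import drainers

    def process_chunk(lines):
        # `lines` is a list of (line, is_err) tuples, handed to the callback
        # once `chunk_size` lines have been read from the child process.
        for line, is_err in lines:
            if is_err:
                continue  # skip anything written to stderr
            print 'Got: %s' % line.strip()

    # Run `find` and deliver its output to the callback in chunks of 20 lines.
    d = drainers.BufferedDrainer(['find', '.', '-type', 'f'],
                                 read_event_cb=process_chunk,
                                 chunk_size=20)
    d.start()
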
@@ -20,25 +20,16 @@ def destroy_cruncher():
files = []
-def crunch(files):
+def crunch(lines):
print 'Setting up cruncher...'
setup_cruncher()
- while len(files) > 0:
- f = files.pop(0)
- print '- Crunching file %s...' % f.strip()
- do_something_expensive(f)
+ for line, is_err in lines:
+ if is_err: # ignore errors
+ continue
+ print '- Crunching file %s...' % line.strip()
+ do_something_expensive(line)
print 'Releasing cruncher...'
destroy_cruncher()
-def add_to_buffer(line, is_err):
- if is_err:
- # ignore all errors
- return
- files.append(line)
-
- # start crunch synchronously after 20 items have been read
- if len(files) >= 20:
- crunch(files)
-
-d = drainers.Drainer(['find', '.', '-type', 'f'], read_event_cb=add_to_buffer)
+d = drainers.BufferedDrainer(['find', '.', '-type', 'f'], read_event_cb=crunch, chunk_size=20)
d.start()
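A note on the design change in this example: the hand-rolled buffering in add_to_buffer (append each line to the module-level files list, call crunch once 20 entries have accumulated) appears to move into the library. BufferedDrainer is constructed with chunk_size=20 and invokes read_event_cb with a whole chunk of (line, is_err) tuples at once rather than once per line, so the callback reduces to a single loop over the chunk.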
