Parallel processing for humans
A multiprocessing-based framework built on the codeblocks.py atomization layer.
Swarm is aimed mainly at web crawlers, mass parsers, and email senders with complex internal behaviour.
```python
import logging
import sys
import time
import random

from swarm import BaseSwarm as Swarm, define_swarm, atomic, swarm


class config:
    # number of pool workers; defaults to `multiprocessing.cpu_count()`
    COMPETITORS = 10
    # just set the log level and enjoy
    # `swarm.log.[debug|info|error]('Message')` inside pool workers
    LOG_LEVEL = logging.INFO


calculator = Swarm(__name__)
calculator.config.from_object(config)

define_swarm.start()

@calculator.sync
def squares_of_range(x):
    # current_process() == <MainProcess>
    for i in range(1, x):
        # code inside the `atomic()` context is pushed into the pool asynchronously
        with atomic() << _:
            # current_process() == <PoolWorker-X>
            # transparently takes the `i` variable from the outer scope
            # and sends all generated items to the items queue
            # of the enclosing @sync function
            yield i * i
            # internally required modules are also imported into the PoolWorker
            time.sleep(round(random.random(), 3))
            # messages below the configured log level do not leave process bounds
            swarm.log.debug('Calculated result for %s' % i)
            if i > 3:
                swarm.log.error('Oh no! %s is too much for me!' % i)
    # this function returns only after all `atomic()` blocks have finished

define_swarm.finish()

if __name__ == '__main__':
    for x in squares_of_range(int(sys.argv[1])):
        print '>', x
```
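For comparison, the computation in the example above can be approximated with the standard library alone. The sketch below uses `multiprocessing.Pool` directly and is not swarm's API: `_square` and the `workers` parameter are illustrative names, and the per-item sleep just mimics the work done inside the `atomic()` block.

```python
import multiprocessing
import random
import time


def _square(i):
    # simulate the per-item work done inside the atomic() block
    time.sleep(round(random.random(), 3) / 10)
    return i * i


def squares_of_range(x, workers=10):
    # plain multiprocessing.Pool equivalent of the swarm example:
    # each loop body runs in a pool worker, results come back in order
    pool = multiprocessing.Pool(workers)
    try:
        return pool.map(_square, range(1, x))
    finally:
        pool.close()
        pool.join()


if __name__ == '__main__':
    print(squares_of_range(5))  # [1, 4, 9, 16]
```

What swarm adds on top of this bare pattern is the transparent capture of outer variables, cross-process logging, and the item queue wired back into the enclosing `@sync` generator.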
- nested `atomic()` blocks
- recursive calls of functions that use `atomic()`
- transparent serialization of closures
- explicit namespace management via codeblocks.py
- Werkzeug-like context stacks, at both the manager and the atom level
- multiprocessing as the parallelization backend
- wide use of blinker signals for customization hooks
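The Werkzeug-like context stacks mentioned above can be sketched with the standard library alone. This `LocalStack` is a hypothetical stand-in modeled on `werkzeug.local.LocalStack`, not swarm's actual implementation; swarm's manager- and atom-level stacks are assumed to behave similarly.

```python
import threading


class LocalStack:
    # a thread-local stack, in the spirit of werkzeug.local.LocalStack:
    # each thread sees its own stack, so nested contexts do not interfere
    def __init__(self):
        self._local = threading.local()

    def push(self, obj):
        stack = getattr(self._local, 'stack', None)
        if stack is None:
            stack = self._local.stack = []
        stack.append(obj)
        return stack

    def pop(self):
        stack = getattr(self._local, 'stack', None)
        if not stack:
            return None
        return stack.pop()

    @property
    def top(self):
        # the current (innermost) context, or None if nothing is pushed
        stack = getattr(self._local, 'stack', None)
        return stack[-1] if stack else None
```

Pushing a manager context and then an atom context makes the atom the `top`; popping the atom restores the manager, which is exactly the nesting behaviour nested `atomic()` blocks rely on.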
This is currently an alpha release.