This particular Workling fork provides an SQS Client. See instructions below.
Workling gives your Rails app a simple API that you can use to make code run in the background, outside of your request.
You can configure how the background code will be run. Currently, workling supports Starling, BackgroundJob and Spawn Runners. Workling is a bit like Active* for background work: you can write your code once, then swap in any of the supported background Runners later. This keeps things flexible.
The easiest way of getting started with workling is like this:
script/plugin install git://github.com/purzelrakete/workling.git
script/plugin install git://github.com/tra/spawn.git
If you're on an older Rails version, there's also a subversion mirror for workling (I'll do my best to keep it synced) at:
script/plugin install http://svn.playtype.net/plugins/workling/
This is pretty easy. Just put cow_worker.rb into app/workers, and subclass Workling::Base:
# handle asynchronous mooing.
class CowWorker < Workling::Base
  def moo(options)
    cow = Cow.find(options[:id])
    logger.info("about to moo.")
    cow.moo
  end
end
Make sure your methods take exactly one hash parameter; workling passes the job :uid in here. By the way, if you want to follow along with the mooing, grab 'cows-not-kittens' off github, an example workling project. Look at the branches, there's one for each Runner.
Next, you'll want to call your workling in a controller. Your controller might look like this:
class CowsController < ApplicationController
  # milking has the side effect of causing
  # the cow to moo. we don't want to
  # wait for this while milking, though,
  # it would be a terrible waste of our time.
  def milk
    @cow = Cow.find(params[:id])
    CowWorker.asynch_moo(:id => @cow.id)
  end
end
Notice the asynch_moo call to CowWorker. This will call the moo method on the CowWorker in the background, passing on any parameters you like. In fact, workling will call whatever comes after asynch_ as a method on the worker instance.
All worker classes must inherit from this class, and be saved in app/workers. The Worker is loaded once, at which point the instance method create is called.
Calling asynch_my_method on the worker class will trigger background work. This means that the loaded Worker instance will receive a call to the method my_method(:uid => "thisjobsuid2348732947923").
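The asynch_ convention can be pictured with a tiny sketch. This is an illustration only, not workling's actual implementation: FakeWorker and its inline dispatch are hypothetical, and a real runner hands the call off to a background process instead of invoking it inline.

```ruby
class FakeWorker
  # class-level method_missing turns asynch_moo(:id => 7) into a
  # call to moo(:id => 7, :uid => "job-...") on a worker instance
  def self.method_missing(name, *args)
    if name.to_s =~ /\Aasynch_(.+)\z/
      worker_method = $1
      uid = "job-#{rand(1_000_000)}"
      options = (args.first || {}).merge(:uid => uid)
      # a real runner would dispatch this to a background process;
      # here we call the worker method inline for illustration
      new.send(worker_method, options)
      uid
    else
      super
    end
  end

  def moo(options)
    # pretend to do slow work with options[:id] and options[:uid]
  end
end

uid = FakeWorker.asynch_moo(:id => 7)  # returns the job uid immediately
```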
If an exception is raised in your Worker, it will not be propagated to the calling code by workling. This is because the code is called asynchronously, meaning that exceptions may be raised after the calling code has already returned. If you need your calling code to handle exceptional situations, you have to pass the error into the return store.
Workling does log all exceptions that propagate out of the worker methods.
Furthermore, you can provide custom exception handling by defining notify_exception(exception, method, options) in your worker class. If it is present, it will be called for every exception.
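Here is a self-contained sketch of the hook. Note the hedge: in workling the framework rescues worker exceptions and calls notify_exception for you; the rescue is inlined below only so the sketch runs on its own, and ResilientWorker is a hypothetical worker, not part of the workling API.

```ruby
class ResilientWorker
  attr_reader :last_error

  def import(options)
    raise "boom"  # simulate the job blowing up
  rescue => e
    notify_exception(e, :import, options)  # workling would make this call for you
  end

  # custom exception handling: record the failure somewhere the
  # calling code can inspect it later
  def notify_exception(exception, method, options)
    @last_error = "#{method} failed for job #{options[:uid]}: #{exception.message}"
  end
end

worker = ResilientWorker.new
worker.import(:uid => "job-1")
worker.last_error  # => "import failed for job job-1: boom"
```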
RAILS_DEFAULT_LOGGER is available in all workers. Workers also have a logger method which returns the default logger, so you can log like this:
logger.info("about to moo.")
The workling daemon can be invoked using the command
script/workling_client run
This will make it run in the development environment, with the default settings unless overridden in your development.rb (see below).
For production use, the script takes a couple of options:
script/workling_client <daemon_options> -- <app_options>
The daemon_options configure the runtime environment of the daemon and can include the following:
--app-name APP_NAME
--dir DIR
--monitor
--ontop
The app-name option is useful if you want to run worklings for multiple Rails apps on the same machine. Each app will need its own unique name. The monitor option should not be specified if you are using Monit or god. The dir option specifies where the logs and the pid files are saved.
The app_options allow you to specify the configuration of the workling interfaces via the command line:
--client CLIENT
--invoker INVOKER
--routing ROUTING
--load-path LOADPATH
--environment ENVIRONMENT
The client, invoker and routing params take the full class names of the relevant plugin classes. load-path allows overriding where workling looks for workers, and environment should be self-explanatory.
The following is a sample of how workling_client can be used with god and AMQP:
God.watch do |w|
  script = "cd #{RAILS_ROOT} && ./script/workling_client"
  w.name = "myapp-workling"
  w.start = "#{script} start -a myapp-workling -- -e production -i Workling::Remote::Invokers::EventmachineSubscriber -c Workling::Clients::AmqpClient"
  w.restart = "#{script} restart -a myapp-workling -- -e production -i Workling::Remote::Invokers::EventmachineSubscriber -c Workling::Clients::AmqpClient"
  w.stop = "#{script} stop -a myapp-workling"
  w.pid_file = "#{RAILS_ROOT}/log/myapp-workling.pid"
end
Workling automatically detects and uses Spawn, if installed. Spawn basically forks Rails every time you invoke a workling. To see what sort of characteristics this has, go into script/console, and run this:
>> fork { sleep 100 }
=> 1060 (the pid is returned)
You'll see that this executes pretty much instantly. Run 'top' in another terminal window, and look for the new ruby process. It might be around 30 MB. This tells you that using spawn as a runner will result in low latency, but will take at least 30 MB for each request you make.
You cannot run your workers on a remote machine or cluster them with spawn. You also have no persistence: if you've fired off a lot of work and everything dies, there's no way of picking up where you left off.
To use Workling with Spawn, you can safely delete the config/workling.yml file generated by 'script/plugin install'.
Also, in your environment.rb file the following are useful:
# Run all jobs to be executed in the foreground
# Workling::Remote.dispatcher = Workling::Remote::Runners::NotRemoteRunner.new
# Execute jobs in a forked process using Spawn
Workling::Remote::Runners::SpawnRunner.options = { :method => :spawn }
Workling::Remote.dispatcher = Workling::Remote::Runners::SpawnRunner.new
As of 27. September 2008, the recommended Starling setup is as follows:
gem sources -a http://gems.github.com/
sudo gem install starling-starling
mkdir /var/spool/starling
The Robot Co-op memcache-client gem version 1.5.0 has several bugs, which have been fixed in the fiveruns-memcache-client gem. The starling-starling gem will install this as a dependency. Refer to the fiveruns README to see what the exact fixes are.
The Rubyforge Starling gem is also out of date. Currently, the most authoritative project is starling-starling on github (27 September 2008).
Workling will now automatically detect and use Starling, unless you have also installed Spawn. If you have Spawn installed, you need to tell Workling to use Starling by putting this in your environment.rb:
Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new
Here's what you need to get up and running in development mode. Look in config/workling.yml to see what the default ports are for other environments.
sudo starling -d -p 22122
script/workling_client start
Workling copies a file called workling.yml into your applications config directory. The config file tells Workling on which port Starling is listening.
Notice that the default production port is 15151. This means you'll need to start Starling with -p 15151 on production.
You can also use this config file to pass configuration options to the memcache client which workling uses to connect to starling. Use the key 'memcache_options' for this.
In addition, you can use this config to set the type of marshaling that the amqp client uses. Use the 'ymj' key for this. Current options are 'yaml' to use YAML.dump and YAML.load and 'marshal' to use Marshal.dump and Marshal.load. If no value is set, it defaults to 'marshal'.
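The difference between the two 'ymj' settings is simply which serializer round-trips the job payload. A quick sketch (string keys are used here so the YAML round-trip also works under newer, stricter Psych versions; workling itself passes symbol-keyed hashes):

```ruby
require 'yaml'

job = { "uid" => "job-1", "id" => 42 }

# 'marshal' (the default): fast, compact, Ruby-only
Marshal.load(Marshal.dump(job)) == job  # => true

# 'yaml': human-readable, consumable outside Ruby
YAML.load(YAML.dump(job)) == job        # => true
```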
You can also set sleep time for each Worker. See the key 'listeners' for this. Put in the modularized Class name as a key.
development:
  listens_on: localhost:22122
  sleep_time: 2
  reset_time: 30
  listeners:
    Util:
      sleep_time: 20
  memcache_options:
    namespace: myapp_development
  ymj: yaml
production:
  listens_on: localhost:22122, localhost:221223, localhost:221224
  sleep_time: 2
  reset_time: 30
  ymj: marshal
Note that you can cluster Starling instances by passing a comma-separated list of values to listens_on.
Sleep time determines the wait time between polls. A single poll will do one .get on every queue (there is a corresponding queue for each worker method).
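The queue-per-worker-method convention can be pictured like this. The naming scheme below is a guess for illustration only; the actual queue names come from workling's routing class.

```ruby
# hypothetical: derive a queue name from a worker class and method,
# e.g. CowWorker#moo -> "cow_worker__moo"
def queue_name(worker_class, method)
  underscored = worker_class.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  "#{underscored}__#{method}"
end

queue_name("CowWorker", :moo)  # => "cow_worker__moo"
```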
If there is a memcache error, the Poller will hang for a bit to give it a chance to fire up again and reset the connection. The wait time can be set with the key reset_time.
Starling comes with its own script, starling_top. If you want statistics specific to workling, run:
script/starling_status.rb
You might wonder what exactly starling does. Here's a little snippet you can play with to illustrate how it works:
# Put messages onto a queue:
require 'memcache'
starling = MemCache.new('localhost:22122')
starling.set('my_queue', 1)

# Get messages from the queue:
require 'memcache'
starling = MemCache.new('localhost:22122')
loop { puts starling.get('my_queue') }
RabbitMQ is a reliable, high-performance queue server written in Erlang. If you're doing high-volume messaging and need a high degree of reliability, you should definitely consider using RabbitMQ over Starling.
A lot of Ruby people have been talking about using RabbitMQ as their queue of choice. Soundcloud.com are using it, as is New Bamboo founder Johnathan Conway, who is using it at his video startup http://www.vzaar.com/. He says:
RabbitMQ – Now this is the matrons knockers when it comes to kick ass, ultra fast and scalable messaging. It simply rocks, with performance off the hook. It’s written in Erlang and supports the AMPQ protocol.
If you're on OSX, you can get started with RabbitMQ by following the installation instructions here. To get an idea of how to directly connect to RabbitMQ using ruby, have a look at this article.
Once you've installed RabbitMQ, install the ruby amqp library:
gem sources -a http://gems.github.com/ (if necessary)
sudo gem install tmm1-amqp
then configure your application to use AMQP by adding this:
Workling::Remote.invoker = Workling::Remote::Invokers::EventmachineSubscriber
Workling::Remote.dispatcher = Workling::Remote::Runners::ClientRunner.new
Workling::Remote.dispatcher.client = Workling::Clients::AmqpClient.new
Then start the workling Client:
./script/workling_client start
You're good.
RudeQueue is a Starling-like Queue that runs on top of your database and requires no extra processes. Use this if you don't need very fast job processing and want to avoid managing the extra process starling requires.
Install the RudeQ plugin like this:
./script/plugin install git://github.com/matthewrudy/rudeq.git
rake queue:setup
rake db:migrate
Configure Workling to use RudeQ. Add this to your environment:
Workling::Clients::MemcacheQueueClient.memcache_client_class = RudeQ::Client
Workling::Remote.dispatcher = Workling::Remote::Runners::ClientRunner.new
Now start the Workling Client:
./script/workling_client start
You're good.
If you don't want to bother with separate processes, and are not worried about latency or memory footprint, then you might want to use Bj to power workling.
Install the Bj plugin like this:
./script/plugin install http://codeforpeople.rubyforge.org/svn/rails/plugins/bj
./script/bj setup
Workling will now automatically detect and use Bj, unless you have also installed Starling. If you have Starling installed, you need to tell Workling to use Bj by putting this in your environment.rb:
Workling::Remote.dispatcher = Workling::Remote::Runners::BackgroundjobRunner.new
NOTE: this code is highly experimental. It was implemented in order to facilitate async communication between multiple Rails apps running in the same domain/datacentre. However, it is no longer in production use, since the setup proved to be both too complex to maintain and quite unreliable. The AMQP support turned out to be much more appropriate. I'm putting this here as a starting base for someone interested in using XMPP in such a way. Contact me at derfred on github if you want pointers.
This client requires the xmpp4r gem.
In the config/environments/development.rb file (or production.rb etc.), add:
Workling::Remote::Runners::ClientRunner.client = Workling::Clients::XmppClient.new
Workling::Remote.dispatcher = Workling::Remote::Runners::ClientRunner.new # don't use the standard runner
Workling::Remote.invoker = Workling::Remote::Invokers::LoopedSubscriber # does not work with the EventmachineSubscriber Invoker
Furthermore, in the workling.yml file you need to set up the server details for your XMPP server:
development:
  listens_on: "localhost:22122"
  jabber_id: "sub@localhost/laptop"
  jabber_server: "localhost"
  jabber_password: "sub"
  jabber_service: "pubsub.derfredtop.local"
For details on how to configure your XMPP server (ejabberd), check out the following howto:
http://keoko.wordpress.com/2008/12/17/xmpp-pubsub-with-ejabberd-and-xmpp4r/
Finally, you need to expose your worker methods to XMPP nodes like so:
class NotificationWorker < Workling::Base
  expose :receive_notification, :as => "/home/localhost/pub/sub"

  def receive_notification(input)
    # something here
  end
end
If you're running on Amazon EC2, you may want to leverage SQS (Simple Queue Service) to benefit from this highly scalable queue implementation without having to install any software.
The SQS Client namespaces queues with an optional prefix as well as with the Rails environment, allowing us to distinguish between production and staging queues, for example. Queues are automatically created the first time they are accessed.
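The prefix-plus-environment namespacing might compose like this. This is an assumption based on the description above, not SqsClient's exact code:

```ruby
# hypothetical: build a namespaced SQS queue name from the optional
# prefix, the Rails environment, and the workling queue key
def sqs_queue_name(prefix, environment, queue_key)
  "#{prefix}#{environment}_#{queue_key}"
end

sqs_queue_name("foo_", "production", "cow_worker__moo")
# => "foo_production_cow_worker__moo"
```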
Configuring Workling to use SQS is very straightforward and requires no additional software, with the exception of the RightAws gem.
Install the RightAws gem:
sudo gem install right_aws
Configure Workling to use the SqsClient. Add this to your environment:
Workling::Remote.dispatcher = Workling::Remote::Runners::ClientRunner.new
Workling::Remote.dispatcher.client = Workling::Clients::SqsClient.new
Add your AWS key id and secret key to workling.yml:
production:
  sqs_options:
    aws_access_key_id: <your AWS access key id>
    aws_secret_access_key: <your AWS secret access key>
You can optionally override the following settings, although the defaults will likely be sufficient:
# Queue names consist of an optional prefix, followed by the environment
# and the name of the key.
prefix: foo_
# The number of SQS messages to retrieve at once. The maximum and default
# value is 10.
messages_per_req: 10
# The SQS visibility timeout for retrieved messages. Defaults to 30 seconds.
visibility_timeout: 30
Now start the Workling Client:
./script/workling_client start
You're good.
SQS messages need to be explicitly deleted from the queue. Otherwise, they will reappear after the visibility timeout. The SQS client currently deletes a message immediately before handing it to a worker, assuming that it will be processed successfully. A more robust implementation (which would require additional hooks in the Workling framework) would be to defer the deletion until after the message was successfully processed, allowing us to retry the message processing in case of an error.
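The trade-off can be modelled with a toy queue. FakeQueue is hypothetical and models only the visibility semantics the paragraph above describes; it is not the RightAws API:

```ruby
class FakeQueue
  def initialize(messages)
    @visible = messages.dup
    @invisible = []   # received but neither deleted nor timed out
  end

  def receive
    msg = @visible.shift
    @invisible << msg if msg
    msg
  end

  def delete(msg)
    @invisible.delete(msg)
  end

  # simulate the visibility timeout elapsing: undeleted messages reappear
  def expire_visibility!
    @visible.concat(@invisible)
    @invisible.clear
  end

  def size
    @visible.size
  end
end

# deferred deletion: a crashed worker's message reappears and can be retried
q = FakeQueue.new(["job"])
q.receive
q.expire_visibility!
q.size  # => 1

# delete-before-processing (the current behaviour): a crash loses the job
q2 = FakeQueue.new(["job"])
q2.delete(q2.receive)
q2.expire_visibility!
q2.size  # => 0
```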
Your worklings can write back to a return store. This allows you to write progress indicators, or access results from your workling. As above, this is fairly slim. Again, you can swap in any return store implementation you like without changing your code. They all behave like memcached. For tests, there is a memory return store; for production use, there is currently a starling return store. You can easily add a new return store (over the database, for instance) by subclassing Workling::Return::Store::Base. Configure it like this in your test environment:
Workling::Return::Store.instance = Workling::Return::Store::MemoryReturnStore.new
Setting and getting values works as follows. Read the next paragraph to see where the job-id comes from.
Workling.return.set("job-id-1", "moo")
Workling.return.get("job-id-1") => "moo"
Here is an example worker that crawls an addressbook and puts results into a return store. Workling makes sure you have a :uid in your argument hash - set the value into the return store using this uid as a key:
require 'blackbook'

class NetworkWorker < Workling::Base
  def search(options)
    results = Blackbook.get(options[:key], options[:username], options[:password])
    Workling.return.set(options[:uid], results)
  end
end
Call your workling as above:
@uid = NetworkWorker.asynch_search(:key => :gmail, :username => "foo@gmail.com", :password => "bar")
You can now use the @uid to query the return store:
results = Workling.return.get(@uid)
Of course, you can use this for progress indicators. Just put the progress into the return store.
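A progress indicator can be sketched like this. MemoryStore stands in for the real return store (same set/get interface as shown above), and ProgressWorker with its percentage convention is illustrative, not part of workling's API:

```ruby
# stand-in for the return store, with the set/get interface shown above
class MemoryStore
  def initialize; @data = {}; end
  def set(key, value); @data[key] = value; end
  def get(key); @data[key]; end
end

STORE = MemoryStore.new

class ProgressWorker
  def crawl(options)
    urls = options[:urls]
    urls.each_with_index do |url, i|
      # ... real work on url would happen here ...
      STORE.set(options[:uid], (i + 1) * 100 / urls.size)  # percent done
    end
  end
end

ProgressWorker.new.crawl(:uid => "job-1", :urls => %w[a b c d])
STORE.get("job-1")  # => 100
```

A controller would poll the same uid between requests to drive a progress bar.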
Enjoy!
There are two new base classes you can extend to add new brokers. I'll describe how this is done using AMQP as an example. The code I show is already a part of workling.
Clients help workling to connect to job brokers. To add an AmqpClient, we need to extend Workling::Clients::Base and implement a couple of methods.
require 'workling/clients/base'
require 'mq'

#
#  An AMQP client
#
module Workling
  module Clients
    class AmqpClient < Workling::Clients::Base

      # starts the client.
      def connect
        @amq = MQ.new
      end

      # stops the client.
      def close
        @amq.close
      end

      # request work
      def request(queue, value)
        @amq.queue(queue).publish(value)
      end

      # retrieve work
      def retrieve(queue)
        @amq.queue(queue)
      end

      # subscribe to a queue
      def subscribe(queue)
        @amq.queue(queue).subscribe do |value|
          yield value
        end
      end
    end
  end
end
We're using the eventmachine amqp client for this; you can find it up on github. connect and close do exactly what it says on the tin: connecting to rabbitmq and closing the connection.
request and retrieve are responsible for placing work on rabbitmq. The methods are passed the correct queue, and a value that contains the worker method arguments. If you need control over the queue names, look at the RDoc for Workling::Routing::Base. In our case, there's no special requirement here.
Finally, we implement a subscribe method. Use this if your broker supports callbacks, as is the case with amqp. This method expects a block, which we pass into the amqp subscribe method here. The block will be called when a message is available on the queue, and the result is yielded into the block.
Having subscription callbacks is very nice, because this way, we don't need to keep calling get on the queue to see if something new is waiting.
So now we're done! That's all you need to add RabbitMQ to workling. Configure it in your application as described above.
There's still potential to improve things though. Workling 0.4.0 introduces the idea of invokers. Invokers grab work off a job broker, using a client (see above). They subclass Workling::Remote::Invokers::Base. Read the RDoc for a description of the methods.
Workling comes with a couple of standard invokers, like the BasicPoller. This invoker simply keeps hitting the broker every n seconds, checking for new work and executing it immediately. The ThreadedInvoker does the same, but spawns a Thread for every Worker class the project defines.
Back to AMQP: it would be nice to have an invoker that makes use of the subscription callbacks. Easily done; let's have a look:
require 'eventmachine'
require 'workling/remote/invokers/base'

#
#  Subscribes the workers to the correct queues.
#
module Workling
  module Remote
    module Invokers
      class EventmachineSubscriber < Workling::Remote::Invokers::Base

        def initialize(routing, client_class)
          super
        end

        #
        #  Starts EM loop and sets up subscription callbacks for workers.
        #
        def listen
          EM.run do
            connect do
              routes.each do |queue|
                @client.subscribe(queue) do |args|
                  run(queue, args)
                end
              end
            end
          end
        end

        def stop
          EM.stop if EM.reactor_running?
        end
      end
    end
  end
end
Invokers have to implement two methods, listen and stop. listen starts the main listener loop, which is responsible for starting work when it becomes available.
In our case, we need to start an EM loop inside listen. This is because the Ruby AMQP library needs to run inside an eventmachine reactor loop.
Next, inside listen, we need to iterate through all defined routes. There is a route for each worker method you define in your application; the routes double as queue names. For this, you can use the helper method routes. Now we attach a callback to each queue. We can use the helper method run, which executes the worker method associated with the queue, passing along any supplied arguments.
That's it! We now have a more effective Invoker.
The following people have contributed code to workling so far. Many thanks :) If I forgot anybody, I apologise. Just drop me a note and I'll add you to the project so that you can amend this!
Anybody who contributes fixes (with tests), or new functionality (with tests), which is pulled into the main project will also be added to the project.
- Andrew Carter (ascarter)
- Chris Gaffney (gaffneyc)
- Matthew Rudy (matthewrudy)
- Larry Diehl (reeze)
- grantr (francios)
- David (digitalronin)
- Dave Dupré
- Douglas Shearer (dougal)
- Nick Plante (zapnap)
- Brent
- Evan Light (elight)
Copyright (c) 2008 play/type GmbH, released under the MIT license