Initial commit

commit ff01dcef7ba84502b9c32087afc8fb9f63dbc6e1 0 parents
@ferd authored
4 .gitignore
@@ -0,0 +1,4 @@
+*.swp
+*.beam
+*.dump
+*.COVER.*
193 README.markdown
@@ -0,0 +1,193 @@
+# Dispcount #
+
+Dispcount is an attempt at making resource dispatching more efficient than the usual Erlang pool approaches, which rely on a single process receiving messages from everyone and possibly getting overloaded when demand is too high, or at least exhibiting slower and slower response times.
+
+## When should I use dispcount? ##
+
+Dispcount's design assumes a few characteristics about the workload:
+
+- resources are limited, but the demand for them exceeds their availability;
+- requests for resources are *always* incoming;
+- because of the previous point, it is possible and preferred not to queue requests for busy resources, but to return instantly; newer requests will take their spot;
+- low latency in knowing whether or not a resource is available is more important than getting every query to run.
+
+If you cannot afford to ignore a query and wish to eventually serve every one of them, dispcount might not be for you.
+
+## How to build ##
+
+ `$ ./rebar compile`
+
+## Running tests ##
+
+Run the small Common Test suite with:
+
+ `$ ./rebar ct`
+
+## How to use dispcount ##
+
+First start the application:
+
+ `application:start(dispcount).`
+
+When resources need to be dispatched, a dispatcher has to be started:
+
+ ok = dispcount:start_dispatch(
+ ref_dispatcher,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,10}]
+ )
+
+The general form is:
+
+ ok = dispcount:start_dispatch(
+ DispatcherName,
+ {CallbackMod, Arg},
+ [{restart,Type},{shutdown,Timeout},
+ {maxr,X},{maxt,Y},{resources,Num}]
+ )
+
+The `restart`, `shutdown`, `maxr`, and `maxt` values let you configure the supervisor in charge of that dispatcher. The `resources` value sets how many 'things' you want available. If you were handling HTTP sockets, you could allow 200 connections by setting `{resources,200}`. Skip to the next section to see how to write your own dispatcher callback module.
+
+The dispatcher is then put under the supervision structure of the dispcount application. To be able to further access resources, you need to fetch information related to the dispatcher:
+
+ {ok, Info} = dispcount:dispatcher_info(ref_dispatcher)
+
+This is because we want to reduce the number of calls to configuration spots and centralized points in a node. As such, you should call this function in the supervisor of whatever is going to call the dispatcher, and share the value with all children if possible. That way, a basic blob of configuration data is shared with all processes at no extra cost.
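+
+For example (a sketch; `my_worker` and `my_worker_sup` are hypothetical names for illustration), the supervisor can fetch the info once and hand it down to every child as a start argument:
+
+    -module(my_worker_sup).
+    -behaviour(supervisor).
+    -export([start_link/0, init/1]).
+
+    start_link() ->
+        supervisor:start_link({local, ?MODULE}, ?MODULE, []).
+
+    init([]) ->
+        %% fetched once here, then shared with every worker at no messaging cost
+        {ok, Info} = dispcount:dispatcher_info(ref_dispatcher),
+        {ok, {{one_for_one, 5, 60},
+              [{my_worker, {my_worker, start_link, [Info]},
+                permanent, 5000, worker, [my_worker]}]}}.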
+
+Using this `Info` value, calls to checkout resources can be made:
+
+ case dispcount:checkout(Info) of
+ {ok, CheckinReference, Resource} ->
+ timer:sleep(10),
+ dispcount:checkin(Info, CheckinReference, Resource);
+ {error, busy} ->
+ give_up
+ end
+
+And that's the gist of it.
+
+## Writing a dispatcher callback module ##
+
+Each dispatcher that lends resources is written as a callback module for a custom behaviour. Here's an example (tested) callback module that simply returns references:
+
+ -module(ref_dispatch).
+ -behaviour(dispcount).
+ -export([init/1, checkout/2, checkin/2, handle_info/2, dead/1,
+ terminate/2, code_change/3]).
+
+ init([]) ->
+ {ok, make_ref()}.
+
+This one works a bit like a `gen_server`: arguments are passed in and you return `{ok, State}`. The state is then carried around for the subsequent calls.
+
+The next function is `checkout`:
+
+ checkout(_From, Ref) ->
+ {ok, Ref, undefined}.
+
+By default, the behaviour makes sure only valid checkout requests (resources aren't busy) go through. The `_From` variable is the pid of the process requesting a resource. This is useful if you need to change things like a socket's controlling process or a port's controller. Then, you only need to return a resource by doing `{ok, Resource, NewState}`, and the caller will see `{ok, Reference, Resource}`. The `Reference` is a token added by dispcount and is needed to check the resource back in. You can also return `{error, Reason, NewState}`, which hands `{error, Reason}` to the caller.
+
+Finally, you can return `{stop, Reason, NewState}` to terminate the resource watcher. Note that this is risky because of how things work (see the relevant section for this later in this README).
+
+To check resources back in, the behaviour needs to implement the following:
+
+ checkin(Ref, undefined) ->
+ {ok, Ref};
+ checkin(_SomeRef, Ref) ->
+ {ignore, Ref}.
+
+In this case, we make sure that the resource being sent back to us is the right one. The first clause makes sure we only accept a reference after we've handed one out. If we receive extraneous references (maybe someone called the `checkin/3` function twice?), we ignore them.
+
+The second clause here is entirely optional and defensive programming. Note that checking a resource in is a synchronous operation.
+
+The next call is the `dead/1` function:
+
+ dead(undefined) ->
+ {ok, make_ref()}.
+
+`dead(State)` is called whenever the process that checked out a given resource dies. Dispcount automatically monitors resource owners so you don't need to do it yourself; when it sees the owner die, it calls that function.
+
+This lets you create a new instance of a resource to distribute later on, if required or possible. As an example, if we were using a permanent database connection as a resource, this is where we'd set up a new connection and keep going as if nothing went wrong.
+
+You can also receive unexpected messages to your process, if you felt like implementing your own side-protocols or whatever:
+
+ handle_info(_Msg, State) ->
+ {ok, State}.
+
+And finally, you benefit from a traditional OTP `terminate/2` function, and the related `code_change/3`.
+
+ terminate(_Reason, _State) ->
+ ok.
+
+ code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
+
+Here's a similar callback module to handle HTTP sockets (untested):
+
+ -module(http_dispatch).
+ -behaviour(dispcount).
+ -export([init/1, checkout/2, checkin/2, handle_info/2, dead/1, terminate/2, code_change/3]).
+
+ -record(state, {resource, given=false, port}).
+
+ init([{port,Num}]) ->
+ {ok,Socket} = gen_tcp:connect({127,0,0,1}, Num, [binary]),
+ {ok, #state{resource=Socket, port=Num}}.
+
+ %% just in case, but that should never happen anyway :V I'm paranoid!
+ checkout(_From, State = #state{given=true}) ->
+ {error, busy, State};
+ checkout(From, State = #state{resource=Socket}) ->
+ gen_tcp:controlling_process(Socket, From),
+ {ok, Socket, State#state{given=true}}.
+
+ checkin(Socket, State = #state{resource=Socket, given=true}) ->
+ {ok, State#state{given=false}};
+ checkin(_Socket, State) ->
+ %% The socket doesn't match the one we had -- an error happened somewhere
+ {ignore, State}.
+
+ dead(State) ->
+ %% aw shoot, someone lost our resource, we gotta create a new one:
+ {ok, NewSocket} = gen_tcp:connect({127,0,0,1}, State#state.port, [binary]),
+ {ok, State#state{resource=NewSocket,given=false}}.
+ %% alternatively:
+ %% {stop, Reason, State}
+
+ handle_info(_Msg, State) ->
+ %% e.g. unexpected TCP messages if the socket were set to {active,once}
+ {ok, State}.
+
+ terminate(_Reason, _State) ->
+ %% let the GC clean the socket.
+ ok.
+
+ code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
+
+## How does it work ##
+
+What killed most of the pool and dispatch systems I used before was the amount of messaging required to make things work. When many thousands of processes requested information from a central point at once, performance would quickly degrade as soon as the protocol had any messaging at its core.
+
+We'd see mailbox queue build-up, busy schedulers, and booming memory. Dispcount tries to solve the problem by using the ubiquitous Erlang optimization tool: ETS tables.
+
+The core concept of dispcount is based on two ETS tables: a dispatch table (write-only) and a worker matchup table (read-only). Two tables are used because the biggest concurrency cost with ETS is switching between reading and writing.
+
+In each of the tables, `N` entries are added: one for each available resource, matched with the process that manages that resource (a *watcher*). Hashing each request over the number of watchers allows queries to be dispatched uniformly across all of them. Once you know which watcher your request is dedicated to, the dispatch table is called into action.
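+
+Internally, picking a watcher boils down to hashing something unique to the current call over the number of watchers (this mirrors the `dispatch_id/1` helper in `dispcount_watcher`):
+
+    %% returns a watcher id in 1..Num
+    dispatch_id(Num) ->
+        erlang:phash2({now(), self()}, Num) + 1.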
+
+The dispatch table manages to allow both reads and writes while remaining write-only from ETS's point of view. The trick is to use the `ets:update_counter` functions, which atomically increment a counter and return the new value: the operation is a pure write, yet it communicates back a minimal amount of information.
+
+The gist of the idea is that you may only message the watcher if you're the first one to increment its counter. Other processes that try just give up instantly. This guarantees that only a single caller at a time has the permission to message a given watcher, a bit like a mutex, but implemented efficiently (for Erlang, that is).
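+
+The mechanism looks roughly like this (mirroring the internal helpers in `dispcount_watcher`): each dispatch table entry is `{Id, Counter}`, and only the caller whose increment brings the counter to exactly 1 wins the right to message watcher `Id`:
+
+    is_free(Tid, Id) ->
+        case ets:update_counter(Tid, Id, {2,1}) of
+            1 -> true;   %% we incremented first: the watcher is ours
+            _ -> false   %% someone beat us to it: give up instantly
+        end.
+
+    %% resetting the counter to 0 frees the watcher again
+    set_free(Tid, Id) ->
+        ets:insert(Tid, {Id, 0}).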
+
+Then the lookup table comes into action: because we have the permission to message a watcher, we look up its pid and send it a message.
+
+Whenever we check a resource back in or the process that acquired it dies, the counter is reset to 0 and a new request can come in and take its place.
+
+Generally, this moves the bottleneck of similar applications away from a single process and its mailbox, to an evenly distributed set of watchers. The next bottleneck is then the ETS tables themselves (set with write and read concurrency options, respectively), which are less likely to become as much of a hot spot.
+
+## What's left to do? ##
+
+- More complete testing suite.
+- Adding a function call to allow the transfer of resource ownership from one process to another, to avoid messing with monitoring in the callback module.
+- Testing to make sure the callback modules can be updated with OTP relups and appups (so far untested).
5 include/state.hrl
@@ -0,0 +1,5 @@
+-record(config, {dispatch_name :: atom(),
+ num_watchers = 25 :: pos_integer(),
+ watcher_type = ets :: 'named' | 'ets',
+ dispatch_table :: ets:tid() | 'undefined',
+ worker_table :: ets:tid() | 'undefined'}).
BIN  rebar
Binary file not shown
31 rebar.config
@@ -0,0 +1,31 @@
+%% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
+%% ex: ts=4 sw=4 ft=erlang et
+%% This is a sample rebar.conf file that shows examples of some of rebar's
+%% options.
+
+%% == Core ==
+
+%% Additional library directories to add to the code path
+{lib_dirs, []}.
+
+%% == Erlang Compiler ==
+
+%% Erlang compiler options
+{erl_first_files, ["dispcount"]}.
+{erl_opts, [debug_info, {i, "include"}, {d,'DEBUG'}]}.
+
+%% == Common Test ==
+
+%% Option to pass extra parameters when launching Common Test
+{ct_extra_params, "-boot start_sasl -pa ebin/"}.
+
+%% == Dependencies ==
+
+%% Where to put any downloaded dependencies. Default is "deps"
+{deps_dir, "deps"}.
+
+%% What dependencies we have, dependencies can be of 3 forms, an application
+%% name as an atom, eg. mochiweb, a name and a version (from the .app file), or
+%% an application name, a version and the SCM details on how to fetch it (SCM
+%% type, location and revision). Rebar currently supports git, hg, bzr and svn.
+{deps, []}.
10 src/dispcount.app.src
@@ -0,0 +1,10 @@
+{application, dispcount, [
+ {description, "A dispatching library for resources and "
+ "task limiting based on shared counters"},
+ {vsn, "0.1.0"},
+ {applications, [kernel, stdlib]},
+ {registered, []},
+ {mod, {dispcount,[]}},
+ {modules, [dispcount, dispcount_supersup, dispcount_sup, dispcount_util,
+ dispcount_watcher, dispcount_serv]}
+]}.
49 src/dispcount.erl
@@ -0,0 +1,49 @@
+-module(dispcount).
+-behaviour(application).
+-export([start/2,stop/1]).
+-export([start_dispatch/3, stop_dispatch/1, dispatcher_info/1, checkout/1, checkin/3]).
+-export([behaviour_info/1]).
+
+%% eventually switch to -callback if it becomes backwards compatible
+behaviour_info(callbacks) ->
+ [{init,1},
+ {checkout, 2},
+ {checkin, 2},
+ {handle_info,2},
+ {dead,1},
+ {terminate,2},
+ {code_change,3}];
+behaviour_info(_Other) ->
+ undefined.
+
+-spec start(normal, _) -> {ok, pid()}.
+start(normal, _Args) ->
+ dispcount_supersup:start_link().
+
+-spec stop(_) -> ok.
+stop(_State) ->
+ ok.
+
+-spec stop_dispatch(Name::atom()) -> ok.
+stop_dispatch(Name) ->
+ dispcount_supersup:stop_dispatch(Name).
+
+-spec start_dispatch(Name::atom(), {module(), _}, term()) -> ok | already_started.
+start_dispatch(Name, Mod={M,A}, DispatchOpts) when is_atom(M) ->
+ Res = dispcount_supersup:start_dispatch(Name, Mod, DispatchOpts),
+ %% wait for all tables to be there. A bit messy, but it can be done:
+ dispcount_serv:wait_for_dispatch(Name, infinity),
+ Res.
+
+%% Should be called as infrequently as possible
+-spec dispatcher_info(Name::atom()) -> term().
+dispatcher_info(Name) ->
+ dispcount_serv:get_info(Name).
+
+-spec checkout(term()) -> {ok, term(), term()} | {error, term()}.
+checkout(Info) ->
+ dispcount_watcher:checkout(Info).
+
+-spec checkin(term(), term(), term()) -> ok.
+checkin(Info, CheckinRef, Resource) ->
+ dispcount_watcher:checkin(Info, CheckinRef, Resource).
109 src/dispcount_serv.erl
@@ -0,0 +1,109 @@
+%% In charge of relaying info about the supervisor when called.
+-module(dispcount_serv).
+-behaviour(gen_server).
+-include("state.hrl").
+
+-export([start_link/4, wait_for_dispatch/2, get_info/1]).
+-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
+ terminate/2, code_change/3]).
+
+%%%%%%%%%%%%%%%%%
+%%% INTERFACE %%%
+%%%%%%%%%%%%%%%%%
+-spec start_link(Parent::pid(), Name::atom(), {module(),[term()]}, [term(),...]) -> {ok, pid()}.
+start_link(Parent, Name, {M,A}, Opts) ->
+ gen_server:start_link(?MODULE, {Parent, Name, {M,A}, Opts}, []).
+
+-spec wait_for_dispatch(Name::atom(), infinity | pos_integer()) -> ok.
+wait_for_dispatch(Name, Timeout) ->
+ gen_server:call(get_name(Name), wait_for_tables, Timeout).
+
+-spec get_info(Name::atom()) -> #config{}.
+get_info(Name) ->
+ gen_server:call(get_name(Name), get_info).
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%% GEN_SERVER CALLBACKS %%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+init({Parent, Name, {M,A}, Opts}) ->
+ %% This one needs to go fast because we're gonna mess up the synchronous
+ %% starts of servers for the sake of the pool. For this reason, we'll
+ %% temporarily use this process to receive all requests and just forward
+ %% them when the time has come, maybe.
+ ConfTmp = init_tables(Opts),
+ Conf = ConfTmp#config{dispatch_name=Name, num_watchers=proplists:get_value(resources,Opts,10)},
+ SupSpec =
+ {{simple_one_for_one, proplists:get_value(maxr, Opts, 1), proplists:get_value(maxt, Opts, 60)},
+ [{watchers,
+ {dispcount_watcher, start_link, [Conf,{M,A}]},
+ proplists:get_value(restart,Opts,permanent),
+ proplists:get_value(shutdown,Opts,5000),
+ worker,
+ [dispcount_watcher,M]}]}, % <- check to make sure this can survive stuff
+ ChildSpec = {watchers_sup, {watchers_sup, start_link, [SupSpec]},
+ permanent, infinity, supervisor, [watchers_sup]},
+ self() ! continue_init,
+ register(get_name(Name), self()),
+ {ok, {Parent, ChildSpec, Conf}}.
+
+handle_call(get_info, _From, S = #config{}) ->
+ {reply, {ok, S}, S};
+handle_call(wait_for_tables, _From, S = #config{num_watchers=N, dispatch_table=Tid}) ->
+ %% there should be N + 1 entries in the dispatch table
+ case ets:info(Tid, size) of
+ X when X =:= N+1 ->
+ {reply, ok, S};
+ _ ->
+ timer:sleep(1),
+ handle_call(wait_for_tables, _From, S)
+ end;
+handle_call(_Call, _From, State) ->
+ {noreply, State}.
+
+handle_cast(_Cast, State) ->
+ {noreply, State}.
+
+handle_info(continue_init, {Parent, ChildSpec, Conf}) ->
+ {ok, Sup} = supervisor:start_child(Parent, ChildSpec),
+ ok = start_watchers(Sup, Conf),
+ {noreply, Conf};
+handle_info(_Info, State) ->
+ {noreply, State}.
+
+code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
+
+terminate(_Reason, _State) ->
+ ok.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%
+%%% PRIVATE & HELPERS %%%
+%%%%%%%%%%%%%%%%%%%%%%%%%
+init_tables(Opts) ->
+ case proplists:get_value(watcher_type, Opts, ets) of
+ ets -> %% here
+ Dispatch = ets:new(dispatch_table, [set, public, {write_concurrency,true}]),
+ Worker = ets:new(worker_table, [set, public, {read_concurrency,true}]),
+ true = ets:insert(Dispatch, {ct,0}),
+ #config{watcher_type = ets,
+ dispatch_table = Dispatch,
+ worker_table = Worker};
+ named -> %% here
+ Dispatch = ets:new(dispatch_table, [set, public, {write_concurrency,true}]),
+ true = ets:insert(Dispatch, {ct,0}),
+ #config{watcher_type = named,
+ dispatch_table = Dispatch,
+ worker_table = undefined};
+ Other ->
+ erlang:error({bad_option,{watcher_type,Other}})
+ end.
+
+start_watchers(Sup, #config{num_watchers=Num}) ->
+ [start_watcher(Sup, Id) || Id <- lists:seq(1,Num)],
+ ok.
+
+start_watcher(Sup, Id) ->
+ supervisor:start_child(Sup, [Id]).
+
+get_name(Name) ->
+ list_to_atom(atom_to_list(Name) ++ "_serv").
17 src/dispcount_sup.erl
@@ -0,0 +1,17 @@
+-module(dispcount_sup).
+-behaviour(supervisor).
+-export([start_link/3, init/1]).
+
+-spec start_link(Name::atom(), module(), [term()]) -> {ok, pid()}.
+start_link(Name, Mod, InitOpts) ->
+ supervisor:start_link({local, Name}, ?MODULE, {Name,Mod,InitOpts}).
+
+init({Name,Mod,InitOpts}) ->
+ %% dispcount_sup is started by dispcount_serv
+ {ok, {{one_for_all, 1, 60}, % once a minute is pretty generous
+ [%{watchers_sup, {dispcount_sup, start_link, [Name, Mod, InitOpts]},
+ % permanent, infinity, supervisor, [dispcount_sup]},
+ {info_server,
+ {dispcount_serv, start_link, [self(), Name, Mod, InitOpts]},
+ permanent, infinity, worker, [dispcount_serv]}
+ ]}}.
32 src/dispcount_supersup.erl
@@ -0,0 +1,32 @@
+-module(dispcount_supersup).
+-behaviour(supervisor).
+-export([start_dispatch/3, stop_dispatch/1, start_link/0, init/1]).
+
+%%%%%%%%%%%%%%%%%
+%%% INTERFACE %%%
+%%%%%%%%%%%%%%%%%
+-spec start_link() -> {ok, pid()}.
+start_link() ->
+ supervisor:start_link({local,?MODULE}, ?MODULE, []).
+
+-spec start_dispatch(Name::atom(), {module(),[term()]}, Opts::[term()]) -> ok | already_started.
+start_dispatch(Name, Mod, Opts) ->
+ case supervisor:start_child(?MODULE, [Name, Mod, Opts]) of
+ {ok, _} -> ok;
+ {error,{already_started,_}} -> already_started
+ end.
+
+-spec stop_dispatch(Name::atom()) -> ok.
+stop_dispatch(Name) ->
+ case whereis(Name) of
+ Pid when is_pid(Pid) ->
+ supervisor:terminate_child(?MODULE, Pid);
+ _ ->
+ ok
+ end.
+
+init([]) ->
+ {ok, {{simple_one_for_one, 1, 60},
+ [{dispcount_sup,
+ {dispcount_sup, start_link, []},
+ permanent, infinity, supervisor, [dispcount_sup]}]}}.
155 src/dispcount_watcher.erl
@@ -0,0 +1,155 @@
+-module(dispcount_watcher).
+-behaviour(gen_server).
+-include("state.hrl").
+
+-record(state, {callback :: module(),
+ callback_state :: term(),
+ config :: #config{},
+ id :: pos_integer(),
+ ref :: reference() | undefined}).
+
+-export([start_link/3, checkout/1, checkin/3]).
+-export([init/1, handle_call/3, handle_cast/2,
+ handle_info/2, code_change/3, terminate/2]).
+
+%%%%%%%%%%%%%%%%%%%%%%%%
+%%% PUBLIC INTERFACE %%%
+%%%%%%%%%%%%%%%%%%%%%%%%
+-spec start_link(#config{}, {module(), term()}, pos_integer()) -> {ok, pid()} | {error, _} | ignore.
+start_link(Conf, Callback={_,_}, Id) ->
+ gen_server:start_link(?MODULE, {Id, Conf, Callback}, []).
+
+-spec checkout(#config{}) -> {ok, Ref::term(), Resource::term()} | {error, Reason::term()}.
+checkout(Conf) ->
+ checkout(self(), Conf).
+
+-spec checkout(pid(), #config{}) -> {ok, Ref::term(), Resource::term()} | {error, Reason::term()}.
+checkout(ToPid,#config{num_watchers=Num, watcher_type=Type, dispatch_table=DTid, worker_table=WTid}) ->
+ case {Type, is_free(DTid, Id = dispatch_id(Num))} of
+ {ets, true} ->
+ [{_,Pid}] = ets:lookup(WTid, Id),
+ gen_server:call(Pid, {get,ToPid});
+ {named, true} ->
+ gen_server:call(get_name(Id), {get,ToPid});
+ {_, false} ->
+ {error, busy}
+ end.
+
+-spec checkin(#config{}, Ref::term(), Resource::term()) -> ok.
+checkin(#config{}, {Pid,Ref}, Resource) ->
+ %% we cheated, using a Pid for the CheckRef. Dirty optimisation!
+ gen_server:cast(Pid, {put, Ref, Resource}).
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%% GEN_SERVER CALLBACKS %%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+init({Id,C=#config{watcher_type=ets,dispatch_table=DTid,worker_table=WTid},{M,A}}) ->
+ ets:insert(WTid, {Id, self()}),
+ ets:insert(DTid, {Id, 0}),
+ init(Id,C,M,A);
+init({Id,C=#config{watcher_type=named,dispatch_table=Tid},{M,A}}) ->
+ register(get_name(Id), self()),
+ ets:insert(Tid, {Id, 0}),
+ init(Id,C,M,A).
+
+handle_call({get, Pid}, _From, S=#state{callback=M, callback_state=CS, ref=undefined}) ->
+ try M:checkout(Pid, CS) of
+ {ok, Res, NewCS} ->
+ MonRef = erlang:monitor(process, Pid),
+ {reply, {ok, {self(),MonRef}, Res}, S#state{callback_state=NewCS, ref=MonRef}};
+ {error, Reason, NewCS} ->
+ {reply, {error, Reason}, S#state{callback_state=NewCS}};
+ {stop, Reason, NewCS} ->
+ M:terminate(Reason, NewCS),
+ {stop, Reason, S}
+ catch
+ Type:Reason ->
+ {stop, {Type,Reason}, S}
+ end;
+handle_call({get, _Pid}, _From, State) -> % busy
+ {reply, {error, busy}, State};
+handle_call(_Call, _From, State) ->
+ {noreply, State}.
+
+handle_cast({put, Ref, Res},
+ S=#state{callback=M, callback_state=CS, config=Conf, id=Id, ref=Ref}) ->
+ try M:checkin(Res, CS) of
+ {ok, NewCS} ->
+ #config{dispatch_table=DTid} = Conf,
+ erlang:demonitor(Ref, [flush]),
+ set_free(DTid, Id),
+ {noreply, S#state{ref=undefined,callback_state=NewCS}};
+ {ignore, NewCS} ->
+ {noreply, S#state{callback_state=NewCS}};
+ {stop, Reason, NewCS} ->
+ M:terminate(Reason, NewCS),
+ {stop, Reason, S}
+ catch
+ Type:Reason ->
+ {stop, {Type,Reason}, S}
+ end;
+handle_cast({put, _Ref, _Res}, State) -> % nomatch on refs
+ {noreply, State};
+handle_cast(_Cast, State) ->
+ {noreply, State}.
+
+handle_info({'DOWN', Ref, process, _Pid, _Reason},
+ S=#state{ref=Ref, callback=M, callback_state=CS, config=Conf, id=Id}) ->
+ try M:dead(CS) of
+ {ok, NewCS} ->
+ #config{dispatch_table=DTid} = Conf,
+ set_free(DTid, Id),
+ {noreply, S#state{ref=undefined,callback_state=NewCS}};
+ {stop, Reason, NewCS} ->
+ M:terminate(Reason, NewCS),
+ {stop, Reason, S}
+ catch
+ Type:Reason ->
+ {stop, {Type,Reason}, S}
+ end;
+handle_info(Msg, S=#state{callback=M, callback_state=CS}) ->
+ try M:handle_info(Msg, CS) of
+ {ok, NewCS} ->
+ {noreply, S#state{callback_state=NewCS}};
+ {stop, Reason, NewCS} ->
+ M:terminate(Reason, NewCS),
+ {stop, Reason, S}
+ catch
+ Type:Reason ->
+ {stop, {Type,Reason}, S}
+ end.
+
+%% How do we handle things for the callback module??
+code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
+
+terminate(_Reason, _State) ->
+ ok.
+
+%%%%%%%%%%%%%%%%%%%%%%%
+%%% HELPERS/PRIVATE %%%
+%%%%%%%%%%%%%%%%%%%%%%%
+get_name(Id) ->
+ list_to_atom("#"++atom_to_list(?MODULE)++integer_to_list(Id)).
+
+init(Id,Conf,M,A) ->
+ case M:init(A) of
+ {ok, S} ->
+ {ok, #state{callback=M,callback_state=S,config=Conf,id=Id}};
+ X -> X
+ end.
+
+dispatch_id(Num) ->
+ erlang:phash2({now(),self()}, Num) + 1.
+
+is_free(Tid, Id) ->
+ %% We optionally keep a tiny message queue in there,
+ %% which should cause no overhead but be fine to deal
+ %% with short spikes.
+ case ets:update_counter(Tid, Id, {2,1}) of
+ 1 -> true;
+ _ -> false
+ end.
+
+set_free(Tid, Id) ->
+ ets:insert(Tid, {Id,0}).
18 src/notes.txt
@@ -0,0 +1,18 @@
+-- two options available: registered-based or ets-based dispatching
+
+-- get/put and base init are all generic operations to be split into a behaviour
+
+-- the initial supervision needs to start N of them at once and be able to return the config on demand
+
+- the pool isn't an app, but must be possible to fit
+ under a supervision tree. Although it should be possible for it to have its own supervision tree. If you depend on it, you use it.
+
+[dispcount_supersup]
+ |
+start({Mod,Args,Opts})
+ |
+ [dispcount_sup]
+ | \
+ | [watchers_sup]
+ | |
+ [serv] [watcher :: custom module]
12 src/watchers_sup.erl
@@ -0,0 +1,12 @@
+-module(watchers_sup).
+-behaviour(supervisor).
+-export([start_link/1, init/1]).
+
+-spec start_link({{supervisor:strategy(), pos_integer(), pos_integer()}, [supervisor:child_spec()]}) -> {ok, pid()}.
+start_link(Spec) ->
+ supervisor:start_link(?MODULE, Spec).
+
+%% the spec is coming from dispcount_serv, tunneled through
+%% dispcount_sup.
+init(Spec) ->
+ {ok, Spec}.
108 test/dispcount_SUITE.erl
@@ -0,0 +1,108 @@
+-module(dispcount_SUITE).
+-include_lib("common_test/include/ct.hrl").
+-export([all/0, init_per_suite/1, end_per_suite/1,
+ init_per_testcase/2, end_per_testcase/2]).
+-export([starting/1, stopping/1, overload/1, dead/1]).
+
+all() -> [starting, stopping, overload, dead].
+
+init_per_suite(Config) ->
+ application:start(dispcount),
+ Config.
+
+end_per_suite(_Config) ->
+ ok.
+
+init_per_testcase(overload, Config) ->
+ ok = dispcount:start_dispatch(
+ ref_overload_dispatcher,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,2}]
+ ),
+ {ok, Info} = dispcount:dispatcher_info(ref_overload_dispatcher),
+ [{info, Info} | Config];
+init_per_testcase(dead, Config) ->
+ ok = dispcount:start_dispatch(
+ ref_dead_dispatcher,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,1}]
+ ),
+ {ok, Info} = dispcount:dispatcher_info(ref_dead_dispatcher),
+ [{info, Info} | Config];
+init_per_testcase(_, Config) ->
+ Config.
+
+end_per_testcase(overload, _Config) ->
+ dispcount:stop_dispatch(ref_overload_dispatcher);
+end_per_testcase(dead, _Config) ->
+ dispcount:stop_dispatch(ref_dead_dispatcher);
+end_per_testcase(_, Config) ->
+ ok.
+
+starting(_Config) ->
+ ok = dispcount:start_dispatch(
+ ref_dispatcher,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,10}]
+ ),
+ {ok, Info} = dispcount:dispatcher_info(ref_dispatcher),
+ case dispcount:checkout(Info) of
+ {ok, CheckinReference, Resource} ->
+ timer:sleep(10),
+ dispcount:checkin(Info, CheckinReference, Resource);
+ {error, busy} ->
+ give_up
+ end.
+
+stopping(_Config) ->
+ ok = dispcount:start_dispatch(
+ stop_dispatch,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,1}]
+ ),
+ already_started = dispcount:start_dispatch(
+ stop_dispatch,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,1}]
+ ),
+ dispcount:stop_dispatch(stop_dispatch),
+ ok = dispcount:start_dispatch(
+ stop_dispatch,
+ {ref_dispatch, []},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,1}]
+ ),
+ dispcount:stop_dispatch(stop_dispatch).
+
+overload(Config) ->
+ %% should be two workers max. Loop until we reach overload,
+ %% then a bit more to make sure nothing is available (damn hashing makes
+ %% things non-deterministic), then free resources and check that we
+ %% can access more.
+ Info = ?config(info, Config),
+ %% the list comprehension monad, hell yes! Skip all busy calls and see that
+ %% only two resources are acquired
+ Resources = [{Ref, Res} || _ <- lists:seq(1,20), {ok, Ref, Res} <- [dispcount:checkout(Info)]],
+ 2 = length(Resources),
+ [] = [{Ref, Res} || _ <- lists:seq(1,100), {ok, Ref, Res} <- [dispcount:checkout(Info)]],
+ %% checking the resources back in
+ [dispcount:checkin(Info, Ref, Res) || {Ref, Res} <- Resources],
+ %% then we're able to get more in.
+ timer:sleep(100),
+ Resources2 = [{Ref, Res} || _ <- lists:seq(1,20), {ok, Ref, Res} <- [dispcount:checkout(Info)]],
+ 2 = length(Resources2).
+
+dead(Config) ->
+ %% The dispatcher for this test has 1 resource available.
+ Info = ?config(info, Config),
+ %% resource owners should be monitored automatically and handled when stuff dies.
+ spawn(fun() -> dispcount:checkout(Info), timer:sleep(500) end),
+ timer:sleep(100),
+ {error, busy} = dispcount:checkout(Info),
+ timer:sleep(500),
+ {ok, _Ref, _Res} = dispcount:checkout(Info).
39 test/http_dispatch.erl
@@ -0,0 +1,39 @@
+-module(http_dispatch).
+-behaviour(dispcount).
+-export([init/1, checkout/2, checkin/2, handle_info/2, dead/1, terminate/2, code_change/3]).
+
+-record(state, {resource, given=false, port}).
+
+init([{port,Num}]) ->
+ {ok,Socket} = gen_tcp:connect({127,0,0,1}, Num, [binary]),
+ {ok, #state{resource=Socket, port=Num}}.
+
+checkout(_From, State = #state{given=true}) ->
+ {error, busy, State};
+checkout(From, State = #state{resource=Socket}) ->
+ gen_tcp:controlling_process(Socket, From),
+ {ok, Socket, State#state{given=true}}.
+
+checkin(Socket, State = #state{resource=Socket, given=true}) ->
+ {ok, State#state{given=false}};
+checkin(_Socket, State) ->
+ %% The socket doesn't match the one we had -- an error happened somewhere
+ {ignore, State}.
+
+dead(State) ->
+ %% aw shoot, someone lost our resource, we gotta create a new one:
+ {ok, NewSocket} = gen_tcp:connect({127,0,0,1}, State#state.port, [binary]),
+ {ok, State#state{resource=NewSocket,given=false}}.
+ %% alternatively:
+ %% {stop, Reason, State}
+
+handle_info(_Msg, State) ->
+ %% e.g. unexpected TCP messages if the socket were set to {active,once}
+ {ok, State}.
+
+terminate(_Reason, _State) ->
+ %% let the GC clean the socket.
+ ok.
+
+code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
27 test/ref_dispatch.erl
@@ -0,0 +1,27 @@
+-module(ref_dispatch).
+-behaviour(dispcount).
+-export([init/1, checkout/2, checkin/2, handle_info/2, dead/1,
+ terminate/2, code_change/3]).
+
+init([]) ->
+ {ok, make_ref()}.
+
+checkout(_From, Ref) ->
+ {ok, Ref, undefined}.
+
+checkin(Ref, undefined) ->
+ {ok, Ref};
+checkin(_SomeRef, Ref) ->
+ {ignore, Ref}.
+
+dead(undefined) ->
+ {ok, make_ref()}.
+
+handle_info(_Msg, State) ->
+ {ok, State}.
+
+terminate(_Reason, _State) ->
+ ok.
+
+code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.