Merge branch 'master' into r14

commit eae00a3c825feac73307712c45c1fe2483a9135a (parents: 2f935e4, a4e5076)
@ferd authored
1  .gitignore
@@ -4,3 +4,4 @@
*.COVER.*
logs/*
ebin/*
+deps/*
24 Makefile
@@ -0,0 +1,24 @@
+REBAR=./rebar
+
+all:
+	@$(REBAR) get-deps compile
+
+get-deps:
+	@$(REBAR) get-deps
+
+edoc:
+	@$(REBAR) doc
+
+test:
+	@rm -rf .eunit
+	@mkdir -p .eunit
+	@$(REBAR) skip_deps=true eunit
+
+clean:
+	@rm -rf deps/ ebin/ logs/
+
+build_plt:
+	@$(REBAR) build-plt
+
+dialyzer:
+	@$(REBAR) dialyze
18 README.markdown
@@ -6,13 +6,15 @@ Dispcount is an attempt at making more efficient resource dispatching than usual
There have been a few characteristics assumed to be present for the design of dispcount:
-- resources are limited, but the demand for them is superior to their availability.
+- resources are limited, but the demand for them is far superior to their availability.
- requests for resources are *always* incoming
- because of the previous point, it is possible and preferred to simply not queue requests for busy resources, but instantly return. Newer requests will take their spot
- low latency to know whether or not a resource is available is more important than being able to get all queries to run.
If you cannot afford to ignore a query and wish to eventually serve every one of them, dispcount might not be for you. Otherwise, you'll need to queue them yourself because all it does is grant you a resource or tell you it's busy.
+Also note that the dispatching of resources is done on a hashing basis and doesn't guarantee that all resources will be allocated before a 'busy' response shows up. As mentioned earlier, dispcount makes the assumption that there are limited resources and the demand is superior to their availability; the more requests for resources there are, the better the distribution should be. See 'how does it work' for more details.
+
## How to build ##
`$ ./rebar compile`
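To make the hash-based dispatching mentioned in the paragraph added above a bit more concrete, here is a minimal sketch of how a caller could be mapped to one of `N` watchers. The module name and hash input are invented for illustration and are not dispcount's actual code; the point is that two callers can hash to the same watcher, so one may be told 'busy' while other resources sit idle.

```erlang
%% Illustrative sketch only -- not dispcount's actual hashing code.
-module(hash_dispatch_sketch).
-export([pick_watcher/1]).

%% Map the calling process (plus a varying term) onto one of NumWatchers
%% slots. Two callers hashing to the same slot is what produces a 'busy'
%% reply even while other watchers still hold free resources.
pick_watcher(NumWatchers) when NumWatchers > 0 ->
    erlang:phash2({self(), os:timestamp()}, NumWatchers) + 1.
```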
@@ -106,7 +108,7 @@ The next call is the `dead/1` function:
dead(undefined) ->
    {ok, make_ref()}.
-`dead(State)` is called whenever the process that checked out a given resource has died. This is because dispcount automatically monitors them so you don't need to do it yourself. If it sees the resource owner died, it calls thhat function.
+`dead(State)` is called whenever the process that checked out a given resource has died. This is because dispcount automatically monitors them so you don't need to do it yourself. If it sees the resource owner died, it calls that function.
This lets you create a new instance of a resource to distribute later on, if required or possible. As an example, if we were to use a permanent connection to a database as a resource, then this is where we'd set a new connection up and then keep going as if nothing went wrong.
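As a concrete (and purely hypothetical) version of the database example above, a `dead/1` clause could rebuild the connection on the spot. `my_db:connect/0` is a stand-in for whatever client library would really be used:

```erlang
%% Hypothetical dead/1 clause for a callback module whose resource is a
%% database connection; my_db:connect/0 is a made-up helper, not a real API.
dead(_LostState) ->
    {ok, Conn} = my_db:connect(),
    %% Hand a fresh connection back as the resource to dispatch next.
    {ok, Conn}.
```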
@@ -140,9 +142,15 @@ Here's a similar callback module to handle HTTP sockets (untested):
    {error, busy, State};
checkout(From, State = #state{resource=Socket}) ->
    gen_tcp:controlling_process(Socket, From),
-    {ok, Socket, State#state{given=true}}.
+    %% We give our own pid back so that the client can make this
+    %% callback module the controlling process again before
+    %% handing it back.
+    {ok, {self(), Socket}, State#state{given=true}}.
checkin(Socket, State = #state{resource=Socket, given=true}) ->
+    %% This assumes the client made us the controlling process again.
+    %% This might be done via a client lib wrapping dispcount calls of
+    %% some sort.
    {ok, State#state{given=false}};
checkin(_Socket, State) ->
    %% The socket doesn't match the one we had -- an error happened somewhere
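Since `checkout/2` now hands out `{self(), Socket}`, a client-side wrapper could give control of the socket back to the watcher before checking it in, which is what the new comments assume. The sketch below is illustrative: `with_socket/2` is not a dispcount function, though the `checkout/1` and `checkin/3` call shapes follow the test suite in this commit.

```erlang
%% Illustrative client-side wrapper; with_socket/2 is not part of dispcount.
with_socket(DispatchInfo, Fun) ->
    case dispcount:checkout(DispatchInfo) of
        {ok, CheckinRef, {WatcherPid, Socket}} ->
            try
                Fun(Socket)
            after
                %% Make the watcher the controlling process again before
                %% checking in, as the checkin/2 clause above expects.
                gen_tcp:controlling_process(Socket, WatcherPid),
                dispcount:checkin(DispatchInfo, CheckinRef, Socket)
            end;
        {error, busy} ->
            {error, busy}
    end.
```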
@@ -174,7 +182,7 @@ We'd see mailbox queue build-up, busy schedulers, and booming memory. Dispcount
The core concept of dispcount is based on two ETS tables: a dispatch table (write-only) and a worker matchup table (read-only). Two tables because what costs the most performance with ETS in terms of concurrency is switching between reading and writing.
-In each of the table, `N` entries are added: one for each resource available, matching with a process that manages that resource (a *watcher*). Persistent hashing of the resources allows to dispatch queries uniformly to all of these watchers. Once you know which watcher your request is dedicated to, the dispatch table is called into action.
+In each of the tables, `N` entries are added: one for each available resource, each matched with a process that manages that resource (a *watcher*). Persistent hashing of the resources lets queries be dispatched uniformly across all of these watchers. Once you know which watcher your request is dedicated to, the dispatch table is called into action. The persistent hashing does mean that it is not possible to guarantee that all free resources will be allocated before 'busy' messages start showing up, only that at a sufficiently high level of demand, the distribution should be roughly equal across all watchers, eventually dispatching every resource.
The dispatch table manages to allow both reads and writes while remaining write-only. The trick is to use the `ets:update_counter` functions, which atomically increment a counter and return the value, although the operation is only writing and communicating a minimum of information.
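The write-only counter trick described above can be sketched with `ets:update_counter/3`. The table layout and function below are invented for illustration, and the real dispatcher also has to reset counters when resources are checked back in; this only shows why a single atomic write doubles as a 'free or busy' test:

```erlang
%% Simplified illustration of the dispatch-table idea; not dispcount's code.
%% The table is assumed to hold one {WatcherId, Counter} row per resource.
try_acquire(DispatchTable, WatcherId) ->
    %% ets:update_counter/3 atomically adds 1 and returns the new value,
    %% so the write itself tells us whether we got there first.
    case ets:update_counter(DispatchTable, WatcherId, {2, 1}) of
        1 -> free;   %% counter went 0 -> 1: the slot is ours
        _ -> busy    %% someone else already bumped it; fail fast
    end.
```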
@@ -192,6 +200,6 @@ The error you see is likely `{start_spec,{invalid_shutdown,infinity}}`. This is
## What's left to do? ##
-- More complete testing suite.
- Adding a function call to allow the transfer of ownership from one process to another, to avoid messing with monitoring in the callback module.
- Testing to make sure the callback modules can be updated with OTP relups and appups. This is so far untested.
+- Allowing dynamic resizing of pools.
6 rebar.config
@@ -12,7 +12,7 @@
%% Erlang compiler options
{erl_first_files, ["dispcount"]}.
-{erl_opts, [debug_info, {i, "include"}, {d,'DEBUG'}]}.
+{erl_opts, [debug_info, {i, "include"}]}.
%% == Common Test ==
@@ -28,4 +28,6 @@
%% name as an atom, eg. mochiweb, a name and a version (from the .app file), or
%% an application name, a version and the SCM details on how to fetch it (SCM
%% type, location and revision). Rebar currently supports git, hg, bzr and svn.
-{deps, []}.
+{deps, [
+ {proper, "1.0", {git, "https://github.com/manopapad/proper.git", "master"}}
+]}.
37 test/dispcount_SUITE.erl
@@ -2,9 +2,11 @@
-include_lib("common_test/include/ct.hrl").
-export([all/0, init_per_suite/1, end_per_suite/1,
init_per_testcase/2, end_per_testcase/2]).
--export([starting/1, stopping/1, overload/1, dead/1, error/1]).
+-export([starting/1, stopping/1, overload/1, dead/1, error/1,
+ restart/1]).
-all() -> [starting, stopping, overload, dead, error].
+all() -> [starting, stopping, overload, dead, error,
+ restart].
init_per_suite(Config) ->
application:start(dispcount),
@@ -40,14 +42,28 @@ init_per_testcase(error, Config) ->
),
{ok, Info} = dispcount:dispatcher_info(ref_error_dispatcher),
[{info, Info} | Config];
+init_per_testcase(restart, Config) ->
+ Ref = make_ref(),
+ ok = dispcount:start_dispatch(
+ ref_restart_dispatcher,
+ {ref_dispatch_restart, [Ref]},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,100},{maxt,1},{resources,1}]
+ ),
+ {ok, Info} = dispcount:dispatcher_info(ref_restart_dispatcher),
+ [{info, Info},{ref,Ref} | Config];
init_per_testcase(_, Config) ->
Config.
-end_per_testcase(overload, Config) ->
+end_per_testcase(overload, _Config) ->
dispcount:stop_dispatch(ref_overload_dispatcher);
-end_per_testcase(dead, Config) ->
+end_per_testcase(dead, _Config) ->
dispcount:stop_dispatch(ref_dead_dispatcher);
-end_per_testcase(_, Config) ->
+end_per_testcase(error, _Config) ->
+ dispcount:stop_dispatch(ref_error_dispatcher);
+end_per_testcase(restart, _Config) ->
+ dispcount:stop_dispatch(ref_restart_dispatcher);
+end_per_testcase(_, _Config) ->
ok.
starting(_Config) ->
@@ -125,3 +141,14 @@ error(Config) ->
{error, denied} = dispcount:checkout(Info),
%% if we get {error, busy}, this is an error.
{error, denied} = dispcount:checkout(Info).
+
+restart(Config) ->
+ %% One resource available.
+ Info = ?config(info, Config),
+ Res = ?config(ref, Config),
+ put(crash, true),
+ %% Crashing the handler should make it possible to restart it
+ {'EXIT',_} = (catch dispcount:checkout(Info)),
+ timer:sleep(500),
+ put(crash, false),
+ {ok, _Ref, Res} = dispcount:checkout(Info).
105 test/dispcount_prop.erl
@@ -0,0 +1,105 @@
+%%% Basic sequential statem based tests for dispcount.
+-module(dispcount_prop).
+-include_lib("proper/include/proper.hrl").
+-behaviour(proper_statem).
+
+-export([command/1, initial_state/0, next_state/3, postcondition/3,
+ precondition/2]).
+-export([checkin/2]).
+
+-define(SERVER, dispcount).
+-define(NAME, prop_dispatch).
+-define(INFO, {call, erlang, element, [2,{call, dispcount, dispatcher_info, [?NAME]}]}).
+
+-record(state, {resource=[], last}).
+
+initial_state() ->
+ #state{}.
+
+%% Checking in and checking out is always valid
+command(#state{resource=R}) ->
+ oneof([{call, ?SERVER, checkout, [?INFO]}] ++
+ [{call, ?MODULE, checkin, [?INFO, hd(R)]} || R =/= []]).
+
+next_state(S=#state{resource=L}, V, {call, _, checkout, _}) ->
+ S#state{resource=[V|L], last=checkout};
+next_state(S=#state{resource=[_|L]}, _V, {call, _, checkin, _}) ->
+ S#state{resource=L, last=checkin}.
+
+%% when we have a resource checked in or out, we can either try to
+%% check it in or out again
+precondition(#state{resource=R}, {call,_,checkin,_}) when R =/= [] -> true;
+precondition(#state{resource=R}, {call,_,checkout,_}) when R =/= [] -> true;
+%% Without a resource, we can only check stuff out.
+precondition(#state{resource=[]}, {call,_,checkout,_}) -> true;
+precondition(_, _) -> false.
+
+%% The postconditions are a little bit more complex.
+%% The following rules are for resources that we managed to check out.
+postcondition(#state{resource=[{ok,_Ref,_Res}|_]}, {call, _, checkin, _}, ok) -> true;
+postcondition(#state{resource=[{ok,_Ref,_Res}|_]}, {call, _, checkout, _}, {error,busy}) -> true;
+%% These postconditions check for what happens when we were busy beforehand
+%% This state pretty much allows anything to go
+postcondition(#state{resource=[{error,busy}|_]}, {call, _, checkout, _}, {error,busy}) -> true;
+postcondition(#state{resource=[{error,busy}|_]}, {call, _, checkin, _}, busy_checkin) -> true;
+%% Checking out is always fine when we had no resource checked out beforehand
+postcondition(#state{resource=[]}, {call, _, checkout, _}, {ok,_Ref,_Res}) -> true;
+%% After a fast checkin, a following checkout on the same resource might still end up busy.
+postcondition(#state{resource=[],last=checkin}, {call, _, checkout, _}, {error,busy}) -> true;
+%% Gotta make sure we didn't manage to checkout the same resource twice
+postcondition(#state{resource=L=[_|_]}, {call, _, checkout, _}, {ok,_Ref,Res}) ->
+ case lists:keyfind(Res,3,L) of
+ false -> true;
+ {ok,_,Res} -> false
+ end;
+postcondition(_, _, _) -> false.
+
+prop_nocrash() ->
+ ?FORALL(Cmds, commands(?MODULE, #state{}),
+ begin
+ %% the ETS table works with the dispcount dispatcher
+ %% in this test to assign increasing IDs to each dispatch_watcher
+ %% instance.
+ Tid = ets:new(ids, [public,set]),
+ ets:insert(Tid, {id,0}),
+ application:stop(dispcount),
+ application:start(dispcount),
+ ok = ?SERVER:start_dispatch(?NAME,
+ {?NAME, [Tid]},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,10}]),
+ {H,_S,R} = run_commands(?MODULE, Cmds),
+ ets:delete(Tid),
+ ?WHENFAIL(io:format("History: ~p~n",[{Cmds,H}]),
+ aggregate(command_names(Cmds),
+ R =:= ok))
+ end).
+
+prop_parallel_nocrash() ->
+ ?FORALL(Cmds, parallel_commands(?MODULE, #state{}),
+ begin
+ %% the ETS table works with the dispcount dispatcher
+ %% in this test to assign increasing IDs to each dispatch_watcher
+ %% instance.
+ Tid = ets:new(ids, [public,set]),
+ ets:insert(Tid, {id,0}),
+ application:stop(dispcount),
+ application:start(dispcount),
+ ok = ?SERVER:start_dispatch(?NAME,
+ {?NAME, [Tid]},
+ [{restart,permanent},{shutdown,4000},
+ {maxr,10},{maxt,60},{resources,1}]),
+ {H,P,R} = run_parallel_commands(?MODULE, Cmds),
+ ets:delete(Tid),
+ ?WHENFAIL(io:format("Cmds:~p~nP:~p~nH:~p~nR:~p~n",[Cmds,P,H,R]),
+ R =:= ok)
+ end).
+
+
+%% Simple wrapper to work around limitations of statem stuff.
+%% busy_checkin is basically a hack to circumvent the idea that the
+%% previous result was a busy thing and we discard it through this function
+checkin(_Info, {error,busy}) -> busy_checkin;
+%% unpack & call the right checkin
+checkin(Info, {ok, Ref, Res}) -> ?SERVER:checkin(Info, Ref, Res).
+
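With PropEr now declared in rebar.config, the new properties can presumably be run from an Erlang shell along these lines (code path setup depends on how the project is built, for instance via the new Makefile's `test` target):

```erlang
%% Assumes dispcount, its test modules, and PropEr are on the code path;
%% 100 is an arbitrary number of test cases.
true = proper:quickcheck(dispcount_prop:prop_nocrash(), 100),
true = proper:quickcheck(dispcount_prop:prop_parallel_nocrash(), 100).
```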
29 test/prop_dispatch.erl
@@ -0,0 +1,29 @@
+-module(prop_dispatch).
+-behaviour(dispcount).
+-export([init/1, checkout/2, checkin/2, handle_info/2, dead/1,
+ terminate/2, code_change/3]).
+
+init([Tid]) ->
+ Id = ets:update_counter(Tid, id, 1),
+ {ok, Id}.
+
+checkout(_From, Id) ->
+ {ok, Id, Id}.
+
+checkin(Id, undefined) ->
+ {ok, Id};
+checkin(_SomeId, Id) ->
+ {ignore, Id}.
+
+dead(Id) ->
+ {ok, Id}.
+
+handle_info(_Msg, State) ->
+ {ok, State}.
+
+terminate(_Reason, _State) ->
+ ok.
+
+code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
+
2  test/ref_dispatch.erl
@@ -11,7 +11,7 @@ checkout(_From, Ref) ->
checkin(Ref, undefined) ->
{ok, Ref};
-checkin(SomeRef, Ref) ->
+checkin(_SomeRef, Ref) ->
{ignore, Ref}.
dead(undefined) ->
32 test/ref_dispatch_restart.erl
@@ -0,0 +1,32 @@
+-module(ref_dispatch_restart).
+-behaviour(dispcount).
+-export([init/1, checkout/2, checkin/2, handle_info/2, dead/1,
+ terminate/2, code_change/3]).
+
+init([Ref]) ->
+ {ok, Ref}.
+
+checkout(From, Ref) ->
+ {dictionary, Dict} = erlang:process_info(From, dictionary),
+ case proplists:get_value(crash, Dict) of
+ true -> erlang:error(asked_for);
+ false -> {ok, Ref, undefined}
+ end.
+
+checkin(Ref, undefined) ->
+ {ok, Ref};
+checkin(_SomeRef, Ref) ->
+ {ignore, Ref}.
+
+dead(undefined) ->
+ {ok, make_ref()}.
+
+handle_info(_Msg, State) ->
+ {ok, State}.
+
+terminate(_Reason, _State) ->
+ ok.
+
+code_change(_OldVsn, State, _Extra) ->
+ {ok, State}.
+