QuickCheck Poolboy #7

Merged
merged 11 commits into devinus:master

3 participants

@Vagabond
Collaborator

Hopefully all the commits have understandable comments, but I also did a writeup of the whole thing here:

http://andrew.hijacked.us/article/356/quickchecking-poolboy-for-fun-and-profit

jaredmorrow and others added some commits
@jaredmorrow Update rebar version: 2 date: 20110907_012454 vcs: git 1cb1ae2 0deace6
@Vagabond Ensure the worker supervisor terminates when you stop poolboy 17ee86c
@jaredmorrow Roll version 0.3.1 to fix bz1188 211a8fa
@Vagabond Merge branch 'master' of https://github.com/devinus/poolboy bcd6508
@Vagabond Initial quickcheck test for poolboy
For now, only test the simplest API; start, stop, checkout nonblocking
and checkin.
44a816e
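
A minimal sketch of the eqc_statem command generator this commit introduces, simplified to just those four operations (the full generator, with the same names, appears in test/poolboy_eqc.erl further down this page):

    command(S) ->
        oneof(
          %% a pool can only be started when none is running
          [{call, ?MODULE, start_poolboy, make_args(S, nat(), nat())} || S#state.pid == undefined] ++
          %% everything else needs a running pool
          [{call, ?MODULE, stop_poolboy, [S#state.pid]} || S#state.pid /= undefined] ++
          [{call, ?MODULE, checkout_nonblock, [S#state.pid]} || S#state.pid /= undefined] ++
          %% checkin needs at least one checked-out worker to hand back
          [{call, ?MODULE, checkin, [S#state.pid, elements(S#state.checked_out)]}
           || S#state.pid /= undefined, S#state.checked_out /= []]).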
@Vagabond Add blocking checkout to quickcheck test and fix poolboy bug
The bug discovered was that expired blocking checkouts were not being
removed from the list of 'waiting' checkouts, so when a checkout had
timed out and a worker was subsequently checked in, the new worker would
be sent to the expired checkout request rather than to any current ones.
6a53f06
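
The fix, visible in the poolboy.erl diff below, tags each waiting checkout with its timeout and enqueue time, and checks that record before handing a freshly checked-in worker to a waiter. The two helpers it adds are:

    add_waiting(From, Timeout, Queue) ->
        %% remember when this caller started waiting
        queue:in({From, Timeout, os:timestamp()}, Queue).

    wait_valid(infinity, _) ->
        true;
    wait_valid(StartTime, Timeout) ->
        %% timer:now_diff/2 is in microseconds, Timeout in milliseconds
        Waited = timer:now_diff(os:timestamp(), StartTime),
        (Waited div 1000) < Timeout.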
@Vagabond Add kill_worker and fix resulting bugs
kill_worker exposed some new faults in poolboy, where the maximum size
of the pool could end up smaller than it should be after a worker was
killed.

To help in debugging this, a 'status' command was added to poolboy to
look at its internal state and a dynamic_precondition function was used
to compare the expected state to the actual state.

Numerous problems were discovered by doing this: poolboy had issues
around when it changed states. For example, allocating the last
non-overflow worker would leave the pool in 'ready' until the next
request came in, instead of changing state immediately.
c2ba14c
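
The 'status' reply mentioned above (one clause added to poolboy's handle_sync_event/4, shown in the diff below) exposes the FSM state name, the number of idle workers, the overflow count, and the number of monitors held:

    handle_sync_event(status, _From, StateName, State) ->
        %% {FSM state, idle workers, overflow count, monitors held}
        {reply, {StateName, queue:len(State#state.workers),
                 State#state.overflow, length(State#state.monitors)},
         StateName, State};

On the test side it is queried with gen_fsm:sync_send_all_state_event(Pid, status) and compared against the state the model predicts.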
@Vagabond Add command to kill workers after they're checked back in, and fix bugs
Extend the 'status' check to also cover the number of monitors poolboy
holds on processes that have checked out workers. When a worker dies
after being checked back in, poolboy zeroes out the list of monitors for
some reason.
eacf28f
@Vagabond Add command to send poolboy EXIT messages for unrelated pids; fix bugs
Sending poolboy EXIT messages for pids that are not workers causes
invalid changes to poolboy's state. Legitimate workers dying after being
checked back in would also corrupt the internal state. Add better
checking in the EXIT handling code to guard against both cases.
e964cc5
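
The substance of the fix, condensed from the handle_info({'EXIT', ...}, ...) clause in the diff below, is to only act on pids poolboy actually knows about. The wrapper function name here is hypothetical; the real code inlines this logic after the lists:keytake on the monitors list fails:

    %% sketch: reached only when Pid is not a monitored (checked-out) worker
    handle_unmonitored_exit(Pid, StateName,
                            #state{worker_sup=Sup, worker_init=InitFun}=State) ->
        case queue:member(Pid, State#state.workers) of
            true ->
                %% a checked-in (idle) worker died: replace it in the queue
                W = queue:filter(fun(P) -> P =/= Pid end, State#state.workers),
                {next_state, StateName,
                 State#state{workers=queue:in(new_worker(Sup, InitFun), W)}};
            false ->
                %% a completely unrelated pid exited: leave the state alone
                {next_state, StateName, State}
        end.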
@Vagabond Checkin non-worker processes 10% of the time and fix resulting bugs
Checking in an invalid process corrupts the internal state similarly to
receiving EXITs from non-worker pids.
e6af0b6
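
The "10% of the time" comes from QuickCheck's fault injection: the checkin argument generator wraps a bogus pid in fault/2, and both properties run under fault_rate(1, 10), so the faulty branch fires roughly one time in ten. A sketch, assuming the #state{} record and EQC includes of the test module below (checkin_args/1 is a hypothetical helper; the real code inlines this in command/1):

    checkin_args(S) ->
        [S#state.pid,
         %% fault(FaultyGen, GoodGen): normally a real checked-out worker,
         %% but under fault_rate(1, 10) a freshly spawned non-worker process
         fault({call, ?MODULE, spawn_process, []},
               elements(S#state.checked_out))].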
@Vagabond Add a parallel property for poolboy
To trigger some of the more interesting code, we need to truly do things
in parallel, so blocking checkouts can actually happen while other
requests are being made. This commit adds a parallel property for
poolboy, based on the original serial one.

One hack that was needed was to add a trailing call to erlang:self() to
the end of each parallel branch so that we could filter out workers
checked out by that particular branch from the model. This is necessary
because poolboy monitors the process holding the worker and will destroy
any workers held by a process and re-start them, if necessary, to
maintain the right number of workers in the pool. This filtering is done
in the next_state command, and all elements in the checked_out list in
the model are tuples of {Worker, self()} so the list can be filtered.

Happily, although this exercises more code, no additional bugs were
found.
fce0d9f
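
Concretely, the hack looks like this in the test module below: prop_parallel/0 appends a symbolic call to erlang:self() to every parallel branch, and next_state/3 uses the value that call produces (the branch's own pid) to drop that branch's checkouts from the model.

    %% in prop_parallel/0: tag the end of each parallel branch
    NewPar = [P ++ [{set, {var, 0}, {call, erlang, self, []}}] || P <- Par],

    %% in next_state/3: strip the {Worker, Pid} pairs owned by that branch
    next_state(S, V, {call, erlang, self, []}) ->
        S#state{checked_out = [{Worker, Pid} || {Worker, Pid} <- S#state.checked_out,
                                                Pid /= V]}.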
@devinus merged commit 75572c2 into devinus:master
Showing with 405 additions and 102 deletions.
  1. +181 −102 src/poolboy.erl
  2. +224 −0 test/poolboy_eqc.erl
283 src/poolboy.erl
@@ -35,7 +35,7 @@ checkout(Pool, Block) ->
-spec checkout(Pool :: node(), Block :: boolean(), Timeout :: timeout())
-> pid() | full.
checkout(Pool, Block, Timeout) ->
- gen_fsm:sync_send_event(Pool, {checkout, Block}, Timeout).
+ gen_fsm:sync_send_event(Pool, {checkout, Block, Timeout}, Timeout).
-spec checkin(Pool :: node(), Worker :: pid()) -> ok.
checkin(Pool, Worker) ->
@@ -66,21 +66,31 @@ init([{stop_fun, StopFun} | Rest], State) when is_function(StopFun) ->
init(Rest, State#state{worker_stop=StopFun});
init([_ | Rest], State) ->
init(Rest, State);
-init([], #state{size=Size, worker_sup=Sup, worker_init=InitFun}=State) ->
+init([], #state{size=Size, worker_sup=Sup, worker_init=InitFun,
+ max_overflow=MaxOverflow}=State) ->
Workers = prepopulate(Size, Sup, InitFun),
- {ok, ready, State#state{workers=Workers}}.
+ StartState = case queue:len(Workers) of
+ 0 when MaxOverflow ==0 -> full;
+ 0 -> overflow;
+ _ -> ready
+ end,
+ {ok, StartState, State#state{workers=Workers}}.
ready({checkin, Pid}, State) ->
- Workers = queue:in(Pid, State#state.workers),
- Monitors = case lists:keytake(Pid, 1, State#state.monitors) of
- {value, {_, Ref}, Left} -> erlang:demonitor(Ref), Left;
- false -> State#state.monitors
- end,
- {next_state, ready, State#state{workers=Workers, monitors=Monitors}};
+ case lists:keytake(Pid, 1, State#state.monitors) of
+ {value, {_, Ref}, Monitors} ->
+ erlang:demonitor(Ref),
+ Workers = queue:in(Pid, State#state.workers),
+ {next_state, ready, State#state{workers=Workers,
+ monitors=Monitors}};
+ false ->
+ %% unknown process checked in, ignore it
+ {next_state, ready, State}
+ end;
ready(_Event, State) ->
{next_state, ready, State}.
-ready({checkout, Block}, {FromPid, _}=From, State) ->
+ready({checkout, Block, Timeout}, {FromPid, _}=From, State) ->
#state{workers = Workers,
worker_sup = Sup,
max_overflow = MaxOverflow,
@@ -89,8 +99,16 @@ ready({checkout, Block}, {FromPid, _}=From, State) ->
{{value, Pid}, Left} ->
Ref = erlang:monitor(process, FromPid),
Monitors = [{Pid, Ref} | State#state.monitors],
- {reply, Pid, ready, State#state{workers=Left,
- monitors=Monitors}};
+ NextState = case queue:len(Left) of
+ 0 when MaxOverflow == 0 ->
+ full;
+ 0 ->
+ overflow;
+ _ ->
+ ready
+ end,
+ {reply, Pid, NextState, State#state{workers=Left,
+ monitors=Monitors}};
{empty, Empty} when MaxOverflow > 0 ->
{Pid, Ref} = new_worker(Sup, FromPid, InitFun),
Monitors = [{Pid, Ref} | State#state.monitors],
@@ -102,32 +120,41 @@ ready({checkout, Block}, {FromPid, _}=From, State) ->
{empty, Empty} ->
Waiting = State#state.waiting,
{next_state, full, State#state{workers=Empty,
- waiting=queue:in(From, Waiting)}}
+ waiting=add_waiting(From, Timeout, Waiting)}}
end;
ready(_Event, _From, State) ->
{reply, ok, ready, State}.
-overflow({checkin, Pid}, #state{overflow=1}=State) ->
- StopFun = State#state.worker_stop,
- dismiss_worker(Pid, StopFun),
- Monitors = case lists:keytake(Pid, 1, State#state.monitors) of
- {value, {_, Ref}, Left} -> erlang:demonitor(Ref), Left;
- false -> []
- end,
- {next_state, ready, State#state{overflow=0, monitors=Monitors}};
+overflow({checkin, Pid}, #state{overflow=0}=State) ->
+ case lists:keytake(Pid, 1, State#state.monitors) of
+ {value, {_, Ref}, Monitors} ->
+ erlang:demonitor(Ref),
+ NextState = case State#state.size > 0 of
+ true -> ready;
+ _ -> overflow
+ end,
+ {next_state, NextState, State#state{overflow=0, monitors=Monitors,
+ workers=queue:in(Pid, State#state.workers)}};
+ false ->
+ %% unknown process checked in, ignore it
+ {next_state, overflow, State}
+ end;
overflow({checkin, Pid}, State) ->
#state{overflow=Overflow, worker_stop=StopFun} = State,
- dismiss_worker(Pid, StopFun),
- Monitors = case lists:keytake(Pid, 1, State#state.monitors) of
- {value, {_, Ref}, Left} -> erlang:demonitor(Ref), Left;
- false -> State#state.monitors
- end,
- {next_state, overflow, State#state{overflow=Overflow-1,
- monitors=Monitors}};
+ case lists:keytake(Pid, 1, State#state.monitors) of
+ {value, {_, Ref}, Monitors} ->
+ dismiss_worker(Pid, StopFun),
+ erlang:demonitor(Ref),
+ {next_state, overflow, State#state{overflow=Overflow-1,
+ monitors=Monitors}};
+ _ ->
+ %% unknown process checked in, ignore it
+ {next_state, overflow, State}
+ end;
overflow(_Event, State) ->
{next_state, overflow, State}.
-overflow({checkout, Block}, From, #state{overflow=Overflow,
+overflow({checkout, Block, Timeout}, From, #state{overflow=Overflow,
max_overflow=MaxOverflow
}=State) when Overflow >= MaxOverflow ->
case Block of
@@ -135,48 +162,63 @@ overflow({checkout, Block}, From, #state{overflow=Overflow,
{reply, full, full, State};
Block ->
Waiting = State#state.waiting,
- {next_state, full, State#state{waiting=queue:in(From, Waiting)}}
+ {next_state, full, State#state{waiting=add_waiting(From, Timeout, Waiting)}}
end;
-overflow({checkout, _Block}, {From, _}, State) ->
- #state{worker_sup=Sup, overflow=Overflow, worker_init=InitFun} = State,
+overflow({checkout, _Block, _Timeout}, {From, _}, State) ->
+ #state{worker_sup=Sup, overflow=Overflow,
+ worker_init=InitFun, max_overflow=MaxOverflow} = State,
{Pid, Ref} = new_worker(Sup, From, InitFun),
Monitors = [{Pid, Ref} | State#state.monitors],
- {reply, Pid, overflow, State#state{monitors=Monitors,
- overflow=Overflow+1}};
+ NewOverflow = Overflow + 1,
+ Next = case NewOverflow >= MaxOverflow of
+ true -> full;
+ _ -> overflow
+ end,
+ {reply, Pid, Next, State#state{monitors=Monitors,
+ overflow=NewOverflow}};
overflow(_Event, _From, State) ->
{reply, ok, overflow, State}.
full({checkin, Pid}, State) ->
#state{waiting = Waiting, max_overflow = MaxOverflow,
overflow = Overflow, worker_stop = StopFun} = State,
- Monitors = case lists:keytake(Pid, 1, State#state.monitors) of
- {value, {_, Ref0}, Left0} -> erlang:demonitor(Ref0), Left0;
- false -> State#state.monitors
- end,
- case queue:out(Waiting) of
- {{value, {FromPid, _}=From}, Left} ->
- Ref = erlang:monitor(process, FromPid),
- Monitors1 = [{Pid, Ref} | Monitors],
- gen_fsm:reply(From, Pid),
- {next_state, full, State#state{waiting=Left,
- monitors=Monitors1}};
- {empty, Empty} when MaxOverflow < 1 ->
- Workers = queue:in(Pid, State#state.workers),
- {next_state, ready, State#state{workers=Workers, waiting=Empty,
- monitors=Monitors}};
- {empty, Empty} ->
- dismiss_worker(Pid, StopFun),
- {next_state, overflow, State#state{waiting=Empty,
- monitors=Monitors,
- overflow=Overflow-1}}
+ case lists:keytake(Pid, 1, State#state.monitors) of
+ {value, {_, Ref0}, Monitors} ->
+ erlang:demonitor(Ref0),
+ case queue:out(Waiting) of
+ {{value, {{FromPid, _}=From, Timeout, StartTime}}, Left} ->
+ case wait_valid(StartTime, Timeout) of
+ true ->
+ Ref = erlang:monitor(process, FromPid),
+ Monitors1 = [{Pid, Ref} | Monitors],
+ gen_fsm:reply(From, Pid),
+ {next_state, full, State#state{waiting=Left,
+ monitors=Monitors1}};
+ _ ->
+ %% replay this event with cleaned up waiting queue
+ full({checkin, Pid}, State#state{waiting=Left})
+ end;
+ {empty, Empty} when MaxOverflow < 1 ->
+ Workers = queue:in(Pid, State#state.workers),
+ {next_state, ready, State#state{workers=Workers, waiting=Empty,
+ monitors=Monitors}};
+ {empty, Empty} ->
+ dismiss_worker(Pid, StopFun),
+ {next_state, overflow, State#state{waiting=Empty,
+ monitors=Monitors,
+ overflow=Overflow-1}}
+ end;
+ false ->
+ %% unknown process checked in, ignore it
+ {next_state, full, State}
end;
full(_Event, State) ->
{next_state, full, State}.
-full({checkout, false}, _From, State) ->
+full({checkout, false, _Timeout}, _From, State) ->
{reply, full, full, State};
-full({checkout, _Block}, From, #state{waiting=Waiting}=State) ->
- {next_state, full, State#state{waiting=queue:in(From, Waiting)}};
+full({checkout, _Block, Timeout}, From, #state{waiting=Waiting}=State) ->
+ {next_state, full, State#state{waiting=add_waiting(From, Timeout, Waiting)}};
full(_Event, _From, State) ->
{reply, ok, full, State}.
@@ -198,6 +240,10 @@ handle_sync_event(stop, _From, _StateName,State) ->
Sup = State#state.worker_sup,
exit(Sup, shutdown),
{stop, normal, ok, State};
+handle_sync_event(status, _From, StateName, State) ->
+ {reply, {StateName, queue:len(State#state.workers),
+ State#state.overflow, length(State#state.monitors)},
+ StateName, State};
handle_sync_event(_Event, _From, StateName, State) ->
Reply = {error, invalid_message},
{reply, Reply, StateName, State}.
@@ -216,51 +262,75 @@ handle_info({'EXIT', Pid, _}, StateName, State) ->
waiting = Waiting,
max_overflow = MaxOverflow,
worker_init = InitFun} = State,
- Monitors = case lists:keytake(Pid, 1, State#state.monitors) of
- {value, {_, Ref}, Left} -> erlang:demonitor(Ref), Left;
- false -> []
- end,
- case StateName of
- ready ->
- W = queue:filter(fun (P) -> P =/= Pid end, State#state.workers),
- {next_state, ready, State#state{workers=queue:in(new_worker(Sup, InitFun), W),
- monitors=Monitors}};
- overflow when Overflow =< 1 ->
- {next_state, ready, State#state{monitors=Monitors, overflow=0}};
- overflow ->
- {next_state, overflow, State#state{monitors=Monitors,
- overflow=Overflow-1}};
- full when MaxOverflow < 1 ->
- case queue:out(Waiting) of
- {{value, {FromPid, _}=From}, LeftWaiting} ->
- MonitorRef = erlang:monitor(process, FromPid),
- Monitors2 = [{FromPid, MonitorRef} | Monitors],
- gen_fsm:reply(From, new_worker(Sup, InitFun)),
- {next_state, full, State#state{waiting=LeftWaiting,
- monitors=Monitors2}};
- {empty, Empty} ->
- Workers2 = queue:in(new_worker(Sup, InitFun), State#state.workers),
- {next_state, ready, State#state{monitors=Monitors,
- waiting=Empty,
- workers=Workers2}}
- end;
- full when Overflow =< MaxOverflow ->
- case queue:out(Waiting) of
- {{value, {FromPid, _}=From}, LeftWaiting} ->
- MonitorRef = erlang:monitor(process, FromPid),
- Monitors2 = [{FromPid, MonitorRef} | Monitors],
- NewWorker = new_worker(Sup, InitFun),
- gen_fsm:reply(From, NewWorker),
- {next_state, full, State#state{waiting=LeftWaiting,
- monitors=Monitors2}};
- {empty, Empty} ->
- {next_state, overflow, State#state{monitors=Monitors,
- overflow=Overflow-1,
- waiting=Empty}}
- end;
- full ->
- {next_state, full, State#state{monitors=Monitors,
- overflow=Overflow-1}}
+ case lists:keytake(Pid, 1, State#state.monitors) of
+ {value, {_, Ref}, Monitors} -> erlang:demonitor(Ref),
+ case StateName of
+ ready ->
+ W = queue:filter(fun (P) -> P =/= Pid end, State#state.workers),
+ {next_state, ready, State#state{workers=queue:in(new_worker(Sup, InitFun), W),
+ monitors=Monitors}};
+ overflow when Overflow == 0 ->
+ W = queue:filter(fun (P) -> P =/= Pid end, State#state.workers),
+ {next_state, ready, State#state{workers=queue:in(new_worker(Sup, InitFun), W),
+ monitors=Monitors}};
+ overflow ->
+ {next_state, overflow, State#state{monitors=Monitors,
+ overflow=Overflow-1}};
+ full when MaxOverflow < 1 ->
+ case queue:out(Waiting) of
+ {{value, {{FromPid, _}=From, Timeout, StartTime}}, LeftWaiting} ->
+ case wait_valid(StartTime, Timeout) of
+ true ->
+ MonitorRef = erlang:monitor(process, FromPid),
+ Monitors2 = [{FromPid, MonitorRef} | Monitors],
+ gen_fsm:reply(From, new_worker(Sup, InitFun)),
+ {next_state, full, State#state{waiting=LeftWaiting,
+ monitors=Monitors2}};
+ _ ->
+ %% replay it
+ handle_info({'EXIT', Pid, foo}, StateName, State#state{waiting=LeftWaiting})
+ end;
+ {empty, Empty} ->
+ Workers2 = queue:in(new_worker(Sup, InitFun), State#state.workers),
+ {next_state, ready, State#state{monitors=Monitors,
+ waiting=Empty,
+ workers=Workers2}}
+ end;
+ full when Overflow =< MaxOverflow ->
+ case queue:out(Waiting) of
+ {{value, {{FromPid, _}=From, Timeout, StartTime}}, LeftWaiting} ->
+ case wait_valid(StartTime, Timeout) of
+ true ->
+ MonitorRef = erlang:monitor(process, FromPid),
+ Monitors2 = [{FromPid, MonitorRef} | Monitors],
+ NewWorker = new_worker(Sup, InitFun),
+ gen_fsm:reply(From, NewWorker),
+ {next_state, full, State#state{waiting=LeftWaiting,
+ monitors=Monitors2}};
+ _ ->
+ %% replay it
+ handle_info({'EXIT', Pid, foo}, StateName, State#state{waiting=LeftWaiting})
+ end;
+ {empty, Empty} ->
+ {next_state, overflow, State#state{monitors=Monitors,
+ overflow=Overflow-1,
+ waiting=Empty}}
+ end;
+ full ->
+ {next_state, full, State#state{monitors=Monitors,
+ overflow=Overflow-1}}
+ end;
+ _ ->
+ %% not a monitored pid, is it in the worker queue?
+ case queue:member(Pid, State#state.workers) of
+ true ->
+ %% a checked in worker died
+ W = queue:filter(fun (P) -> P =/= Pid end, State#state.workers),
+ {next_state, StateName, State#state{workers=queue:in(new_worker(Sup, InitFun), W)}};
+ _ ->
+ %% completely irrelevant pid exited, don't change anything
+ {next_state, StateName, State}
+ end
end;
handle_info(_Info, StateName, State) ->
{next_state, StateName, State}.
@@ -294,3 +364,12 @@ prepopulate(0, _Sup, Workers, _InitFun) ->
Workers;
prepopulate(N, Sup, Workers, InitFun) ->
prepopulate(N-1, Sup, queue:in(new_worker(Sup, InitFun), Workers), InitFun).
+
+add_waiting(From, Timeout, Queue) ->
+ queue:in({From, Timeout, os:timestamp()}, Queue).
+
+wait_valid(infinity, _) ->
+ true;
+wait_valid(StartTime, Timeout) ->
+ Waited = timer:now_diff(os:timestamp(), StartTime),
+ (Waited div 1000) < Timeout.
224 test/poolboy_eqc.erl
@@ -0,0 +1,224 @@
+-module(poolboy_eqc).
+-compile([export_all]).
+
+-ifdef(TEST).
+-ifdef(EQC).
+-include_lib("eqc/include/eqc.hrl").
+-include_lib("eqc/include/eqc_statem.hrl").
+
+-include_lib("eunit/include/eunit.hrl").
+
+poolboy_test_() ->
+ {timeout, 300,
+ fun() ->
+ ?assert(eqc:quickcheck(eqc:testing_time(290, poolboy_eqc:prop_parallel())))
+ end
+ }.
+
+-record(state,
+ {
+ pid,
+ size,
+ max_overflow,
+ checked_out = []
+ }).
+
+initial_state() ->
+ #state{}.
+
+command(S) ->
+ oneof(
+ [{call, ?MODULE, start_poolboy, make_args(S, nat(), nat())} || S#state.pid == undefined] ++
+ [{call, ?MODULE, stop_poolboy, [S#state.pid]} || S#state.pid /= undefined] ++
+ [{call, ?MODULE, checkout_nonblock, [S#state.pid]} || S#state.pid /= undefined] ++
+ %% checkout shrinks to checkout_nonblock so we can simplify counterexamples
+ [{call, ?MODULE, ?SHRINK(checkout_block, [checkout_nonblock]), [S#state.pid]} || S#state.pid /= undefined] ++
+ [{call, ?MODULE, checkin, [S#state.pid, fault({call, ?MODULE, spawn_process, []}, elements(S#state.checked_out))]} || S#state.pid /= undefined, S#state.checked_out /= []] ++
+ [{call, ?MODULE, kill_worker, [elements(S#state.checked_out)]} || S#state.pid /= undefined, S#state.checked_out /= []] ++
+ [{call, ?MODULE, kill_idle_worker, [S#state.pid]} || S#state.pid /= undefined] ++
+ [{call, ?MODULE, spurious_exit, [S#state.pid]} || S#state.pid /= undefined]
+ ).
+
+make_args(_S, Size, Overflow) ->
+ [[{size, Size}, {max_overflow, Overflow}, {worker_module, poolboy_test_worker}, {name, {local, poolboy_eqc}}]].
+
+spawn_process() ->
+ {spawn(fun() ->
+ timer:sleep(5000)
+ end), self()}.
+
+spawn_linked_process(Pool) ->
+ Parent = self(),
+ Pid = spawn(fun() ->
+ link(Pool),
+ Parent ! {linked, self()},
+ timer:sleep(5000)
+ end),
+ receive
+ {linked, Pid} ->
+ Pid
+ end.
+
+start_poolboy(Args) ->
+ {ok, Pid} = poolboy:start_link(Args),
+ Pid.
+
+stop_poolboy(Pid) ->
+ gen_fsm:sync_send_all_state_event(Pid, stop),
+ timer:sleep(1).
+
+checkout_nonblock(Pool) ->
+ {poolboy:checkout(Pool, false), self()}.
+
+checkout_block(Pool) ->
+ {catch(poolboy:checkout(Pool, true, 100)), self()}.
+
+checkin(Pool, {Worker, _}) ->
+ Res = poolboy:checkin(Pool, Worker),
+ gen_fsm:sync_send_all_state_event(Pool, get_avail_workers),
+ Res.
+
+kill_worker({Worker, _}) ->
+ exit(Worker, kill),
+ timer:sleep(1).
+
+kill_idle_worker(Pool) ->
+ Pid = poolboy:checkout(Pool, false),
+ case Pid of
+ _ when is_pid(Pid) ->
+ poolboy:checkin(Pool, Pid),
+ kill_worker({Pid, self()});
+ _ ->
+ timer:sleep(1),
+ kill_idle_worker(Pool)
+ end.
+
+spurious_exit(Pool) ->
+ Pid = spawn_linked_process(Pool),
+ exit(Pid, kill).
+
+precondition(S,{call,_,start_poolboy,_}) ->
+ %% only start new pool when old one is stopped
+ S#state.pid == undefined;
+precondition(S,_) when S#state.pid == undefined ->
+ %% all other states need a running pool
+ false;
+precondition(S, {call, _, kill_worker, [Pid]}) ->
+ lists:member(Pid, S#state.checked_out);
+precondition(S,{call,_,kill_idle_worker,[_Pool]}) ->
+ length(S#state.checked_out) < S#state.size;
+precondition(_S,{call,_,_,_}) ->
+ true.
+
+%% XXX comment out for parallel mode XXX
+%dynamic_precondition(S = #state{pid=Pid},_) when Pid /= undefined ->
+ %State = if length(S#state.checked_out) == S#state.size + S#state.max_overflow ->
+ %full;
+ %length(S#state.checked_out) >= S#state.size ->
+ %overflow;
+ %true ->
+ %ready
+ %end,
+
+ %Workers = max(0, S#state.size - length(S#state.checked_out)),
+ %OverFlow = max(0, length(S#state.checked_out) - S#state.size),
+ %Monitors = length(S#state.checked_out),
+
+ %RealStatus = gen_fsm:sync_send_all_state_event(Pid, status),
+ %case RealStatus == {State, Workers, OverFlow, Monitors} of
+ %true ->
+ %true;
+ %_ ->
+ %exit({wrong_state, RealStatus, {State, Workers, OverFlow, Monitors}})
+ %end;
+%dynamic_precondition(_,_) ->
+ %true.
+
+postcondition(S,{call,_,checkout_block,[_Pool]},R) ->
+ case R of
+ {{'EXIT', {timeout, _}}, _} ->
+ length(S#state.checked_out) >= S#state.size + S#state.max_overflow;
+ _ ->
+ length(S#state.checked_out) < S#state.size + S#state.max_overflow
+ end;
+postcondition(S,{call,_,checkout_nonblock,[_Pool]},R) ->
+ case R of
+ {full, _} ->
+ length(S#state.checked_out) >= S#state.size + S#state.max_overflow;
+ _ ->
+ length(S#state.checked_out) < S#state.size + S#state.max_overflow
+ end;
+postcondition(_S, {call,_,checkin,_}, R) ->
+ R == ok;
+postcondition(_S,{call,_,_,_},_R) ->
+ true.
+
+next_state(S,V,{call,_,start_poolboy, [Args]}) ->
+ S#state{pid=V,
+ size=proplists:get_value(size, Args),
+ max_overflow=proplists:get_value(max_overflow, Args)
+ };
+next_state(S,_V,{call,_,stop_poolboy, [_Args]}) ->
+ S#state{pid=undefined, checked_out=[]};
+next_state(S,V,{call,_,checkout_block,_}) ->
+ %% if the model says the checkout worked, store the result
+ case checkout_ok(S) of
+ false ->
+ S;
+ _ ->
+ S#state{checked_out=S#state.checked_out++[V]}
+ end;
+next_state(S,V,{call,_,checkout_nonblock,_}) ->
+ %% if the model says the checkout worked, store the result
+ case checkout_ok(S) of
+ false ->
+ S;
+ _ ->
+ S#state{checked_out=S#state.checked_out++[V]}
+ end;
+next_state(S,_V,{call, _, checkin, [_Pool, Worker]}) ->
+ S#state{checked_out=S#state.checked_out -- [Worker]};
+next_state(S,_V,{call, _, kill_worker, [Worker]}) ->
+ S#state{checked_out=S#state.checked_out -- [Worker]};
+next_state(S,_V,{call, _, kill_idle_worker, [_Pool]}) ->
+ S;
+next_state(S,_V,{call, _, spurious_exit, [_Pool]}) ->
+ S;
+next_state(S,V,{call, erlang, self, []}) ->
+ %% added after test generation, values are never symbolic
+ S#state{checked_out=[{Worker, Pid} || {Worker, Pid} <- S#state.checked_out, Pid /= V]}.
+
+
+prop_sequential() ->
+ fault_rate(1, 10,
+ ?FORALL(Cmds,commands(?MODULE),
+ ?TRAPEXIT(
+ aggregate(command_names(Cmds),
+ begin
+ {H,S,Res} = run_commands(?MODULE,Cmds),
+ catch(stop_poolboy(whereis(poolboy_eqc))),
+ ?WHENFAIL(io:format("History: ~p\nState: ~p\nRes: ~p\n~p\n",
+ [H,S,Res, zip(tl(Cmds), [Y || {_, Y} <- H])]),
+ Res == ok)
+ end)))).
+
+prop_parallel() ->
+ fault_rate(1, 10,
+ ?FORALL(Cmds={Seq,Par},parallel_commands(?MODULE),
+ ?TRAPEXIT(
+ aggregate(command_names(Cmds),
+ begin
+ NewPar = [P ++ [{set, {var, 0}, {call, erlang, self, []}}] || P <- Par],
+ {H,S,Res} = run_parallel_commands(?MODULE,{Seq,NewPar}),
+ catch(stop_poolboy(whereis(poolboy_eqc))),
+ ?WHENFAIL(io:format("History: ~p\nState: ~p\nRes: ~p\n",
+ [H,S,Res]),
+ Res == ok)
+ end)))).
+
+
+checkout_ok(S) ->
+ length(S#state.checked_out) < S#state.size + S#state.max_overflow.
+
+-endif.
+-endif.
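
For reference, a usage sketch (assuming Quviq QuickCheck is installed and poolboy plus this test module are compiled and on the code path): the properties can also be run directly from an Erlang shell, mirroring what the EUnit wrapper above does.

    %% quick run of the sequential property
    eqc:quickcheck(poolboy_eqc:prop_sequential()).
    %% the parallel property, for 290 seconds as in poolboy_test_/0 above
    eqc:quickcheck(eqc:testing_time(290, poolboy_eqc:prop_parallel())).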