Commits on Jun 4, 2011
  1. Major insight with McErlang: size of per-process state *matters*.

    So, silly me: McErlang (and most/all other finite-state model
    checkers) needs to be able to identify states that have been seen
    before.  The SPIN book warned against the use of counters and has a
    long commentary on how process IDs are managed to try to avoid using
    too many.  And there's a section in the McErlang user's guide about
    avoiding things like counters, references, timestamps,
    `erlang:now()`, and other things that end up being unique.
    
    Did I pay attention?  Well, I am now.
    
    Here are various hacks to reduce the size and variability of process
    state records (a code sketch follows the list):
    
    * Always sort lists (unless order is important).
    * Avoid `now()` and use `make_ref()` instead.  McErlang's parse
      transform takes care of making as-undynamic-as-possible substitute
      values for PIDs and references, but it doesn't do anything with
      `now()`.
    * Remove unused `#s` record members: watchers2 and ph2_now.
    * Use a constant atom instead of time for vclock timestamp.
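
    A minimal sketch of the flavor of these changes (the record fields and
    function names here are invented for illustration; the real code lives
    in the simulator modules):

        %% Hypothetical subset of the server state record.
        -record(s, {obj, watchers=[]}).

        %% Keep collection-valued state canonically ordered so that two
        %% states differing only in insertion order hash to the same
        %% model-checker state.
        add_watcher(W, S = #s{watchers=Ws}) ->
            S#s{watchers=lists:usort([W|Ws])}.

        %% Use make_ref() instead of erlang:now(): McErlang's parse
        %% transform substitutes as-undynamic-as-possible values for PIDs
        %% and references, but it leaves now() results alone, so each one
        %% makes a state look unique.
        new_op_id() ->
            make_ref().

        %% A constant atom instead of wall-clock time in vclock entries.
        vclock_timestamp() ->
            ts.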
    
    The `#c` client record and protocol could both be slightly (more than
    slightly?) simplified by making the cookie constant.  However, the
    protocol needs cookie uniqueness to avoid mistaking delayed
    messages for current operations.  Bleh.  Same for the `clop`
    identifier (for selective receive).  Bleh bleh.
    
    So, these changes make a difference ... but, alas, they don't make a
    tremendously huge difference.  When I tried running 1 client x 3
    servers x 2 counter ops by that single client, it finished in under
    an hour and a half, but ... 66 million states?  {sigh}
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_2phase_vclocksetwatch_sim, [{no_generator, true}], 1, 3, 1, lists:duplicate(2, {c3, {counter_op, [s1,s2,s3]}}), [{c3,[],counter_client1}], [{s1, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}, {s2, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}, {s3, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}]], #mce_opts{table={mce_table_bitHash,[1024*1024*510]}, pathLimit=100}).
    
        *** Run ending. 66524923 states explored, stored states 15383564.
    
        Execution terminated normally.
        Table has 15383564 states and 66524922 transitions.
    
        Reductions: 67208.03 mreds; Runtime: 4081.100 seconds; 4139.29 elapsed seconds
Commits on Jun 3, 2011
  1. Intermediate checkin for McErlang tests: nothing fundamentally new, alas.
    
    So, I'm trying to figure out why the
    distrib_counter_2phase_vclocksetwatch_sim algorithm blows up
    so incredibly badly with McErlang.  1 client + 3 servers + that
    single client issuing a single `counter_op` = 22 *million* states.
    That's insane.  Something is wrong.  Or, at least I hope so.  The
    alternative is that I'm simply not used to dealing with this kind
    of exponential state explosion.  {sigh}  The challenge/trick: how
    to figure out the difference!
Commits on May 31, 2011
  1. Use McErlang probe() to store final process state, abandon harvest scheme.
    
    Houston, we may have a problem:
    
        # of states explored
        ====================
                                          1 server   2 servers   3 servers
                                          --------   ---------   ---------
        1 client, 1 watch op + 0 set ops:      34          128         862
        1 client, 1 set op + 0 watch ops:      48        1,517   3,631,187
    
    Holy !@#$!%!, that's a huge jump in the number of states.  It's
    looking like the 3-server case is really only feasible on my 8GB RAM
    Mac by using the `mce_table_bitHash` feature, oi oi oi....

    The command that does it (with variations in the duplicate count
    (0 or 1) and the # of servers in the watch_op/counter_op server
    lists):
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_2phase_vclocksetwatch_sim, [{no_generator, true}], 1, 1, 1, lists:duplicate(1, {c1, {watch_op, [s1,s2,s3]}}) ++ lists:duplicate(0, {c1, {counter_op, [s1,s2,s3]}}), [{c1, [], counter_client1}], [{s1, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}, {s2, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}, {s3, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}]], #mce_opts{is_infinitely_fast=true, table={mce_table_bitHash,[1024*1024*510]}, pathLimit=100}).
  2. Fix for last commit, which was sending the wrong ClOp value in its {watch_notify_maybe_resp,...} reply.
    
    Example usage:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_2phase_vclocksetwatch_sim, [{no_generator, true}], 1, 1, 1, lists:duplicate(1, {c1, {watch_op, [s1]}}) ++ lists:duplicate(1, {c1, {counter_op, [s1]}}), [{c1, [], counter_client1}], [{s1, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}]], #mce_opts{is_infinitely_fast=true, pathLimit=2000}).
        [...]
        *** Run ending. 446 states explored, stored states 303.
    
        Execution terminated normally.
        Table has 303 states and 445 transitions.
    
        Reductions: 0.18 mreds; Runtime: 0.020 seconds; 0.02 elapsed seconds
        Access result using mce:result()
        ok
    
    That is much, **much** better than the infinite looping that I'd
    see before (unless it was interrupted with a `pathLimit` setting).
  3. Fix first bug found by McErlang, I think.

    Sequence with client c1 and server s1:

    1. Client c1 sets up a watch.
      * Server s1's reply is dropped.
    2. Client c1's watch request times out.
    3. Client c1 starts a counter_op and gets as far as `client_ph2_waiting`.
    4. Server s1's response is dropped.
       * Client c1 is still waiting for a `{ph2_ok,...}` message.
    5. Server s1 times out waiting for c1's `{watch_notifies_delivered,...}`
       message.
    6. Server s1 sends a `{watch_notify_maybe_req,...}` message to c1.
    7. Client c1 isn't expecting a `{watch_notify_maybe_req,...}` message
       and therefore never replies.
    8. Server s1 times out waiting for `{watch_notify_maybe_resp,...}` from
       client c1.  Goto step #6, looping forever (sketched in code below).
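
    A sketch of the retry shape that produces this loop (the message
    names come from the sequence above; the function structure and
    timeout value are my guesses, not the actual source):

        %% Steps 6-8: keep re-asking the client whether its watch
        %% notifications were delivered.  If the client never answers,
        %% recurse forever.
        watch_notify_maybe_loop(Client, ClOp) ->
            Client ! {watch_notify_maybe_req, self(), ClOp},
            receive
                {watch_notify_maybe_resp, ClOp, _Resp} ->
                    ok
            after 1000 ->
                watch_notify_maybe_loop(Client, ClOp)    % goto step #6
            end.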
  4. Justincase commit ... what is broken, WHY WHY WHY?

    Hrm, I can't figure this out because a try/catch is catching whatever
    error is happening inside a client process.
    
    Simplified example:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_2phase_vclocksetwatch_sim, [{no_generator, true}], 1, 1, 1, lists:duplicate(0, {c1, {watch_op, [s1]}}) ++ lists:duplicate(1, {c1, {counter_op, [s1]}}), [{c1, [], counter_client1}], [{s1, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}]], #mce_opts{is_infinitely_fast=true, pathLimit=20, table={mce_table_bitHash,[1024*1024*510]}}).
Commits on May 30, 2011
  1. First successful McErlang run of distrib_counter_2phase_vclocksetwatch_sim.
    
    Undoubtedly there's a lot more work to do, but it's good to see that this
    tiny little example works.
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_2phase_vclocksetwatch_sim, [{no_generator, true}], 1, 1, 1, lists:duplicate(3, {c1, {counter_op, [s1]}}), [{c1, [], counter_client1}], [{s1, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}]], #mce_opts{is_infinitely_fast=true, shortest=true}).
    
    Not long after, I discovered that adding a watch operation *really*
    explodes the state space.  I believe (not confirmed yet) that the
    retransmit logic for watch notifications can cause an infinite loop, which
    will then cause McErlang to eat an infinite amount of memory very quickly.
    
    Using `pathLimit=N` helps, but I wonder how small I can make that
    number and still get worthwhile results with non-trivial combinations of
    ops.  For example, with 2 counter ops by client C1 and 1 watch op by
    client C2 and only 1 server and limits of
    `pathLimit=45, table={mce_table_bitHash,[1024*1024*510]}` (NOTE: your
    machine will need over 4100MB of RAM to make that bit hash table),
    I see:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_2phase_vclocksetwatch_sim, [{no_generator, true}], 2, 1, 1, lists:duplicate(1, {c1, {counter_op, [s1]}}) ++ lists:duplicate(1, {c2, {watch_op, [s1]}}), [{c1, [], counter_client1}, {c2, [], counter_client1}], [{s1, {s,{obj,[],[0]},undefined,undefined,[]}, counter_server1}]], #mce_opts{is_infinitely_fast=true, pathLimit=45, table={mce_table_bitHash,[1024*1024*510]}}).
        [...]
        Generated states 610000; checked states 896702; relation 1.4700032786885247
    
        *** Run ending. 899408 states explored, stored states 611704.
    
        Execution terminated normally.
        Table has 611704 states and 899407 transitions.
    
        Reductions: 269.14 mreds; Runtime: 28.930 seconds; 29.24 elapsed seconds
    
    If that increases to a path length of 50:
    
        *** Run ending. 3636785 states explored, stored states 2566636.
    
        Execution terminated normally.
        Table has 2566636 states and 3636784 transitions.
    
        Reductions: 1101.09 mreds; Runtime: 119.980 seconds; 121.31 elapsed seconds
    
    So, it's quite likely that I'll have to put some kind of short-circuit into
    the retry logic and then add a probe message of some kind when the
    short-circuit doodad is triggered (so that later verification predicates
    can relax their checks when the probe appears).  A sketch of that idea
    follows.
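
    One way that short-circuit might look, extending the retry loop
    sketched in the May 31 commit above (the retry count, names, and
    probe term are all hypothetical):

        %% Give up after N retries, leaving a probe in the trace so that
        %% later verification predicates can relax their checks.
        watch_notify_maybe_loop(_Client, ClOp, 0) ->
            mce_erl:probe({gave_up_notifying, ClOp});
        watch_notify_maybe_loop(Client, ClOp, N) ->
            Client ! {watch_notify_maybe_req, self(), ClOp},
            receive
                {watch_notify_maybe_resp, ClOp, _Resp} ->
                    ok
            after 1000 ->
                watch_notify_maybe_loop(Client, ClOp, N - 1)
            end.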
Commits on May 29, 2011
  1. justincase
  2. justincase
  3. justincase
  4. Silly rebase conflict
  5. Some simple notes about memory & execution time, QuickCheck vs. McErlang.
    
    Using the known-flawed `distrib_counter_bad1_sim` test, here's a bit
    of a fair/unfair comparison.
    
    In the QuickCheck corner, the example:
    
        > timer:tc(eqc,quickcheck,[eqc:numtests(15*1000, slf_msgsim_qc:prop_simulate(distrib_counter_bad1_sim, [{max_clients,2}, {max_servers,3}]))]).
    
    In the McErlang corner, the example:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 3, 1, lists:duplicate(2, {c1, {counter_op, [s1,s2,s3]}}) ++ lists:duplicate(1, {c2, {counter_op, [s1,s2,s3]}}), [{c1, [], counter_client1}, {c2, [], counter_client1}], [{s1, 0, counter_server1}, {s2, 0, counter_server1}, {s3, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true}).
    
    Time used by these contestants:
    
    * QuickCheck: anywhere from 1.5 seconds to 83 seconds (20 iterations
      were performed).  (Note that the QuickCheck method has no guarantee
      that it will find a counterexample in a buggy protocol, even when
      run for 10x longer.)
    * McErlang: 13-14 seconds
    
    Memory used by these contestants:
    
    * QuickCheck: RSS size is consistently less than 35MB.
    * McErlang: RSS size grows to roughly 750MB,
      `Generated states 140000; checked states 460073` before an error is
      found.
Commits on May 28, 2011
  1. Change my_bang() to use McErlang probe when message is dropped.

    Now, it's a bit easier to see what's happening in a counter-example,
    using this:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 3, 1, lists:duplicate(3, {c1, {counter_op, [s1,s2,s3]}}), [{c1, [], counter_client1}], [{s1, 0, counter_server1}, {s2, 0, counter_server1}, {s3, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true, shortest=true}).
        > mce_erl_debugger:start(mce:result()).
        > showExecution().
        [... save this output to a file, e.g. ~/foo ...]
        > quit().  %% to exit the McErlang debugger
    
        % egrep '^[0-9]+:|^    receive|"registered as .*"\!|PROBE' ~/foo
    
        1: process <'node0@sbb.snookles.com',4>:
        2: process <'node0@sbb.snookles.com',4> "registered as s3":
            receive {ping,{pid,'node0@sbb.snookles.com',1}}
        3: process <'node0@sbb.snookles.com',1>:
            receive pong
        4: process <'node0@sbb.snookles.com',3>:
        5: process <'node0@sbb.snookles.com',3> "registered as s2":
            receive {ping,{pid,'node0@sbb.snookles.com',1}}
        6: process <'node0@sbb.snookles.com',1>:
            receive pong
        7: process <'node0@sbb.snookles.com',2>:
        8: process <'node0@sbb.snookles.com',2> "registered as s1":
            receive {ping,{pid,'node0@sbb.snookles.com',1}}
        9: process <'node0@sbb.snookles.com',1>:
            receive pong
        10: process <'node0@sbb.snookles.com',5>:
        11: process <'node0@sbb.snookles.com',5> "registered as c1":
            receive {ping,{pid,'node0@sbb.snookles.com',1}}
        12: process <'node0@sbb.snookles.com',5> "registered as c1":
            <'node0@sbb.snookles.com',2> "registered as s1"!{incr_counter,c1}
        13: process <'node0@sbb.snookles.com',5> "registered as c1":
            *** PROBE {drop,c1,s2,{incr_counter,c1}} ***
        14: process <'node0@sbb.snookles.com',5> "registered as c1":
            *** PROBE {drop,c1,s3,{incr_counter,c1}} ***
        15: process <'node0@sbb.snookles.com',1>:
            receive pong
        16: process <'node0@sbb.snookles.com',2> "registered as s1":
            receive {incr_counter,c1}
        17: process <'node0@sbb.snookles.com',2> "registered as s1":
            <'node0@sbb.snookles.com',5> "registered as c1"!{incr_counter_reply,s1,0}
        18: process <'node0@sbb.snookles.com',5> "registered as c1":
            receive {incr_counter_reply,s1,0}
        19: process <'node0@sbb.snookles.com',5> "registered as c1":
        20: process <'node0@sbb.snookles.com',5> "registered as c1":
            <'node0@sbb.snookles.com',2> "registered as s1"!{incr_counter,c1}
        21: process <'node0@sbb.snookles.com',5> "registered as c1":
            <'node0@sbb.snookles.com',3> "registered as s2"!{incr_counter,c1}
        22: process <'node0@sbb.snookles.com',5> "registered as c1":
            *** PROBE {drop,c1,s3,{incr_counter,c1}} ***
        23: process <'node0@sbb.snookles.com',3> "registered as s2":
            receive {incr_counter,c1}
        24: process <'node0@sbb.snookles.com',3> "registered as s2":
            <'node0@sbb.snookles.com',5> "registered as c1"!{incr_counter_reply,s2,0}
        25: process <'node0@sbb.snookles.com',5> "registered as c1":
            receive {incr_counter_reply,s2,0}
        26: process <'node0@sbb.snookles.com',2> "registered as s1":
            receive {incr_counter,c1}
        27: process <'node0@sbb.snookles.com',2> "registered as s1":
            <'node0@sbb.snookles.com',5> "registered as c1"!{incr_counter_reply,s1,1}
        28: process <'node0@sbb.snookles.com',5> "registered as c1":
            receive {incr_counter_reply,s1,1}
        29: process <'node0@sbb.snookles.com',5> "registered as c1":
        30: process <'node0@sbb.snookles.com',5> "registered as c1":
            *** PROBE {drop,c1,s1,{incr_counter,c1}} ***
        31: process <'node0@sbb.snookles.com',5> "registered as c1":
            <'node0@sbb.snookles.com',3> "registered as s2"!{incr_counter,c1}
        32: process <'node0@sbb.snookles.com',5> "registered as c1":
            <'node0@sbb.snookles.com',4> "registered as s3"!{incr_counter,c1}
        33: process <'node0@sbb.snookles.com',4> "registered as s3":
            receive {incr_counter,c1}
        34: process <'node0@sbb.snookles.com',4> "registered as s3":
            <'node0@sbb.snookles.com',5> "registered as c1"!{incr_counter_reply,s3,0}
        35: process <'node0@sbb.snookles.com',5> "registered as c1":
            receive {incr_counter_reply,s3,0}
        36: process <'node0@sbb.snookles.com',3> "registered as s2":
            receive {incr_counter,c1}
        37: process <'node0@sbb.snookles.com',3> "registered as s2":
            <'node0@sbb.snookles.com',5> "registered as c1"!{incr_counter_reply,s2,1}
        38: process <'node0@sbb.snookles.com',5> "registered as c1":
            receive {incr_counter_reply,s2,1}
        39: process <'node0@sbb.snookles.com',5> "registered as c1":
        40: process <'node0@sbb.snookles.com',1>:
            receive {{pid,'node0@sbb.snookles.com',5},
            <'node0@sbb.snookles.com',2> "registered as s1"!shutdown
            <'node0@sbb.snookles.com',3> "registered as s2"!shutdown
            <'node0@sbb.snookles.com',4> "registered as s3"!shutdown
        41:
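
    The change itself presumably looks something like this (my own
    reconstruction: the probe term mirrors the `PROBE {drop,...}` lines
    above, and how the sender knows its own registered name is an
    assumption):

        %% Either deliver the message, or drop it and leave a PROBE event
        %% in the trace so that the drop is visible in the debugger.
        %% get(my_name) is a stand-in for however the sender tracks its
        %% own registered name.
        my_bang(Rcpt, Msg) ->
            mce_erl:choice(
              [{fun() -> Rcpt ! Msg end, []},
               {fun() -> mce_erl:probe({drop, get(my_name), Rcpt, Msg}) end,
                []}]).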
  2. Fix really-inanely-screwed-up tests in verify_mc_property().

    I dunno how I managed to screw this up so badly, but I did.  The
    code before this patch was trying to do pattern matching with the
    wrongwrongwrong pattern, which resulted in empty lists, and those
    empty lists never showed a bad event (a bad event only appears when
    the list is non-empty).

    So, finally, we find our first counter-example, one that we know
    distrib_counter_bad1_sim + QuickCheck (minus McErlang) + network
    partitions does indeed find:
    
        > eqc:current_counterexample().
        [{1,3,1},
         {[{c1,{counter_op,[s1,s2,s3]}},
           {c1,{counter_op,[s1,s2,s3]}},
           {c1,{counter_op,[s1,s2,s3]}},
           {c1,{counter_op,[s1,s2,s3]}}],
          [{c1,[],counter_client1}],
          [{s1,0,counter_server1},
           {s2,0,counter_server1},
           {s3,0,counter_server1}],
          [c1,s1,s2,s3],
          [{partition,[s3],[c1],29,40},{partition,[c1],[s1,s2],1,13}],
          []}]
        > eqc:check(eqc:numtests(5000, slf_msgsim_qc:prop_simulate(distrib_counter_bad1_sim, [])), X5).
        [...]
             [{c1,43,{counter,2}},    %% Here's the emitted list, in reverse order
              {c1,31,{counter,2}},
              {c1,18,{counter,timeout}},
              {c1,12,{counter,0}}],
        [...]
        Emitted counters = [0,2,2]
        false
    
    McErlang does indeed find a duplicate counter.  After a bit of
    experimenting, I see that really only three counter_ops are necessary:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 3, 1, lists:duplicate(3, {c1, {counter_op, [s1,s2,s3]}}), [{c1, [], counter_client1}], [{s1, 0, counter_server1}, {s2, 0, counter_server1}, {s3, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true}).
        [...]
        *** User code generated error
        exception exit due to reason
        {all_emitted,[{c1,{counter,0}},{c1,{counter,1}},{c1,{counter,1}}]}
        Stack trace:
          -distrib_counter_bad1_sim:verify_mc_property near line
        127/4-anonymous-0-/3
          slf_msgsim_qc:prop_mc_simulate2/7
          mce_erl_opsem:runUserCode/2
          mce_erl_opsem:doStep/2
          mce_alg_safety:'-transitions/5-fun-0-'/6
          lists:map/2
          mce_alg_safety:run/4
          mce_alg_safety:check_transitions/5
    
        Stack depth 46 entries.
    
        Reductions: 4.09 mreds; Runtime: 0.500 seconds; 0.51 elapsed seconds
    
    If we try to find the shortest counter-example, we add `shortest=true`
    to the `#mce_opts` record:
    
        [...]
        Generated states 90000; checked states 190194; relation 2.1132666666666666
    
        *** Run ending. 208793 states explored, stored states 98735.
    
        *** Property violation detected at minimum depth 40
    
        *** User code generated error
        exception exit due to reason
        {all_emitted,[{c1,{counter,timeout}},{c1,{counter,1}},{c1,{counter,1}}]}
        Stack trace:
          -distrib_counter_bad1_sim:verify_mc_property near line
        127/4-anonymous-0-/3
          slf_msgsim_qc:prop_mc_simulate2/7
          mce_erl_opsem:runUserCode/2
          mce_erl_opsem:doStep/2
          mce_alg_safety:'-transitions/5-fun-0-'/6
          lists:map/2
          mce_alg_safety:run/4
          mce_alg_safety:check_transitions/5
    
        Stack depth 41 entries.
    
        Reductions: 49.88 mreds; Runtime: 6.270 seconds; 6.50 elapsed seconds
  3. Hrm, it looks like 2 receive statements "nested" don't work.

    If Proc A is waiting for a message from Proc B with timeout 5X,
    and Proc B is waiting for a message with timeout 1X, then McErlang
    can decide that Proc A's timeout happens first, even when Proc B's
    shorter timeout ought to fire long before Proc A's does.

    So, we'll avoid the problem by removing the "after" clause when
    getting results from clients.  As a result, if a client misbehaves
    and doesn't send us the results message, then we'll hang forever.
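
    A minimal shape of the problem and the workaround (timeout values
    and message names are illustrative only):

        %% Before: collecting client results had its own long timeout,
        %% which McErlang could fire ahead of a pending shorter one.
        collect_result() ->
            receive
                {result, R} -> R
            after 5000 ->               % the "5X" timeout
                timeout
            end.

        %% After: no 'after' clause, so no bogus interleaving ... but a
        %% client that never replies will hang us forever.
        collect_result_forever() ->
            receive
                {result, R} -> R
            end.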
  4. Amazing what happens when you actually check for failure conditions.

    So, now this bad boy fails with a `hey_timeout_bad`, which shouldn't
    be happening (I think).
    
        mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 3, 1, lists:duplicate(2, {c1, {counter_op, [s1,s2,s3]}}), [{c1, [], counter_client1}], [{s1, 0, counter_server1}, {s2, 0, counter_server1}, {s3, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true}).
  5. Hm, a scenario that ought to fail still is *not* failing with McErlang.

    Scenario:
    
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 3, 1, lists:duplicate(4, {c1, {counter_op, [s1,s2,s3]}}), [{c1, [], counter_client1}], [{s1, 0, counter_server1}, {s2, 0, counter_server1}, {s3, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true, algorithm={mce_alg_safety_parallel,4}}).
    
    I think I need to double-check the `verify_mc_property()` func....
  6. Small milestone: McErlang + possible message drops

    Example (WARNING: this example is flawed because not all 3 server names
    are enumerated in the `{counter_op,...}` tuples!):
    
        mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 2, 3, 1, [{c1, {counter_op, [s1]}}, {c2, {counter_op, [s1]}}, {c1, {counter_op, [s1]}}, {c2, {counter_op, [s1]}}, {c1, {counter_op, [s1]}}, {c2, {counter_op, [s1]}}], [{c1, [], counter_client1}, {c2, [], counter_client1}], [{s1, 0, counter_server1}, {s2, 0, counter_server1}, {s3, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true}).
    
    If `my_bang(Rcpt, Msg) -> Rcpt ! Msg`, then:
    
        *** Run ending. 20429 states explored, stored states 9582.
    
        Execution terminated normally.
        Table has 9582 states and 20428 transitions.
    
        Reductions: 5.95 mreds; Runtime: 0.710 seconds; 0.74 elapsed seconds
    
    If `my_bang(Rcpt, Msg) -> mce_erl:choice([{fun() -> Rcpt ! Msg end, []},...`
    (sketched in full below), then:
    
        *** Run ending. 270677 states explored, stored states 145332.
    
        Execution terminated normally.
        Table has 145332 states and 270676 transitions.
    
        Reductions: 152.82 mreds; Runtime: 13.450 seconds; 14.03 elapsed seconds
        Access result using mce:result()
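
    For completeness, the branch elided by the `...` above probably
    amounts to a "do nothing" alternative; the probe-on-drop version in
    the "my_bang() ... McErlang probe" commit above came later (this
    reconstruction is a guess):

        %% Either deliver the message or silently drop it; McErlang's
        %% choice makes the model checker explore both branches.
        my_bang(Rcpt, Msg) ->
            mce_erl:choice(
              [{fun() -> Rcpt ! Msg end, []},
               {fun() -> ok end, []}]).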
  7. First passing test case with McErlang, after adding is_infinitely_fast=true.
    
    Example:
    
        mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 1, 1, [{c1, {counter_op, [s1]}}], [{c1, [], counter_client1}], [{s1, 0, counter_server1}]], #mce_opts{is_infinitely_fast=true}).
    
    ... which gives the output:
    
        *** Run ending. 15 states explored, stored states 13.
    
        Execution terminated normally.
        Table has 13 states and 14 transitions.
    
        Reductions: 0.01 mreds; Runtime: 0.000 seconds; 0.00 elapsed seconds
        Access result using mce:result()
        ok
    
    This is a really small example.  If we reduce the # of ops from 1 -> 0, then
    we see:
    
        *** Run ending. 9 states explored, stored states 9.
    
        Execution terminated normally.
        Table has 9 states and 8 transitions.
  8. Sort-of milestone: first run of distrib_counter_bad1_sim under McErlang.

    Usage:
    
        % mcerl_compiler -sources src/distrib_counter_bad1_sim.erl src/slf_msgsim*erl -output_dir mc_ebin
        % mcerl -pz ./mc_ebin
        > rr(mce:find_mce_opts()).
        > mce:apply(slf_msgsim_qc, prop_mc_simulate2, [distrib_counter_bad1_sim, [{no_generator, true}], 1, 1, 1, [{c1, {counter_op, [s1]}}], [{c1, [], counter_client1}], [{s1, 0, counter_server1}]]).
    
    ... which fails in 9 steps due to a `badarg` in mcerlang:resolvePid/2, whee!
  9. justincase-part2
  10. justincase
  11. Aside: notes on playing with "erl +T {n}" values to adjust the Erlang scheduler.
    
    Experimentation using:
    
        eqc:quickcheck(eqc:numtests(200, slf_msgsim_qc:prop_mc_simulate(distrib_counter_bad1_sim, []))).
    
    Using `erl +T 0`, the output of the per-client-process counters usually
    looks something like a zig-zag, starting with the 1st client process on
    the first line.  And it's like this very consistently.  For example:
    
        [[{counter,0},{counter,2}],
         [{counter,1},{counter,4},{counter,6}],
         [{counter,3},{counter,5}],
         [],
         [{counter,7},{counter,8},{counter,9}]]
    
    Using `erl +T 5`, the same zig-zag pattern usually appears, but occasionally
    something different happens, like this almost-top-to-bottom-left-to-right
    pattern:
    
        [[{counter,0}],
         [{counter,1},{counter,4},{counter,7}],
         [],
         [{counter,2},{counter,6},{counter,8}],
         [{counter,3},{counter,5},{counter,9}]]
    
    Using `erl +T 6`, things appear to get *much* slower.  (The "man" page
    wasn't lying when it said that higher `+T` values can hurt performance.)
    Now, we almost always see each process being as greedy as possible,
    running seemingly to completion before allowing another client to run,
    for example:
    
        [[{counter,0},{counter,1}],
         [{counter,2},{counter,3},{counter,4}],
         [{counter,5}],
         [{counter,6},{counter,7},{counter,8},{counter,9}]]
    
    At `+T 8` and `+T 9`, things are noticeably slower than the already-slow
    `+T 6` or `+T 7`, but the greedy client behavior described above for
    `+T 6` remains the same.
  12. 1st attempt at getting a McErlang simulation ready: distrib_counter_bad1_sim.
    
    This doesn't do anything directly with McErlang yet.  But it's
    getting closer, I hope.  I want/need something that can reuse as
    much of the msgdropsim infrastructure as possible ... but I haven't
    spent enough time to figure out the easiest/laziest/most beautiful
    way to do that yet.  As a result, this commit is pretty ugly.
    
    Added a new QuickCheck property creator-thingie to slf_msgsim_qc.erl,
    `prop_mc_simulate/2`.  Its usage is intended to be 100% compatible
    with `prop_simulate/2`.  Right now it's only an ugly kludge, probably
    only good enough for `distrib_counter_bad1_sim.erl`, but it's a
    start.
    
    Example:
    
        eqc:quickcheck(eqc:numtests(5000, slf_msgsim_qc:prop_mc_simulate(distrib_counter_bad1_sim, []))).
    
    For `distrib_counter_bad1_sim.erl` there have been a lot of changes,
    many of them temporary kludges:
    
    * Modify `general_initial_ops/4` to be able to take a list of
    server pids out of the `Props` property list.  The kludge is that
    the server processes need to be started first so that their PIDs
    can be embedded into the client op tuples: we aren't using
    registered names, so we gotta use PIDs, bleh.  I may yet go back
    to using registered names; I think it may make life a heckuvalot
    easier ... except for the race conditions that may pop up when
    starting all of these silly processes, bleh.
    
    * Create a `verify_mc_property/7` function to perform the same
    test sanity check as `verify_property/7` does for msgdropsim's
    simulated processes.
    
    * Convert the entire process simulation code to standard Erlang
    code.  I'd *love* to find a better way to eliminate this step
    or to automate the code generation from this step, because it's
    ugly and manual (and therefore error-prone), but the wonderful-
    in-the-real-world-but-difficult-to-model semantics of selective
    receive make that job difficult.  So, I need to put a lot more
    thought into this aspect, alas.
    
    For each state, e.g., `counter_client1_reply()`, there is an
    Erlang function that corresponds to that FSM state, e.g.,
    `e_counter_client1_reply()`.  The `e_` prefix stands for
    "Erlang".