add ellipses to docs for doctests
geraintpalmer committed Apr 26, 2024
1 parent c594983 commit 828cf6f
Showing 8 changed files with 15 additions and 23 deletions.
2 changes: 1 addition & 1 deletion docs/Guides/CustomerBehaviour/baulking.rst
@@ -46,7 +46,7 @@ When the system is simulated, the baulked customers are recorded as data records
>>> baulked_recs = [r for r in recs if r.record_type=="baulk"]
>>> r = baulked_recs[0]
>>> (r.id_number, r.customer_class, r.node, r.arrival_date)
-(44, 'Class 0', 1, 9.45892050639243)
+(44, 'Class 0', 1, 9.45892050639...)

Note that baulking behaves differently from simply setting a queue capacity.
Filling a queue's capacity results in arriving customers being *rejected* (and recorded as data records of type :code:`"rejection"`), and in transitioning customers becoming blocked.
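As a quick illustration of that distinction (an illustrative snippet, not part of the original guide, assuming a simulation :code:`Q` in which a queue capacity is also in force), the two outcomes can be separated by record type::

>>> recs = Q.get_all_records()
>>> baulked = [r for r in recs if r.record_type == "baulk"]        # chose not to join the queue
>>> rejected = [r for r in recs if r.record_type == "rejection"]   # turned away by a full queue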
4 changes: 2 additions & 2 deletions docs/Guides/Queues/queue_capacities.rst
@@ -68,7 +68,7 @@ Information about blockages is visible in the service data records::
>>> recs = Q.get_all_records(only=['service'])
>>> dr = recs[381]
>>> dr
-Record(id_number=281, customer_class='Customer', original_customer_class='Customer', node=1, arrival_date=86.47159563260503, waiting_time=0.23440800156484443, service_start_date=86.70600363416987, service_time=0.6080763379283525, service_end_date=87.31407997209823, time_blocked=0.7507016571852461, exit_date=88.06478162928347, destination=2, queue_size_at_arrival=4, queue_size_at_departure=2, server_id=3, record_type='service')
+Record(id_number=281, customer_class='Customer', original_customer_class='Customer', node=1, arrival_date=86.47159..., waiting_time=0.23440..., service_start_date=86.70600..., service_time=0.60807..., service_end_date=87.31407..., time_blocked=0.75070..., exit_date=88.06478..., destination=2, queue_size_at_arrival=4, queue_size_at_departure=2, server_id=3, record_type='service')

-In the case above, the customer ended service at date :code:`87.31407997209823`, but didn't exit until date :code:`88.06478162928347`, giving a :code:`time_blocked` of :code:`0.7507016571852461`.
+In the case above, the customer ended service at date :code:`87.31407...`, but didn't exit until date :code:`88.06478...`, giving a :code:`time_blocked` of :code:`0.75070...`.
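As a sanity check (an illustrative snippet, not from the original page), the blocked time is simply the gap between the end of service and the actual exit from the node::

>>> dr.exit_date - dr.service_end_date   # matches dr.time_blocked
0.75070...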

4 changes: 2 additions & 2 deletions docs/Guides/Queues/system_capacity.rst
@@ -4,7 +4,7 @@
How to Set a Maximum Capacity for the Whole System
===================================================

-We have seen that :ref:`node capacities<_tutorial-iii>` can define restricted queueing networks. Ciw also allows for a whole system capacity to be set. When a system capacity is set and the total number of customers present in *all* the nodes of the system equals the system capacity, newly arriving customers will be rejected. Once the total number of customers drops back below the system capacity, customers will be accepted into the system again.
+We have seen that :ref:`node capacities<tutorial-iii>` can define restricted queueing networks. Ciw also allows for a whole system capacity to be set. When a system capacity is set and the total number of customers present in *all* the nodes of the system equals the system capacity, newly arriving customers will be rejected. Once the total number of customers drops back below the system capacity, customers will be accepted into the system again.

In order to implement this, we use the :code:`system_capacity` keyword when creating the network::

@@ -27,5 +27,5 @@ In this case, the total capacity of nodes 1 and 2 is 4, and the system will neve…
>>> Q.simulate_until_max_time(100)
>>> state_probs = Q.statetracker.state_probabilities()
>>> state_probs
-{0: 0.03369655546653017, 1: 0.1592711312247873, 2: 0.18950832844418355, 3: 0.2983478656854591, 4: 0.31917611917904}
+{0: 0.03369..., 1: 0.15927..., 2: 0.18950..., 3: 0.29834..., 4: 0.31917...}
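The network definition itself sits in the collapsed part of this diff. Purely as a sketch of the keyword being described (the distributions, rates and routing below are made up; only :code:`system_capacity` is the point), such a two-node network might be built like this::

>>> import ciw
>>> N = ciw.create_network(
...     arrival_distributions=[ciw.dists.Exponential(1), ciw.dists.Exponential(1)],
...     service_distributions=[ciw.dists.Exponential(3), ciw.dists.Exponential(3)],
...     number_of_servers=[1, 1],
...     routing=[[0.0, 0.5], [0.0, 0.0]],
...     system_capacity=4
... )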

12 changes: 2 additions & 10 deletions docs/Guides/Routing/join_shortest_queue.rst
@@ -169,13 +169,5 @@ We'll run this for 100 time units::
We can look at the state probabilities, that is, the proportion of time the system spent in each state, where a state represents the number of customers present in the system::

>>> state_probs = Q.statetracker.state_probabilities(observation_period=(10, 90))
->>> for n in range(8):
-...     print(n, round(state_probs[n], 5))
-0 0.436
-1 0.37895
-2 0.13629
-3 0.03238
-4 0.01255
-5 0.00224
-6 0.00109
-7 0.00051
+>>> state_probs
+{1: 0.37895..., 2: 0.13628..., 3: 0.03237..., 0: 0.43600..., 4: 0.01254..., 5: 0.00224..., 6: 0.00108..., 7: 0.00050...}
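Since these are proportions of the observation window, they should account for all of the time between dates 10 and 90; a quick illustrative check (not from the original page)::

>>> abs(sum(state_probs.values()) - 1.0) < 1e-10
True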
2 changes: 1 addition & 1 deletion docs/Guides/Services/processor-sharing.rst
@@ -23,7 +23,7 @@ Now we create a simulation object using :code:`ciw.PSNode` rather than :code:`ci…
>>> Q = ciw.Simulation(N, node_class=ciw.PSNode)

Note that this applies the process sharing node to every node of the network.
-Alternatively we could provide a list of different node classes, for use on each different node of the network (see :ref:`this example <ps-routing>` for an in depth example of this)::
+Alternatively we could provide a list of different node classes, for use on each different node of the network (see :ref:`this example <example_lb>` for an in depth example of this)::

>>> ciw.seed(0)
>>> Q = ciw.Simulation(N, node_class=[ciw.PSNode])
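For a network with more than one node, the list form lets each node use a different dynamic. For instance (a hypothetical two-node network :code:`N2`, not defined on this page), the first node could use processor sharing while the second keeps the default FIFO behaviour::

>>> # node 1 uses processor sharing, node 2 keeps the default ciw.Node behaviour
>>> Q = ciw.Simulation(N2, node_class=[ciw.PSNode, ciw.Node])  # doctest:+SKIP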
8 changes: 4 additions & 4 deletions docs/Guides/Services/server_priority.rst
@@ -27,7 +27,7 @@ Observing the utilisation of each server we can see that server 1 is far more bu…
>>> Q.simulate_until_max_time(1000)

>>> [srv.utilisation for srv in Q.nodes[1].servers]
-[0.3184942440259139, 0.1437617661984246, 0.04196395909329539]
+[0.31849424..., 0.14376176..., 0.04196395...]


Ciw allows servers to be prioritised according to a custom server priority function.
@@ -52,7 +52,7 @@ The :code:`server_priority_functions` keyword takes a list of server priority fu…
>>> Q.simulate_until_max_time(1000)

>>> [srv.utilisation for srv in Q.nodes[1].servers]
-[0.16784616228588892, 0.16882711899030475, 0.1675466880414403]
+[0.16784616..., 0.16882711..., 0.16754668...]
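The priority function used here is hidden in the collapsed lines of the diff. As a rough sketch only, assuming (as the balanced utilisations above suggest) that a server priority function receives a server and the arriving individual, and that servers returning lower values are chosen first, it might look something like::

>>> def least_utilised_first(srv, ind):
...     # prefer whichever server has been busy for the smallest share of time so far
...     return srv.utilisation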



@@ -95,7 +95,7 @@ Now let's see this in action when we have equal numbers of individuals of class…
>>> Q = ciw.Simulation(N)
>>> Q.simulate_until_max_time(1000)
>>> [srv.utilisation for srv in Q.nodes[1].servers]
-[0.36132860028585134, 0.3667939476202799, 0.2580202674603771]
+[0.36132860..., 0.36679394..., 0.25802026...]

Utilisation is fairly even between the first two servers, with the third server picking up any slack. Now let's see what happens when there are three times as many individuals of class 0 entering the system as there are of class 1::

@@ -116,7 +116,7 @@
>>> Q = ciw.Simulation(N)
>>> Q.simulate_until_max_time(1000)
>>> [srv.utilisation for srv in Q.nodes[1].servers]
-[0.447650059165907, 0.2678754897968868, 0.29112382084389343]
+[0.44765005..., 0.26787548..., 0.29112382...]

Now the first server is much busier than the others.

2 changes: 1 addition & 1 deletion docs/Guides/Services/service_disciplines.rst
@@ -40,7 +40,7 @@ Custom Disciplines

Other service disciplines can also be implemented by writing a custom service discipline function. These functions take in a list of individuals and the current time, and return an individual from that list, representing the next individual to be served. As this is a list of individuals, we can access the individuals' attributes when making the service discipline decision.

-For example, say we wish to implement a service discipline that chooses the customers randomly, but with probability proportional to their arrival order, we could write:
+For example, say we wish to implement a service discipline that chooses the customers randomly, but with probability proportional to their arrival order, we could write::

>>> def SIRO_proportional(individuals, t):
...     n = len(individuals)
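The rest of :code:`SIRO_proportional` is in the collapsed lines of the diff. As a simpler sketch of the same interface (not from the original page), a last-in-first-out discipline only needs to return the most recently arrived individual::

>>> def LIFO(individuals, t):
...     # the last individual in the list is the most recent arrival
...     return individuals[-1]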
4 changes: 2 additions & 2 deletions docs/Guides/Simulation/results.rst
@@ -34,7 +34,7 @@ This gives a list of :code:`DataRecord` objects, which are named tuples with a n…

>>> r = recs[14]
>>> r
-Record(id_number=15, customer_class='Customer', original_customer_class='Customer', node=1, arrival_date=16.58266884119802, waiting_time=0.0, service_start_date=16.58266884119802, service_time=1.6996950244974869, service_end_date=18.28236386569551, time_blocked=0.0, exit_date=18.28236386569551, destination=-1, queue_size_at_arrival=0, queue_size_at_departure=1, server_id=1, record_type='service')
+Record(id_number=15, customer_class='Customer', original_customer_class='Customer', node=1, arrival_date=16.58266..., waiting_time=0.0, service_start_date=16.58266..., service_time=1.69969..., service_end_date=18.28236..., time_blocked=0.0, exit_date=18.28236..., destination=-1, queue_size_at_arrival=0, queue_size_at_departure=1, server_id=1, record_type='service')

These data records have a number of useful fields, set out in detail :ref:`here<refs-results>`. Importantly, fields can be accessed as attributes::

@@ -45,7 +45,7 @@ And so relevant data can be gathered using list comprehension::

>>> waiting_times = [r.waiting_time for r in recs]
>>> sum(waiting_times) / len(waiting_times)
-0.3989747357976479
+0.3989747...

For easier manipulation, using these records in conjunction with `Pandas <https://pandas.pydata.org/>`_ is recommended, allowing convenient filtering, grouping, and summary statistics. Lists of data records convert to Pandas data frames smoothly:
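For instance (an illustrative snippet, not from the original page; the collapsed lines presumably show something similar), the list of named tuples can be passed straight to the :code:`DataFrame` constructor::

>>> import pandas as pd
>>> df = pd.DataFrame(recs)                # one row per data record, one column per field
>>> float(df['waiting_time'].mean())       # same figure as the manual calculation above
0.3989747...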

