
Getting started



Installation and support

This is the fastest way to get started with Line:

  1. Obtain the latest release:

    Ensure that the files are decompressed (or checked out) in the installation folder.

  2. Before running Line, you always need to add the installation folder and its subfolders to the MATLAB path (a way to automate this step at MATLAB startup is sketched after this list). To do so, start MATLAB, change the active directory to the installation folder, then run

    lineStart
    
  3. Line is now ready to use. For example, you can run the demonstrators using

    allExamples
    
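To avoid repeating the path setup in every session, the same two commands can be placed in your MATLAB startup.m file. The sketch below is a convenience suggestion rather than part of Line's documented workflow, and the installation path shown is an assumption; replace it with your own installation folder.

% In startup.m (the path is an assumption; use your installation folder)
cd('~/line')   % change to the Line installation folder
lineStart      % add Line and its subfolders to the MATLAB path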

Solver verbosity can be controlled as follows:

GlobalConstants.setVerbose(VerboseLevel.DEBUG)

Alternative verbosity levels include VerboseLevel.STD, which restores the standard output level.

Software requirements

Certain features of Line depend on external tools and libraries. The recommended dependencies are:

  • MATLAB: version 2023a or later, with the following toolboxes:

    • Global Optimization Toolbox

    • Optimization Toolbox

    • Parallel Computing Toolbox

    • Statistics and Machine Learning Toolbox

    • Symbolic Math Toolbox

A number of supporting libraries are automatically downloaded by, or shipped together with, Line.

Optional dependencies recommended to utilize all features available in Line are as follows:

  • LQNS (https://github.com/layeredqueuing/V6): version 6.2.28 or later. System paths need to be configured so that the lqns and lqsim solvers are available on the command line (i.e., they can be invoked from MATLAB via the dos or unix commands without specifying the paths of the executables).
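A quick way to verify this configuration from MATLAB is to check whether the executables can be resolved on the system path. The snippet below is only a convenience check, not part of Line's API:

% Check (not part of Line) that the LQNS executables are on the system path
if ispc
    [status, out] = system('where lqns');
else
    [status, out] = system('which lqns');
end
if status == 0
    disp(['lqns found at: ', strtrim(out)])
else
    disp('lqns not found on the system path')
end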

Documentation

This manual introduces the main concepts needed to define models in Line and run its solvers. In particular, the document includes several tables that summarize the features currently supported in the modeling language and by the individual solvers. Additional resources are as follows:

Getting help

For discussions, bug reports, and new feature requests, please create a thread on one of the following Sourceforge boards:

Getting started examples

In this section, we present some examples that illustrate how to use Line. The relevant scripts are included under the gettingstarted/ folder.

Systems can be described in Line using one of the available classes of stochastic models:

  • Network models are extended queueing networks. Typical instances are open, closed and mixed queueing networks, possibly including advanced features such as class-switching, finite capacity, priorities, non-exponential distributions, and others. Technical background on these models can be found in books such as [@BolGMT06; @LazZGS84] or in tutorials such as [@Lav89; @Bal00].

  • LayeredNetwork models are layered queueing networks, i.e., models consisting of layers, each corresponding to a Network object, which interact through synchronous and asynchronous calls. Technical background on layered queueing networks can be found in [@lqntut].

The goal of the remainder of this chapter is to provide simple examples that explain the basics of how these models can be analyzed in Line. More advanced forms of evaluation, such as probabilistic or transient analyses, are discussed in later chapters. Additional examples are supplied under the examples/ and gallery/ folders; the latter is discussed in the next subsection.

Model gallery

Line includes a collection of classic, commonly occurring queueing models under the gallery/ folder. These include single queueing systems (e.g., $M/M/1$, $M/H_2/1$, $D/M/1$, ...), tandem queueing systems, and basic queueing networks. For example, to instantiate a tandem network of $M/M/1$ queues and estimate its mean response times we may run

>> SolverMVA(gallery_mm1_tandem).getAvgRespTTable
MVA analysis (method: default) completed in 0.054887 seconds.
ans =
  2x3 table
    Station    JobClass    RespT
    _______    ________    _____
    Queue1     myClass       9
    Queue2     myClass       9

The examples in the gallery may also be used as templates to accelerate the definition of basic models. Example 9, later in this chapter, shows a gallery instantiation of an $M/E_2/1$ queue.

Example 1: An M/M/1 queue

The M/M/1 queue is a classic model of a queueing system where jobs arrive into an infinite-capacity buffer, wait to be processed in first-come first-served (FCFS) order, and then leave after service completion. Arrival and service times are assumed to be independent and exponentially distributed random variables.

In this example, we wish to compute average performance measures for the M/M/1 queue. We assume that arrivals come in at rate $\lambda=1$ job/s, while service has rate $\mu=2$ job/s. It is known from theory that the exact value of the server utilization in this case is $\rho=\lambda/\mu=0.5$, i.e., 50%, while the mean response time for a visit is $R=1/(\mu-\lambda)=1$s. We wish to verify these values using JMT-based simulation, instantiated through Line.
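These textbook values can be reproduced with a two-line hand calculation, shown below purely for reference:

% Theoretical values for the M/M/1 queue used for comparison below
lambda = 1; mu = 2;
rho = lambda/mu        % server utilization: 0.5
R = 1/(mu - lambda)    % mean response time: 1 s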

The general structure of a Line script consists of four blocks:

  1. Definition of nodes

  2. Definition of job classes and associated statistical distributions

  3. Instantiation of model topology

  4. Solution

For example, the following script solves the M/M/1 model

model = Network('M/M/1');
%% Block 1: nodes
source = Source(model, 'mySource');
queue = Queue(model, 'myQueue', SchedStrategy.FCFS);
sink = Sink(model, 'mySink');
%% Block 2: classes
oclass = OpenClass(model, 'myClass');
source.setArrival(oclass, Exp(1));
queue.setService(oclass, Exp(2));
%% Block 3: topology
model.link(Network.serialRouting(source,queue,sink));
%% Block 4: solution
AvgTable = SolverJMT(model,'seed',23000).getAvgTable

In the example, source and sink are the arrival and departure points of jobs; queue is a queueing station with FCFS scheduling; oclass defines an open class of jobs that arrive, get served, and leave the system; Exp(x) defines an exponential distribution with rate x; finally, the getAvgTable command solves for average performance measures with JMT’s simulator, using a specific seed for the random number generator to ensure reproducibility.

The result is a table with mean performance measures, including:

  • QLen: the mean number of jobs at the station, either queueing or receiving service

  • Util: the utilization of the servers

  • RespT: the mean response time for a visit to the station

  • ResidT: the mean residence time, i.e., the mean response time cumulatively spent at the station over all visits

  • Tput: the mean throughput of departing jobs

AvgTable =
  2x7 table
    Station     JobClass     QLen      Util       RespT     ResidT      Tput
    ________    ________    ______    _______    _______    _______    _______
    mySource    myClass          0          0          0          0    0.99894
    myQueue     myClass     0.9555    0.48736    0.95429    0.95429    0.99987

One can verify that this matches JMT results by first typing

model.jsimgView

which will open the model inside JSIMgraph, as shown in Figure 1.1. From this screen, the simulation can be started using the green “play” button in the JSIMgraph toolbar.

Figure 1.1: M/M/1 example in JSIMgraph

A pre-defined gallery of classic models is also available, for example

model = gallery_mm1

returns an M/M/1 queue with 50% utilization.

If we want to select a particular row of the AvgTable data structure, we can use the tget (table get) command, for example

>> ARow = tget(AvgTable, 'myQueue', 'myClass')
ARow =
  1x7 table
    Station    JobClass     QLen      Util       RespT     ResidT      Tput
    _______    ________    ______    _______    _______    _______    _______
    myQueue    myClass     0.9555    0.48736    0.95429    0.95429    0.99987

If we specify only ’myQueue’ or only ’myClass’, tget will return all entries corresponding to that station or class. Moreover, the following syntax, which passes the node and class objects directly, is also valid

>> ARow = tget(AvgTable, queue, oclass)

and, likewise, specifying only queue or only oclass returns all entries corresponding to that station or class.
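For instance, filtering by station only (the call below follows the behavior described above) returns every class-level row recorded for that station:

% Return all rows of AvgTable that refer to station myQueue
QueueRows = tget(AvgTable, 'myQueue')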

Example 2: A multiclass M/G/1 queue

We now consider a more challenging variant of the first example. We assume that there are two classes of incoming jobs with non-exponential service times. For the first class, service times are Erlang distributed with unit rate and variance $1/3$; they are instead read from a trace for the second class. Both classes have exponentially distributed inter-arrival times with mean $2s$.

To run this example, let us first change the MATLAB working directory to the examples/ folder. Then we specify the node block

model = Network('M/G/1');
source = Source(model,'Source');
queue = Queue(model, 'Queue', SchedStrategy.FCFS);
sink = Sink(model,'Sink');

The next step consists of defining the classes. We automatically fit an Erlang distribution from its mean and squared coefficient of variation (SCV = variance/mean$^2$), and we use the Replayer distribution to request that the specified trace be read cyclically to obtain the service times of class 2

jobclass1 = OpenClass(model, 'Class1');
jobclass2 = OpenClass(model, 'Class2');

source.setArrival(jobclass1, Exp(0.5));
source.setArrival(jobclass2, Exp(0.5));

queue.setService(jobclass1, Erlang.fitMeanAndSCV(1, 1/3));
queue.setService(jobclass2, Replayer('example_trace.txt'));

Note that the example_trace.txt file consists of a single column of doubles, each representing a service time value, e.g.,

   1.2377474e-02
   4.4486055e-02
   1.0027642e-02
   2.0983173e-02
   ...

We now specify a linear route through source, queue, and sink for both classes

P = model.initRoutingMatrix();
P{jobclass1} = Network.serialRouting(source,queue,sink);
P{jobclass2} = Network.serialRouting(source,queue,sink);
model.link(P);

and solve the model with JMT

>> jmtAvgTable = SolverJMT(model,'seed',23000).getAvgTable
jmtAvgTable =
  4x7 table
    Station    JobClass     QLen        Util       RespT     ResidT      Tput
    _______    ________    _______    ________    _______    _______    _______
    Source      Class1           0           0          0          0    0.50017
    Source      Class2           0           0          0          0    0.49114
    Queue       Class1     0.86153      0.4984     1.7389     1.7389    0.49953
    Queue       Class2     0.43751    0.049184    0.85879    0.85879    0.49064

We now wish to validate these results against an analytical solver. Since jobclass2 has trace-based service times, we first need to revise its service time distribution to make it analytically tractable; e.g., we may ask Line to fit an acyclic phase-type distribution [@BobHST03] based on the trace

queue.setService(jobclass2, Replayer('example_trace.txt').fitAPH());
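As a sanity check, one may compare the moments of the fitted APH distribution with those of the trace. The sketch below assumes that example_trace.txt is in the working directory and that the distribution object exposes the getMean and getSCV methods:

% Compare trace statistics with the fitted APH moments (sanity check)
trace = load('example_trace.txt');
aph = Replayer('example_trace.txt').fitAPH();
fprintf('mean: trace=%f, APH=%f\n', mean(trace), aph.getMean());
fprintf('SCV:  trace=%f, APH=%f\n', var(trace)/mean(trace)^2, aph.getSCV());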

We can now use a Continuous Time Markov Chain (CTMC) to solve the system, but since the state space is infinite in open models, we need to truncate it to be able to use this solver. For example, we may restrict to states with at most 2 jobs in each class, checking with the verbose option the size of the resulting state space

>> ctmcAvgTable2 = SolverCTMC(model,'cutoff',2,'verbose',true).getAvgTable
State space size: 46 states.
CTMC analysis completed in 0.096734 sec
ctmcAvgTable2 =
  4x7 table
    Station    JobClass     QLen        Util       RespT     ResidT      Tput
    _______    ________    _______    ________    _______    _______    _______
    Source      Class1           0           0          0          0    0.44948
    Source      Class2           0           0          0          0    0.48424
    Queue       Class1     0.56734     0.44948     1.2863     1.2863    0.44107
    Queue       Class2     0.24456    0.048942    0.51396    0.51396    0.47583

However, we see from the comparison with JMT that the errors of the CTMC solver are rather large. Since the truncated state space consists of just 46 states, we can further increase the cutoff to 4, trading a slower solution time for higher precision

>> ctmcAvgTable4 = SolverCTMC(model,'cutoff',4,'verbose',true).getAvgTable
State space size: 626 states.
CTMC analysis completed in 1.051784 sec
ctmcAvgTable4 =
  4x7 table
    Station    JobClass     QLen        Util       RespT     ResidT      Tput
    _______    ________    _______    ________    _______    _______    _______
    Source      Class1           0           0          0          0    0.49215
    Source      Class2           0           0          0          0    0.49626
    Queue       Class1      0.7958     0.49215     1.6187     1.6187    0.49162
    Queue       Class2     0.37558    0.050157    0.75763    0.75763    0.49573

To gain more accuracy, we could either keep increasing the cutoff value or, if we wish to compute an exact solution, we may call the matrix-analytic method (MAM) solver instead. MAM uses the repetitive structure of the CTMC to exactly analyze open systems with an infinite state space

>> mamAvgTable = SolverMAM(model).getAvgTable
mamAvgTable =
  4x7 table
    Station    JobClass     QLen        Util       RespT     ResidT     Tput
    _______    ________    _______    ________    _______    _______    ____
    Source      Class1           0           0          0          0    0.5
    Source      Class2           0           0          0          0    0.5
    Queue       Class1     0.87646         0.5     1.7529     1.7529    0.5
    Queue       Class2       0.427    0.050536    0.85399    0.85399    0.5

The current MAM implementation is primarily constructed on top of the BuTools solver [@Hor17] and the SMC solver [@BiniMSPH12].

Example 3: Machine interference problem

Closed models involve jobs that perpetually cycle within a network of queues. The machine interference problem is a classic example, in which a group of repairmen is tasked with fixing machines as they break, and the goal is to choose the optimal size of the group. Here we illustrate how to evaluate the performance of a given group size. We consider a scenario with $S=2$ repairmen and a total of $N=3$ machines, where machines break down at a rate of $0.5$ failed machines/week, after which a machine is fixed in an exponentially distributed time with rate $4.0$ repaired machines/week.

Suppose that we wish to obtain an exact numerical solution using Continuous Time Markov Chains (CTMCs). The above model can be analyzed as follows:

S=2; N=3;
model = Network('MIP');
%% Block 1: nodes
delay = Delay(model,'WorkingState');
queue = Queue(model, 'RepairQueue', SchedStrategy.FCFS);
queue.setNumberOfServers(S);
%% Block 2: classes
cclass = ClosedClass(model, 'Machines', N, delay);
delay.setService(cclass, Exp(0.5));
queue.setService(cclass, Exp(4.0));
%% Block 3: topology
model.link(Network.serialRouting(delay,queue));
%% Block 4: solution
solver = SolverCTMC(model);
ctmcAvgTable = solver.getAvgTable()

Here, delay appears in the constructor of the closed class to specify that a job will be considered completed once it returns to the delay (i.e., the machine returns to the working state). We thus say that the delay is the reference station of cclass. The above code prints the following result

ctmcAvgTable =
  2x7 table
      Station       JobClass     QLen       Util       RespT     ResidT      Tput
    ____________    ________    _______    _______    _______    _______    ______
    WorkingState    Machines     2.6648     2.6648          2          2    1.3324
    RepairQueue     Machines    0.33516    0.16655    0.25154    0.25154    1.3324

As before, we can inspect and analyze the model in JSIMgraph using the command

model.jsimgView

Figure 1.2 illustrates the result, demonstrating the automated definition of the closed class.

Figure 1.2: Machine interference model in JSIMgraph

We can now also inspect the CTMC in more detail as follows

StateSpace = model.getStateSpace()
InfGen = full(solver.getGenerator())

which produces in output the state space of the model and the infinitesimal generator of the CTMC

StateSpace =
     0     1     2
     1     0     2
     2     0     1
     3     0     0
InfGen =
   -8.0000    8.0000         0         0
    0.5000   -8.5000    8.0000         0
         0    1.0000   -5.0000    4.0000
         0         0    1.5000   -1.5000

For example, the first state (0 1 2) consists of two components: the initial 0 denotes the number of jobs in service in the delay, while the remaining part is the state of the FCFS queue. In the latter, the 1 means that a job of class 1 (the only class in this model) is in the waiting buffer, while the 2 means that there are two jobs in service at the queue.

As another example, the second state (1 0 2) is similar, but one job has completed at the queue and has then moved to the delay, concurrently triggering the admission into service of the job that was in the queue buffer. As a result, the buffer is now empty. The corresponding transition rate in the infinitesimal generator matrix is InfGen(1,2)=8.0, which sums the completion rates at the queue over its busy servers in the first state; the indexes 1 and 2 are the rows in StateSpace associated with the source and destination states.
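The same rate can be read programmatically from the variables computed above, for instance:

% Transition rate from state (0 1 2) (row 1) to state (1 0 2) (row 2)
rate = InfGen(1,2)    % 8.0 = 2 busy servers x completion rate 4 at the queue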

On this and larger infinitesimal generators, we may also list the individual non-zero transitions as follows

>> model.printInfGen(InfGen,StateSpace)
[0 1 2]->[1 0 2]: 8.000000
[1 0 2]->[0 1 2]: 0.500000
[1 0 2]->[2 0 1]: 8.000000
[2 0 1]->[1 0 2]: 1.000000
[2 0 1]->[3 0 0]: 4.000000
[3 0 0]->[2 0 1]: 1.500000

The above printout helps in matching the state transitions to their rates.

To avoid having to inspect the StateSpace variable to determine which station a particular column refers to, we can alternatively use the more general invocation

>> [StateSpace,nodeStateSpace] = solver.getStateSpace();
>> nodeStateSpace{delay}
ans =
     0
     1
     2
     3
>> nodeStateSpace{queue}
ans =
     1     2
     0     2
     0     1
     0     0

which automatically splits the state space into its constituent parts for each stateful node.

A further observation is that model.getStateSpace() forces the regeneration of the state space at each invocation, whereas the equivalent function in the CTMC solver, solver.getStateSpace(), returns the state space cached during the solution of the CTMC.
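The difference between the two calls can be observed, for example, by timing them; this is just an illustrative sketch and the measured times will vary with the model size:

% model-level call regenerates the state space; solver-level call reuses the cache
tic; SSmodel  = model.getStateSpace();  toc
tic; SSsolver = solver.getStateSpace(); toc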

Example 4: Round-robin load-balancing

In this example we consider a system of two parallel processor-sharing queues and we wish to study the effect of load-balancing on the average performance of an open class of jobs. We begin as usual with the node block, where we now include a special node, called the Router, to control the routing of jobs from the source into the queues:

model = Network('RRLB');
source = Source(model, 'Source');
lb = Router(model, 'LB');
queue1 = Queue(model, 'Queue1', SchedStrategy.PS);
queue2 = Queue(model, 'Queue2', SchedStrategy.PS);
sink  = Sink(model, 'Sink');

Let us then define the class block by setting exponentially-distributed inter-arrival times and service times, e.g.,

oclass = OpenClass(model, 'Class1');
source.setArrival(oclass, Exp(1));
queue1.setService(oclass, Exp(2));
queue2.setService(oclass, Exp(2));

We now wish to express the fact that the router applies a round-robin strategy to dispatch jobs to the queues. Since this is a non-probabilistic routing strategy, we need to adopt a slightly different style to declare the routing topology, as we can no longer specify routing probabilities. First, we indicate the connections between the nodes using the addLinks function:

model.addLinks([source, lb;
                lb,     queue1;
                lb,     queue2;
                queue1, sink;
                queue2, sink]);

At this point, all nodes are automatically configured to route jobs with equal probabilities on the outgoing links (the RoutingStrategy.RAND policy). If we solve the model as it stands, we see that the response time at the queues is around $0.66$ s.

>> jmtAvgTable = SolverJMT(model,'seed',23000).getAvgTable
jmtAvgTable =
  3x7 table
    Station    JobClass     QLen       Util       RespT     ResidT      Tput
    _______    ________    _______    _______    _______    _______    _______
    Source      Class1           0          0          0          0     1.0135
    Queue1      Class1     0.31612    0.24682    0.65411    0.65411      0.501
    Queue2      Class1     0.33403    0.25076    0.68406    0.34203    0.50413

After resetting the internal data structures, which is required before modifying a model, we can ask Line to solve the model again, this time using a round-robin policy at the router.

model.reset()
lb.setRouting(oclass, RoutingStrategy.RROBIN);

A representation of the model at this point is shown in Figure 1.3.

Figure 1.3: Load-balancing model

Lastly, we run JMT again and find that round-robin produces a visible decrease in response times, which are now around $0.56$ s.

>> jmtAvgTableRR = SolverJMT(model,'seed',23000).getAvgTable
jmtAvgTableRR =
  3x7 table
    Station    JobClass     QLen       Util       RespT     ResidT      Tput
    _______    ________    _______    _______    _______    _______    _______
    Source      Class1           0          0          0          0     1.0089
    Queue1      Class1     0.30429    0.26118    0.58482    0.58482    0.50526
    Queue2      Class1     0.29282    0.24397    0.57293    0.28647    0.50526

Example 5: Modelling a re-entrant line

Let us now consider a simple example inspired by the classic problem of modeling re-entrant lines. These arise in manufacturing systems where parts (i.e., jobs) re-enter a machine (i.e., a queueing station) multiple times, requesting a different class of service at each visit. This implies, for example, that the service time at every visit may feature a different mean or a different distribution compared to the previous visits, thus modeling a different stage of processing.

To illustrate this, consider for example a degenerate model composed of a single FCFS queue and $K$ classes. In this model, a job that completes processing in class $k$ is routed back to the tail of the queue in class $k+1$, unless $k=K$, in which case the job re-enters in class $1$.

We make the following assumptions: $K=3$; class $k$ has an Erlang-2 service time distribution at the queue with mean equal to $k$; and the system starts with $N_1=1$ job in class 1 and zero jobs in all other classes.

model = Network('RL');
queue = Queue(model, 'Queue', SchedStrategy.FCFS);

K = 3; N = [1,0,0];
for k=1:K
    jobclass{k} = ClosedClass(model, ['Class',int2str(k)], N(k), queue);
    queue.setService(jobclass{k}, Erlang.fitMeanAndOrder(k,2));
end

P = model.initRoutingMatrix();
P{jobclass{1},jobclass{2}}(queue,queue) = 1.0;
P{jobclass{2},jobclass{3}}(queue,queue) = 1.0;
P{jobclass{3},jobclass{1}}(queue,queue) = 1.0;
model.link(P);

The corresponding JMT model is shown in Figure 1.4, where it can be seen that the class-switching rule is automatically enforced by introduction of a ClassSwitch node in the network.

Figure 1.4: Re-entrant lines as an example of class-switching

We can now compute the performance indexes for the different classes

>> ctmcAvgTable = SolverCTMC(model).getAvgTable
ctmcAvgTable =
    Station    JobClass     QLen       Util      RespT    ResidT      Tput
    _______    ________    _______    _______    _____    _______    _______
     Queue      Class1     0.16667    0.16667      1      0.33333    0.16667
     Queue      Class2     0.33333    0.33333      2      0.66667    0.16667
     Queue      Class3         0.5        0.5      3            1    0.16667

Suppose now that the job is considered completed, for the sake of computation of system performance metrics, only when it departs the queue in class $K$ (here Class3). By default, Line will return system-wide performance metrics using the getAvgSysTable method, i.e.,

>> ctmcAvgSysTable = SolverCTMC(model).getAvgSysTable
ctmcAvgSysTable =
  1x4 table
    Chain        JobClasses        SysRespT    SysTput
    ______    _________________    ________    _______
    Chain1    {1x3 categorical}       2          0.5

This method identifies the model chains, i.e., groups of classes that can exchange jobs with each other, but not with classes in other chains. Since the job can switch into any of the three classes, in this model there is a single chain comprising the three classes.

To see the composition of Chain1, we look at the second column entry

>> ctmcAvgSysTable.JobClasses{1}
ans =
  1x3 categorical array
     Class1      Class2      Class3

We see that the throughput of the chain is $0.5$, which means that Line counts every departure from the queue, in any class, as a completion for the whole chain. This is incorrect for our model, since we want to count completions only when jobs depart in Class3. To obtain this behavior, we can tell the solver that passages of classes 1 and 2 through the reference station should not be counted as completions

jobclass{1}.completes = false;
jobclass{2}.completes = false;

This modification then gives the correct chain throughput, matching that of Class3 alone

>> ctmcAvgSysTable2 = SolverCTMC(model).getAvgSysTable
ctmcAvgSysTable2 =
  1x4 table
    Chain        JobClasses        SysRespT    SysTput
    ______    _________________    ________    _______
    Chain1    {1x3 categorical}       6        0.16667

Example 6: A queueing network with caching

In this more advanced example, we show how to include in a queueing network a cache adopting a least-recently used (LRU) replacement policy. Under LRU, upon a cache miss the least-recently accessed item will be discarded to make room for the newly requested item.

We consider a cache with a capacity of 50 items, out of a set of 1000 cacheable items. Items are accessed by jobs visiting the cache according to a Zipf-like law with exponent $\alpha=1.4$, defined over the finite set of items. A client cyclically issues requests for the items, waiting for a reply before issuing the next request. We assume that a cache hit takes on average $0.2$ ms to process, while a cache miss takes $1$ ms. We ask for the average request throughput of the system, differentiated across hits and misses.

Node block

As usual, we begin by defining the nodes. Here a delay node will be used to describe the time spent by the requests in the system, while the cache node will determine hits and misses:

model = Network('QNC');
% Block 1: nodes
clientDelay = Delay(model, 'Client');
cacheNode = Cache(model, 'Cache', 1000, 50, ReplacementStrategy.LRU);
cacheDelay = Delay(model, 'CacheDelay');
Class block

We define a set of classes to represent the incoming requests (clientClass), cache hits (hitClass) and cache misses (missClass). These classes need to be closed to ensure that there is a single outstanding request from the client at all times:

% Block 2: classes
clientClass = ClosedClass(model, 'ClientClass', 1, clientDelay, 0);
hitClass = ClosedClass(model, 'HitClass', 0, clientDelay, 0);
missClass = ClosedClass(model, 'MissClass', 0, clientDelay, 0);

We then assign the processing times, using the Immediate distribution to ensure that the client issues the request to the cache immediately:

clientDelay.setService(clientClass, Immediate());
cacheDelay.setService(hitClass, Exp.fitMean(0.2));
cacheDelay.setService(missClass, Exp.fitMean(1));

The next step involves specifying that the request uses a Zipf-like distribution (with parameter $\alpha=1.4$) to select the item to read from the cache, out of a pool of 1000 items

cacheNode.setRead(clientClass, Zipf(1.4,1000));

Finally, we ask that the job should become of class hitClass after a cache hit, and should become of class missClass after a cache miss:

cacheNode.setHitClass(clientClass, hitClass);
cacheNode.setMissClass(clientClass, missClass);
Topology block

Next, in the topology block we set up the routing so that the request, which starts in clientClass at the clientDelay, moves from there to the cache, remaining in clientClass

% Block 3: topology
P = model.initRoutingMatrix();
P{clientClass, clientClass}(clientDelay, cacheNode) = 1.0;

Inside the cache, the job will switch its class into either hitClass or missClass. Upon departure in one of these classes, we ask it to join cacheDelay in the same class for further processing

P{hitClass, hitClass}(cacheNode, cacheDelay) = 1.0;
P{missClass, missClass}(cacheNode, cacheDelay) = 1.0;

Lastly, the job returns to clientDelay for completion and start of a new request, which is done by switching its class back to clientClass

P{hitClass, clientClass}(cacheDelay, clientDelay) = 1.0;
P{missClass, clientClass}(cacheDelay, clientDelay) = 1.0;

The above routing strategy is finally applied to the model

model.link(P);
Solution block

To solve the model, since JMT does not support cache modeling, we use the native MATLAB-based simulation engine provided within Line, the SSA solver:

% Block 4: solution
ssaAvgTable = SolverSSA(model,'samples',2e4,'seed',1,'verbose',true).getAvgTable

The above script produces the following result

SSA samples:   20000
SSA analysis completed in 11.902675 sec
ssaAvgTable =
  3x7 table
     Station       JobClass       QLen       Util      RespT    ResidT      Tput
    __________    ___________    _______    _______    _____    _______    _______
    Client        ClientClass          0          0       0           0     2.9674
    CacheDelay    HitClass       0.50101    0.50101     0.2     0.16884      2.505
    CacheDelay    MissClass      0.49899    0.49899       1     0.16815    0.49899

The flows departing from CacheDelay correspond to the hit and miss rates. From the Tput column above, the hit rate is approximately $2.505$ req/ms, while the miss rate is approximately $0.499$ req/ms; these throughputs are measured at CacheDelay and thus account for the respective service times.
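From the same table we can also derive, for example, the cache hit ratio. The sketch below reuses the tget helper introduced earlier:

% Hit ratio estimated from the hit and miss throughputs at CacheDelay
hitTput  = tget(ssaAvgTable, 'CacheDelay', 'HitClass').Tput;
missTput = tget(ssaAvgTable, 'CacheDelay', 'MissClass').Tput;
hitRatio = hitTput / (hitTput + missTput)   % roughly 0.83 for the run above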

Let us now suppose that we wish to verify the result with a longer simulation, for example with 10 times more samples. To this end, we can use the automatic parallelization of SSA based on MATLAB’s spmd construct:

ssaAvgTablePara = SolverSSA(model,'para','samples',2e4,'seed',1).getAvgTable

This gives us a rather similar result, when run on a dual-core machine

Starting parallel pool (parpool) using the 'local' profile ...
connected to 2 workers.
ssaAvgTablePara =
 3x7 table
     Station       JobClass       QLen       Util      RespT    ResidT      Tput
    __________    ___________    _______    _______    _____    _______    _______
    Client        ClientClass          0          0       0           0     3.0652
    CacheDelay    HitClass       0.49871    0.49871     0.2     0.16689     2.4935
    CacheDelay    MissClass      0.50129    0.50129       1     0.16553    0.50129

The execution time is longer than usual at the first invocation of the parallel solver due to the time needed by MATLAB to bootstrap the parallel pool, in this example around 22 seconds. Successive invocations of parallel SSA normally take much less, in this example around 7 seconds each.

Example 7: Response time distribution and percentiles

In this example we illustrate the computation of response time percentiles in a queueing network model. We begin by instantiating a simple closed model consisting of a delay followed by a processor-sharing queueing station.

model = Network('model');
% Block 1: nodes
node{1} = Delay(model, 'Delay');
node{2} = Queue(model, 'Queue1', SchedStrategy.PS);

There is a single class consisting of 5 jobs that circulate between the two stations, taking exponential service times at both.

% Block 2: classes
jobclass{1} = ClosedClass(model, 'Class1', 5, node{1}, 0);
node{1}.setService(jobclass{1}, Exp(1.0));
node{2}.setService(jobclass{1}, Exp(0.5));

% Block 3: topology
model.link(Network.serialRouting(node{1},node{2}));

We now wish to compare the response time distribution at the PS queue computed analytically with a fluid approximation against the simulated values returned by JMT. To do so, we call the getCdfRespT method

% Block 4: solution
RDfluid = SolverFluid(model).getCdfRespT();
RDsim = SolverJMT(model,'seed',23000,'samples',1e4).getCdfRespT();

The returned data structures, RDfluid and RDsim, are cell arrays whose entries are 2-column matrices. The element in position $\{i,r\}$ of the cell array describes the response times at station $i$ for class $r$. The two columns of each matrix are defined similarly to the output of MATLAB’s ecdf function: the first column holds the cumulative distribution function (CDF) value $F(t)=\Pr(T\leq t)$, where $T$ is the random variable denoting the response time and $t$ is the percentile appearing in the corresponding entry of the second column.

For example, to plot the complementary CDF $1-F(t)$ we can use the following code

% Plot results
semilogx(RDsim{2,1}(:,2),1-RDsim{2,1}(:,1),'r'); hold all;
semilogx(RDfluid{2,1}(:,2),1-RDfluid{2,1}(:,1),'--');
legend('jmt-transient','fluid-steady','Location','Best');
ylabel('Pr(T > t)'); xlabel('time t');

which produces the graph shown in Figure 1.5

Figure 1.5: Comparison of the simulated response time distribution and its fluid approximation

The graph shows that, although the simulation refers to a transient while the fluid approximation refers to steady state, the two response time distributions match closely.

We can also readily compute the percentiles from the RDfluid and RDsim data structures, e.g., for the 95th and 99th percentiles of the simulated distribution

>> prc95=max(RDsim{2,1}(RDsim{2,1}(:,1)<0.95,2))
prc95 =
   27.0222
>> prc99=max(RDsim{2,1}(RDsim{2,1}(:,1)<0.99,2))
prc99 =
   41.8743

That is, 95% of the response times at the PS queue (node 2, class 1) are less than or equal to 27.0222 time units, while 99% are less than or equal to 41.8743 time units.

Example 8: Optimizing a performance metric

In this example, we show how to optimize a performance metric with the help of Line. We wish to find the optimal routing probabilities that minimize the average response time of two parallel processor-sharing queues. We assume that jobs are fed by a delay station, arranged with the two queues in a closed network topology.

We first create a MATLAB function (e.g., ex8.m) with header

function p_opt = ex8()

Within the function definition, we instantiate the two queues and the delay station

model = Network('LoadBalCQN');
% Block 1: nodes
delay = Delay(model,'Think');
queue1 = Queue(model, 'Queue1', SchedStrategy.PS);
queue2 = Queue(model, 'Queue2', SchedStrategy.PS);

We assume that 16 jobs circulate among the nodes, and that the service rates are $\sigma=1$ job/s at the delay, and $\mu_1=0.75$ and $\mu_2=0.50$ at the two queues:

% Block 2: classes
cclass = ClosedClass(model, 'Job1', 16, delay);
delay.setService(cclass, Exp(1));
queue1.setService(cclass, Exp(0.75));
queue2.setService(cclass, Exp(0.50));

We initially set up a topology with arbitrary values for the routing probabilities between the delay and the queues, ensuring that jobs completing at the queues return to the delay:

% Block 3: topology
P = model.initRoutingMatrix();
P{cclass}(queue1, delay) = 1.0;
P{cclass}(queue2, delay) = 1.0;
model.link(P);

We now write a nested function that returns the system response time of the jobs as a function of the routing probability $p$ of choosing queue 1 over queue 2:

% Block 4: solution
    function R = objFun(p)
        P{cclass}(delay, queue1) = p;
        P{cclass}(delay, queue2) = 1-p;
        model.link(P);
        R = SolverMVA(model,'exact','verbose',false).getAvgSysRespT;
    end
p_opt = fminbnd(@(p) objFun(p), 0,1)

In the above listing, objFun first updates the routing topology and then obtains the corresponding system response time. To search for the optimal routing probability $p$, we have used MATLAB’s fminbnd, which restricts the search to the interval $[0,1]$.

We are now ready to run ex8 from the MATLAB command line. The execution returns the optimal value p_opt$=0.6105$.
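As an optional check, one may also sweep $p$ to visualize the response time curve minimized by fminbnd. This sketch must be placed inside ex8, after the definition of objFun, so that the nested function is in scope:

% Sweep the routing probability to plot R(p); place inside ex8 after objFun
pvals = 0.05:0.05:0.95;
Rvals = arrayfun(@(p) objFun(p), pvals);
plot(pvals, Rvals); xlabel('p'); ylabel('System response time R(p)');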

Example 9: Studying a departure process

This example illustrates Line’s support for extracting simulation data about particular events in an extended queueing network, such as departures from a particular queue.

Our goal is to obtain the squared coefficient of variation of the inter-departure times from an $M/E_2/1$ queue, which has Poisson arrivals and 2-phase Erlang distributed service times.

Because this is a classic model, we can find it in Line’s model gallery. The additional return parameters (e.g., source, queue, ...) provide handles to the entities within the model.

[model,source,queue,sink,oclass]=gallery_merl1;

We now extract 50,000 samples from a simulation based on the underpinning continuous-time Markov chain

solver = SolverCTMC(model,'cutoff',150,'seed',23000);
sa = solver.sampleSysAggr(5e4);

The returned data structure supplies information about the stateful nodes (here source and queue) at each of the 50,000 instants of sampling, together with the events that have been collected at these instants.

>> sa
sa =
  struct with fields:

         handle: {[1x1 Source]  [1x1 Queue]}
              t: [50000x1 double]
          state: {[Inf]  [50000x1 double]}
          event: {1x100000 cell}
    isaggregate: 1

As an example, the first two events both occur at timestamp 0 and indicate a departure event from node 1 (the type EventType.DEP maps to event: DEP) followed by an arrival event at node 2 (the type EventType.ARV maps to event: ARV), which is always accepted (prob: 1).

>> sa.event{1}
ans =
  Event with properties:

     node: 1
    event: DEP
    class: 1
     prob: NaN
    state: []
        t: 0
>> sa.event{2}
ans =
  Event with properties:

     node: 2
    event: ARV
    class: 1
     prob: 1
    state: []
        t: 0

We may also plot the first 300 samples as follows

plot(sa.t(1:300),sa.state{queue}(1:300));

which, after adding axis labels, gives the following figure

A sample path for an M/E2/1 queueing station

We are now ready to filter the timestamps of events related to departures from the queue node

ind = model.getNodeIndex(queue);
filtEvent = cellfun(@(c) c.node == ind && c.event == EventType.DEP, sa.event);

followed by the calculation of the time series of inter-departure times

interDepTimes = diff(cellfun(@(c) c.t, {sa.event{filtEvent}}));

We may now for example compute the squared coefficient of variation of this process

SCVdEst = var(interDepTimes)/mean(interDepTimes)^2

which evaluates to $0.8750$. Using Marshall’s exact formula for the $GI/G/1$ queue, we obtain the same theoretical value of $0.8750$; the calculation is included in the script associated with this example (getting_started_ex9.m).
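For reference, in the M/G/1 special case Marshall’s formula reduces to $c_d^2 = 1-\rho^2(1-c_s^2)$. The sketch below assumes the gallery model uses arrival rate $\lambda=1$ and service rate $\mu=2$ (hence $\rho=0.5$) with Erlang-2 service ($c_s^2=0.5$), which is consistent with the value above; see getting_started_ex9.m for the exact parameters.

% M/G/1 specialization of Marshall's formula (parameter values are assumptions)
rho = 0.5; scvS = 0.5;
SCVdTheory = 1 - rho^2*(1 - scvS)   % = 0.8750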

Example 10: Evaluating a CTMC symbolically

In this example, we will use MATLAB’s Symbolic Math Toolbox to examine the continuous-time Markov chain underlying a simple closed queueing network. The network consists of a delay station and a processor-sharing station arranged in a cyclic topology. We may generate it using Line’s demo gallery as follows

model = gallery_cqn(1,true);

Here, the first argument adds a single queueing station to the network, while the second argument requests the presence of a delay station. The network has a single class with $4$ circulating jobs.

The getSymbolicGenerator method of the CTMC solver can now be called to obtain the symbolic generator

[infGen, eventFilt, syncInfo, stateSpace, nodeStateSpace] = SolverCTMC(model).getSymbolicGenerator;

The first returned argument is the symbolic infinitesimal generator

>> infGen
infGen =
[-4*x2,        4*x2,           0,         0,   0]
[   x1, - x1 - 3*x2,        3*x2,         0,   0]
[    0,          x1, - x1 - 2*x2,      2*x2,   0]
[    0,           0,          x1, - x1 - x2,  x2]
[    0,           0,           0,        x1, -x1]

There are therefore 5 states, corresponding to all possible ways of distributing the 4 jobs across the two stations

stateSpace =
     0     4
     1     3
     2     2
     3     1
     4     0

An event is represented in Line as a synchronization between an active agent and a passive agent. Typically, the station that completes a job is an active agent, whereas the one that receives it is a passive agent. In this sense, x1 and x2 may be seen as the rates at which the two agents synchronize to perform the two actions.

To learn the meaning of the symbolic variables x1 and x2 we can now use the syncInfo data structure

>> syncInfo{1}.active{1}.print, syncInfo{1}.passive{1}.print % x1
(DEP: node: 1, class: 1)
(ARV: node: 2, class: 1)
>> syncInfo{2}.active{1}.print, syncInfo{2}.passive{1}.print % x2
(DEP: node: 2, class: 1)
(ARV: node: 1, class: 1)

In the above, we see that x1 is the rate of a class-1 departure from station 1 (the delay) into station 2 (the processor-sharing queue) and, vice versa, x2 is the rate of a departure from station 2 that joins station 1.

Finally, the steady-state probability vector of the CTMC can be obtained in symbolic form as

steadyStateVector = CTMC(infGen).solve
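As a usage sketch, the symbolic solution can then be evaluated numerically by substituting rates for x1 and x2; the rate values below are arbitrary assumptions chosen only for illustration:

% Substitute numeric rates into the symbolic steady-state vector (sketch)
piNum = double(subs(steadyStateVector, [sym('x1'), sym('x2')], [1, 2]))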