
Missing queuing latency handling for buses #1148

Closed
reteprelief opened this issue Apr 18, 2018 · 43 comments · Fixed by #2206
@reteprelief (Contributor) commented Apr 18, 2018

We currently handle sampling latency for periodic protocols and buses.
We could also compute queuing latency based on the number of connections bound to the bus.

@reteprelief removed their assignment on Nov 4, 2019
@reteprelief (Contributor, Author) commented Dec 9, 2019

For periodic protocols and buses we interpret the Period property on the protocol or bus to determine a sampling latency, i.e., the latency to wait for the protocol or bus slot to come around. This is similar to periodic threads sampling their input.

When buses or protocols are not periodic and multiple senders submit messages, the messages go into a queue to be transmitted. Similar to a thread port, which has a queue with a specified queue size, we can allow the specification of a queue size for a bus or protocol. In the case of a thread, the queuing latency is the size of the queue times the time it takes for each message in the queue to be processed (deadline or execution time).

A queue on a protocol or bus has to handle messages of different sizes determined by the source port of the connections bound to the bus.

A simple way of dealing with this is for users to just specify the maximum queue size in bytes, independent of the number of bound connections (often done when configuring communication systems). Note that the standard Queue_Size property is specified in number of messages; here we would need a new property indicating the size in bytes.

An alternative is to assume that all connection sources send at the same time, i.e., the queue holds one message from each sender.
The queuing latency would then be calculated from how long it takes to transmit each message (based on the Transmission_Time property).

There are several test examples with an Actual_Connection_Binding property, e.g., package transmission_time, that can be modified.

@lwrage added this to the 2.7.0 milestone on Dec 16, 2019
@reteprelief removed their assignment on Dec 19, 2019
@AaronGreenhouse (Contributor) commented Dec 20, 2019

I think I need to update FlowLatencyAnalysisSwitch.processTransmissionTime(). Still looking at how it all works. I found the following handy methods:

  • getMaximumTransmissionTimePerByte()
  • getMaximumTransmissionTimeFixed()
  • getMaximumTimeToTransferData()

There are also min versions of these methods.

Going to need a way to find all the connections bound to the same bus.
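One possible way to enumerate them over the instance model (sketch only; getAllConnectionInstances() and getBoundBuses() stand in for the actual traversal and the Actual_Connection_Binding lookup, which I still need to confirm):

	// Sketch: find every connection instance bound to the given bus.
	// getBoundBuses(conn) is a placeholder for resolving the
	// Actual_Connection_Binding property to component instances.
	List<ConnectionInstance> connectionsBoundTo(SystemInstance root, ComponentInstance bus) {
		List<ConnectionInstance> result = new ArrayList<>();
		for (ConnectionInstance conn : root.getAllConnectionInstances()) {
			if (getBoundBuses(conn).contains(bus)) {
				result.add(conn);
			}
		}
		return result;
	}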

@AaronGreenhouse (Contributor) commented Dec 20, 2019

No. I need to update processSamplingTime(), called by processActualConnectionBindingsSampling().

@AaronGreenhouse (Contributor) commented Dec 20, 2019

The method processSamplingTime() is currently

	public void processSamplingTime(NamedElement boundBus, LatencyContributor latencyContributor) {
		/**
		 * we add the bus/VB sampling time as a subcontributor.
		 */

		// XXX: [Code Coverage] boundBus cannot be null.
		if (boundBus != null) {
			double period = GetProperties.getPeriodinMS(boundBus);
			if (period > 0) {
				// add sampling latency due to the protocol or bus being periodic
				LatencyContributor samplingLatencyContributor = new LatencyContributorComponent(boundBus,
						report.isMajorFrameDelay());
				samplingLatencyContributor.setBestCaseMethod(LatencyContributorMethod.SAMPLED_PROTOCOL);
				samplingLatencyContributor.setWorstCaseMethod(LatencyContributorMethod.SAMPLED_PROTOCOL);
				samplingLatencyContributor.setSamplingPeriod(period);

				latencyContributor.addSubContributor(samplingLatencyContributor);
			}
		}
	}

It only does anything when the bus itself has a period associated with it. I need to make it work when the bus does not have a period.

I need to find the other connections bound to the same bus and fix this as described in the original message.
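One possible shape for that change (sketch only; the ConnectionInstance parameter and recordQueuingCandidate() are hypothetical additions, not existing API):

	// Sketch: add a non-periodic branch to processSamplingTime.
	public void processSamplingTime(ConnectionInstance conn, NamedElement boundBus,
			LatencyContributor latencyContributor) {
		double period = GetProperties.getPeriodinMS(boundBus);
		if (period > 0) {
			// existing behavior (unchanged from the method quoted above):
			// add a sampling sub-contributor for the periodic bus/protocol
		} else {
			// non-periodic: the queuing latency depends on the transmission
			// times of the *other* connections bound to boundBus, which are
			// not known yet at this point in the traversal -- record the
			// triple and fill in the latency in a later pass
			recordQueuingCandidate(conn, boundBus, latencyContributor);
		}
	}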

@AaronGreenhouse (Contributor) commented Jan 3, 2020

Test case:

package QueuingLatency
public
	data D1
		properties
			Data_Size => 8 Bytes;
	end D1;
	
	data D11
		properties
			Data_Size => 16 Bytes;
	end D11;
	
	data D111
		properties
			Data_Size => 24 Bytes;
	end D111;
	
	system s1
	end s1;
	system implementation s1.unbound
		subcomponents
			sub1: system a1;
			sub2: system a2;
		connections
			conn1: feature sub1.f1 -> sub2.f2;
			conn2: feature sub1.f11 -> sub2.f22;
			conn3: feature sub1.f111 -> sub2.f222;
		flows
			etef1: end to end flow sub1.fsource1 -> conn1 -> sub2.fsink2 {Latency => 10 ms .. 25 ms;};
			etef11: end to end flow sub1.fsource11 -> conn2 -> sub2.fsink22 {Latency => 10 ms .. 25 ms;};
			etef111: end to end flow sub1.fsource111 -> conn3 -> sub2.fsink222 {Latency => 10 ms .. 25 ms;};
	end s1.unbound;
	
	
	system implementation s1.async extends s1.unbound
		subcomponents
			theBus: bus {
				Transmission_Time => [
					Fixed => 1 ms .. 2 ms;
					PerByte => 2 ms .. 3 ms;
				];
			};
		properties
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;
	end s1.async;
	
	system implementation s1.periodic extends s1.unbound
		subcomponents
			theBus: bus {
				Period => 5 ms;
				Transmission_Time => [
					Fixed => 1 ms .. 2 ms;
					PerByte => 2 ms .. 3 ms;
				];
			};		
		properties
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;
	end s1.periodic;
	
	system a1
		features
			f1: out data port D1;
			f11: out data port D11;
			f111: out data port D111;
		flows
			fsource1: flow source f1 {Latency => 1 ms .. 2 ms;};
			fsource11: flow source f11 {Latency => 1 ms .. 2 ms;};
			fsource111: flow source f111 {Latency => 1 ms .. 2 ms;};
	end a1;
	
	system a2
		features
			f2: in data port D1;
			f22: in data port D11;
			f222: in data port D111;
		flows
			fsink2: flow sink f2 {Latency => 3 ms .. 5 ms;};
			fsink22: flow sink f22 {Latency => 3 ms .. 5 ms;};
			fsink222: flow sink f222 {Latency => 3 ms .. 5 ms;};
	end a2;
end QueuingLatency;

System s1.periodic binds the connections to a bus that has a Period property association. System s1.async binds the connections to a bus that does not.

Currently, flow analysis generates a sampling delay for the bus in the periodic case, but not in the asynchronous case.

@AaronGreenhouse (Contributor) commented Jan 3, 2020

This is a bit tricky to work out because of the data size of the messages. The data size is built up from the original size of the data component, plus overhead added by the different connections and bindings it passes through. I need to have the data size for the connection at the particular bus component. I think the easiest way to do this is the following (especially considering that the sampling delay is initially computed before the transmission delay):

  1. In processSamplingTime, if there is no period binding, then we record the connection instance (which is not currently passed to processSamplingTime), the bus component instance, and the latency contributor. We build up a collection of these records.
  2. In some combination of processTransmissionTime and processActualConnectionBindingsTransmission, we record the transmission times for all <connection instance, bus> pairs.
  3. In a new step, we then use the information from (1) to look up the necessary transmission times from (2) and insert the result into the recorded latency contributor (sketched below).
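The bookkeeping for steps 1 and 2 could be as simple as the following (sketch only, all names hypothetical; records used for brevity):

	// Step 1: connections on non-periodic buses awaiting a queuing latency.
	record QueuingCandidate(ConnectionInstance connection, ComponentInstance bus,
			LatencyContributor contributor) {}

	// Step 2: max transmission time per <bus, connection> pair, recorded as
	// the normal traversal computes it.
	record BusConnection(ComponentInstance bus, ConnectionInstance connection) {}

	private final List<QueuingCandidate> queuingCandidates = new ArrayList<>();
	private final Map<BusConnection, Double> maxTransmissionMs = new HashMap<>();

	// Step 3 then runs after the traversal: for each candidate, sum the
	// recorded times of the other connections bound to the same bus and add
	// the result to the recorded latency contributor.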

@reteprelief (Contributor, Author) commented Jan 8, 2020

Here is what I was thinking.
Get all connections that are bound to the bus.
For all connections (except the one for which we want to record the latency), we calculate the transmission time (we already know how to do that, including the handling of message overhead), add them up, and record the result as queuing time in a LatencyReportEntry. We do this instead of the sampling latency used for periodic buses. Then, as usual, we calculate the transmission latency for the connection of interest and record it as transmission latency in a LatencyReportEntry.
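A self-contained sketch of that summation (plain Java with stand-in types, not OSATE API; the max transmission times in main() match the simple async example worked out later in this thread):

	import java.util.List;

	public class QueuingSketch {
		// Stand-in for a connection bound to the bus, with its already-computed
		// max transmission time (fixed + per-byte * data size, incl. overhead).
		record BoundConnection(String name, double maxTransmissionMs) {}

		// Worst-case queuing time for 'target': the sum of the max transmission
		// times of all *other* connections bound to the same bus.
		static double worstCaseQueuingMs(BoundConnection target, List<BoundConnection> bound) {
			return bound.stream()
					.filter(c -> !c.equals(target))
					.mapToDouble(BoundConnection::maxTransmissionMs)
					.sum();
		}

		public static void main(String[] args) {
			List<BoundConnection> bound = List.of(
					new BoundConnection("conn1", 26),
					new BoundConnection("conn2", 50),
					new BoundConnection("conn3", 74));
			System.out.println(worstCaseQueuingMs(bound.get(0), bound)); // 124.0
		}
	}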

@AaronGreenhouse (Contributor) commented Jan 8, 2020

Peter,

Yes, I understand the general idea. The problem I am identifying is that you cannot compute the transmission time just from the data size of the connection bound to the bus. The data size can be influenced by other factors along the flow. These factors are taken into account by the current method that figures out the transmission time, but again, they cannot be obtained just by looking at the connection bound to the bus. So in my comment above, I'm trying to think about how best to take them into account when computing the latency without completely recomputing everything a second time.

@lwrage (Contributor) commented Jan 10, 2020

I think it would be helpful to first create a couple more example models for the different cases and calculate manually what the latency should be (and document in detail how to get to that result).

@lwrage removed this from the 2.7.0 milestone on Jan 11, 2020
@lwrage added this to the 2.7.1 milestone on Jan 11, 2020
@AaronGreenhouse (Contributor) commented Feb 7, 2020

Peter,

There are two cases here, as seen in the two places where processSamplingTime() is called from processActualConnectionBindingsSampling():

		/**
		 * required virtual bus class indicates protocols the connection intends to use.
		 * We also can have an actual connection binding to a virtual bus
		 * If we have that we want to use that virtual bus overhead
		 */
		if (!willDoVirtualBuses) {
			List<ComponentClassifier> protocols = GetProperties.getRequiredVirtualBusClass(connorvb);
			// XXX: [Code Coverage] protocols cannot be null.
			if ((protocols != null) && (protocols.size() > 0)) {
				if (willDoBuses) {
					latencyContributor.reportInfo("Adding required virtual bus contributions to bound bus");
				}
				for (ComponentClassifier cc : protocols) {
					processSamplingTime(cc, latencyContributor);
					processActualConnectionBindingsSampling(cc, latencyContributor);
				}
			}
		}

		for (ComponentInstance componentInstance : bindings) {
			processSamplingTime(componentInstance, latencyContributor);
			if (componentInstance.getCategory().equals(ComponentCategory.VIRTUAL_BUS)) {
				processActualConnectionBindingsSampling(componentInstance, latencyContributor);
			}
		}

The second case, where the bus component instances bound to a ConnectionInstance are looped over, deals with the case described in the introduction to this issue. We have a particular bus instance and can then find the other connection instances bound to it. No problem, I know what to do.

The first case deals with the Required_Virtual_Bus_Class property of the connection instance. Here, as the name suggests, we have the classifier of a (virtual) bus. I'm not sure what to do in this case. This case only executes when the connection is not actually bound to a virtual bus (but should be). So we don't know the other connections that will be bound to the same bus component instance (as there isn't one). We either have to ignore this case, which isn't really correct, or consider all the connections that either are or should be bound to a bus with the same component classifier.

Thoughts?

@reteprelief (Contributor, Author) commented Feb 10, 2020

The idea of the required virtual bus class is that you indicate that a protocol is to be used without explicitly specifying an instance of the virtual bus to bind to. In this case you add the protocol overhead to the base data size. The connection is still bound to the bus, and that determines which connections need to be considered.

@AaronGreenhouse (Contributor) commented Feb 10, 2020

Thanks Peter. Ah. I see. The connection instance must still be bound to a (non-virtual) bus somewhere.

@AaronGreenhouse (Contributor) commented Feb 10, 2020

Lutz asked me to verify that the use of Required_Virtual_Bus_Class is documented in the help text.

It is:

Users can indicate the intended transport mechanism via the Actual_Connection_Binding property or Required_Virtual_Bus_Class property.

The latency analysis will take into account every element of an Actual_Connection_Binding as latency contributor. In the case of a virtual bus the analysis also includes the entities that the virtual bus is bound to according to its Actual_Connection_Binding property value. If the virtual bus or connection does not have a specified Actual_Connection_Binding property value, then the Required_Virtual_Bus_Class property values are interpreted as latency contributors.

@AaronGreenhouse (Contributor) commented Feb 11, 2020

Given

  • Each example has 3 flows, which is the simplest test for getting the transmission time of the other connections.
  • Each flow is built from connections that use data with different Data_Size property values.
  • We have a system, a bus, and a second system. Connections go from system to system, with each connection bound to the bus or, in later examples, to a virtual bus that is bound to the bus.

Basic examples

  • Periodic bus, bus data_size is 0
  • Async bus, bus data_size is 0
  • Periodic bus, bus data_size is non-zero
  • Async bus, bus data_size is non-zero

Add a virtual bus layer

  • Periodic bus, bus data_size is non-zero, virtual bus (connection bound to, itself bound to bus) data_size non-zero
  • Async bus, bus data_size is non-zero, virtual bus (connection bound to, itself bound to bus) data_size non-zero
  • Periodic bus, bus data_size is non-zero, virtual bus (connection "requires") data_size non-zero, connections bound to actual bus
  • Async bus, bus data_size is non-zero, virtual bus (connection "requires") data_size non-zero, connections bound to actual bus

@AaronGreenhouse (Contributor) commented Feb 11, 2020

[Attached image: Basic Model.png]

Basic model:

package QueuingLatency
public
	data D1
		properties
			Data_Size => 8 Bytes;
	end D1;
	
	data D2
		properties
			Data_Size => 16 Bytes;
	end D2;
	
	data D3
		properties
			Data_Size => 24 Bytes;
	end D3;


	system S1
		features
			out1: out data port D1;
			out2: out data port D2;
			out3: out data port D3;
		flows
			fsrc1: flow source out1 {Latency => 1 ms .. 2 ms;};
			fsrc2: flow source out2 {Latency => 1 ms .. 2 ms;};
			fsrc3: flow source out3 {Latency => 1 ms .. 2 ms;};
	end S1;
	
	system S2
		features
			in1: in data port D1;
			in2: in data port D2;
			in3: in data port D3;
		flows
			fsink1: flow sink in1 {Latency => 3 ms .. 5 ms;};
			fsink2: flow sink in2 {Latency => 3 ms .. 5 ms;};
			fsink3: flow sink in3 {Latency => 3 ms .. 5 ms;};
	end S2;

	
	system Top
	end Top;
	
	
	
	system implementation Top.unbound
		subcomponents
			sub1: system S1;
			sub2: system S2;
			theBus: bus {
				Transmission_Time => [
					Fixed => 1 ms .. 2 ms;
					PerByte => 2 ms .. 3 ms;
				];
			};
		connections
			conn1: feature sub1.out1 -> sub2.in1;
			conn2: feature sub1.out2 -> sub2.in2;
			conn3: feature sub1.out3 -> sub2.in3;
		flows
			etef1: end to end flow sub1.fsrc1 -> conn1 -> sub2.fsink1 {Latency => 0 ms .. 500 ms;};
			etef2: end to end flow sub1.fsrc2 -> conn2 -> sub2.fsink2 {Latency => 0 ms .. 500 ms;};
			etef3: end to end flow sub1.fsrc3 -> conn3 -> sub2.fsink3 {Latency => 0 ms .. 500 ms;};
	end Top.unbound;

	--  . . .
end QueuingLatency;

We will vary the property associations on theBus and the connections to make different examples.

@AaronGreenhouse (Contributor) commented Feb 11, 2020

In the simplest, most straightforward case we bind the connections to a periodic bus:

  • period is 5ms
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • The bus itself has no data overhead
	system implementation Top.simple_periodic extends Top.unbound
		properties
			-- Bind the connections to the bus
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;
			
			-- Configure the bus
			Period => 5 ms applies to theBus;
	end Top.simple_periodic;

The periodic bus means there is a minimum of 0 ms and a maximum of 5 ms waiting time to use the bus (sampling delay).

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 17 ms transmission time (1 + 2 * 8)
      • 3 ms (in port latency)
      • total of 21 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 26 ms transmission time (2 + 3 * 8)
      • 5 ms (in port latency)
      • total of 38 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 33 ms transmission time (1 + 2 * 16)
      • 3 ms (in port latency)
      • total of 37 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 50 ms transmission time (2 + 3 * 16)
      • 5 ms (in port latency)
      • total of 62 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 49 ms transmission time (1 + 2 * 24)
      • 3 ms (in port latency)
      • total of 53 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 74 ms transmission time (2 + 3 * 24)
      • 5 ms (in port latency)
      • total of 86 ms

Current flow latency implementation (master branch) agrees with this.

@AaronGreenhouse (Contributor) commented Feb 12, 2020

In the simplest asynchronous case we bind the connections to an asynchronous bus:

  • no period value
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • The bus itself has no data overhead
	system implementation Top.simple_async extends Top.unbound
		properties
			-- Bind the connections to the bus
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;
	end Top.simple_async;

The minimum waiting time is 0 ms and the max waiting time for a connection is the sum of the max transfer times of the other connections bound to the bus.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 17 ms transmission time (1 + 2 * 8)
      • 3 ms (in port latency)
      • total of 21 ms
    • max
      • 2 ms (out port latency)
      • 124 ms (sampling delay) (sum of 50 ms + 74 ms)
      • 26 ms transmission time (2 + 3 * 8)
      • 5 ms (in port latency)
      • total of 157 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 33 ms transmission time (1 + 2 * 16)
      • 3 ms (in port latency)
      • total of 37 ms
    • max
      • 2 ms (out port latency)
      • 100 ms (sampling delay) (26 ms + 74 ms)
      • 50 ms transmission time (2 + 3 * 16)
      • 5 ms (in port latency)
      • total of 157 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 49 ms transmission time (1 + 2 * 24)
      • 3 ms (in port latency)
      • total of 53 ms
    • max
      • 2 ms (out port latency)
      • 76 ms (sampling delay) (26 ms + 50 ms)
      • 74 ms transmission time (2 + 3 * 24)
      • 5 ms (in port latency)
      • total of 157 ms

Current flow latency implementation (master branch) does not implement this. It simply proceeds with no queuing latency.

@AaronGreenhouse (Contributor) commented Feb 12, 2020

Now we consider a periodic bus where the bus has a data overhead contribution:

  • period is 5ms
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • data_size is 2 bytes
	system implementation Top.periodic_overhead extends Top.simple_periodic
		properties
			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;
	end Top.periodic_overhead;
	

The periodic bus means there is a minimum of 0 ms and a maximum of 5 ms waiting time to use the bus (sampling delay). Bus data overhead will affect the transmission time by increasing the overall amount of data sent.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 21 ms transmission time (1 + 2 * (8+2))
      • 3 ms (in port latency)
      • total of 25 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 32 ms transmission time (2 + 3 * (8+2))
      • 5 ms (in port latency)
      • total of 44 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 37 ms transmission time (1 + 2 * (16+2))
      • 3 ms (in port latency)
      • total of 41 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 56 ms transmission time (2 + 3 * (16+2))
      • 5 ms (in port latency)
      • total of 68 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 53 ms transmission time (1 + 2 * (24+2))
      • 3 ms (in port latency)
      • total of 57 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 80 ms transmission time (2 + 3 * (24+2))
      • 5 ms (in port latency)
      • total of 92 ms

Current flow latency implementation (master branch) agrees with this.

@AaronGreenhouse (Contributor) commented Feb 12, 2020

Now we consider an asynchronous bus where the bus has a data overhead contribution:

  • no period value
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • data_size is 2 bytes
	system implementation Top.async_overhead extends Top.simple_async
		properties
			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;
	end Top.async_overhead;

The minimum waiting time is 0 ms and the max waiting time for a connection is the sum of the max transfer times of the other connections bound to the bus. Bus data overhead will affect the transmission time by increasing the overall amount of data sent. This, in turn, will increase the worst-case queuing times.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 21 ms transmission time (1 + 2 * (8+2))
      • 3 ms (in port latency)
      • total of 25 ms
    • max
      • 2 ms (out port latency)
      • 136 ms (sampling delay) (56 ms + 80 ms)
      • 32 ms transmission time (2 + 3 * (8+2))
      • 5 ms (in port latency)
      • total of 175 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 37 ms transmission time (1 + 2 * (16+2))
      • 3 ms (in port latency)
      • total of 41 ms
    • max
      • 2 ms (out port latency)
      • 112 ms (sampling delay) (32 ms + 80 ms)
      • 56 ms transmission time (2 + 3 * (16+2))
      • 5 ms (in port latency)
      • total of 175 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 53 ms transmission time (1 + 2 * (24+2))
      • 3 ms (in port latency)
      • total of 57 ms
    • max
      • 2 ms (out port latency)
      • 88 ms (sampling delay) (32 ms + 56 ms)
      • 80 ms transmission time (2 + 3 * (24+2))
      • 5 ms (in port latency)
      • total of 175 ms

Current flow latency implementation (master branch) does not implement this. It simply proceeds with no queuing latency.

@AaronGreenhouse (Contributor) commented Feb 12, 2020

Now we introduce a virtual bus layer. The virtual bus has a data size overhead that should trickle down to the data size that travels over the actual bus.

  • data_size is 10 bytes

We build this on top of an actual bus that also has data overhead

  • data_size is 2 bytes (see Top.vb_bound_to_bus)
	virtual bus VB
		properties
			data_size => 10 bytes;	
	end VB;

In the first set of examples, the virtual bus is bound to the actual bus and the connections will be bound to the virtual bus.

	system implementation Top.vb_bound_to_bus extends Top.unbound
		subcomponents
			theVB: virtual bus VB;
		properties
			-- Bind the connections to the virtual bus
			Actual_Connection_Binding => (reference (theVB)) applies to conn1, conn2, conn3;

			-- Configure the virtual bus
			Actual_Connection_Binding => (reference (theBus)) applies to theVB;

			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;			
	end Top.vb_bound_to_bus;

@AaronGreenhouse (Contributor) commented Feb 12, 2020

Let's look at a periodic case:

  • period is 5ms
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • The bus itself has data_size of 2 bytes
  • The virtual bus has data_size of 10 bytes
  • The connections are bound to the virtual bus
  • The virtual bus is bound to the actual bus.
	system implementation Top.vb_bound_periodic extends Top.vb_bound_to_bus
		properties
			-- Configure the bus
			Period => 5 ms applies to theBus;
	end Top.vb_bound_periodic;

The periodic bus means there is a minimum of 0 ms and a maximum of 5 ms waiting time to use the bus (sampling delay). The virtual bus data overhead will affect the transmission time by increasing the overall amount of data sent over the actual bus, which itself increases the amount of data by its own overhead.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 41 ms transmission time (1 + 2 * (10 + 2 + 8))
      • 3 ms (in port latency)
      • total of 45 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 62 ms transmission time (2 + 3 * (10 + 2 + 8))
      • 5 ms (in port latency)
      • total of 74 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 57 ms transmission time (1 + 2 * (10 + 2 + 16))
      • 3 ms (in port latency)
      • total of 61 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 86 ms transmission time (2 + 3 * (10 + 2 + 16))
      • 5 ms (in port latency)
      • total of 98 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 73 ms transmission time (1 + 2 * (10 + 2 + 24))
      • 3 ms (in port latency)
      • total of 77 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 110 ms transmission time (2 + 3 * (10 + 2 + 24))
      • 5 ms (in port latency)
      • total of 122 ms

Current flow latency implementation (master branch) agrees with this.

@AaronGreenhouse (Contributor) commented Feb 12, 2020

Now the asynchronous case:

  • no period value
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • The bus itself has data_size of 2 bytes
  • The virtual bus has data_size of 10 bytes
  • The connections are bound to the virtual bus
  • The virtual bus is bound to the actual bus.
	system implementation Top.vb_bound_async extends Top.vb_bound_to_bus
		-- Bus is asynchronous by default
	end Top.vb_bound_async;

The minimum waiting time is 0 ms and the max waiting time for a connection is the sum of the max transfer times of the other connections bound to the bus. The virtual bus data overhead will increase the overall amount of data sent over the actual bus, which adds its own data overhead on top, further increasing the transmission time. This, in turn, will increase the worst-case queuing times.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 41 ms transmission time (1 + 2 * (10 + 2 + 8))
      • 3 ms (in port latency)
      • total of 45 ms
    • max
      • 2 ms (out port latency)
      • 196 ms (sampling delay) (86 ms + 110 ms)
      • 62 ms transmission time (2 + 3 * (10 + 2 + 8))
      • 5 ms (in port latency)
      • total of 265 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 57 ms transmission time (1 + 2 * (10 + 2 + 16))
      • 3 ms (in port latency)
      • total of 61 ms
    • max
      • 2 ms (out port latency)
      • 172 ms (sampling delay) (62 ms + 110 ms)
      • 86 ms transmission time (2 + 3 * (10 + 2 + 16))
      • 5 ms (in port latency)
      • total of 265 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 73 ms transmission time (1 + 2 * (10 + 2 + 24))
      • 3 ms (in port latency)
      • total of 77 ms
    • max
      • 2 ms (out port latency)
      • 148 ms (sampling delay) (62 ms + 86 ms)
      • 110 ms transmission time (2 + 3 * (10 + 2 + 24))
      • 5 ms (in port latency)
      • total of 265 ms

Current flow latency implementation (master branch) does not implement this. It simply proceeds with no queuing latency.

@AaronGreenhouse (Contributor) commented Feb 13, 2020

The final examples use a virtual bus. It is not bound to an actual bus, and the connections are not bound to it. Instead, the connections use the Required_Virtual_Bus_Class property. The virtual bus has a data size overhead that should trickle down to the data size that travels over the actual bus.

	system implementation Top.vb_required extends Top.unbound
		subcomponents
			theVB: virtual bus VB;
		properties
			-- The connections require the virtual bus
			Required_Virtual_Bus_Class => (classifier (VB)) applies to conn1, conn2, conn3;
			
			-- Bind the connections to the actual bus
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;

			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;			
	end Top.vb_required;

@AaronGreenhouse (Contributor) commented Feb 13, 2020

Let's look at a periodic case:

  • period is 5ms
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • The bus itself has data_size of 2 bytes
  • The virtual bus has data_size of 10 bytes
  • The connections require the virtual bus
  • The virtual bus is not bound to the actual bus
	system implementation Top.vb_required_periodic extends Top.vb_required
		properties
			-- Configure the bus
			Period => 5 ms applies to theBus;
	end Top.vb_required_periodic;

The periodic bus means there is a minimum of 0 ms and a maximum of 5 ms waiting time to use the bus (sampling delay). The virtual bus data overhead will affect the transmission time by increasing the overall amount of data sent over the actual bus, which itself increases the amount of data by its own overhead.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 41 ms transmission time (1 + 2 * (10 + 2 + 8))
      • 3 ms (in port latency)
      • total of 45 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 62 ms transmission time (2 + 3 * (10 + 2 + 8))
      • 5 ms (in port latency)
      • total of 74 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 57 ms transmission time (1 + 2 * (10 + 2 + 16))
      • 3 ms (in port latency)
      • total of 61 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 86 ms transmission time (2 + 3 * (10 + 2 + 16))
      • 5 ms (in port latency)
      • total of 98 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 73 ms transmission time (1 + 2 * (10 + 2 + 24))
      • 3 ms (in port latency)
      • total of 77 ms
    • max
      • 2 ms (out port latency)
      • 5 ms (sampling delay)
      • 110 ms transmission time (2 + 3 * (10 + 2 + 24))
      • 5 ms (in port latency)
      • total of 122 ms

Current flow latency implementation (master branch) agrees with this.

@AaronGreenhouse (Contributor) commented Feb 13, 2020

Now the asynchronous case:

  • no period value
  • transmission_time
    • min 1ms + 2ms * data_size
    • max 2ms + 3ms * data_size
  • The bus itself has data_size of 2 bytes
  • The virtual bus has data_size of 10 bytes
  • The connections require the virtual bus
  • The virtual bus is not bound to the actual bus.
	system implementation Top.vb_required_async extends Top.vb_required
		-- Bus is asynchronous by default
	end Top.vb_required_async;

The minimum waiting time is 0 ms and the max waiting time for a connection is the sum of the max transfer times of the other connections bound to the bus. The virtual bus data overhead will increase the overall amount of data sent over the actual bus, which adds its own data overhead on top, further increasing the transmission time. This, in turn, will increase the worst-case queuing times.

We expect the following flow latencies

  • etef1
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 41 ms transmission time (1 + 2 * (10 + 2 + 8))
      • 3 ms (in port latency)
      • total of 45 ms
    • max
      • 2 ms (out port latency)
      • 196 ms (sampling delay) (86 ms + 110 ms)
      • 62 ms transmission time (2 + 3 * (10 + 2 + 8))
      • 5 ms (in port latency)
      • total of 265 ms
  • etef2
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 57 ms transmission time (1 + 2 * (10 + 2 + 16))
      • 3 ms (in port latency)
      • total of 61 ms
    • max
      • 2 ms (out port latency)
      • 172 ms (sampling delay) (62 ms + 110 ms)
      • 86 ms transmission time (2 + 3 * (10 + 2 + 16))
      • 5 ms (in port latency)
      • total of 265 ms
  • etef3
    • min
      • 1 ms (out port latency)
      • 0 ms (sampling delay)
      • 73 ms transmission time (1 + 2 * (10 + 2 + 24))
      • 3 ms (in port latency)
      • total of 77 ms
    • max
      • 2 ms (out port latency)
      • 148 ms (sampling delay) (62 ms + 86 ms)
      • 110 ms transmission time (2 + 3 * (10 + 2 + 24))
      • 5 ms (in port latency)
      • total of 265 ms

Current flow latency implementation (master branch) does not implement this. It simply proceeds with no queuing latency.

@AaronGreenhouse (Contributor) commented Feb 13, 2020

The complete QueuingLatency package:

package QueuingLatency
public
	data D1
		properties
			Data_Size => 8 Bytes;
	end D1;
	
	data D2
		properties
			Data_Size => 16 Bytes;
	end D2;
	
	data D3
		properties
			Data_Size => 24 Bytes;
	end D3;


	system S1
		features
			out1: out data port D1;
			out2: out data port D2;
			out3: out data port D3;
		flows
			fsrc1: flow source out1 {Latency => 1 ms .. 2 ms;};
			fsrc2: flow source out2 {Latency => 1 ms .. 2 ms;};
			fsrc3: flow source out3 {Latency => 1 ms .. 2 ms;};
	end S1;
	
	system S2
		features
			in1: in data port D1;
			in2: in data port D2;
			in3: in data port D3;
		flows
			fsink1: flow sink in1 {Latency => 3 ms .. 5 ms;};
			fsink2: flow sink in2 {Latency => 3 ms .. 5 ms;};
			fsink3: flow sink in3 {Latency => 3 ms .. 5 ms;};
	end S2;

	
	system Top
	end Top;
	
	
	
	system implementation Top.unbound
		subcomponents
			sub1: system S1;
			sub2: system S2;
			theBus: bus {
				Transmission_Time => [
					Fixed => 1 ms .. 2 ms;
					PerByte => 2 ms .. 3 ms;
				];
			};
		connections
			conn1: feature sub1.out1 -> sub2.in1;
			conn2: feature sub1.out2 -> sub2.in2;
			conn3: feature sub1.out3 -> sub2.in3;
		flows
			etef1: end to end flow sub1.fsrc1 -> conn1 -> sub2.fsink1 {Latency => 0 ms .. 500 ms;};
			etef2: end to end flow sub1.fsrc2 -> conn2 -> sub2.fsink2 {Latency => 0 ms .. 500 ms;};
			etef3: end to end flow sub1.fsrc3 -> conn3 -> sub2.fsink3 {Latency => 0 ms .. 500 ms;};
	end Top.unbound;
	
	
	
	system implementation Top.simple_periodic extends Top.unbound
		properties
			-- Bind the connections to the bus
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;
			
			-- Configure the bus
			Period => 5 ms applies to theBus;
	end Top.simple_periodic;
	
	system implementation Top.periodic_overhead extends Top.simple_periodic
		properties
			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;
	end Top.periodic_overhead;
	
	
	
	system implementation Top.simple_async extends Top.unbound
		properties
			-- Bind the connections to the bus
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;
	end Top.simple_async;
	
	system implementation Top.async_overhead extends Top.simple_async
		properties
			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;
	end Top.async_overhead;
	


	virtual bus VB
		properties
			data_size => 10 bytes;	
	end VB;

	system implementation Top.vb_bound_to_bus extends Top.unbound
		subcomponents
			theVB: virtual bus VB;
		properties
			-- Bind the connections to the virtual bus
			Actual_Connection_Binding => (reference (theVB)) applies to conn1, conn2, conn3;

			-- Configure the virtual bus
			Actual_Connection_Binding => (reference (theBus)) applies to theVB;

			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;			
	end Top.vb_bound_to_bus;

	system implementation Top.vb_bound_periodic extends Top.vb_bound_to_bus
		properties
			-- Configure the bus
			Period => 5 ms applies to theBus;
	end Top.vb_bound_periodic;

	system implementation Top.vb_bound_async extends Top.vb_bound_to_bus
		-- Bus is asynchronous by default
	end Top.vb_bound_async;
	
	

	system implementation Top.vb_required extends Top.unbound
		subcomponents
			theVB: virtual bus VB;
		properties
			-- The connections require the virtual bus
			Required_Virtual_Bus_Class => (classifier (VB)) applies to conn1, conn2, conn3;
			
			-- Bind the connections to the actual bus
			Actual_Connection_Binding => (reference (theBus)) applies to conn1, conn2, conn3;

			-- Configure the bus
			Data_Size => 2 bytes applies to theBus;			
	end Top.vb_required;

	system implementation Top.vb_required_periodic extends Top.vb_required
		properties
			-- Configure the bus
			Period => 5 ms applies to theBus;
	end Top.vb_required_periodic;

	system implementation Top.vb_required_async extends Top.vb_required
		-- Bus is asynchronous by default
	end Top.vb_required_async;
end QueuingLatency;

@AaronGreenhouse (Contributor) commented Feb 14, 2020

A side note about the Flow Latency Analysis: while I completely understand why having a spreadsheet output is desirable and useful, it is very annoying to look at if you just want a quick idea of what the results are. I know the analysis creates warnings when the times are out of range, but it would be nice to be able to look at the contribution breakdown within OSATE.

I'm not sure what the right way to do this would be; I just know that I am getting annoyed by flipping back and forth between OSATE and Excel.

@jjhugues (Contributor) commented Feb 14, 2020

Agreed with the last comment. The FTA plug-in can output a tabular view. I wonder if that mechanism could also be applied here.

@AaronGreenhouse (Contributor) commented Feb 14, 2020

@jjhugues I just talked with Peter, and it sounds like we just need a way to view the .result file in OSATE. He says there is an Ecore model for this, but I don't have an Ecore viewer for it. Anyhow, we should probably open a new issue for this.

@AaronGreenhouse (Contributor) commented Feb 14, 2020

I spoke with Peter about the above examples. He agrees with them.

I am proceeding with the implementation.

@AaronGreenhouse (Contributor) commented Feb 14, 2020

The primary implementation is in. As described a while ago, the idea is to cache the transmission times as they are computed during the analysis, indexed by a <bus, connection> pair. We also remember which buses are non-periodic. I've added a new step at the end where we go through all the non-periodic buses, find all the connections bound to them, and look up the cached transmission times.

In principle this is straightforward, but I'm running into some minor problems: when we remember the pairs, we also need to remember the pairs for any of the buses that the (virtual) bus is bound to.

Another problem is that the results are stored in LatencyReportEntry objects that have a finalizeReportEntry() method. I have to defer the invocation of this finalize method until after all the queuing times are computed.

@AaronGreenhouse (Contributor) commented Feb 18, 2020

Lutz and I agreed there should be two separate contributions:

  • Sampling delay
  • Queuing delay

In the periodic case, sampling delay is non-zero and queuing delay is always zero. In the non-periodic case, it's the other way around.
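Sketched out (sumOfOtherMaxTransmissionTimes() is a hypothetical helper), the rule is:

	// One of the two sub-contributions is always zero.
	double period = GetProperties.getPeriodinMS(boundBus);
	boolean periodic = period > 0;

	// sampling delay: 0 .. period for a periodic bus, exactly 0 otherwise
	double samplingMaxMs = periodic ? period : 0.0;

	// queuing delay: sum of the other senders' max transmission times for a
	// non-periodic bus, exactly 0 otherwise
	double queuingMaxMs = periodic ? 0.0 : sumOfOtherMaxTransmissionTimes(boundBus);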

@AaronGreenhouse (Contributor) commented Feb 18, 2020

Basic implementation is in. Actual analysis results match the expected results from the examples above.

Need to clean some things up.

Need to make sure there are 2 contributions (as above).

Don't forget to update the help file.

@AaronGreenhouse (Contributor) commented Feb 19, 2020

Made it so there is always a sampling contribution and always a queuing contribution.

Need to fix the existing unit tests because these changes are going to break them.

@AaronGreenhouse (Contributor) commented Feb 20, 2020

Fixed the existing unit tests. Need to add new ones.

@AaronGreenhouse (Contributor) commented Feb 20, 2020

Added new unit tests, but I'm having a non-deterministic ordering problem caused by the fact that I use a HashSet to store the buses that need to be visited later for queuing time. I need to change it to use a TreeSet ordered by the order in which the bus components are visited.
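For the record, a LinkedHashSet iterates in insertion order, i.e., exactly the order in which the buses are visited, without having to define a comparator for a TreeSet:

	import java.util.LinkedHashSet;
	import java.util.Set;

	public class OrderDemo {
		public static void main(String[] args) {
			// HashSet iteration order depends on hash codes and is effectively
			// non-deterministic for identity-hashed model objects.
			// LinkedHashSet iterates in insertion (i.e., visit) order.
			Set<String> visited = new LinkedHashSet<>();
			visited.add("theBus");
			visited.add("otherBus");
			visited.add("thirdBus");
			visited.forEach(System.out::println); // theBus, otherBus, thirdBus
		}
	}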

@AaronGreenhouse (Contributor) commented Feb 20, 2020

Fixed the ordering problem.

Need to fix the "Versioning" issues now.

@AaronGreenhouse (Contributor) commented Feb 20, 2020

Updated the help.

@AaronGreenhouse (Contributor) commented Feb 20, 2020

Fixed the versioning by incrementing the major version number of the plug-in. I had to do this because I was forced to change the signatures of methods that were public but really shouldn't have been. Since I was incrementing the version number anyway, I changed most of these methods to private to avoid this problem in the future.

@AaronGreenhouse (Contributor) commented Feb 24, 2020

Fixed all the invoke methods now. They were broken for Alisa.

@AaronGreenhouse (Contributor) commented Mar 2, 2020

Need to add a workspace preference to "turn off" the queuing latency. But this has some caveats:

  • In the implementation, the preference should be checked in the command handler and then passed as a boolean parameter to the invoke methods.
    • Probably need to add a new set of invoke methods so as not to mess up the API, and have the existing methods that use 4 boolean parameters defer to the new method with a default value (see the sketch after this list).
  • By "turn off" we mean report a 0 for the queuing latency. We are not trying to replicate the old results exactly (by not reporting queuing latency at all). Furthermore, we might want to report the actual queuing latency in the comment portion of the result.
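A sketch of that overload pattern (names illustrative only, not the actual plug-in API):

	// Illustrative only -- parameter and type names do not match the real API.
	@Deprecated
	public AnalysisResult invoke(SystemInstance si, boolean asynchronousSystem,
			boolean majorFrameDelay, boolean worstCaseDeadline, boolean bestCaseEmptyQueue) {
		// old entry point: defer to the new one with queuing latency enabled
		return invoke(si, asynchronousSystem, majorFrameDelay, worstCaseDeadline,
				bestCaseEmptyQueue, false /* disableQueuingLatency */);
	}

	public AnalysisResult invoke(SystemInstance si, boolean asynchronousSystem,
			boolean majorFrameDelay, boolean worstCaseDeadline, boolean bestCaseEmptyQueue,
			boolean disableQueuingLatency) {
		// new entry point: the preference is read in the command handler and
		// passed down explicitly, so the analysis itself never reads preferences
		// ... analysis body elided ...
		return null; // placeholder
	}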

@AaronGreenhouse (Contributor) commented Mar 3, 2020

Added a new preference, "disable queuing latency". Updated the UI, the invoke() methods, and FlowLatencyAnalysisSwitch.fillInQueueingLatency(). Kept the old invoke() methods and added new ones that take 5 boolean flags. Made the old ones @Deprecated.

Need to update the help.

@AaronGreenhouse (Contributor) commented Mar 4, 2020

Updated the help for the dialogs.
