Flow latency analysis uses compute execution time instead of response time #2122

Closed
Etienne13 opened this issue Oct 2, 2017 · 8 comments · Fixed by #2245

@Etienne13 commented Oct 2, 2017

Hello,

Flow latency analysis uses the compute_execution_time or deadline property of a thread as its contribution to the flow latency. Using the deadline is correct but pessimistic. Using the compute_execution_time property (a BCET .. WCET range for the thread) is only valid when there is no interference (e.g. preemption) from other threads.

Could you add a response_time property (a time range), applicable to threads, to account for the contribution of threads to the flow latency (and fall back to the deadline or execution time if the response time is not provided)?
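For illustration, a minimal sketch of how such a property would sit alongside the existing timing properties (the thread name and values are hypothetical, and the SEI::Response_Time name anticipates the property set chosen below):

package ResponseTimeSketch
public
	with SEI;

	-- Hypothetical thread showing the three candidate latency contributions.
	thread Filter
		features
			raw: in event data port;
			filtered: out event data port;
		flows
			fpath: flow path raw -> filtered;
		properties
			Compute_Execution_Time => 2ms .. 5ms;  -- BCET .. WCET, valid only without interference
			Deadline => 20ms;                      -- safe upper bound, but pessimistic
			SEI::Response_Time => 4ms .. 12ms;     -- proposed: completion time including preemption
	end Filter;
end ResponseTimeSketch;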

This matter was discussed on the AADL mailing list, as well as in the AADL committee. As far as I remember, it seemed to be accepted by the committee members, and I think it would be great if it were added to the plugin.

Best regards,
Etienne.

@lwrage transferred this issue from osate/osate2-plugins Dec 18, 2019
@jjhugues commented Dec 18, 2019

Actually, this should first be discussed in saeaadl/aadlv2.2 as an erratum.

@lwrage commented Mar 13, 2020

Add property

	Response_Time: Time_Range
		applies to (thread, device, subprogram, event port, event data port);

to property set SEI.
In the latency analysis, wherever Compute_Execution_Time is used, try SEI::Response_Time first; if that isn't set, fall back to Compute_Execution_Time.
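A minimal sketch of that selection rule (hypothetical thread names and values, assuming the property is added to SEI as above):

package SelectionRuleSketch
public
	with SEI;

	thread Worker_CET_Only
		properties
			-- no SEI::Response_Time set: the analysis falls back to this range
			Compute_Execution_Time => 3ms .. 6ms;
	end Worker_CET_Only;

	thread Worker_With_RT
		properties
			Compute_Execution_Time => 3ms .. 6ms;
			-- set: the analysis should use this range instead
			SEI::Response_Time => 5ms .. 14ms;
	end Worker_With_RT;
end SelectionRuleSketch;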

Update documentation.

@lwrage added this to the 2.7.1 milestone Mar 13, 2020
@AaronGreenhouse commented Mar 17, 2020

Test cases: Flows go between two devices

  1. Top.Unbound does not use Compute_Execution_Time or Response_Time
  2. Top.CET uses only Compute_Execution_Time on the devices and their ports: analysis should use Compute_Execution_Time
  3. Top.RT uses both Response_Time and Compute_Execution_Time on the devices and their ports: analysis should use Response_Time
package ResponseTime
public
	with SEI;

	data D1
		properties
			Data_Size => 8 Bytes;
	end D1;

	data D2
		properties
			Data_Size => 16 Bytes;
	end D2;

	data D3
		properties
			Data_Size => 24 Bytes;
	end D3;

	device Device1
		features
			out1: out event data port D1;
			out2: out event data port D2;
			out3: out event data port D3;
		flows
			fsrc1: flow source out1 {Latency => 1ms .. 2ms;};
			fsrc2: flow source out2 {Latency => 1ms .. 2ms;};
			fsrc3: flow source out3 {Latency => 1ms .. 2ms;};
	end Device1;

	device Device2
		features
			in1: in event data port D1;
			in2: in event data port D2;
			in3: in event data port D3;
		flows
			fsink1: flow sink in1 {Latency => 3ms .. 5ms;};
			fsink2: flow sink in2 {Latency => 3ms .. 5ms;};
			fsink3: flow sink in3 {Latency => 3ms .. 5ms;};
	end Device2;

	system Top
	end Top;

	system implementation Top.unbound
		subcomponents
			sub1: device Device1;
			sub2: device Device2;
		connections
			conn1: feature sub1.out1 -> sub2.in1;
			conn2: feature sub1.out2 -> sub2.in2;
			conn3: feature sub1.out3 -> sub2.in3;
		flows
			etef1: end to end flow sub1.fsrc1 -> conn1 -> sub2.fsink1 {Latency => 0ms .. 500ms;};
			etef2: end to end flow sub1.fsrc2 -> conn2 -> sub2.fsink2 {Latency => 0ms .. 500ms;};
			etef3: end to end flow sub1.fsrc3 -> conn3 -> sub2.fsink3 {Latency => 0ms .. 500ms;};
	end Top.unbound;

	system implementation Top.CET extends Top.unbound
		properties
			Compute_Execution_Time => 4ms .. 10ms applies to sub1, sub2;
			Compute_Execution_Time => 1ms .. 2ms applies to sub1.out1, sub1.out2, sub1.out3;
			Compute_Execution_Time => 3ms .. 6ms applies to sub2.in1, sub2.in2, sub2.in3;
	end Top.CET;

	system implementation Top.RT extends Top.CET
		properties
			SEI::Response_Time => 8ms .. 20ms applies to sub1, sub2;
			SEI::Response_Time => 2ms .. 4ms applies to sub1.out1, sub1.out2, sub1.out3;
			SEI::Response_Time => 6ms .. 12ms applies to sub2.in1, sub2.in2, sub2.in3;
	end Top.RT;
end ResponseTime;

@AaronGreenhouse commented Mar 17, 2020

Updated FlowLatencyAnalysisSwitch.mapComponentInstance() to favor the response time property over the compute execution time.

Need to

  • Make sure old unit tests don't break
  • Update the docs
  • Add new unit tests
  • Deal with versioning issues

@lwrage commented Mar 17, 2020

Make sure that the generated report and the analysis result data structure show where the response time was used in the calculation instead of the compute execution time.

@AaronGreenhouse commented Mar 18, 2020

@lwrage Yeah, I did that. It says "Response time" instead of "processing time".

@AaronGreenhouse commented Mar 18, 2020

I'm trying to fix the "versioning" issues this change creates, but it spirals out of control, and I don't think I completely believe what Eclipse is telling me.

I'm going to ignore this for now.

@AaronGreenhouse commented Mar 18, 2020

Changed so that the methods I added to GetProperties are instead local private methods of FlowLatencyAnalysisSwitch. This keeps the versioning changes from going bananas.
