
Memory leak when using the Behavior Annex #2352

Closed
smithdtyler opened this issue Jun 4, 2020 · 0 comments · Fixed by #2361

smithdtyler commented Jun 4, 2020

Summary

Use of the AADL Behavior Annex leads to a memory leak in OSATE 2.7.1.

Expected and Current Behavior

Continued use of OSATE 2.7.1 with the AADL Behavior Annex leads to out-of-memory exceptions. Extended use of OSATE should not result in steadily increasing memory usage.

Steps to Reproduce

  1. In OSATE 2.7.1, create a new project
  2. Optionally, launch JVisualVM to monitor memory usage
  3. Add the AADL text below in a new file
  4. Repeatedly clean and rebuild the project
  5. Use the Perform GC function in JVisualVM to force a garbage collection. Note that less heap space is reclaimed with repeated clean-build cycles (upper right graph in the JVisualVM screenshot).

[Screenshot: JVisualVM heap graph during repeated clean+rebuild cycles]
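If JVisualVM is not available, the forced-GC check in step 5 can be approximated from inside the OSATE JVM (for example from a small test plug-in or a debugger evaluation) with the standard java.lang.management API. The snippet below is only an illustrative sketch; the class name and output format are mine, not part of OSATE:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Hypothetical helper: forces a GC and reports the heap still in use,
// comparable to pressing "Perform GC" in JVisualVM and reading the graph.
public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        memory.gc(); // equivalent to System.gc(); a best-effort full collection
        long usedBytes = memory.getHeapMemoryUsage().getUsed();
        System.out.printf("Used heap after GC: %.1f MB%n",
                usedBytes / (1024.0 * 1024.0));
    }
}

Externally, jstat -gcutil <pid> on the OSATE process reports comparable post-GC figures without attaching JVisualVM.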

Commentary

Eclipse Memory Analyzer's Leak Suspects report points to org.osate.ba as the likely culprit.

[Screenshot: Eclipse Memory Analyzer Leak Suspects report pointing to org.osate.ba]
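For anyone who wants to reproduce this analysis: a heap dump suitable for Eclipse Memory Analyzer can be taken from the OSATE process with jmap -dump:live,format=b,file=osate.hprof <pid>, or programmatically via the HotSpot diagnostic MXBean. The following is just a sketch of the programmatic route (the file name is illustrative); it dumps whichever JVM it runs in, so it would have to be executed inside OSATE (plug-in, debugger, or OSGi console).

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Hypothetical helper: writes an .hprof snapshot of the current JVM.
// Open the file in Eclipse Memory Analyzer and run the Leak Suspects
// report to see which plug-in dominates the retained heap.
public class HeapDump {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        diagnostics.dumpHeap("osate-after-rebuilds.hprof", true); // true = live objects only
    }
}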

I have not yet been able to identify a single source of the problem, though investigation suggests a combination of Behavior Annex property resolution and caching (note the extensive list of retained PropertyImpl instances shown below). The two screenshots below show the heap before and after 10 clean+rebuild cycles with the included AADL model.

[Screenshot: retained heap objects, including PropertyImpl instances, before 10 clean+rebuild cycles]

[Screenshot: retained heap objects, including PropertyImpl instances, after 10 clean+rebuild cycles]
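While I have not isolated the exact cache, the retention above is typical of a long-lived (static or plug-in scoped) map that strongly references model objects across builds: every clean+rebuild re-parses the model into new EObject instances, so the map only grows and pins each previous build's objects and resources. The class below is a hypothetical illustration of that pattern, not the actual org.osate.ba code.

import java.util.HashMap;
import java.util.Map;
import org.eclipse.emf.ecore.EObject;

// Hypothetical sketch of the suspected leak shape: a static cache keyed by
// model objects. Each clean+rebuild creates new EObject instances, so the
// map never shrinks and keeps every previous build's objects (and their
// containing resources) strongly reachable.
public final class PropertyResolutionCache {
    private static final Map<EObject, Object> RESOLVED = new HashMap<>();

    public static void remember(EObject property, Object resolvedValue) {
        RESOLVED.put(property, resolvedValue);
    }

    // A fix would clear the cache at the start of each build, scope it to a
    // single resource set, or hold entries weakly (e.g. a WeakHashMap) so
    // stale model objects can be garbage collected.
    public static void clearBeforeBuild() {
        RESOLVED.clear();
    }
}

The AADL model used for the reproduction follows.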

package example
public
  with ARINC653;
  with Base_Types;
  
  data message_pm
  end message_pm;

  thread group threadA
    features
      input: in event data port message_pm;
  end threadA;
  
  thread group implementation threadA.impl
    subcomponents
      t0 : thread;
    annex behavior_specification {**
    states
    s0: initial final state;
    s1: state;
    transitions
    s0 -[input=1]-> s1;
    **};
  end threadA.impl;

  thread group threadB
    features
      output: out event data port message_pm;
  end threadB;
  
  thread group implementation threadB.impl
    subcomponents
      t0 : thread;
    annex behavior_specification {**
    states
    s0: initial final state;
    s1: state;
    transitions
    s0 -[]-> s1 {output:=1};
    **};
  end threadB.impl;

  process processA
    features
      inputA: in event data port message_pm;
  end processA;
  
  process processB
    features
      outputB: out event data port message_pm;
  end processB;

  process implementation processA.impl
    subcomponents
      tA: thread group threadA.impl;
    connections
      throughput : port inputA -> tA.input;
  end processA.impl;
  
  process implementation processB.impl
    subcomponents
      tB: thread group threadB.impl;
    connections
      throughput: port tB.output -> outputB;
  end processB.impl;
  
  virtual processor partitionA
    features
      input: in event data port Base_Types::Integer;
    flows
      inputflow: flow sink input;
    properties
      ARINC653::Partition_Name=> "PartitionA";
      ARINC653::Partition_Identifier => 0;
      Scheduling_Protocol => (ARINC653);
  end partitionA;
  
  virtual processor partitionB
    features
      output: out event data port Base_Types::Integer;
    flows
      outputflow: flow source output;
    properties
      ARINC653::Partition_Name=> "PartitionB";
      ARINC653::Partition_Identifier => 1;
      Scheduling_Protocol => (ARINC653);
  end partitionB;
  
  virtual processor implementation partitionA.impl
  end partitionA.impl;
  
  virtual processor implementation partitionB.impl
  end partitionB.impl;
  
  processor core
  end core;

  processor implementation core.impl
    subcomponents
      partA: virtual processor partitionA.impl;
      partB: virtual processor partitionB.impl;
    properties
      Scheduling_Protocol => (ARINC653);
      ARINC653::Module_Major_Frame => 200 ms;
      ARINC653::Module_Schedule => ([Partition => reference (partA);
          Duration => 100 ms;
          Periodic_Processing_Start => false;],
          [Partition => reference (partB);
          Duration => 100 ms;
          Periodic_Processing_Start => false;]
      );
  end core.impl;

  system basicSystem
  end basicSystem;
  
  system implementation basicSystem.impl
    subcomponents
      core1: processor core.impl;
      partAProcess: process processA.impl;
      partBProcess: process processB.impl;
    connections
      procBToA: port partBProcess.outputB -> partAProcess.inputA;
    properties
      Actual_Processor_Binding => (reference (core1)) applies to core1.partA;
      Actual_Processor_Binding => (reference (core1)) applies to core1.partB;
      Actual_Processor_Binding => (reference (core1.partA)) applies to partAProcess;
      Actual_Processor_Binding => (reference (core1.partB)) applies to partBProcess;
      Compute_Execution_Time => 50 ms .. 50 ms applies to partAProcess.tA.t0, partBProcess.tB.t0;
      Period => 200 ms applies to partBProcess.tB;
      Period => 200 ms applies to partAProcess.tA;
      Deadline => 200 ms applies to partBProcess.tB;
      Deadline => 200 ms applies to partAProcess.tA;
  end basicSystem.impl;
  
end example;

Control Test

As a control, I repeated these tests with an identical model that has the behavior annex lines removed. Note that the retained heap does not increase over 20 clean+rebuild cycles.

[Screenshot: JVisualVM heap graph for the control model over 20 clean+rebuild cycles]

Environment

  • OSATE Version: 2.7.2
  • Operating System: Windows 10 64bit
  • I did not use the optional caching flag added in OSATE 2.7.1