
collecting average performance of multiple runs #285

@mand35

Description


I want to perform performance degradation tests using ReFrame. To that end, each application needs to run multiple (X) times, and afterwards the average performance needs to be compared with reference values.

I tried to implement the test as follows, but there seems to be a conceptual issue.

  • instantiate the test class (X times)
def _get_checks(**kwargs):
    PDT_rep = 2
    ret = []
    for i in range(1, PDT_rep + 1):
        ret.append(IorCheck(i, PDT_rep, 64, '4m', '8g',
                            '/scale_akl_nobackup/filesets/nobackup',
                            'POSIX', **kwargs))
    return ret
  • each instance adds its performance value to a shared dictionary
        self.perf_patterns = {}
        self.perf_PDT_values['write_%d' % count] = sn.extractsingle(
            r'^Max Write:\s+(?P<write>\S+) MiB/sec', self.stdout, 'write', float)
  • the X-th instance calculates the mean value and uses perf_patterns for the comparison
        if count == PDT_rep:
            sum_r = ''
            sum_w = ''
            self.perf_patterns['PDT_read_{0}_{1}_{2}'.format(procs, t_size, b_size)] = ""
            for i in range(1, PDT_rep + 1):
                # sum_r += self.perf_PDT_values['read_%d' % i]  ## this also does not work
                self.perf_patterns['PDT_read_{0}_{1}_{2}'.format(procs, t_size, b_size)] += \
                    self.perf_PDT_values['read_%d' % i]
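The intent of the three steps above can be sketched in plain Python, without ReFrame's deferred machinery: a class-level dictionary shared by all instances collects each run's value, and the last instance computes the mean. All names here (`PerfAggregator`, `record`, `mean_of`) are hypothetical illustrations, not ReFrame API; note also that `sn.extractsingle` returns a deferred expression rather than a plain `float`, so it cannot be accumulated eagerly the way these floats are.

```python
# Illustrative sketch (not ReFrame API): aggregate per-run values in a
# class-level store shared by all test instances, then average.

class PerfAggregator:
    # Class attribute: one dict shared by every instance.
    values = {}

    def __init__(self, count, total):
        self.count = count      # index of this run (1-based)
        self.total = total      # total number of runs (X)

    def record(self, metric, value):
        # Append this run's measurement to the shared store.
        self.values.setdefault(metric, []).append(value)

    def mean_of(self, metric):
        vals = self.values[metric]
        return sum(vals) / len(vals)


# Usage: two runs record a write bandwidth; the last one averages.
runs = [PerfAggregator(i, 2) for i in (1, 2)]
runs[0].record('write', 1000.0)
runs[1].record('write', 1100.0)
print(runs[1].mean_of('write'))  # 1050.0
```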

As a result, I get:

  for m in finditer(patt, filename, encoding):
 File "/scale_akl_persistent/filesets/home/schoenherrm/projects/NeSI_ReFrame/reframe/utility/sanity.py", line 533, in finditer
   with open(filename, 'rt', encoding=encoding) as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
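The `TypeError` itself comes from `sanity.finditer` trying to open a filename that is still `None`: the deferred `extractsingle` expression appears to be evaluated before (or outside) the run whose `self.stdout` it refers to. A minimal reproduction of that exact error, outside ReFrame:

```python
# Reproducing the failure mode: opening None as a filename raises the
# same TypeError that sanity.finditer hits when stdout is still unset.
try:
    open(None, 'rt')
except TypeError as exc:
    print(exc)  # expected str, bytes or os.PathLike object, not NoneType
```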

Could you clarify the issue, or do you have a simple solution for running the application multiple times and evaluating the average performance?
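One simpler pattern, assuming the application can be invoked X times within a single test so that all repetitions land in one stdout: extract every match and average them. The plain-Python equivalent would be the following, using `re` and `statistics` in place of ReFrame's sanity layer (which, as far as I can tell, offers analogous deferred `extractall`/`avg` utilities):

```python
import re
from statistics import mean

# Combined stdout of X repetitions of the IOR run (illustrative data).
stdout = """\
Max Write: 1024.50 MiB/sec
Max Write: 980.30 MiB/sec
"""

# Same pattern as in the test above, applied to every occurrence.
writes = [float(m.group('write'))
          for m in re.finditer(r'^Max Write:\s+(?P<write>\S+) MiB/sec',
                               stdout, re.MULTILINE)]
print(round(mean(writes), 2))  # 1002.4
```

This sidesteps the cross-instance sharing entirely: a single test owns a single stdout, so the deferred extraction always has a valid file to read.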
