
Conversation

@mand35
Contributor

@mand35 mand35 commented Jun 18, 2018

One can create multiple objectives for a single test case (see the example below). This commit improves the logging of the performance values: it adds the tag to the log and it logs all values (previously, logging stopped as soon as a reference was not reached). This way we can keep better track of all the values observed during the tests.

Reference example:

    p_names = {'creation', 'stat', 'removal'}
    for p_name in p_names:
        self.perf_patterns[p_name] = sn.extractsingle(
            r'^\s+File ' + p_name + r'\s+:\s+(?P<' + p_name + r'>\S+) ',
            self.stdout, p_name, float)

    self.reference = {
        'kupe:compute': {
            'creation': (7747,  -(2*223.1)/7747,  None),
            'stat':     (16527, -(2*558.2)/16527, None),
            'removal':  (7355,  -(2*173.1)/7355,  None)
        },
    }
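As a standalone illustration (outside ReFrame), the regex built above can be exercised with the standard re module. The sample stdout below is hypothetical, mimicking an mdtest-style "File <operation> : <value>" report:

```python
import re

# Hypothetical stdout fragment in the format the pattern above expects.
stdout = """\
   File creation     :       7747.123 ops/sec
   File stat         :      16527.456 ops/sec
   File removal      :       7355.789 ops/sec
"""

values = {}
for p_name in ('creation', 'stat', 'removal'):
    # Same pattern construction as in the example: one named group per tag.
    pattern = r'^\s+File ' + p_name + r'\s+:\s+(?P<' + p_name + r'>\S+) '
    match = re.search(pattern, stdout, re.MULTILINE)
    values[p_name] = float(match.group(p_name))

print(values)
```

Each tag ends up with its extracted float, which is what sn.extractsingle(..., float) produces per pattern in the real test.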

mand35 added 2 commits June 19, 2018 10:40
@jenkins-cscs
Collaborator

Can I test this patch?

@codecov-io

codecov-io commented Jun 18, 2018

Codecov Report

Merging #334 into master will increase coverage by 0.02%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #334      +/-   ##
==========================================
+ Coverage   91.28%   91.31%   +0.02%     
==========================================
  Files          67       67              
  Lines        8018     8022       +4     
==========================================
+ Hits         7319     7325       +6     
+ Misses        699      697       -2
Impacted Files Coverage Δ
reframe/core/pipeline.py 94.78% <100%> (+0.05%) ⬆️
reframe/core/config.py 84.54% <0%> (+1.81%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2fc7c01...e9443ea. Read the comment docs.

@vkarak vkarak self-requested a review June 19, 2018 07:29
@vkarak vkarak self-assigned this Jun 19, 2018
@vkarak
Contributor

vkarak commented Jun 19, 2018

@mand35 Thanks for your PR. The performance logging is completely redesigned in #213, which is planned to be merged this week. There you can specify exactly the format of the performance log in the configuration file and you can log the performance variables (tags) as well.

    key = '%s:%s' % (self._current_partition.fullname, tag)
    ref, low_thres, high_thres = self.reference[key]
    evaluate(assert_reference(value[tag], ref, low_thres,
                              high_thres))
Contributor

I can't understand very well the purpose of this nested loop here.

Contributor Author

It is not a nested loop; I needed to move the evaluation into an additional loop. The reason was that if one of the early reference comparisons raised the assertion, the remaining values were not written to the log file. This way, all values are written first and the comparisons are performed afterwards.
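The two-pass idea described above can be sketched in isolation (a minimal, hypothetical stand-in for ReFrame's deferred expressions and assert_reference, assuming the thresholds are fractional deviations from the reference value):

```python
def check_performance(perf_patterns, reference, log):
    """Evaluate and log all performance values first, then assert them."""
    # Pass 1: evaluate every deferred expression and log its value, so that
    # a failing comparison cannot suppress the logging of later values.
    perf_values = {}
    for tag, expr in perf_patterns.items():
        perf_values[tag] = expr()
        log.append('%s, value: %s, reference: %s'
                   % (tag, perf_values[tag], reference[tag]))

    # Pass 2: compare every value against its (ref, low_frac, high_frac) tuple.
    failed = []
    for tag, value in perf_values.items():
        ref, low_frac, high_frac = reference[tag]
        lo = ref * (1 + low_frac) if low_frac is not None else float('-inf')
        hi = ref * (1 + high_frac) if high_frac is not None else float('inf')
        if not lo <= value <= hi:
            failed.append(tag)

    if failed:
        raise AssertionError('reference not met for: %s' % ', '.join(failed))
```

With one passing and one failing tag, both values still end up in the log before the assertion fires, which is exactly the behavior the single-loop version could not guarantee.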

Contributor

You are right, for some reason I overlooked the alignment! I see what you are trying to do.

@vkarak
Contributor

vkarak commented Jun 20, 2018

@jenkins-cscs retry all

Contributor

@vkarak vkarak left a comment

I see the point of your PR and it's valid. Apart from my minor comments here, I'd suggest reverting the changes on logging and waiting for #213 to be merged, then incorporating its changes.


    with os_ext.change_dir(self._stagedir):
        # first check and print all values with references
        value = {}
Contributor

Better call this perf_values. I'd also change the comment a bit, to "Evaluate all performance variables first, so as to log them, then assert them".

(tag, self._current_partition.fullname)
)
evaluate(assert_reference(value, ref, low_thres, high_thres))
for tag, expr in self.perf_patterns.items():
Contributor

There is no need to go over perf_patterns again here; just iterate over the evaluated perf_values.

Also, leave a blank line after the previous for loop.

(value, self.reference[key])
'%s, value: %s, reference: %s' %
(tag, value[tag], self.reference[key])
)
Contributor

For the logging, I suggest waiting for #213 to be merged (perhaps today).

@vkarak vkarak changed the title store all performance values in log file Evaluate and log all performance values before asserting them Jun 20, 2018
@vkarak vkarak added this to the Upcoming sprint milestone Jun 20, 2018
@vkarak vkarak changed the title Evaluate and log all performance values before asserting them [feat] Evaluate and log all performance values before asserting them Jun 26, 2018
@vkarak vkarak removed this from the Upcoming sprint milestone Jun 27, 2018
@vkarak
Contributor

vkarak commented Jul 17, 2018

Closing this. Replaced by #359.
