
Add benchmarks for extran and routing tests #133

Merged: 8 commits into pyswmm:develop on Jan 17, 2018

Conversation

michaeltryby

This PR addresses Issue #132. The scripts gen-config.cmd and run-nrtest.cmd have also been added to the tools folder.

@michaeltryby (Author)

@bemcdonnell AppVeyor is running now with 21/22 tests passing.

@bemcdonnell (Member)

@michaeltryby, it would be really helpful if you could have the testing framework print a few lines of the diff so we can get a quick preview of what is going on. How complicated would it be to have the tool print ~10 lines of a failed RPT diff?

@bemcdonnell (Member)

@michaeltryby, maybe make a small injection here:

https://github.com/michaeltryby/Stormwater-Management-Model/blob/develop/tools/nrtest-swmm/main.py#L79-L89

def report_compare(path_test, path_ref, comp_args):
    '''
    Compares results in two report files, ignoring the contents of the
    header and footer. On the first mismatch, prints the differing line
    pair and fails the comparison.
    '''
    with open(path_test, 'r') as ftest, open(path_ref, 'r') as fref:
        # hdf.parse(f, n_header, n_footer) splits the file into
        # (header, details, footer); [1] selects the detail lines.
        for test_line, ref_line in zip(hdf.parse(ftest, 4, 4)[1],
                                       hdf.parse(fref, 4, 4)[1]):
            if test_line != ref_line:
                print('benchmark -> {}\ntest      -> {}'.format(
                    ref_line.rstrip('\n'), test_line.rstrip('\n')))
                return False

    return True
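
A variant along these lines could print up to ~10 mismatched line pairs before giving up, per the request above (a sketch only, keeping the same hdf module and 4-line header/footer assumptions; the max_diffs parameter and its default of 10 are illustrative):

def report_compare(path_test, path_ref, comp_args, max_diffs=10):
    '''
    Like report_compare above, but prints up to max_diffs mismatched
    line pairs instead of stopping at the first difference.
    '''
    diffs = 0
    with open(path_test, 'r') as ftest, open(path_ref, 'r') as fref:
        for test_line, ref_line in zip(hdf.parse(ftest, 4, 4)[1],
                                       hdf.parse(fref, 4, 4)[1]):
            if test_line != ref_line:
                print('benchmark -> {}\ntest      -> {}'.format(
                    ref_line.rstrip('\n'), test_line.rstrip('\n')))
                diffs += 1
                if diffs >= max_diffs:
                    break
    return diffs == 0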

@michaeltryby (Author) commented Jan 16, 2018

@bemcdonnell a couple of thoughts ...

  1. Running tests on CI is not a substitute for running tests locally.
  2. Developers should run the tests locally to make sure their code works before issuing a PR.
  3. Being a developer comes with a lot of responsibility. One of those responsibilities is testing code locally.
  4. If you want to retrieve failure information from the CI server, that should be handled in the CI configuration after tests run and a failure is detected, NOT in the nrtest comparison routine! On failure, have the server email or cache the failing test artifacts where you can retrieve them.
  5. The developer should be testing locally. Getting information back from the server is useful where there are differences between local and remote testing.
  6. When in doubt, you should be testing locally.
  7. The great thing is, it's super easy to run nrtest locally! (See the sketch below.)
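
For instance, a local run against a benchmark might look like this (a sketch assuming nrtest's execute and compare subcommands; the JSON config and directory names are hypothetical):

nrtest execute apps/swmm.json tests/ -o benchmarks/new
nrtest compare benchmarks/new benchmarks/ref --rtol 0.01 --atol 0.0001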

@michaeltryby (Author)

On AppVeyor, it looks like you can associate artifacts with a CI build and retrieve them:
https://help.appveyor.com/discussions/questions/1649-how-to-keep-a-build-artifact-when-tests-fail
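
For example, an on_failure hook in appveyor.yml could push the failing test outputs with AppVeyor's PushArtifact command (a sketch; the archive name and file paths are hypothetical):

on_failure:
  - 7z a failed-tests.zip %APPVEYOR_BUILD_FOLDER%\tests\*.rpt
  - appveyor PushArtifact failed-tests.zip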

Support for saving test artifacts isn't as good on Travis; it only supports S3.
https://github.com/travis-ci/artifacts

@michaeltryby (Author)

@bemcdonnell I propose we accept this PR as-is and create an issue to configure AppVeyor to retrieve failed test artifacts. What do you think?

@michaeltryby (Author)

@bemcdonnell I can also create an issue to update the README for running nrtest, found here, with step-by-step instructions for Windows.

@bemcdonnell (Member)

@michaeltryby, I wonder why we have one failure...

@michaeltryby (Author)

@bemcdonnell Let's find out!

michaeltryby merged commit 463e4d6 into pyswmm:develop on Jan 17, 2018