
Boilerplate reduction and using a TEST_FINISHED method #26

Closed
claytonketner opened this issue Jun 6, 2019 · 8 comments
Labels
enhancement New feature or request

Comments

@claytonketner
Contributor

Hi Jakob - thanks for the awesome library! I've been using it for a few days and noted a few minor things. I've implemented these on my machine and can put up a fork if these sound good to you, although I could very well not be seeing some edge cases that this wouldn't work for.

Boilerplate Reduction

There are a few duplication/boilerplate pain points around making tests. When I create a new test FB, I need to remember to add a bunch of things, like:

  • call_after_init pragma
  • EXTENDS and IMPLEMENTS inheritance
  • An instance of FB_Assert
  • Implementing the interface (creating the RunTests method)
  • Creating FB_init and filling it with the required method call

I understand if some of these things are necessary, but it would be really nice to reduce this list. Is the RunTests method necessary? Couldn't the test runner just execute the body of the function block instead? (This is done by replacing variables of type I_TestSuite with POINTER TO FB_TestSuite.) Removing this requirement would also remove the need to add IMPLEMENTS .... FB_TestSuite could also contain an instance of FB_Assert already.
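The runner-side change suggested above could look roughly like this sketch (the array, counter, and bound names are illustrative; only FB_TestSuite and the pointer replacement come from this proposal):

```
// Sketch: store suites as POINTER TO FB_TestSuite rather than
// I_TestSuite and invoke each FB body directly.
// Variable names and array bounds here are illustrative.
VAR
    TestSuiteAddresses : ARRAY[1..100] OF POINTER TO FB_TestSuite;
    NumberOfTestSuites : UINT;
    Counter : UINT;
END_VAR

FOR Counter := 1 TO NumberOfTestSuites DO
    IF TestSuiteAddresses[Counter] <> 0 THEN
        // Calling the dereferenced FB executes its body, which in
        // turn calls the individual test methods - no RunTests needed
        TestSuiteAddresses[Counter]^();
    END_IF
END_FOR
```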

After some changes to TcUnit, I was able to get the boilerplate for each new test suite FB down to:

  • Add {attribute 'call_after_init'}
  • Add the inheritance EXTENDS FB_TestSuite
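Under that scheme, a complete test suite could be as small as the following sketch. FB_TestSuite, TEST(), and TEST_FINISHED() are the names discussed in this thread; the suite name, test method, and the exact assertion helper are illustrative:

```
{attribute 'call_after_init'}
FUNCTION_BLOCK FB_PrimitiveTypesTestSuite EXTENDS FB_TestSuite
// FB body: the runner executes this directly, so it only needs to
// call the individual test methods
Test_AddTwoInts();

METHOD Test_AddTwoInts
VAR
    Result : INT;
END_VAR
TEST('Test_AddTwoInts');
Result := 1 + 2;
// Assertion helper available via FB_TestSuite (exact name may differ)
AssertEquals_INT(Expected := 3,
                 Actual := Result,
                 Message := 'Addition gave the wrong result');
TEST_FINISHED();
```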

TEST_FINISHED Method

It seems like it would make more sense to have a method to mark a test as finished, rather than using the return value of RunTests. This would be especially helpful for when a test requires multiple cycles to complete. E.g.:

METHOD Test1
---------------------------------
TEST('Test1');

// Test does things

IF endCondition THEN
    TEST_FINISHED();
END_IF
@sagatowski
Member

sagatowski commented Jun 10, 2019

Hi Clayton!

Thanks for the excellent feedback.

I fully agree with you that there is a lot of boilerplate code that needs to be written every time you create a test suite. I try to think of it as the prerequisites you have to set up in other unit testing frameworks as well, but even so, it is obviously better to minimize it. I've used many different testing frameworks for inspiration (JUnit for Java, Google Test for C++), though having made that trip I'm not sure you should compare a unit testing framework for PLCs to these (multiple cycles, etc.).

The only issue I see is that there are already quite a few users of TcUnit, and all of them would need to update their code for their tests to run. Obviously the API for the framework is not yet set (the reason it's not a 1.x release but just 0.4.1 now). I guess one way to handle this would be to create a final release with all the current bugs fixed on the current API (as version 0.9.9 or whatever) and then include your changes as release 1.0.

If you could create a fork and let me look at the code, we could include all your changes through pull requests. Would that be fine for you?

All documentation (website + examples) needs to be updated as well.

@sagatowski sagatowski added the enhancement New feature or request label Jun 10, 2019
@claytonketner
Contributor Author

Yep - I totally get that, and especially with PLCs you want to limit changes and be able to set and forget. I'll put my fork up soon and leave it up to you if you decide on merging the changes sometime in the future.

@sagatowski
Member

I've now created new versions of the:

  • TcUnit-Verifier
  • AdvancedExampleProject
  • SimpleExampleProject

These now use your forked TcUnit (with the new boilerplate code), and I got the same results as with the current TcUnit. This change significantly improves the usability of TcUnit.
I'll do some more tests, but this looks very promising, so if all goes well I'll most likely merge your changes into main via a pull request.

I want to take the opportunity to thank you very much for your effort.

I'll post updates here as I make more progress.

@claytonketner
Contributor Author

claytonketner commented Jun 12, 2019

Sounds great! I can create the PR now - there are a couple of minor things I still need to address and clean up. I forgot I can't make the PRs, so I'll just list the things here:

  • An empty test suite causes the test report to never show, likely because the runner thinks it is still running tests
  • Clean up stray variables (I think there may be some we don't need anymore)
  • Fix minor formatting issues caused by the find/replace converting tabs to spaces

@sagatowski
Member

I've noticed while running the TcUnit-Verifier that for some reason I don't get consistent results! Sometimes I get 136 errors (not real errors, but the number of error rows) in the Visual Studio error list as the output. 136 is what I should get (this is what you always get when running against the current TcUnit version). Sometimes, however, I get fewer than this, even though the test result reports the same numbers: the same amount of finished tests, the same amount of failed/successful tests, and so on. It's completely random. If you run the TcUnit-Verifier ~10 times, you're almost guaranteed to get different results every time.

I can create a branch of the TcUnit-Verifier (as it took quite some time to convert it to the new TcUnit), and maybe you can take a quick look at it? It might be that all the results are correctly reported, but that for some reason not all ADSLOG messages are correctly received by the ADS router.

@sagatowski
Member

sagatowski commented Jun 13, 2019

Alright, I've done some more tests and I've concluded that it's actually not the testing framework that is causing this behavior; I get the same behavior (sometimes) with the current framework as well. What I've done is add a variable that increments every time I call ADSLOGSTR(). I'm supposed to get 136 errors in the Visual Studio error list with the current TcUnit-Verifier. Whatever I do, this variable is always set to 136, but sometimes the actual number of errors in the list is lower than 136 (even though ~90% of the time it is exactly 136). This means that these ADSLOGSTR() messages are written/transmitted, but somehow get lost in the AMS/ADS router. I would say this is most likely a bug in TwinCAT, so the conclusion is that the changes made in the fork have not affected this.
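The instrumentation described above could look roughly like this sketch. The counter and wrapper names are illustrative; ADSLOGSTR and ADSLOG_MSGTYPE_ERROR are the standard TwinCAT log function and message-type constant:

```
VAR_GLOBAL
    AdsLogStrCallCount : UDINT; // incremented on every ADSLOGSTR() call
END_VAR

// Illustrative wrapper: route all error logging through one method so
// the number of sent messages can be compared against the number of
// rows that actually show up in the Visual Studio error list
METHOD LogError
VAR_INPUT
    ErrorMessage : T_MaxString;
END_VAR
ADSLOGSTR(msgCtrlMask := ADSLOG_MSGTYPE_ERROR,
          msgFmtStr := '%s',
          strArg := ErrorMessage);
AdsLogStrCallCount := AdsLogStrCallCount + 1;
```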

Edit: Why can't you create pull requests? I don't see any reason why you shouldn't be able to open one against the upstream repository.

@claytonketner
Contributor Author

claytonketner commented Jun 14, 2019

Huh, that's a weird bug - sounds like it could be a timing issue? I wonder if ADSLOGSTR doesn't like being called too quickly. Thanks for looking into that - I've been a bit busy at work lately.

Anyway, I'm not sure why I thought I couldn't create a PR. Making one now.

@sagatowski
Member

Implemented in commits 1ca08ab, 48b2a75, 826045f.

dfreiberger pushed a commit to dfreiberger/TcUnit that referenced this issue Oct 22, 2022
… test-results".

Upped version to 0.9.4.0.