
Add unit-testing framework to SU2 #850

Merged (44 commits) on Apr 30, 2020

Conversation


@clarkpede (Contributor) commented Jan 17, 2020

Proposed Changes

This is intended to be a small PR, but a big design shift. As discussed in issue #698, unit tests can serve an important role in software development. They complement, rather than replace, validation and verification tests. They are most useful when you want to isolate a specific behavior. Unit-testing increases development time, but improves the quality of the code and reduces code-maintenance time [source].

Vision

In my mind, the most efficient way to proceed is to add unit tests as new behaviors are added and bugs are fixed. With each new PR, we should ask "Should this have a unit test to verify this behavior?" Unit tests will provide a clear way to show that each PR does what the developer promises it will do.

This will require refactoring some of the code. Some parts of the code, like the CConfig class, are not unit-testing friendly. Unit testing is hard when you have God-classes and "do-it-all" functions. The hope is that refactoring these classes will lead to more modular and easier-to-maintain code.

Decision to use Catch2

Catch2 was chosen as the unit-testing library for the following reasons:

  • It can be included as a header-only library.
  • It has a very clean, easy-to-use syntax.
  • It is in widespread use, including in the FEniCS library.

Design choices

The following design choices are in place. If you disagree with them, then please start a discussion here:

  • All tests are placed in a root-level directory named UnitTests. Inside this directory is a structure just like the existing folder structure (e.g. SU2_CFD/numerics), except that there are not separate src and include folders.
  • Tests are grouped by class, and put in files such as CNumerics_tests.cpp.
  • A single test executable is compiled and run, as opposed to a separate test executable for each group of tests. With the single test executable, you can always filter down to groups of tests or individual tests.
  • When ninja test or meson test is run, the test executable is only run once, sweeping through all the unit tests. The result only shows a single failure or a single success. If more detail is desired, then the test executable can be run manually.
  • The relevant SU2 code is compiled as a library (e.g. libSU2core) and then linked into both the main executable (SU2_CFD) and the test program (test_driver).
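The library-plus-two-executables layout described above might be wired up in meson roughly like this (a sketch only; the variable and target names are illustrative, not the exact ones in the PR):

```meson
# Build the shared SU2 sources once as a library...
su2_core = static_library('SU2core', core_sources,
                          include_directories: su2_includes)

# ...then link it into both the main solver executable...
executable('SU2_CFD', 'SU2_CFD.cpp', link_with: su2_core)

# ...and the unit-test driver.
test_driver = executable('test_driver', unit_test_sources,
                         link_with: su2_core)

# Registers the driver with `meson test` / `ninja test`; it runs once,
# sweeping through all the unit tests.
test('unit_tests', test_driver)
```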

How to Run Unit Tests

If you compile SU2 using ninja -C builddir, then the test executable will automatically be compiled. You then have three options for running the tests:

  • meson test (see the Meson test documentation)
  • ninja test
  • Run the executable manually: ./UnitTests/test_driver

When running the executable manually, there are many customization options. Run ./UnitTests/test_driver --help to see them all.

How to add unit tests

I have already included a single unit-test example (UnitTests/SU2_CFD/src/numerics/CNumerics_tests.cpp). Aside from the headaches with CConfig, this example should be clear. If not, raise questions and I will answer them. I plan on adding a few more example tests, as well as a "How-To" page on the website. Please see the PR on the website repo for more documentation.

To learn more, check out the great tutorial for Catch2. You can also find more in-depth documentation there.

I still don't understand unit tests. Where can I learn more?

I highly recommend reading at least one of these two books:

  • "Working Effectively with Legacy Code," by Michael Feathers
  • "Modern C++ Programming with Test-Driven Development," by Jeff Langr

Remaining work

  • Add unit-testing to Travis CI
  • Add support for unit testing in AD builds
  • Add more examples of unit tests
  • Add "How-To Add Unit Tests" documentation

Related Work

This resolves issue #698.

PR Checklist

  • I am submitting my contribution to the develop branch.
  • My contribution generates no new compiler warnings (try with the '-Wall -Wextra -Wno-unused-parameter -Wno-empty-body' compiler flags).
  • My contribution is commented and consistent with SU2 style.
  • I have added a test case that demonstrates my contribution, if necessary.
  • I have updated appropriate documentation (Tutorials, Docs Page, config_template.cpp) , if necessary.

I have also added a simple example unit test to demonstrate the framework.
@talbring
(Member)

Thanks @clarkpede for setting this up! I will add some tests myself and see how it works out.

@economon (Member) left a comment

Awesome, @clarkpede, this will be a wonderful addition! Agreed that it's much better to develop against tests, and it will force us to properly refactor. Just a couple of comments.

Make sure to include the license file if that is part of the license terms. Otherwise, I think the license is compatible.

Also, once you get it connected to CI, I think it would be useful to print the specific error messages if a test fails. It can sometimes be difficult to reproduce the errors on the remote system (GitHub actions) locally, and it would be good to see the printout on failure.

Excerpt under review (the test driver's MPI setup):

int provided;
SU2_MPI::Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
#else
SU2_MPI::Init(&argc, &argv);
@economon:

Looks like the driver sets up MPI. It would be great if we could also use MPI in the tests. Is that working?

@clarkpede (author) replied:

I haven't tested it myself, but that's the intention.

A lot of classes (like the solver classes) use MPI calls. For example, some constructors set the MPI rank and size. So there's really no getting around MPI, even for non-MPI tests.

@EduardoMolina
(Contributor)

Hi @clarkpede,

Thanks for the contribution. The example of the central/upwind blending is very clear. Ready to merge.

Eduardo

@economon (Member) left a comment


LGTM too. Very excited to have the unit testing incorporated. One small comment, but merge when ready.

Consider these TODOs for later:


const int nDim = 3;

su2double **coordinates = new su2double*[8];
@economon:
Use std::array for these to avoid the new/delete hassle.

@clarkpede (author) commented Apr 30, 2020

@economon I'm documenting my thoughts on mocking here, partially in response to you but also for any collaborators. The biggest hurdle with mocking frameworks is that they usually require versatile abstract base classes and dependency inversion. SU2 is relatively good at that with solvers, variables, and numerics. But classes like CConfig are completely incompatible with mocking. CGeometry seems to be somewhere in the middle. Mocking can also be overused.

I agree, mocking is a great tool, and has its legitimate uses. Martin Fowler recommends mocking in at least two use cases:

One is if you're spending a lot of time debugging when tests fail because they aren't breaking cleanly and telling you where the problem is...The second area is if your objects don't contain enough behavior, mockist testing may encourage the development team to create more behavior rich objects.

But I believe we can get pretty far just with what we have, and start including mocking as needed.
