
unit tests in C++ #110

Open
mindhells opened this issue May 21, 2020 · 6 comments

Comments

@mindhells
Member

mindhells commented May 21, 2020

A little context: Gnofract4d is built in different layers and different languages. The user interface (whether graphical or not) is written in Python, whereas the "heavy math calculations" (I like to call this layer "the engine") are handled in C and C++. To make everything play together there's a Python extension that provides an interface between the two.
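Just to make the shape of that middle layer concrete, here's a generic sketch of what a CPython extension bridging Python and a C++ engine looks like. The module and function names are invented for the example; they are not Gnofract4d's actual API.

```cpp
// Generic sketch of a CPython extension module wrapping a C++ "engine" routine.
// All names here are illustrative, not Gnofract4d's real symbols.
#include <Python.h>

// Stand-in for an engine routine that would normally live in the C++ layer.
static double engine_calculate(double x, double y)
{
    return x * x + y * y;
}

// Wrapper exposed to Python: parses arguments, calls the engine, boxes the result.
static PyObject *py_calculate(PyObject * /*self*/, PyObject *args)
{
    double x = 0.0, y = 0.0;
    if (!PyArg_ParseTuple(args, "dd", &x, &y))
        return nullptr;                       // argument error already set
    return PyFloat_FromDouble(engine_calculate(x, y));
}

static PyMethodDef methods[] = {
    {"calculate", py_calculate, METH_VARARGS, "Run one engine calculation."},
    {nullptr, nullptr, 0, nullptr}
};

static PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "fract4d_engine", nullptr, -1, methods,
    nullptr, nullptr, nullptr, nullptr
};

PyMODINIT_FUNC PyInit_fract4d_engine(void)
{
    return PyModule_Create(&moduledef);
}
```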

One of the things I love about this project is the number of tests. The test coverage report has recently been activated, and you can see some good numbers there.

That said, there's something hidden under those numbers: a good number of the tests written in Python are there to check that the C++ layer behaves well, so you don't really know which parts of that layer are actually covered.

But there's something else that annoys me more: for the Python tests to be able to exercise the C++ layer, the Python extension interface has had to be enlarged, artificially, because most of it isn't even used in the actual application. This is problematic for me for these reasons:

  • Working with Python extensions is hard: you have to deal with low-level techniques like reference counting (see the sketch after this list).
  • The current C++ implementation of "the engine" is coupled to the Python extension implementation.
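To show what "reference counting" means in practice, here's a small example of the kind of manual ownership bookkeeping a binding function has to get right. The module and function are made up for illustration.

```cpp
// Ownership bookkeeping in a CPython binding; all names invented for the example.
#include <Python.h>

static PyObject *py_double_first(PyObject * /*self*/, PyObject *args)
{
    PyObject *seq = nullptr;
    if (!PyArg_ParseTuple(args, "O", &seq))
        return nullptr;

    // PySequence_GetItem returns a *new* reference: we own it and must release it.
    PyObject *item = PySequence_GetItem(seq, 0);
    if (!item)
        return nullptr;

    PyObject *two = PyLong_FromLong(2);               // new reference
    if (!two) {
        Py_DECREF(item);                              // easy to forget on error paths -> leak
        return nullptr;
    }

    PyObject *result = PyNumber_Multiply(item, two);  // new reference (or nullptr on error)
    Py_DECREF(item);
    Py_DECREF(two);
    return result;                                    // ownership passes to the Python caller
}

static PyMethodDef methods[] = {
    {"double_first", py_double_first, METH_VARARGS, "Return the first element doubled."},
    {nullptr, nullptr, 0, nullptr}
};

static PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "refcount_demo", nullptr, -1, methods,
    nullptr, nullptr, nullptr, nullptr
};

PyMODINIT_FUNC PyInit_refcount_demo(void) { return PyModule_Create(&moduledef); }
```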

I'm currently working on some refactors of the C++ "engine", and I'm finding it hard to keep the current Python extension interface while at the same time decoupling the internal classes.
See for example the test testFractWorker: it's meant to exercise the fractal worker, but you also need to create a fractal function afterwards so the internal coupling can take place, because the worker and the fractal function are coupled.
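As a purely hypothetical sketch (none of these names match the real Gnofract4d classes), the kind of decoupling I mean would let a C++ test hand the worker a stub instead of having to build a full fractal function first:

```cpp
// Hypothetical sketch of breaking the worker <-> fractal-function coupling:
// the worker depends on a small interface, so a test can inject a stub.
#include <cassert>

struct IPointCalculator {
    virtual ~IPointCalculator() = default;
    virtual int iterate(double x, double y) const = 0;
};

class FractWorker {
public:
    explicit FractWorker(const IPointCalculator &calc) : calc_(calc) {}
    int computePixel(double x, double y) const { return calc_.iterate(x, y); }
private:
    const IPointCalculator &calc_;
};

// In a C++ unit test, a trivial stub stands in for the real fractal function.
struct StubCalculator : IPointCalculator {
    int iterate(double, double) const override { return 42; }
};

int main()
{
    StubCalculator stub;
    FractWorker worker(stub);
    assert(worker.computePixel(0.0, 0.0) == 42);
    return 0;
}
```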

So my point is: does it make sense to create C++ tests and get rid of the Python ones that are doing that job ATM, as well as the Python extension interfaces they need?

@dragonmux
Member

It certainly makes sense to add C++ unit tests to allow more fine-grained functional unit testing of the native code.

Wrt frameworks, there are a couple you might consider using, though I am biased on this score.

They each have their swings and roundabouts. I'd suggest a hybrid Python + C++ unit test scheme, as the Python unit tests are doing something the C++ ones never can: detecting bugs in the Python bindings themselves, such as missed reference counts, buggy argument handling and Python object handling issues.

@mindhells
Member Author

I guess there's another reason to keep hybrid tests: some of them need to load the formula shared library, which is compiled by the fract4d_compiler package.
Still, I think we can move part of them to C++ and reduce the footprint of the Python extension interface, which, in my opinion, is way too large for its real purpose.
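For the C++ side, a test could load that compiled formula library directly. A rough sketch with dlopen follows; the path and the symbol name are placeholders, not necessarily what fract4d_compiler actually emits.

```cpp
// Rough sketch of a C++ test loading a formula shared library at runtime.
// "./formula.so" and "pf_new" are placeholders for whatever the compiler emits.
#include <dlfcn.h>
#include <cstdio>

int main()
{
    void *handle = dlopen("./formula.so", RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Look up whatever entry point the compiled formula exposes.
    using entry_fn = void (*)();
    auto entry = reinterpret_cast<entry_fn>(dlsym(handle, "pf_new"));
    if (!entry) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    // ... exercise the formula through the resolved entry point ...

    dlclose(handle);
    return 0;
}
```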

I'm working in this branch to integrate Crunch into the pipeline. I had to use CMake to build the input library for crunch++ (I couldn't manage to do it with crunchMake). I'm still struggling with the Travis config: when I create a job matrix I can't specify the Python versions within the same job.
I did create a couple of dummy tests, though, and managed to get Travis to run them using Docker.

@edyoung
Member

edyoung commented May 23, 2020

I would be quite happy to have some C++ native tests :-)

Just for historical reference, I saw the Python program as 'the main program' and the C++ code as a library providing some extension functionality that would be too inefficient to write in Python.

The opposite view is that the C++ code is the 'real program' and the Python code is 'just a wrapper'.

Both are reasonable. The current testing strategy is based on the former view.

@mindhells
Member Author

mindhells commented May 23, 2020

I like historical references, especially in a project with as long a history as Gnofract4d. They let you understand decisions made during development that you could otherwise only guess about.

From my point of view, and considering I only got to know the project a few months ago, I share the view that there's "the main program". At the same time I see "the compiler" (fract4d_compiler) and "the engine" (the C++ layer behind the Python extension) as potentially independent and reusable parts. The clearer the interfaces of those last two, the easier it is for "the main program" to mediate between them.
The hardest part of understanding Gnofract4d's internals has been identifying the responsibilities of each of those parts.

It could look like I love the C++ part more than the others, but that's only because I picked it first to understand in detail. I think having a good set of dedicated, native tests for it could help other people with that.

I'll try to finish the exercise of setting up the test infrastructure with Crunch, pick a couple of interesting tests to copy or move from Python, and create a PR. I hope I can get some feedback from you guys.

@edyoung
Member

edyoung commented May 23, 2020

BTW, the benchmark lib I have been looking at uses Google's C++ test framework (https://github.com/google/googletest/blob/master/googletest/docs/primer.md) - that might be worth considering, though I have no personal experience with it.
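For reference, a minimal GoogleTest case looks like this. The function under test is an invented stand-in, not an actual engine routine; the binary links against gtest_main (or supplies its own main calling RUN_ALL_TESTS).

```cpp
// Minimal GoogleTest example; fract4d::escapeIterations is an invented stand-in.
#include <gtest/gtest.h>

namespace fract4d {
// Toy escape-time iteration count for z -> z^2 + c on the real axis.
int escapeIterations(double c, int maxIter)
{
    double z = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (z > 2.0)
            return i;
    }
    return maxIter;
}
}  // namespace fract4d

TEST(EscapeTimeTest, InsideSetNeverEscapes)
{
    EXPECT_EQ(fract4d::escapeIterations(0.0, 100), 100);
}

TEST(EscapeTimeTest, OutsidePointEscapesQuickly)
{
    EXPECT_LT(fract4d::escapeIterations(1.0, 100), 10);
}
```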

@dragonmux
Member

Google Bench makes an excellent framework and is the basis of quick-bench - I endorse the idea wholeheartedly.
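For completeness, a minimal Google Benchmark case looks like this; the benchmarked loop is just a stand-in for a real engine hot path.

```cpp
// Minimal Google Benchmark example; the inner loop is a stand-in workload.
#include <benchmark/benchmark.h>

static void BM_EscapeTime(benchmark::State &state)
{
    const double c = 0.25;
    for (auto _ : state) {
        double z = 0.0;
        for (int i = 0; i < 1000; ++i)
            z = z * z + c;
        benchmark::DoNotOptimize(z);   // keep the compiler from eliding the loop
    }
}
BENCHMARK(BM_EscapeTime);

BENCHMARK_MAIN();
```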
