
Improve test coverage #800

Closed
vincentvanhees opened this issue Jun 6, 2023 · 3 comments · Fixed by #860
@vincentvanhees (Member)

Test coverage has declined a bit over the past year: https://app.codecov.io/gh/wadpac/GGIR

Some functionalities are notoriously difficult to test in GitHub actions:

  • Parallel processing
  • Specific challenges with larger datasets
  • File reading functionality across all variations in file type, recording length and device configuration.

Some of the functionalities are possibly easy to address:

  • Function g.plot5 (visualreport): First page generation is entirely skipped, which may not be needed.
  • Function g.part5:
    • Does not consider light at the moment.
    • Explorative nap/nonwear classification is not well covered here.
  • Function check_params:
    • Several object checks are not tested
  • Function g.analyse.perday:
    • Does not appear to test qwindow with more than two values, or use of an activity diary

Rather than trying to improve each of these tests individually, it may be easier to:

  • Put milestone data files from GGIR part 1 inside the package, e.g. from a couple of brands and study designs.
  • Add matching activity diary, advanced sleeplog and basic sleeplog.
  • Write separate unit tests to process those milestone data in a way we would do for a project.
  • Review how this improves the test coverage and where necessary revise these unit tests a bit to improve coverage.

In this way, it may be possible to improve the quality of the continuous integration for GGIR parts 2, 3, 4, 5.
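As a rough sketch of what such a project-style test could look like (testthat-based; the bundled data paths, file names, and some GGIR parameter choices below are hypothetical assumptions, not the package's actual layout):

```r
# Hypothetical sketch: run GGIR parts 2-5 on part 1 milestone data bundled
# with the package, plus a matching sleeplog. Paths and file names are
# assumptions for illustration only.
library(testthat)
library(GGIR)

test_that("parts 2-5 run end to end on bundled milestone data", {
  metadatadir = system.file("testfiles/milestone", package = "GGIR")  # assumed location
  outputdir = file.path(tempdir(), "ggir_integration")
  dir.create(outputdir, showWarnings = FALSE)
  GGIR(mode = 2:5,
       datadir = metadatadir,
       outputdir = outputdir,
       loglocation = system.file("testfiles/sleeplog.csv", package = "GGIR"),
       strategy = 1,
       do.report = c(2, 4, 5),
       overwrite = TRUE)
  # If the run completes, part 2 summary output should exist somewhere below outputdir.
  results = dir(outputdir, pattern = "part2_summary", recursive = TRUE)
  expect_gt(length(results), 0)
})
```

Running the same skeleton with different configurations (e.g. one per study design) would then exercise the code paths each project actually uses.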

@vincentvanhees vincentvanhees changed the title Work on test coverage Improve test coverage Jun 6, 2023
@jhmigueles (Collaborator)

Here are some reflections on this:

Put milestone data files from GGIR part 1 inside the package, e.g. from a couple of brands and study designs.

Agree with that. But I would first test that reading data from different brands/formats results in the same milestone data in g.part1. So, we would test the reading and the generation of the milestone data in part 1 separately, in a test covering all possible brands/formats (4-hour recordings may be enough for that). Once we have verified that all brands can be read and produce the same output, we can proceed with a single set of part 1 milestone data, for which I would use metadata from a GENEActiv file so that LUX variables are also included in the output, facilitating later testing. The challenge is that we would need to store such raw data somehow, which might be heavy for an R package.

In this separate test for part 1, we should make sure that we include:

  • Different data formats
  • Imputation of gaps
  • Appending of recordings
  • Part 1 output data is similar across brands/formats (e.g., metalong column names currently differ between Axivity (EN for Euclidean norm) and ActiGraph (en))
  • Short recordings and corrupted files are identified
  • The calibration process works well (for this we might need a longer test file)
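The cross-brand part 1 comparison could be sketched roughly like this (the short test files named below are placeholders for the 4-hour recordings mentioned above, not files that exist in the package):

```r
# Hypothetical sketch: run part 1 on short recordings from several brands and
# check that the metalong column names match, to catch inconsistencies such as
# EN (Axivity) vs en (ActiGraph). File names are placeholders.
library(testthat)
library(GGIR)

test_that("part 1 metalong columns are consistent across brands", {
  files = c("axivity_4h.cwa", "geneactiv_4h.bin", "actigraph_4h.gt3x")
  colnames_per_brand = lapply(files, function(f) {
    out = file.path(tempdir(), sub("\\..*$", "", f))
    dir.create(out, showWarnings = FALSE)
    GGIR(mode = 1,
         datadir = system.file("testfiles", f, package = "GGIR"),
         outputdir = out, overwrite = TRUE)
    rdata = dir(out, pattern = "^meta_.*RData$",
                recursive = TRUE, full.names = TRUE)[1]
    load(rdata)  # loads the milestone object M, among others
    colnames(M$metalong)
  })
  expect_length(unique(colnames_per_brand), 1)
})
```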

Write separate unit tests to process those milestone data in a way we would do for a project.

After thinking about this, I'm not sure this would improve the test coverage. In a project we usually select a specific GGIR configuration, but the GGIR pipeline is so flexible that using scripts from different projects would mean reprocessing the part 1 milestone data endlessly. A potential solution is to use the default GGIR configuration with each of the 5 strategies in part 2 to test the interaction between the 5 parts of the package, and then test the extra functionalities separately.

@vincentvanhees (Member, Author)

I left part 1 out of my proposal because testing part 1 as a whole on real-life data does not seem feasible: we would have to add large files to the package or download them, and even with those large files available, processing them would take time.

So, I think the part 1 functionalities are best tested with specific unit tests that cover each functionality separately. This is what we have been doing and will keep doing.

Instead, I would like to focus now on creating a higher-level unit test (integration test) that runs the other parts (2-5) with more realistic study data than our current synthetic data or tiny example files.

Even 10 MB of GGIR milestone data is too large for a package. A possible solution could be to include a numeric data.frame with 100 days' worth of real ENMO, MAD, nonwear, anglez, temperature, and LUX values, rounded to one decimal place. This would be based on real data from a variety of studies appended together. As part of the test, we could then write a function to convert this data.frame into semi-synthetic milestone data files, for example split up as multiple recordings.
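A conversion function along those lines might look roughly like this (the column layout and the simplified stand-in for GGIR's internal milestone object are assumptions, not GGIR's actual format):

```r
# Hypothetical sketch: split a bundled 100-day data.frame into several
# semi-synthetic "recordings" and save each as an .RData milestone file.
# The M object here is a simplified stand-in, not GGIR's real structure.
make_test_milestones = function(df, n_recordings = 4, outdir = tempdir()) {
  basicdir = file.path(outdir, "meta", "basic")
  dir.create(basicdir, recursive = TRUE, showWarnings = FALSE)
  chunk_id = cut(seq_len(nrow(df)), breaks = n_recordings, labels = FALSE)
  for (i in seq_len(n_recordings)) {
    M = list(metashort = df[chunk_id == i, ])  # assumed structure
    save(M, file = file.path(basicdir, paste0("meta_recording", i, ".RData")))
  }
  invisible(basicdir)
}
```

Each generated file could then feed the parts 2-5 tests as if it were a separate recording.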

Advantages I see:

  • Hopefully this offers a quick route to improving the test coverage for parts 2-5 without having to dive into every single function to search for possible opportunities to improve coverage. Ideally, running a realistic study scenario should already help to improve coverage.
  • We can use this to shorten test_chainof5parts.R afterwards, as some of it will no longer be needed, which will make test_chainof5parts.R tidier.
  • These new unit tests could even act as showcases for new software contributors of how GGIR can be used.

@vincentvanhees (Member, Author)

After thinking about this, I'm not sure this would improve the test coverage.

If we create a unit test for, let's say, the Whitehall study, then it will at the very least provide an integrated test of all the functionality they need, including LUX analysis, their specific sleeplog size, their specific ID format, and their specific way of dealing with missing files.

Here, I do not want to loop over all possible functionalities GGIR offers, as then we would just be duplicating test_chainof5parts.R. I only want to focus on testing real-life study scenarios, which could boost the test coverage but, equally importantly, helps to monitor that the specific approaches to the data remain reproducible.
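For example, a study-scenario test could pin down one configuration per study (the parameter values below are illustrative guesses, not any study's actual settings):

```r
# Hypothetical sketch of one study scenario: a configuration exercising LUX
# analysis with a specific ID-extraction and imputation strategy.
# All parameter values are illustrative.
library(GGIR)

run_study_scenario = function(metadatadir, outputdir, sleeplog) {
  GGIR(mode = 2:5,
       datadir = metadatadir,
       outputdir = outputdir,
       loglocation = sleeplog,
       idloc = 2,                                   # illustrative ID location
       strategy = 1,                                # illustrative imputation strategy
       LUX_day_segments = c(4, 8, 12, 16, 20, 24),  # enable LUX segment analysis
       overwrite = TRUE)
}
```

Re-running each such scenario in CI would then flag any change that breaks a study's established pipeline.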

@jhmigueles jhmigueles self-assigned this Jun 30, 2023
@jhmigueles jhmigueles mentioned this issue Jul 20, 2023
@jhmigueles jhmigueles mentioned this issue Aug 1, 2023