
Tests #48

Open
GilesStrong opened this issue Jun 5, 2020 · 0 comments
Labels
enhancement New feature or request medium priority Not urgent but should be dealt with sooner rather than later

Comments

@GilesStrong
Owner

Current state

Development of new methods and classes normally happens whilst solving a specific problem, and code is only added to the code-base once it works. Even so, subsequent changes and deprecations may cause parts of the code to begin to fail, or the code may not work in all cases (e.g. edge cases may exist that are not accounted for).

To help protect against this, the examples are designed to exercise as much of the code-base as possible in realistic scenarios. They then function as tests and are run (at least) prior to the release of a new version, so that any errors can be fixed.
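As a rough sketch of that release check, the example notebooks could be executed headlessly with nbconvert. This assumes Jupyter/nbconvert is installed; the directory layout and timeout here are illustrative assumptions, not the project's actual setup:

```python
# Sketch: execute every example notebook headlessly before a release,
# failing fast on the first cell that errors. The paths and timeout
# are placeholders, not actual files in the repository.
import subprocess
from pathlib import Path


def build_nbconvert_cmd(notebook: Path, timeout_s: int = 3600) -> list:
    """Build the nbconvert command that executes a notebook in place,
    raising on the first cell error."""
    return [
        "jupyter", "nbconvert",
        "--to", "notebook", "--execute", "--inplace",
        f"--ExecutePreprocessor.timeout={timeout_s}",
        str(notebook),
    ]


def run_examples(example_dir: Path) -> None:
    """Execute every notebook under example_dir; check=True makes any
    failing notebook abort the whole run."""
    for nb in sorted(example_dir.glob("*.ipynb")):
        subprocess.run(build_nbconvert_cmd(nb), check=True)
```

This is still slow and only reports pass/fail per notebook, which motivates the concerns below.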

Concerns

  • The examples can be quite slow to run and, being Jupyter Notebooks, can be difficult to run in an automated fashion; the feedback they provide is also limited
  • Full coverage of all methods and classes with unit tests might be difficult due to the extensive mocking required, and such tests may not accurately represent realistic use, or capture the interdependence of functions
    • My experience with unit testing, though, amounts to a three-month industrial secondment, i.e. it is not extensive. Perhaps approaches exist that better capture interdependence.
  • If examples are used as tests, then as the code-base grows, so must the range of examples
  • Examples by their nature will focus only on common application cases - they may miss edge cases
  • Whilst code may run correctly, some changes may lead to slow-down. This can be difficult to spot without continual monitoring of timing on a fixed task
  • Similarly, code may run correctly but changes may lead to a loss of performance. This can be difficult to spot without continual monitoring of performance on a fixed task
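The last two concerns could be guarded against with a simple timing check against a stored baseline. This is a minimal sketch: the baseline would be recorded once on a reference machine, and the 1.5x tolerance is an assumed value, not a recommendation from the code-base:

```python
# Minimal sketch of a timing-regression guard for a fixed task.
# baseline_s would be recorded on a reference machine and stored
# alongside the tests; the tolerance factor is an assumption.
import time


def best_time(fn, repeats: int = 3) -> float:
    """Return the best wall-clock time of fn over several runs,
    which reduces noise from other processes."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)


def assert_no_slowdown(fn, baseline_s: float, tolerance: float = 1.5) -> float:
    """Fail if fn has become slower than baseline_s * tolerance."""
    t = best_time(fn)
    assert t <= baseline_s * tolerance, (
        f"Possible slow-down: {t:.4f}s vs baseline {baseline_s:.4f}s")
    return t
```

An analogous check on a metric (e.g. loss or AUC on a fixed task) would catch silent performance degradation.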

Proposals

  • Create test versions (as .py files) of the examples which step through each stage of the code; these can then be used as continuous-integration tests
    • Allows for better coverage of functions and edge cases
    • Timing and performance of each function can be recorded to check for slow-down & degradation in code-base
    • Faster feedback on breaking changes, rather than just prior to deployment
  • Check how other frameworks, like FastAI, approach testing
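A .py test version of an example might look like the following sketch: the workflow is broken into named stages, each run in sequence with its wall-clock time recorded, so CI can flag both breakage and slow-down per stage. The stage names and toy computations here are illustrative stand-ins, not taken from the code-base:

```python
# Sketch of a staged .py test: run named stages in order, threading the
# output of one stage into the next, and record per-stage timings so CI
# can report both failures and slow-downs. Stages are toy placeholders.
import time


def run_stages(stages):
    """Run (name, fn) stages in sequence and return the final state
    together with a dict of per-stage wall-clock timings."""
    timings, state = {}, None
    for name, fn in stages:
        start = time.perf_counter()
        state = fn(state)
        timings[name] = time.perf_counter() - start
    return state, timings


# Toy stand-ins for the stages a real example would step through
stages = [
    ("load",       lambda _:  list(range(100))),
    ("preprocess", lambda xs: [x / 100 for x in xs]),
    ("train",      lambda xs: sum(xs) / len(xs)),   # pretend "model"
    ("evaluate",   lambda m:  abs(m - 0.495) < 1e-6),
]
result, timings = run_stages(stages)
assert result, "evaluation stage failed"
```

The recorded timings dict could then be compared against stored baselines, combining the correctness and slow-down checks in a single CI run.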
@GilesStrong GilesStrong added enhancement New feature or request medium priority Not urgent but should be dealt with sooner rather than later labels Jun 5, 2020