
Reform tests according to Good Scientific Code practices #634

Open
Datseris opened this issue Jun 13, 2022 · 4 comments
Labels
good first issue, help wanted, simplification, tests

Comments

Datseris commented Jun 13, 2022

I am preparing a huge workshop on Good Scientific Code practices that I'll give next month (I will upload it to YouTube and share it here).

Unfortunately, our tests fall short of several good practices for a unit test suite. Specifically:

  • They are not "clean slate", which dictates that every test file should be runnable as-is, by itself, without depending on global state and without leaving anything behind in global state to be used later. Our test suite completely fails on that, given the massive amount of definitions in runtests.jl and the lack of using statements in the individual files (see the runtests.jl sketch after this list).
  • They aren't organized into subfolders for better navigation. In general, the organization of the tests suffers from unclear file names and content. E.g., what would you expect to be tested in the file graph_tests.jl? I'd expect all GraphSpace features, but that's nowhere near the case. Each space should have its own test file.
  • They rely far too much on random numbers, where we should be writing analytic tests instead. Deterministic, analytically resolvable tests are the most accurate ones: the expected outcome of a test comes from pen and paper instead of from running code. Right now we have a large number of tests that were generated like "seed an RNG, run some model code, write down the result as the test". What we should be doing instead is "devise a distribution of agents, analytically resolve their neighbors and actions given some rules and distances, and write tests against this analytically resolved output" (see the analytic-test sketch after this list). I recall we had problems many times throughout the lifespan of Agents.jl with random tests breaking. (Obviously, random components like the RNG in the model should also be tested, but if you can avoid randomness in the tests, you should.)
  • They are not computationally minimal, meaning that a test should do the least amount of computation needed to confirm whether some functionality works. E.g., look at the start of continuousSpace_tests.jl: 8 spaces are initialized, but as far as the subsequent tests are concerned, only 3 are needed. In a similar vein, the tests aren't memory minimal either. For the majority of the tests we use agents that have extra fields like weight, even though we never actually use these fields anywhere. E.g., see the "mutable graphs" tests, which never use the weight field yet assign it anyway.
  • They don't use SafeTestsets. In runtests.jl, the test files should be included like
    @safetestset "Benchmark Tests" begin include("benchmark_tests.jl") end
    
    and the only modules loaded directly in runtests.jl should be Test and SafeTestsets (a fuller sketch follows this list).
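To make the "clean slate", minimality, and SafeTestsets points concrete, here is a rough sketch of what a SafeTestsets-based runtests.jl and a self-contained test file could look like. The file names, folder layout, and the GridTester agent are hypothetical; only Test, SafeTestsets, and the v5-era Agents.jl API are assumed.

    # runtests.jl: loads nothing but Test and SafeTestsets.
    using Test, SafeTestsets

    # @safetestset evaluates each file in its own module, so no test file
    # can read from, or leave anything behind in, global state.
    @safetestset "Grid space"       begin include("spaces/grid_space_tests.jl") end
    @safetestset "Graph space"      begin include("spaces/graph_space_tests.jl") end
    @safetestset "Continuous space" begin include("spaces/continuous_space_tests.jl") end

    # spaces/grid_space_tests.jl: runnable as-is, by itself. It loads every
    # package it needs and defines its own minimal agent (no unused fields).
    using Agents, Test

    mutable struct GridTester <: AbstractAgent
        id::Int
        pos::NTuple{2,Int}
    end

    @testset "GridSpace basics" begin
        model = ABM(GridTester, GridSpace((3, 3)))
        a = add_agent!((1, 1), model)
        @test a.pos == (1, 1)
        @test nagents(model) == 1
    end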
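And for the randomness point, a sketch of an analytic neighbor test: positions, metric, and radius are chosen by hand so that the expected neighbor lists follow from pen and paper rather than from a recorded, seeded run. Again, the agent type is hypothetical and the v5-era Agents.jl API (GridSpace, add_agent!, nearby_ids) is assumed.

    using Agents, Test

    mutable struct NeighborTester <: AbstractAgent
        id::Int
        pos::NTuple{2,Int}
    end

    @testset "analytic nearby_ids" begin
        # Three agents at hand-picked positions on a non-periodic 5x5 grid.
        # With the default Chebyshev metric and r = 1, the only neighboring
        # pair is (1,1) <-> (1,2); the agent at (5,5) has no neighbors.
        model = ABM(NeighborTester, GridSpace((5, 5); periodic = false))
        a1 = add_agent!((1, 1), model)
        a2 = add_agent!((1, 2), model)
        a3 = add_agent!((5, 5), model)

        @test collect(nearby_ids(a1, model, 1)) == [a2.id]
        @test isempty(collect(nearby_ids(a3, model, 1)))
    end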
Datseris added the tests and clarity labels on Jun 13, 2022
Datseris added the help wanted and simplification labels and removed the clarity label on Jul 1, 2022
Datseris added a commit that referenced this issue Jul 31, 2022
* New `@agent` macro and its docs (doesn't work yet)

* fix incorrect argument order

* rework docstrings of agent subtypes

* move all agent definitions to their own space folder

* remove unnecessary old exports

* update docstring with recommended ways

* fix problems of definition within modules!!!

* correct description of parametric types

* add yet one more example with common dispatch

* make all agents from NoSpaceAgent

* re-write tutorial for `@agent`

* update Schelling example to use `@agent`

* use @agent in all examples

* add tests

* add changelog

* Address all review comments

* typo in changelog

* minor improve in type checking

* fix warn log tests

* fix model creation tests, also adhere to #634

* add a couple more tests to `@agent`

* add update messages

hbsmith commented Aug 25, 2022

@Datseris I'd be interested to know what was discussed at the workshop if you have more information on that

Datseris commented:

Here's a checklist of the properties good tests should have: JuliaDynamics/ChaosTools.jl#264


Datseris commented Oct 6, 2023

Oh god, sample! tests are ridiculous. We need to re-write them. They shouldn't be more than 10 lines of code.
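For reference, a roughly ten-line sketch of what a minimal, analytic sample! test could look like. It assumes the signature sample!(model, n, weight) with sampling by an agent field and with replacement; the Sampler agent is hypothetical. Giving one agent zero weight makes the expected outcome exact without seeding an RNG.

    using Agents, Test

    mutable struct Sampler <: AbstractAgent
        id::Int
        pos::NTuple{2,Int}
        weight::Float64
    end

    @testset "sample!" begin
        model = ABM(Sampler, GridSpace((2, 2)))
        add_agent!((1, 1), model, 1.0)  # weight 1.0: always eligible
        add_agent!((2, 2), model, 0.0)  # weight 0.0: can never be drawn
        sample!(model, 5, :weight)      # resample the population to 5 agents
        @test nagents(model) == 5
        @test all(a.weight == 1.0 for a in allagents(model))
    end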

Datseris added the good first issue label on Oct 6, 2023