
Clarify testing levels, supporting code and execution #3935

Closed
JJ opened this issue Aug 21, 2021 · 6 comments · Fixed by #4245
Assignees
Labels
docs Documentation issue (primary issue type) meta RFCs, general discussion, writing style, repository organization, etc.

Comments

@JJ
Contributor

JJ commented Aug 21, 2021

Problem or new feature

There are currently 4 levels of testing

  1. POD basic testing: correctness, plus quick checks such as looking for trailing whitespace at the end of lines. These live in the t directory.
  2. POD generation testing, checking that documentable is able to generate the whole documentation set. This is done on CircleCI, via its config file, using a Docker container with documentable installed. We have a limited number of credits on CircleCI, and they are sometimes exhausted by the end of the month.
  3. POD "author" testing, contained in the xt directory. These are run occasionally, generally by @coke, and are mainly long-running tests such as spell checks.
  4. The 4th level is tests for the code used in tests. Meta-tests, if I may. Right now these are included in the xt directory, and run... I have no idea. Never, probably. Besides, it's not totally clear which tests belong to "3" and which are these.

Suggestions

Here's my wishlist:

  1. Systematically run all tests, from 1 to 4.
  2. Clearly distinguish, in some way, the code needed for 1, 3, 4 and maybe 2.

This might be related to #2690; we don't need to test everything on every push, but we would definitely need to test everything before every "release"... if we did one. So working through that issue would really help solve this one.
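
For illustration, a rough sketch of what "run everything, levels 1 to 4" could look like; the prove and documentable invocations and the xt/helper path below are only placeholders, not what the repo actually provides today:

#!/bin/sh
# Hypothetical sketch: run all four testing levels in order.
set -e
prove -e "raku -I." t/           # level 1: basic POD tests
documentable start -a            # level 2: full documentation build (normally on CI)
prove -e "raku -I." xt/          # level 3: author tests (spell check, etc.)
prove -e "raku -I." xt/helper/   # level 4: meta-tests for the test-support code (placeholder path)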

@JJ JJ added the docs Documentation issue (primary issue type) label Aug 21, 2021
@JJ JJ changed the title from Clarify testing levels to Clarify testing levels, supporting code and execution Aug 21, 2021
@JJ JJ added the meta RFCs, general discussion, writing style, repository organization, etc. label Aug 21, 2021
@coke
Collaborator

coke commented Aug 21, 2021

I run the xt tests on changed files pretty regularly. I don't treat tests in xt differently: they are all tests that should be run regularly, but that shouldn't break master/main, either because they are slow, require extra installs, or because the online editor makes it easy to fail them. Based on your comments above, I think only the examples compilation one fits into 4.

@JJ
Contributor Author

JJ commented Aug 21, 2021 via email

@JJ
Contributor Author

JJ commented Aug 22, 2021

I think that something like splitting the xt/ directory in two, docs/ and helper/, would clarify the structure a bit. We could run the "helper" tests when the corresponding code changes (roughly as sketched below), and the author tests when we decide to do so.
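
A minimal sketch of that idea, assuming a hypothetical xt/helper/ directory for the meta-tests and lib/ plus util/ as the supporting code they cover:

#!/bin/sh
# Hypothetical sketch: run the "helper" (meta) tests only when the
# supporting code has changed since the previous commit.
if ! git diff --quiet HEAD~1 -- lib/ util/; then
    prove -e "raku -I." xt/helper/
fi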

@coke
Collaborator

coke commented Aug 22, 2021 via email

@JJ
Contributor Author

JJ commented Aug 22, 2021 via email

@coke coke self-assigned this Nov 15, 2022
@coke coke added this to the 2023-Quarter 1 milestone Mar 2, 2023
@coke
Collaborator

coke commented Mar 4, 2023

There are 3 kinds of tests these days (Documentable tests are no longer part of the repo):

CORE

Tests that should be run on every commit. CI has been disabled on Travis, and @dontlaugh is working on getting us an equivalent. Assuming limited cycles, these will be the CI tests.

AUTHOR

The author tests are separated because they either:

  • require special tooling (like aspell), or
  • are computationally intensive (like examples compilation),

but they are still run on the content.

MODULE

  • tests for the self-contained modules defined in lib/

Make targets

'test' - CORE
'xtest' - all three types

Frequency

Ideally, all content tests should be run for any changed files in the commit. All tests that can be run on a subset of files respect the TEST_FILES env var, so authors can run:

TEST_FILES="file1 file2 file3" make xtest

And get a very fast test run.

See util/test-modified.sh for a Linux/Mac script that tests only the files modified in the current checkout, or util/update-and-test, which pulls the latest changes, creates a retest script, and runs it; this catches any failures from commits since the last time you updated, and keeps retest around so you can test your changes and rerun.
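
For reference, the core idea behind util/test-modified.sh is roughly the following (a simplified sketch, not the actual script):

#!/bin/sh
# Simplified sketch: collect locally modified files under doc/ and
# feed them to the test targets via TEST_FILES.
changed=$(git diff --name-only HEAD -- doc/ | tr '\n' ' ')
if [ -n "$changed" ]; then
    TEST_FILES="$changed" make xtest
fi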

coke added a commit that referenced this issue Mar 4, 2023
This will close #3935 but includes other cleanup as well.
@coke coke mentioned this issue Mar 4, 2023
@cfa cfa closed this as completed in #4245 Mar 4, 2023
cfa pushed a commit that referenced this issue Mar 4, 2023
* Cleanup READMEs.

This will close #3935 but includes other cleanup as well.

* Note policy on incomplete snippets

Closes #4138
coke added a commit that referenced this issue May 9, 2023
coke added a commit that referenced this issue Jul 30, 2023