
add CI test with full realistic workflow on sample data #112

Closed

bertsky opened this issue Jun 17, 2020 · 5 comments
Labels: enhancement

Comments

bertsky (Collaborator) commented Jun 17, 2020

Most submodules are CI-tested individually. So when we make a new release here, as long as we don't negligently integrate failing versions, we can have more confidence with each update.

However, there are still two classes of errors that will evade this scheme:

  1. within-module regressions not covered by their automatic unit tests
  2. cross-module regressions (i.e. unmet implicit interdependencies between versions)

For the latter, we could reduce our risk by introducing workflow tests that run many different modules on a small set of sample data. Since the CI job for PRs is already set up to use `make docker-maximum`, one could run any processor(s) after that. And we already have enough data in `core/repo/assets/` and workflows in `workflow-configuration/`. So the test would run such a workflow on (say) `data/kant_aufklaerung_1784`, check that it did not crash, check that the target file group exists for all pages, and perhaps validate the workspace.
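
A rough sketch of what that test could look like (assumptions: the assets path, `ocrd-dummy` as a stand-in for a real workflow, and `OCR-D-OCR` as the target file group):

```sh
# sketch only: run a trivial workflow on the sample workspace and verify results
set -e
cd core/repo/assets/data/kant_aufklaerung_1784/data   # assumed assets layout
ocrd-dummy -I OCR-D-IMG -O OCR-D-OCR                  # stand-in for the real workflow
# the target file group must exist, with one file per page
test "$(ocrd workspace find --file-grp OCR-D-OCR | wc -l)" -eq \
     "$(ocrd workspace list-page | wc -l)"
ocrd workspace validate                               # check METS/PAGE consistency
```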

(We don't include any models in the standard distribution, though. So one would either have to use a very simple workflow, or add an extra step to install segmentation and recognition models.)
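
With current core versions, that extra step could use the resource manager (the model name here is just an example):

```sh
# sketch: download a recognition model as an extra CI step (example resource)
ocrd resmgr download ocrd-tesserocr-recognize Fraktur.traineddata
```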

bertsky added the enhancement label Jun 17, 2020
stweil (Collaborator) commented Jun 17, 2020

Some questions:

  • When should such a CI test be triggered? Only for releases?
  • Should it use a Docker container or use a separate build?
  • On which platform should it run? Circle CI? Travis CI? GitHub Actions?

bertsky (Collaborator, Author) commented Jun 17, 2020

> When should such a CI test be triggered? Only for releases?

As outlined above: for every new commit on a PR branch (via the CircleCI `build` rule).

> Should it use a Docker container or use a separate build?

Good question. I have a feeling they are almost always the same, but you never know. So let's do the "harder" test and run all of the above within `docker run ocrd/all:maximum` (with appropriate mount points etc.).
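
For example (mount point and user mapping being assumptions):

```sh
# sketch: run the workflow test inside the prebuilt maximum image
docker run --rm -u $(id -u) \
    -v "$PWD/kant_aufklaerung_1784:/data" -w /data \
    ocrd/all:maximum \
    bash -c "ocrd-dummy -I OCR-D-IMG -O OCR-D-OCR && ocrd workspace validate"
```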

> On which platform should it run? Circle CI? Travis CI? GitHub Actions?

I am still with CircleCI, as everything has been set up there already, and one could easily add more platforms in the future.
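
For illustration, that could be just one more job in the existing config (a sketch only; the job name and `test-workflow.sh` are hypothetical):

```yaml
# sketch of a .circleci/config.yml addition; job and script names are hypothetical
version: 2.1
jobs:
  test-workflow:
    machine:
      image: ubuntu-2004:current
    steps:
      - checkout
      - run: make docker-maximum     # build ocrd/all:maximum as on PRs today
      - run: ./test-workflow.sh      # hypothetical script running the sample workflow
workflows:
  build:
    jobs:
      - test-workflow
```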

stweil (Collaborator) commented Jun 17, 2020

> I am still with CircleCI

Personally, I don't have access to the CircleCI results. Several times I started to set up an account, but always decided against it because I did not want to grant them the required rights.

M3ssman (Contributor) commented Jun 19, 2020

Since this is designed as an OCR-D container system test, the workflow should focus on module integration, especially on known incompatibilities, and, to provide some sort of regression testing, on previously solved problems.

bertsky (Collaborator, Author) commented Jun 20, 2023

> Since this is designed as an OCR-D container system test, the workflow should focus on module integration, especially on known incompatibilities, and, to provide some sort of regression testing, on previously solved problems.

We probably could do more in this regard, but #362 contains a first-order approximation (`make test-workflow`).

bertsky closed this as completed Jun 20, 2023