feat: add enforce support for pipeline schedule #561
Merged
Commits (11):

- 402d637 (amimas) test: add test case that includes 'enforce: true' config
- 17216df (amimas) test: add assertion to enfoce for total schedule count
- 4795ad5 (amimas) feat: add enforce config support for pipeline schedules
- 573e0ca (amimas) test: additional test case for enforce config
- 2be4d9f (amimas) docs: add schedule enforce documentation
- 5606030 (amimas) chore: reformat code by using black formatter
- 621cb45 (gdubicki) Merge branch 'main' into issue-539
- c2df93c (amimas) ci: install types-setuptools to fix mypy error analyzing setuptools
- a6900e6 (amimas) Revert "ci: install types-setuptools to fix mypy error analyzing setu…"
- 2b8b4fe (amimas) chore: address type checking issue reported by mypy
- 2f6c4b8 (amimas) docs: remove deprecation warning about delete config
**amimas:** @gdubicki - It seems the tests are currently set up to depend on earlier test cases' results, which is why this assertion looks odd to me. I expected each test case to be self-contained and not affect other tests. I'm not sure if this was done intentionally; thought I'd mention it.
**gdubicki:** This is a result of the fact that we are often using fixtures that have a pytest scope higher than "function". In particular, the fixture for a project has scope "class", so the schedules created in one method of a test class such as TestSchedules will in fact be retained in the next test method.
To fix that we would have to use "function" scope for near-perfect separation of each test, but we are talking about tests against a real-world GitLab instance, where creating and deleting projects in GitLab is pretty slow.
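The effect of the two scopes can be sketched without pytest: a "class"-scoped fixture is created once and reused by every test in the class, so state such as schedules leaks between tests, while a "function"-scoped fixture is rebuilt (slowly, against a real instance) for each test. A minimal simulation, with illustrative names rather than gitlabform's actual fixtures:

```python
# Simulate pytest fixture scopes with a per-scope creation counter.
# "class" scope: one instance shared by all tests in the class;
# "function" scope: a fresh instance per test.

created = {"class": 0, "function": 0}

class FakeProject:
    """Stands in for a real GitLab project; creation is the slow part."""
    def __init__(self, scope):
        created[scope] += 1
        self.schedules = []

def run_test_class(scope, n_tests=3):
    cached = None
    for _ in range(n_tests):
        if scope == "class":
            if cached is None:          # created once, reused across tests
                cached = FakeProject(scope)
            project = cached
        else:
            project = FakeProject(scope)  # fresh (and slow) per test
        project.schedules.append("schedule")  # each test adds one schedule
    return project.schedules

class_schedules = run_test_class("class")
function_schedules = run_test_class("function")

print(created["class"], len(class_schedules))        # 1 project, 3 schedules accumulate
print(created["function"], len(function_schedules))  # 3 projects, 1 schedule each
```

With "class" scope the project is built once but three schedules pile up on it; with "function" scope each test sees exactly one schedule, at the cost of three project creations.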
I just did a test of how this would work out in practice for test_schedules.py:

Code as-is, with project and group having scope "class":
- 1st run: 7.708s
- 2nd run: 6.280s
- 3rd run: 7.390s

Code with project and group scope changed to "function":
- 1st run: 14.556s
- 2nd run: 18.286s
- 3rd run: 18.335s

So the test time effectively increases 2-2.5 times.
The total current time of running tests in GitHub Actions is ~3 minutes for standard and ~1.5 minutes for premium ones, so including the ~4 minutes of GitLab provisioning time the whole pipeline time would increase from ~7 minutes to ~10 minutes...
Because of that we are now using 2 ways of dealing with the issue of tests affecting each other, only where it affects the tests:
a) use "function" scoped fixtures on specific single tests (see the uses of the project_for_function fixture),
b) make the assertions about the initial state more relaxed (see https://github.com/gitlabform/gitlabform/blob/main/tests/acceptance/standard/test_members.py#L12-L12 as an example).
Of course this is not perfect, but it's the best compromise between correctness and performance of the tests that I was able to develop at the time. If you think that we should make changes then feel free to share your ideas, @amimas. :)
**amimas:** Thanks @gdubicki for the details and the performance tests. I also realized this came down to fixture scope after looking into it a bit, but didn't have time to learn what changes would be needed if the scope were changed.
I'm starting to think what you've got right now actually makes sense. These are not mocked unit tests; as you said, we're running them against a real GitLab instance. I'm also not sure there's a real need for running individual tests as opposed to running all tests in a single file. Isolation would probably make testing and development faster, but is it worth it? So it is a balance between isolated tests and the performance of the entire test suite.
I don't have any other ideas at the moment. Being new to the code base, it was a bit confusing at the beginning. I had to go through existing test cases and then figure out whether the assertions in the new test case make sense. What might help others is an additional section in the contributing docs about the general setup and expectations of the acceptance tests. I have only played with one test file through this PR so far; if I can think of any improvements to the docs/development guide in the future, I'll open a new PR accordingly.
**gdubicki:** I agree, we should at least document it. I'll try to do it soon.