Execute stale tests #4257
Conversation
I've included some more patches to clean up how our CI systems execute tests. Previously, it was a jungle of setting environment variables and subsequently running different tests in a fixed order, which was quite cumbersome to maintain across the two CI systems.
Force-pushed from 773b608 to 64b46d3
Force-pushed from 4edf64e to c60f022
Tests fail due to the password variable not being set in AppVeyor. We should change the secure variables in AppVeyor to enable execution.
Please hold until #4282 is merged, due to rebasing being a major pain.
Rebased due to #4282. This can be merged now.
Force-pushed from 739875b to 5d1475c
appveyor.yml (outdated):
$env:GITTEST_REMOTE_PROXY_URL = "http://foo:bar@localhost:8080"
ctest -V -R libgit2_clar-proxy_credentials_in_url
$env:GITTEST_REMOTE_PROXY_URL = "http://localhost:8080"
$env:GITTEST_REMOTE_PROXY_URL = "localhost:8080"
This was testing two different things: giving the proxy creds in the URL and reading them from the user. This eliminates one of those tests.
It does not, as I've refactored the test to avoid exactly that. I have split up the test, and we now have one where we put the credentials in the URL and one where the user provides them via the credential callback.
Force-pushed from 5a34941 to 8071b80
I've stripped some parts of this PR to only include the most beneficial fixups.
Hmm. How many seconds is a few? This adds up a lot on the Windows CI, since this is creating a lot of files on Windows, which is terribly slow on NTFS. Multiply that out by the number of variations we run for every PR, times every PR, and ugh, this gets expensive fast. I think we should take all the other changes but this one. Another option is to make this an
I agree with you that we should strive to keep it simple, so I think a slowness-factor is kind of overengineering the issue. I'll drop that commit.
Our tracing architecture is not built by default, causing Travis CI to not execute some code and to skip several tests. As AppVeyor already enables the tracing architecture when building the code, we should do the same for Travis CI so that this code is tested on macOS and Linux as well. Add "-DENABLE_TRACE=ON" to the release-build options of our Travis configuration.
Some function bodies of tests which are not applicable to the Win32 platform are completely #ifdef'd out instead of calling `cl_skip()`. This leaves us with no indication that these tests are not being executed at all and may thus cause decreased scrutiny when investigating skipped tests. Improve the situation by calling `cl_skip()` instead of just doing nothing.
When the function `expect_iterator_items` encounters more items than expected, we simply break the loop. This causes us to trigger an assert later on which has no meaningful message attached, which is annoying when trying to locate the root cause of the error. Instead, directly assert inside the loop that the current count is still smaller than or equal to the expected count.
The test `iterator::workdir::filesystem_gunk` is usually not executed, as it is guarded by the environment variable "GITTEST_INVASIVE_SPEED" due to its effects on speed. As such, it has become stale and does not account for new references which have meanwhile been added to the testrepo, causing it to fail. Fix this by raising the number of expected references to 15.
Our performance tests (or, to be more concrete, our single performance test) are not built by default, as they are always #ifdef'd out. While it is true that we don't want to run performance tests by default, not compiling them at all may cause code rot and is thus an unfavorable way to handle this. We can easily improve the situation: this commit removes the #ifdef, causing the code to always be compiled. Furthermore, we add `-xperf` to the default command line parameters of `generate.py`, causing the tests to be excluded by default. With this approach, we are now able to execute the performance tests by passing `-sperf` to `libgit2_clar`. Unfortunately, we cannot execute the performance tests on Travis or AppVeyor, as they rely on the history of the libgit2 repository being available. As both CI systems only do a shallow clone, though, that history is missing.
We support two types of passing credentials to the proxy, either via the URL or explicitly by specifying user and password. We test these types by modifying the proxy URL and executing the tests twice, which is in fact unnecessary and requires us to maintain the list of environment variables and test executions across multiple CI infrastructures. To fix the situation, we can just always pass the host, port, user and password to the tests. The tests can then assemble the complete URL either with or without included credentials, allowing us to test both cases in-process.
Right now, we test our credential callback code twice, once via SSH on localhost and once via a non-existent GitHub repository. While it makes sense for the first URL to be configurable, it does not make sense to hard-code the non-existent repository, which requires us to invoke the tests multiple times. Instead, we can just inline the URL into another set of tests.
Some tests shall be run against our own SSH server we spin up in Travis. As those need to be run separate from our previous tests which run against git-daemon, we have to do this in a separate step. Instead of bundling all that knowledge in the CI script, move it into the test build instructions by creating a new test target.
Thanks as always.
I've taken a quick stab at our tests which are skipped by default and found some things which could be improved. The main outcome of this is that our CI jobs will now execute all invasive tests as well as our performance tests (or rather, our performance test, singular). I think it is desirable to always execute as many of our tests as possible in CI jobs. This is especially true if we'd otherwise only save a few seconds, as is the case for our currently skipped tests.