[Tests] Add regression test suite for OS X and Linux #10
Conversation
1. (Linux & OS X) Add a directory with the name of your test to the `Tests/Fixtures/Sources/` directory. For example, if your test is named "TwoFailingTestCases", make a directory named `Tests/Fixtures/Sources/TwoFailingTestCases`.
1. (Linux & OS X) Add a `main.swift` file to the directory you added above. The file should contain comments beginning with `// `. The test runner will verify that the output of `main.swift` matches your comments exactly. This is all you need to do to run tests on Linux--you may run the `build_script.py` example above to confirm the tests pass.
1. (OS X) If you want your tests to run on OS X, open the `XCTest.xcodeproj` Xcode project and duplicate the `SingleFailingTestCase` target. Rename the target to match the name of your test. For example, if your test is named "TwoFailingTestCases", name the target `TwoFailingTestCases` as well.
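To illustrate the annotation format described in the steps above, here is a hypothetical Python sketch of how a runner could extract the expected output from the leading `// ` comments in a fixture's `main.swift`. The function name is invented; the real logic lives in `Tests/test.py`.

```python
# Hypothetical sketch -- not the actual Tests/test.py implementation.
# Collects the "// "-prefixed annotation lines from the top of a
# fixture's main.swift; a runner would compare them to the program's
# actual output.

def expected_output(main_swift_source):
    """Return the expected output encoded in leading '// ' comments."""
    expected = []
    for line in main_swift_source.splitlines():
        if line.startswith("// "):
            expected.append(line[len("// "):])
        else:
            # Assumption: annotations end at the first non-comment line.
            break
    return "\n".join(expected)


fixture = """// Test Case 'failure' failed.
// Fatal error encountered.
import SwiftXCTest
"""
# Prints the two annotation lines, without their "// " prefixes.
print(expected_output(fixture))
```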
If I don't want to run the tests in Xcode, are steps 3, 4 & 5 needed?
`Tests/test.py` works by running each executable in the `Tests/Fixtures/Products/` directory. Those need to be built before the script is run. The `SingleFailingTestCase` Xcode target compiles `Tests/Fixtures/Sources/SingleFailingTestCase/main.swift` into an executable and installs it in the Products directory. It also ensures that `SwiftXCTest.framework` is linked to that executable properly.

So in order to run the tests on OS X, adding a new Xcode target (steps 3 and 4) is necessary, because it is responsible for putting an executable in the Products directory. Adding the new target to the aggregate target (step 5) is necessary to eliminate the need to build the newly added target manually before running `Tests/test.py`.
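The run-and-compare flow described above could be sketched roughly as follows. This is a hypothetical illustration, not the real `Tests/test.py`; the helper name is invented, and only the directory layout mirrors what the thread describes.

```python
import os
import subprocess

# Hypothetical sketch -- not the real Tests/test.py. Runs one prebuilt
# fixture executable from a Products directory and compares its stdout
# against the "// " annotations in the matching Sources/<name>/main.swift.

def check_fixture(products_dir, sources_dir, name):
    executable = os.path.join(products_dir, name)
    annotated = os.path.join(sources_dir, name, "main.swift")

    # The Xcode targets (or the Linux build) must already have produced
    # the executable; this script only runs it and captures its output.
    result = subprocess.run([executable], capture_output=True, text=True)

    with open(annotated) as f:
        expected = [line[len("// "):].rstrip("\n")
                    for line in f if line.startswith("// ")]

    return result.stdout.splitlines() == expected
```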
Also, your comment reminded me: because the executables are compiled and placed in the Products directory every time the tests are run (whether on OS X or Linux), the contents of that directory should be added to the `.gitignore`. I'll amend that change to commit b52c686.
Right, but since you can build the executables on Linux, isn't it possible to build them on OS X without adding them to `*.xcodeproj`?

(Just trying to understand this better, and perhaps wondering if those steps shouldn't be marked "Xcode" instead of "OS X".)
Yeah, no worries, thanks for all the feedback!! 💯
> Right, but since you can build the executables on Linux, isn't it possible to build them on OS X without adding them to `*.xcodeproj`?

I actually would love to achieve all this without using Xcode projects, but I can't for the life of me figure out how to compile and link the SwiftXCTest.framework build products to the executables.

> (Just trying to understand this better, and perhaps wondering if those steps shouldn't be marked "Xcode" instead of "OS X".)

They could be. In practice, however, the two are the same--the SwiftXCTest.framework build system is tied to Xcode on OS X at the moment. As far as I can tell, it's not a Swift package, and is not meant to be built using llbuild--but correct me if I'm wrong here! 😛
It would be nice to include a tested binary of XCTest in the repo to run the tests instead of unittest. It can be periodically updated as newer versions are tested and trusted. The main reason for this: once there is a better reporting API available (literally anything that isn't plain printing to stdout), it will likely be annoying to call into it and make expectations from Python. But since the build script is already in Python and no such API exists right now, this seems like a good first step.
Yup! My main motivation for this test runner is to be able to verify the current behavior without first changing the production code. This is an integration test suite, one that will hopefully become less important as swift-corelibs-xctest is refactored to be more unit-testable.
I actually don't think this is the case. Imagine testing a dot reporter:

```swift
/// Tests/Fixtures/Sources/DotReporterTests/main.swift
// .F..
// %exit-status: 1

#if os(Linux)
import XCTest
#else
import SwiftXCTest
#endif

class OneFailingThreePassingTestCase: XCTestCase {
    // ...four test methods here. The second one fails.
}

XCTRegisterReporter(DotReporter()) // Imagine this is the reporter API
XCTMain([OneFailingThreePassingTestCase()])
```

And there you have it! This is all you need to test on Linux. (You'd have to add an Xcode target to get this tested on OS X.)
Adds a test runner that compares annotations in the source code of an XCTest file to actual output when that source code is compiled and run. This acts as a regression test suite for the project.

Adds two new Xcode targets:

1. SingleFailingTestCase: An executable that runs `XCTMain()` with a failing test case. It is built and installed to the `Tests/Fixtures/Products/` directory.
2. SwiftXCTestTests: An aggregate target that builds SingleFailingTestCase (and, in the future, all test fixtures), then runs the test runner, `Tests/test.py`.

`Tests/test.py` is a Python unit test script that runs all executables in the `Tests/Fixtures/Products/` directory, then compares their output to source code annotations in corresponding directories in `Tests/Fixtures/Sources/`.
By passing `--test` to the build script, the Tests/test.py regression tests are run. This allows the test suite to be run on Linux.
I see--in that case the Python suite doesn't ever need to go away. It can instead stay very thin and only verify that the bare minimum reports back correctly; the rest of the module can be self-tested. In self-tests, only methods from the bare minimum can be used (no fancy matchers etc., nothing untested by the integration suite). I am not sure how to enforce this systematically in a clean way (splitting into two modules would work, but that's unacceptable for clients).
If we are going to use a non-XCTest-based solution, then IMHO we should use
It is also pretty conceptually similar to this approach; we shouldn't roll another custom alternative. Of course I am biased. :)
I'd love to use it. Let's continue the discussion on the overall approach on the mailing list. I feel that a non-XCTest-based approach is the most effective way to catch regressions without modifying the production code in XCTest itself. It'd also provide value over time, even after we add a unit test suite. Would love to hear more debate on the topic, however. In any case, expect a pull request for a
Sounds great, thanks Brian. I agree it's worth discussing this on the mailing list to see what the core XCTest owners think.
See the individual commit messages for details, and the instructions added to the README for an idea of how the test suite works.
This approach differs from the one suggested on the mailing list by @ddunbar, in which XCTest itself is used to test XCTest. I believe this approach is simpler to implement and maintain.
In addition, using XCTest to test itself is a dangerous proposition: one or more bugs in XCTest may cause regressions, which may then not be caught because of those same bugs.