Add docs for writing test cases #791 (Merged)
Changes from all commits (25 commits, all by smaye81):

* `cf57e50` Docs
* `c9d9888` Docs
* `019b039` Merge main
* `20d31ff` Docs
* `445679e` Formatting
* `ab2b497` Formatting
* `6bd8731` Formatting
* `7979237` Update docs/authoring_test_cases.md
* `1898110` Update docs/authoring_test_cases.md
* `cc90697` Docs
* `72b40b1` Example
* `38cdc89` Docs
* `46ffbdd` Docs
* `8c57f24` Docs
* `f5b642d` Docs
* `7dff23e` Docs
* `55204cb` Docs
* `e44c72c` Merge branch 'main' into sayers/writing_tests_docs
* `5fb8212` Fix tests
* `8b1177d` Merge branch 'main' into sayers/writing_tests_docs
* `dfebe22` Update docs/authoring_test_cases.md
* `c4c2b60` Docs
* `c360950` Docs
* `5d14154` Docs
* `06cd059` Docs
`docs/authoring_test_cases.md` (new file, +206 lines):
# Authoring test cases

Test cases for the conformance runner are configured in YAML files and are located [here][test-suite-dir]. Each file represents a single suite of tests.

Authoring new test cases involves either a new file, if a new suite is warranted, or an additional test case in an existing file/suite.

For the Protobuf definitions for the conformance runner, see the [connectrpc repository][connectrpc-repo] in the BSR.
## Test suites

A suite represents a series of tests that all test the same general functionality (cancellation, timeouts, etc.). Each test suite YAML file represents a [`TestSuite`][test-suite] message, and the schema of this message defines the schema of the YAML file. The representation of a Protobuf message in YAML is the same as its [JSON format][json-docs].

When defining a test suite in a YAML file, the only required values are the suite name (which should be unique across all suites) and at least one test case.

In addition, each suite can be configured with various directives that apply to all tests within that suite. These directives can be used to constrain the tests run by the conformance runner or to signal configurations the runner may need in order to execute the tests. The runner uses these directives to expand the tests in the suite into multiple permutations, so a single test defined in a suite will be run several times across various permutations.
The following directives constrain the tests within a suite:

* `mode` specifies that a suite applies only to a specific mode (i.e. client or server). For example, if you are writing a suite that targets only clients under test, specify `mode: TEST_MODE_CLIENT`; the tests in the suite will then run only when the `mode` passed to the runner on the command line is `client`. If not specified in a suite, tests run regardless of the command-line `mode`.

* `relevantProtocols` limits tests to specific protocols, such as **Connect**, **gRPC**, or **gRPC-web**. If not specified, tests run for all protocols.

* `relevantHttpVersions` limits tests to certain HTTP versions, such as **HTTP/1.1**, **HTTP/2**, or **HTTP/3**. If not specified, tests run for all HTTP versions.

* `relevantCodecs` limits tests to certain codec formats, such as **JSON** or **binary**. If not specified, tests run for all codecs.

* `relevantCompressions` limits tests to certain compression algorithms, such as **gzip**, **brotli**, or **snappy**. If not specified, tests run for all compressions.

* `connectVersionMode` allows you to either require or ignore validation of the Connect version header or query param. Leave this unspecified if the suite is agnostic to this validation behavior.
The following `reliesOn` directives signal to the test runner how the reference client or server should be configured when running tests:

* `reliesOnTls` specifies that a suite relies on TLS. If `true`, the test cases will not be run against non-TLS server configurations. Defaults to `false`.

* `reliesOnTlsClientCerts` specifies that the suite relies on the _client_ using TLS certificates to authenticate with the server. Note that if this is set to `true`, `reliesOnTls` must also be `true`. Defaults to `false`.

* `reliesOnConnectGet` specifies that the suite relies on the Connect GET protocol. Defaults to `false`.

* `reliesOnMessageReceiveLimit` specifies that the suite relies on support for limiting the size of received messages. When `true`, the `mode` property must be set to indicate whether the client or server should support the limit. Defaults to `false`.
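As an illustration of how these `reliesOn` directives combine, the following sketch (a hypothetical suite, not one from the repository) marks a client-mode suite as depending on message-receive limits and TLS client certificates:

```yaml
name: Hypothetical Client TLS Limits
# reliesOnMessageReceiveLimit requires that mode be set, so the runner
# knows which side under test must enforce the limit.
mode: TEST_MODE_CLIENT
reliesOnMessageReceiveLimit: true
# reliesOnTlsClientCerts requires reliesOnTls to also be true.
reliesOnTls: true
reliesOnTlsClientCerts: true
testCases:
  # ... at least one test case is required ...
```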
## Test cases

Test cases are specified in the `testCases` property of the suite. Each test case starts with the `request` property, which defines the request sent to a client during the test run. Each `request` must specify the following fields:

* `testName` - For naming conventions, see [below](#naming-conventions).
* `service` - The fully-qualified name of the service this test will interact with. For now, this is always `connectrpc.conformance.v1.ConformanceService`.
* `method` - A string specifying the method on `service` that will be called.
* `streamType` - One of `STREAM_TYPE_UNARY`, `STREAM_TYPE_CLIENT_STREAM`, `STREAM_TYPE_SERVER_STREAM`, `STREAM_TYPE_HALF_DUPLEX_BIDI_STREAM`, or `STREAM_TYPE_FULL_DUPLEX_BIDI_STREAM`.

Once the above are specified, you can define the rest of your request. For a full list of fields to specify in the request, see the [`ClientCompatRequest`][client-compat-request] message in the Conformance Protobuf definitions.
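Putting the required fields together, a minimal test case skeleton might look like the sketch below (the test name is hypothetical, and a real test would also define request messages and/or a response definition):

```yaml
testCases:
  - request:
      # Hypothetical name following the conventions described below.
      testName: unary/minimal-example
      service: connectrpc.conformance.v1.ConformanceService
      method: Unary
      streamType: STREAM_TYPE_UNARY
```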
### Raw payloads

Two message types in the test case schema are worth noting here: [`RawHTTPRequest`][raw-http-request] and [`RawHTTPResponse`][raw-http-response]. Both allow you to define a round trip outside the scope of the Connect framework. They are used for sending or receiving anomalous payloads during a test.

#### `RawHTTPRequest`

The [`RawHTTPRequest`][raw-http-request] message can be set on the `request` property in a test case. Its purpose is to model a raw HTTP request. This can be used to craft custom requests with odd properties (including certain kinds of malformed requests) to test edge cases in servers. This value is only handled by the reference client and should only appear in files where `mode` is set to `TEST_MODE_SERVER`.
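As a rough sketch only, a server test using a raw request might look like the following. The nested field names under `rawRequest` are assumptions for illustration; consult the linked [`RawHTTPRequest`][raw-http-request] definition for the actual schema:

```yaml
testCases:
  - request:
      testName: unary/malformed-request   # hypothetical
      service: connectrpc.conformance.v1.ConformanceService
      method: Unary
      streamType: STREAM_TYPE_UNARY
      # Field names below are illustrative, not verified against the schema.
      rawRequest:
        verb: POST
        uri: /connectrpc.conformance.v1.ConformanceService/Unary
```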
#### `RawHTTPResponse`

The [`RawHTTPResponse`][raw-http-response] message is the analog of [`RawHTTPRequest`][raw-http-request]. It can be set in the response definition for a unary or streaming RPC type, and its purpose is to model a raw HTTP response. This can be used to craft custom responses with odd properties (including aberrant HTTP codes or certain kinds of malformed responses) to test edge cases in clients. This value is only handled by the reference server and should only appear in files where `mode` is set to `TEST_MODE_CLIENT`.

> [!IMPORTANT]
> The `ClientCompatRequest` message contains some fields that should _not_ be specified in test cases.
> * Fields 1 through 8 in the message definition are automatically populated by the test runner.
>   If a test is specific to one of these values, it should instead be indicated in the directives for the test suite itself.
## Naming conventions

Test suites and the tests within them follow a loose naming convention.

### Test files (Suites)

Test files should be named according to the suite inside and the general functionality being tested. Additionally:

* If a suite applies only to a certain protocol, the file name should be prefixed with that protocol. If the suite applies to all protocols, this can be omitted.
* If a suite contains only client or server tests, the file name prefix should include `client` or `server`. If the suite is for both client and server, this can be omitted.

For example:

* `connect_with_get.yaml` contains tests of GET requests for the Connect protocol only.
* `grpc_web_client.yaml` contains client tests for the gRPC-web protocol.
* `client_message_size.yaml` contains a suite of client-only tests across all protocols.
### Tests

Test names should be hyphen-delimited. If a suite contains tests of multiple stream types, the test name should be prefixed with the stream type and a slash (`/`).

> [!NOTE]
> For bidi tests with separate cases for half-duplex vs. full-duplex operation, you should also add `full-duplex` or `half-duplex` to the test name.

For example:

* `unary/error`
* `server-stream/error-with-responses`
* `bidi-stream/full-duplex/stream-error-returns-success-http-code`

If a suite contains tests for just a single stream type, the stream type can be omitted from the test name.

These conventions allow more granular control over running tests via the conformance runner, such as running only the tests for a specific protocol or only the unary tests within a suite.
## Expected responses

The expected response for a test is auto-generated based on the request details: the conformance runner determines what the response should be according to the values specified in the test suite and individual test cases.

You can also explicitly specify your own expected response directly in the test definition. However, this is typically only needed for exception test cases. If the expected response mostly restates the response definition that appears in the request, rely on auto-generation if possible; an explicit expected response can make the test YAML overly verbose and harder to read, write, and maintain.

If the test induces behavior that prevents the server from sending, or the client from receiving, the full response definition, it is necessary to define the expected response explicitly. Timeouts, cancellations, and exceeding message size limits are good examples of this.

If you do need to specify an explicit response, define an `expectedResponse` block for your test case; this overrides the auto-generated expected response in the test runner.
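For instance, a timeout test might override the auto-generated expectation with an explicit error, roughly like this sketch (the test case and field values are illustrative; see the linked Protobuf definitions for the exact response schema):

```yaml
testCases:
  - request:
      testName: unary/timeout   # hypothetical
      service: connectrpc.conformance.v1.ConformanceService
      method: Unary
      streamType: STREAM_TYPE_UNARY
      timeoutMs: 1
    # The full response definition can't be received before the deadline,
    # so state the expected error explicitly instead of auto-generating it.
    expectedResponse:
      error:
        code: CODE_DEADLINE_EXCEEDED
```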
To see tests denoting an explicit response, search the [test suites][test-suite-dir] directory for the word `expectedResponse`.
## Example

Taking all of the above into account, here is an example test suite that verifies a server returns the specified headers and trailers. The test below will run only when `mode` is `server` and will be limited to the Connect protocol using the JSON format and identity compression. It also requires that the server verify the Connect version header.

```yaml
name: Example Test Suite
# Constrain to only servers under test.
mode: TEST_MODE_SERVER
# Constrain these tests to only run over the Connect protocol.
relevantProtocols:
  - PROTOCOL_CONNECT
# Constrain these tests to only use 'identity' compression.
relevantCompressions:
  - COMPRESSION_IDENTITY
# Constrain these tests to only use the JSON codec.
relevantCodecs:
  - CODEC_JSON
# Require Connect version header verification.
connectVersionMode: CONNECT_VERSION_MODE_REQUIRE
testCases:
  - request:
      testName: returns-headers-and-trailers
      service: connectrpc.conformance.v1.ConformanceService
      method: Unary
      streamType: STREAM_TYPE_UNARY
      requestMessages:
        - "@type": type.googleapis.com/connectrpc.conformance.v1.UnaryRequest
          responseDefinition:
            responseHeaders:
              - name: x-custom-header
                value: ["foo"]
            responseData: "dGVzdCByZXNwb25zZQ=="
            responseTrailers:
              - name: x-custom-trailer
                value: ["bing"]
```
[test-suite-dir]: https://github.com/connectrpc/conformance/tree/main/internal/app/connectconformance/testsuites/data
[client-compat-request]: https://buf.build/connectrpc/conformance/docs/main:connectrpc.conformance.v1#connectrpc.conformance.v1.ClientCompatRequest
[raw-http-request]: https://buf.build/connectrpc/conformance/docs/main:connectrpc.conformance.v1#connectrpc.conformance.v1.RawHTTPRequest
[raw-http-response]: https://buf.build/connectrpc/conformance/docs/main:connectrpc.conformance.v1#connectrpc.conformance.v1.RawHTTPResponse
[test-suite]: https://buf.build/connectrpc/conformance/docs/main:connectrpc.conformance.v1#connectrpc.conformance.v1.TestSuite
[connectrpc-repo]: https://buf.build/connectrpc/conformance
[json-docs]: https://protobuf.dev/programming-guides/proto3/#json
The diff also touches the Go test runner, adding error context and up-front validation of required test case fields:

```diff
@@ -160,7 +160,7 @@ func (lib *testCaseLibrary) expandSuite(suite *conformancev1.TestSuite, configCa
 	if _, ok := configCases[cfgCase]; ok {
 		namePrefix := generateTestCasePrefix(suite, cfgCase)
 		if err := lib.expandCases(cfgCase, namePrefix, suite.TestCases); err != nil {
-			return err
+			return fmt.Errorf("failed to expand test cases for suite %s: %w", suite.Name, err)
 		}
 	}
 }
@@ -173,7 +173,19 @@ func (lib *testCaseLibrary) expandSuite(suite *conformancev1.TestSuite, configCa
 }

 func (lib *testCaseLibrary) expandCases(cfgCase configCase, namePrefix []string, testCases []*conformancev1.TestCase) error {
-	for _, testCase := range testCases {
+	for i, testCase := range testCases {
+		if testCase.Request.TestName == "" {
+			return fmt.Errorf("test case #%d: test case has no name", i+1)
+		}
+		if testCase.Request.Service == "" {
+			return fmt.Errorf("test case #%d: test %s has no service specified", i+1, testCase.Request.TestName)
+		}
+		if testCase.Request.Method == "" {
+			return fmt.Errorf("test case #%d: test %s has no method specified", i+1, testCase.Request.TestName)
+		}
+		if testCase.Request.StreamType == conformancev1.StreamType_STREAM_TYPE_UNSPECIFIED {
+			return fmt.Errorf("test case #%d: test %s has no stream type specified", i+1, testCase.Request.TestName)
+		}
 		if testCase.Request.StreamType != cfgCase.StreamType {
 			continue
 		}
@@ -441,6 +453,7 @@ func populateExpectedResponse(testCase *conformancev1.TestCase) error {
 	// message in this situation, where the response data value is some fixed string (such as "no response definition")
 	// and whose request info will still be present, but we expect it to indicate zero request messages.
 	if len(testCase.Request.RequestMessages) == 0 {
+		testCase.ExpectedResponse = &conformancev1.ClientResponseResult{}
 		return nil
 	}
```

A review comment on the added `ExpectedResponse` line notes: "This was causing a panic if you omitted …"
> Added a bit more validation so that we could explicitly state these fields were required. Notably, there was no validation at all that a test name be specified. If it was omitted, the tests ran fine. If the tests happened to fail, the output just didn't show a test name, which probably isn't good. For example:
>
> Also, for the other validation I added, the tests would run the whole suite but then fail with a cryptic error like:
>
> I figured this validation was better because it catches these problems before any tests are actually run.
> You could provide a little bit more context for these errors if you change the loop vars from `_, testCase` to `i, testCase`: