
Add docs for writing test cases #791

Merged
25 commits merged into main from sayers/writing_tests_docs on Feb 16, 2024

Conversation

@smaye81 (Member) commented on Feb 13, 2024

This adds some documentation for writing test cases, including:

  • Location of files
  • Test and suite naming conventions
  • Directives for suites as well as links to the BSR proto file and other helpful examples.

@smaye81 requested a review from @jhump on February 13, 2024 at 18:45

@jhump (Member) left a comment

This needs a link to the schema for the YAML files, so users know where to look for more relevant docs.

We might also consider replicating some of the content from the proto comments here so this page can provide a little more "how to" vs. requiring the user to read comments for every field in the test suite schema.

In particular, we should reiterate the fields in the schema that should not be put into the test case, such as the host name and port, TLS details, and even standard/protocol-oriented request and response headers and trailers (only custom headers belong in the test case).

I think it would also be useful to show a (small-ish) example in full, and maybe also describe some of the basics, like the HTTP versions, protocols, compression algorithms, and stream types.

And in the section mentioning how the test can be constrained with the "relevant ..." fields, you might also describe briefly how permutations of each test case will be computed, based on the features supported by the client or server under test and the relevant properties of the test suite.

Though perhaps the above, a section on test case permutations, belongs in the page about configuration, since it's most relevant to someone configuring and running the tests for their client or server implementation. Ideally it could be mentioned here and then linked to more information in the other page. So maybe we can come back and add that breadcrumb once the target doc is written...
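
For readers landing here before that configuration page exists, the permutation idea described above can be pictured as intersecting the features the implementation under test supports with the suite's "relevant ..." constraints, then running each test case once per surviving combination (the axes visible in the test names later in this thread: HTTP version, protocol, codec, compression). The sketch below is purely illustrative; its types, field names, and the "empty list means unconstrained" rule are assumptions rather than the conformance runner's real API.

```go
// Purely illustrative sketch of the permutation idea described above.
// The types, field names, and the "empty relevant list means unconstrained"
// rule are assumptions, not the conformance runner's real API.

type features struct {
	HTTPVersions []string
	Protocols    []string
	Codecs       []string
	Compressions []string
}

// permutations returns every combination of the features an implementation
// supports that also satisfies the suite's "relevant ..." constraints; the
// runner would execute each test case once per combination.
func permutations(supported, relevant features) [][4]string {
	var out [][4]string
	for _, hv := range intersect(supported.HTTPVersions, relevant.HTTPVersions) {
		for _, proto := range intersect(supported.Protocols, relevant.Protocols) {
			for _, codec := range intersect(supported.Codecs, relevant.Codecs) {
				for _, comp := range intersect(supported.Compressions, relevant.Compressions) {
					out = append(out, [4]string{hv, proto, codec, comp})
				}
			}
		}
	}
	return out
}

// intersect keeps only the supported values that the suite declares relevant;
// an empty "relevant" list is treated as "no constraint".
func intersect(supported, relevant []string) []string {
	if len(relevant) == 0 {
		return supported
	}
	keep := make(map[string]bool, len(relevant))
	for _, v := range relevant {
		keep[v] = true
	}
	var out []string
	for _, v := range supported {
		if keep[v] {
			out = append(out, v)
		}
	}
	return out
}
```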

Comment on lines 68 to 70
The expected response for a test is auto-generated based on the request details. The conformance runner will determine what the response
should be by the values specified in the test suite and individual test cases. However, you have the ability to explicitly specify your
own expected response directly in the test definition itself.

Member:

I think this could use some clarity on exactly when you need to provide the expected response. It currently sounds like it's up to the user's whim.

I think the advice to give is: do not include an explicit expected response unless it is necessary, as it makes the test YAML overly verbose and harder to read (and to write and maintain). It is not necessary when the expected response would simply restate the response definition that appears in the request messages. But if the test induces behavior that prevents the server from sending, or the client from receiving, the full response definition (timeouts, cancellations, and exceeded message size limits are good examples), then the expected response must be defined explicitly. Otherwise, omit it.
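
To make that rule concrete, the runner's behavior can be pictured roughly as the guard below. This is an illustrative sketch built around a hypothetical wrapper (resolveExpectedResponse is not a real function in the repo); only populateExpectedResponse, conformancev1.TestCase, and the ExpectedResponse field appear elsewhere in this PR.

```go
// Illustrative sketch: an explicitly authored expectation always wins;
// otherwise the runner derives the expected response from the request
// details in the test case (the common, preferred path).
func resolveExpectedResponse(testCase *conformancev1.TestCase) error { // hypothetical wrapper
	if testCase.ExpectedResponse != nil {
		// The author supplied an explicit expectation, e.g. because a timeout,
		// cancellation, or message size limit prevents the full response
		// definition from being sent or received.
		return nil
	}
	return populateExpectedResponse(testCase) // auto-generate from the request details
}
```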

smaye81 and others added 7 commits February 13, 2024 16:33
Co-authored-by: Joshua Humphries <2035234+jhump@users.noreply.github.com>
Co-authored-by: Joshua Humphries <2035234+jhump@users.noreply.github.com>
@@ -174,6 +174,18 @@ func (lib *testCaseLibrary) expandSuite(suite *conformancev1.TestSuite, configCa
 
 func (lib *testCaseLibrary) expandCases(cfgCase configCase, namePrefix []string, testCases []*conformancev1.TestCase) error {
 	for _, testCase := range testCases {
+		if testCase.Request.TestName == "" {

@smaye81 (Member Author):

Added a bit more validation so that we could explicitly state that these fields are required. Notably, there was no validation at all that a test name be specified. If it was omitted, the tests ran fine. If the tests happened to fail, the output just didn't show a test name, which probably isn't good. For example:

FAILED: TLS Client Certs/HTTPVersion:3/Protocol:PROTOCOL_GRPC_WEB/Codec:CODEC_TEXT/Compression:COMPRESSION_ZSTD/TLS:true:
        expecting 0 response messages but instead got 1

As for the other validation I added: without it, the tests would run the whole suite but then fail with a cryptic error like:

FAILED: TLS Client Certs/HTTPVersion:3/Protocol:PROTOCOL_GRPC_WEB/Codec:CODEC_TEXT/Compression:COMPRESSION_ZSTD:
        service name  is not a valid service

I figured this validation was better because it catches the problem before any tests are actually run.
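
For illustration, the kind of up-front check being described might look like the sketch below. It is not the PR's actual code, and the Service field name is an assumption based on the error output above.

```go
// Illustrative only, not the PR's actual code: reject incomplete test
// definitions while expanding the suite, before any test executes.
for _, testCase := range testCases {
	if testCase.Request.Service == "" { // field name assumed for illustration
		// Failing here replaces the cryptic mid-run error
		// ("service name  is not a valid service") with an immediate,
		// descriptive message tied to the offending test definition.
		return fmt.Errorf("test case %q: no service name specified", testCase.Request.TestName)
	}
}
```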

Member:

You could provide a little bit more context for these errors if you change the loop vars from _, testCase to i, testCase:

return fmt.Errorf("test case #%d: test case has no name", i+1)
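
Spelled out, that suggestion amounts to the following fragment, reusing the loop and field names from the hunk above:

```go
// Capture the slice index so the error identifies which entry in the
// test suite file is missing its name.
for i, testCase := range testCases {
	if testCase.Request.TestName == "" {
		return fmt.Errorf("test case #%d: test case has no name", i+1)
	}
	// ... remaining validation and expansion ...
}
```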

@@ -441,7 +453,7 @@ func populateExpectedResponse(testCase *conformancev1.TestCase) error {
 	// message in this situation, where the response data value is some fixed string (such as "no response definition")
 	// and whose request info will still be present, but we expect it to indicate zero request messages.
 	if len(testCase.Request.RequestMessages) == 0 {
-		return nil
+		return fmt.Errorf("test %s has no request messages specified", testCase.Request.TestName)

@smaye81 (Member Author):

This was actually causing a panic because the expected response was nil and we were bypassing all calculation of expected responses. So line 170 in results.go was blowing up on the nil expected.

@smaye81 requested a review from @jhump on February 14, 2024 at 19:31

@smaye81 (Member Author) commented on Feb 14, 2024

@jhump I expanded the docs a bit. Let me know what you think.

@@ -441,6 +453,7 @@ func populateExpectedResponse(testCase *conformancev1.TestCase) error {
 	// message in this situation, where the response data value is some fixed string (such as "no response definition")
 	// and whose request info will still be present, but we expect it to indicate zero request messages.
 	if len(testCase.Request.RequestMessages) == 0 {
+		testCase.ExpectedResponse = &conformancev1.ClientResponseResult{}

@smaye81 (Member Author):

This was causing a panic if you omitted requestMessages from a test case. Since ExpectedResponse was nil here, once we returned, the assert function in results.go was blowing up on line ~170.
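
For context, the panic being fixed is an ordinary Go nil-pointer dereference. The sketch below is illustrative only; results.go's actual assertion code is not reproduced here, and the Error field is used merely as an example of a direct field access.

```go
// Before the fix: a test case that omitted requestMessages left
// ExpectedResponse nil, so the first direct field access in the
// assertion code dereferenced a nil pointer and panicked.
var expected *conformancev1.ClientResponseResult // nil
_ = expected.Error                               // runtime panic: nil pointer dereference

// After the fix: an empty result is always populated, so the assertions
// compare against an empty expected response instead of panicking.
expected = &conformancev1.ClientResponseResult{}
_ = expected.Error // fine: a nil field on a non-nil message
```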

@jhump (Member) left a comment

Looks really good. I left a few more comments, but I think this is close.


smaye81 and others added 2 commits February 15, 2024 09:11
Co-authored-by: Joshua Humphries <2035234+jhump@users.noreply.github.com>

@smaye81 (Member Author) commented on Feb 15, 2024

@jhump Updated with the latest feedback. Let me know what you think.

@jhump (Member) left a comment

LG!

@smaye81 merged commit 0dd15a8 into main on Feb 16, 2024
4 checks passed
@smaye81 deleted the sayers/writing_tests_docs branch on February 16, 2024 at 15:34