Suggestion
I'd like to suggest that in addition to the formal specification we include test files in this repository that give examples of files that are considered spec-compliant or non-compliant.
Use case
A comprehensive test suite makes it much easier for developers to build spec-compliant implementations. Writing tests takes a lot of time and there is a high risk that edge cases of the spec go unnoticed. Being able to test an implementation against a set of known valid / invalid files helps to reduce misunderstandings and reduces the barrier of building implementations that are actually spec-compliant.
Extra info/examples/attachments
I'm not sure what the best way of implementing a test suite is. In the spoiler below I outline a first idea. I'm also not sure if this is the right repository for a test suite or if it should get its own repository.
Possible Implementation
This proposal is heavily inspired by the YAML test suite.
General considerations
I think there are two points of view on a test suite: One from the person writing the tests and one from the person using the suite to validate an implementation. For the person writing the tests it's very advantageous to have the test input and the expected result very close to each other (read: in the same file). From an implementor's point of view it's desirable to have the test input as an individual file that can be read as-is.
To satisfy both points of view I think we should have a build step for the test suite.
Structure of the test suite
The test files are placed in a test folder. The test suite is written using YAML files. These files contain the tests, the expected output, and metadata about the tests (see below). This makes it easy to write tests.
During a build step these files are then transformed into a directory structure containing:
The raw test input (an UltraStar TXT file)
A JSON file containing the expected output, or
An error file, indicating that the input is expected to produce an error
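As a rough sketch of what such a build step might look like (assuming the YAML has already been parsed into dicts, e.g. with PyYAML's safe_load_all; the file names input.txt, expected.json, and error are placeholders, not part of the proposal):

```python
import json
import os

def build_suite(tests, out_dir):
    """Write one directory per test case, as described above.

    `tests` is a list of dicts parsed from the YAML test files (the field
    names follow the example in this proposal and are not normative).
    Each test directory gets:
      - input.txt      : the raw UltraStar TXT input, usable as-is
      - expected.json  : the expected parse result, or
      - error          : an empty marker file for expected failures
    """
    for test in tests:
        test_dir = os.path.join(out_dir, test["name"].lower().replace(" ", "-"))
        os.makedirs(test_dir, exist_ok=True)
        with open(os.path.join(test_dir, "input.txt"), "w", encoding="utf-8") as f:
            f.write(test["input"])
        if test.get("fail"):
            # Marker file: implementations must reject this input.
            open(os.path.join(test_dir, "error"), "w").close()
        else:
            # Everything that is not metadata is part of the expected result.
            expected = {k: v for k, v in test.items()
                        if k not in ("name", "description", "input", "fail")}
            with open(os.path.join(test_dir, "expected.json"), "w", encoding="utf-8") as f:
                json.dump(expected, f, indent=2)
```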
Example
A test file could look like this. This example includes 2 test cases (one expected success and one expected failure).
```yaml
---
name: Valid Song With 2 Notes
description: >-
  This is an optional description of the test case.
input: |
  #VERSION:1.0.0
  #title:Foobar
  #ARTIST:Barfoo
  : 12 1 2 Hello
  : 17 3 1  World
headers:
  VERSION: 1.0.0
  TITLE: Foobar
  ARTIST: Barfoo
P1:
  - {type: ":", start: 12, duration: 1, pitch: 2, text: "Hello"}
  - {type: ":", start: 17, duration: 3, pitch: 1, text: " World"}
---
name: Invalid Note
description: >-
  This is an example of a failing test case. The description could include
  helpful tips why this is not considered a valid input.
fail: true
input: |
  #TITLE:Foobar
  : 12 1 2
  : 31 3 2 1 World
```
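On the consuming side, running an implementation against the generated directory structure could then look roughly like this (a sketch; parse stands in for the implementation under test and is an assumed callable, not part of the proposal):

```python
import json
import os

def run_suite(suite_dir, parse):
    """Run every generated test case against `parse`.

    `parse` takes the raw TXT content and returns a dict matching
    expected.json, or raises on invalid input. Returns a list of
    (test_name, passed) pairs.
    """
    results = []
    for name in sorted(os.listdir(suite_dir)):
        test_dir = os.path.join(suite_dir, name)
        if not os.path.isdir(test_dir):
            continue
        with open(os.path.join(test_dir, "input.txt"), encoding="utf-8") as f:
            source = f.read()
        if os.path.exists(os.path.join(test_dir, "error")):
            # Expected failure: parsing must raise.
            try:
                parse(source)
                results.append((name, False))
            except Exception:
                results.append((name, True))
        else:
            with open(os.path.join(test_dir, "expected.json"), encoding="utf-8") as f:
                expected = json.load(f)
            results.append((name, parse(source) == expected))
    return results
```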
Open Questions
I'm currently unsure about the following questions:
Should we include partial expected results (e.g., should the second case include the expected headers)?
Is there a better way of encoding the expected parse results for notes? This seems quite verbose.
How can special characters in the input be encoded? I'm currently thinking that adding a replacement mechanism for \uXXXX sequences might be sensible to make test cases more understandable.
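That replacement mechanism could be as small as this (a sketch of the idea, not a finalized syntax; how a literal backslash-u would itself be escaped is one of the open questions):

```python
import re

# Matches \uXXXX where XXXX are exactly four hex digits.
_ESCAPE = re.compile(r"\\u([0-9A-Fa-f]{4})")

def decode_escapes(text):
    """Replace \\uXXXX sequences with the corresponding character."""
    return _ESCAPE.sub(lambda m: chr(int(m.group(1), 16)), text)
```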
If there is interest in this feature I'm happy to submit a PR containing the build system and some first test cases. Subsequent cases can be added as the details of the spec are decided.
Hi @codello, this sounds good to me, please go ahead. I'm not a test engineer or something, but we really need a proper and standardized test suite, so I highly appreciate any reasonable efforts.
Let's put the test files in this repo for a start.
I'm actually thinking more about edge cases that are relevant when implementing parsers for the format. Consider these two examples:
# VERSION : 1.0.0
#RELATIVE: yes
* 1 2 3 Foo
- 12
Note the following:
Here the #RELATIVE: yes header must be ignored, because the header has been removed in version 1.0.0. Because of this, - 12 is syntactically valid here.
There is extraneous whitespace around the version headers which must be ignored
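The version gating in this example could be sketched as follows (the removal of RELATIVE in 1.0.0 is taken from the notes above; the tuple-based version comparison and the exact shape of the table are my assumptions, not spec text):

```python
# Header name -> version of the spec in which it was removed (assumed table).
REMOVED_IN = {"RELATIVE": (1, 0, 0)}

def effective_headers(headers):
    """Drop headers that no longer exist in the file's declared version."""
    version = tuple(int(p) for p in headers.get("VERSION", "1.0.0").split("."))
    return {name: value for name, value in headers.items()
            if not (name in REMOVED_IN and version >= REMOVED_IN[name])}
```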
#VERSION:1.2.8
#title:Foo:Bar
#P1: Foo
#P01: Bar
Note the following:
The version 1.2.8 is not currently defined. Implementations supporting the 1.0.0 standard should still process this file correctly.
The #TITLE header is lower case and contains a colon. Implementations should be able to parse this correctly.
The headers #P1 and #P01 are different. In particular, the value of #P01 should not overwrite the value of #P1.
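These header edge cases could be handled by a parser along these lines (a minimal sketch of my reading of the examples above, not normative: names are case-insensitive and normalized to upper case, only the first colon separates name and value, and surrounding whitespace is ignored):

```python
def parse_header(line):
    """Parse a single '#NAME:value' header line into (name, value)."""
    if not line.startswith("#") or ":" not in line:
        raise ValueError(f"not a header line: {line!r}")
    # Split at the FIRST colon only, so values may contain colons.
    name, _, value = line[1:].partition(":")
    return name.strip().upper(), value.strip()
```

Note that normalizing names as strings keeps #P1 and #P01 distinct automatically; nothing here strips leading zeros.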
These are just some examples but there are a lot more edge cases that aren't immediately obvious. I'd like to build a test suite to cover these to hopefully make it easier for developers to test their implementations against the spec.
I realize that these edge cases are unlikely to appear in the wild. But I think this can be a valuable part of ensuring that implementations interpret the spec correctly. (This potentially also relates to #32.)