For each aspect that is tested by the validator, I created a tileset JSON file that was supposed to cause that exact issue. The result is a set of several hundred JSON files in the `specs/data` directory, for tilesets, metadata, and subtrees.
The current approach of most of the unit tests (e.g. in `TilesetValidatorSpec`) is to read each of these files, and to check whether the validator generates the expected issue.
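To make the pattern concrete, here is a minimal sketch of such a file-based spec. The entry point `Validators.validateTilesetFile` and the shape of the result (`length`, `get(i).type`) are assumptions for illustration and may not match the actual validator API:

```ts
// Minimal sketch of the current file-based approach. The entry point
// `Validators.validateTilesetFile` and the result accessors are assumptions
// for illustration, not necessarily the actual API.
import { Validators } from "../src/validation/Validators";

describe("Tileset bounding volume validation", function () {
  it("detects an invalid element type in a region array", async function () {
    const result = await Validators.validateTilesetFile(
      "specs/data/boundingVolumeRegionArrayInvalidElementType.json"
    );
    expect(result.length).toEqual(1);
    expect(result.get(0).type).toEqual("ARRAY_ELEMENT_TYPE_MISMATCH");
  });
});
```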
For some tests, it would probably make more sense not to read the data from a file, but to perform the validation call on an object that is declared directly in the test code itself (as sketched below). But having the set of files (each of which can easily be checked individually at the command line) has occasionally turned out to be useful, even though it may be considered more of an "integration test" than a "unit test".
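A sketch of the inline-object alternative. The function `validateTilesetObject` is hypothetical and is declared here only so the sketch is self-contained; the point is just that the test input lives in the test code rather than in `specs/data`:

```ts
// `validateTilesetObject` is hypothetical, declared here only so this
// sketch type-checks. A real implementation would delegate to the validator.
declare function validateTilesetObject(
  tileset: object
): Promise<{ length: number; get(index: number): { type: string } }>;

it("detects a missing root tile", async function () {
  const tileset = {
    asset: { version: "1.1" },
    geometricError: 4096.0,
    // "root" is intentionally omitted to provoke the expected issue
  };
  const result = await validateTilesetObject(tileset);
  // The exact issue type string is an assumption here
  expect(result.get(0).type).toEqual("PROPERTY_MISSING");
});
```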
I also considered the option of summarizing these test cases in their own JSON file, roughly like this:
```json
[
  {
    "inputFile": "boundingVolumeRegionArrayInvalidElementType.json",
    "description": "One element of the region array is not a number",
    "expectedIssues": [ "ARRAY_ELEMENT_TYPE_MISMATCH" ]
  },
  ...
]
```
This could simplify testing in some ways, and would allow documenting each test via its `description`. But it might also turn out to be an unnecessary maintenance burden.
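For completeness, a sketch of how a data-driven spec could iterate over such a summary file. The file name `specs/data/testCases.json` is hypothetical, and the validator entry point is the same assumed one as above:

```ts
// Sketch of a data-driven spec that reads the summary file and generates
// one test per entry. The file name "specs/data/testCases.json" and the
// `Validators.validateTilesetFile` entry point are assumptions.
import * as fs from "fs";
import { Validators } from "../src/validation/Validators";

interface TestCase {
  inputFile: string;
  description: string;
  expectedIssues: string[];
}

const testCases: TestCase[] = JSON.parse(
  fs.readFileSync("specs/data/testCases.json", "utf8")
);

describe("Data-driven tileset validation", function () {
  for (const testCase of testCases) {
    it(testCase.description, async function () {
      const result = await Validators.validateTilesetFile(
        "specs/data/" + testCase.inputFile
      );
      // Collect the issue types that were actually reported, and compare
      // them to the expected ones from the summary file
      const actualTypes: string[] = [];
      for (let i = 0; i < result.length; i++) {
        actualTypes.push(result.get(i).type);
      }
      expect(actualTypes).toEqual(testCase.expectedIssues);
    });
  }
});
```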
If somebody has preferences, thoughts, or alternative approaches, feel free to discuss them here.