
Discussion: Test generator design #675

@mk-mxp

To keep lengthy discussions out of tracking issue #631 and pull requests #674 and #663.

The forum discussion started by Cool-Katt for the JS track test generator collects a lot of experience and information about the test generators of various tracks.

My conclusion from the information gathered there: we need two test generators, or at least two largely different approaches (commands) sharing a small common code base.

  • First time creation of test code "from scratch"
  • Production-ready test code creation for updates

This is my rough design for those two, up for discussion:

"from scratch"

  • Identify the most likely kind of test case to support
    • I think it is "single function call with arguments, assert on return value" (see the canonical data example after this list)
    • We wrap those functions in classes, making them method calls
    • Class instantiation happens without arguments
  • Identify the data structure variations around test cases to support
    • This overlaps with "for updates", but there are differences
    • There is documentation in the problem-specifications repository
    • I think the structure consists of (modelled as value objects in a sketch after this list)
      • The outermost object CanonicalData
        • required
      • An array of Item objects I called InnerGroup
        • There are instances of Group and TestCase in the same array (anyOf, not oneOf in JSON schema terms)
        • required
      • A recursively occurring Group (implements Item) with an InnerGroup and additional data
        • 0 - 3 nesting levels discovered so far
        • required
      • A TestCase (implements Item) with variations like "single function call with arguments, assert on return value", "error", "multiple function calls with arguments, assert on state / return value"
        • Support only "single function call with arguments, assert on return value" and "error"
    • The structure also contains scenarios as a virtual grouping
      • ignored
    • TestCases use the field reimplements for test case replacement
      • ignored
    • tests.toml (or another source of information) for ignoring test cases is itself ignored, as it cannot contain information about a brand-new exercise
  • Implement a parser to enable a simple "one fits all" template (see the parser sketch after this list)
    • Extract the data required to render the structure to support
    • Extract the data required to render the test cases to support
    • Provide all other data to be rendered into comments, helping the contributor design the exercise
  • Implement a "one fits all" template
    • This template will rarely change
    • Use a technique every contributor should know well
    • Test it well with all variations in structure; no data shall be lost
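
For concreteness, here is the most likely kind of test case as it appears in canonical data (the `leap` values are used for illustration, the UUID is elided):

```json
{
  "uuid": "...",
  "description": "year not divisible by 4 in common year",
  "property": "leapYear",
  "input": { "year": 2015 },
  "expected": false
}
```

And what a "from scratch" generator might render for it, as a sketch rather than a finished output format. The class name comes from the exercise slug, the method name from the `property` field:

```php
public function testYearNotDivisibleBy4InCommonYear(): void
{
    // Class instantiation happens without arguments,
    // the canonical "property" becomes a method call
    $this->assertFalse((new Leap())->leapYear(2015));
}
```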
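
The structure described above could be modelled with a minimal set of value objects. A sketch assuming PHP 8.1+, using the names from this issue (`CanonicalData`, `Item`, `Group`, `TestCase`; `InnerGroup` is just `list<Item>`), plus an assumed `extra` field as a catch-all for data we do not understand:

```php
interface Item
{
}

final class TestCase implements Item
{
    public function __construct(
        public readonly string $uuid,
        public readonly string $description,
        public readonly string $property,
        public readonly array $input,
        public readonly mixed $expected,        // scalar, array, or ['error' => '...']
        public readonly ?string $reimplements = null,
        public readonly array $scenarios = [],
        public readonly array $extra = [],      // anything we do not understand
    ) {
    }
}

final class Group implements Item
{
    /** @param list<Item> $items The InnerGroup: Group and TestCase mixed (anyOf) */
    public function __construct(
        public readonly string $description,
        public readonly array $items,
        public readonly array $extra = [],
    ) {
    }
}

final class CanonicalData
{
    /** @param list<Item> $items */
    public function __construct(
        public readonly string $exercise,
        public readonly array $items,
    ) {
    }
}
```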
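
The parser could then be a small recursive walk over `canonical-data.json`: a raw item containing a `cases` key becomes a `Group`, everything else a `TestCase`, and unknown fields are kept verbatim so the template can render them into comments. A rough sketch under those assumptions:

```php
final class FromScratchParser
{
    private const KNOWN_CASE_KEYS =
        ['uuid', 'description', 'comments', 'property', 'input', 'expected', 'scenarios', 'reimplements'];

    public function parse(string $json): CanonicalData
    {
        $data = json_decode($json, true, flags: JSON_THROW_ON_ERROR);

        return new CanonicalData($data['exercise'], $this->items($data['cases']));
    }

    /** @return list<Item> */
    private function items(array $rawItems): array
    {
        return array_map(
            fn (array $raw): Item => isset($raw['cases'])
                ? new Group($raw['description'], $this->items($raw['cases']))
                : $this->testCase($raw),
            $rawItems,
        );
    }

    private function testCase(array $raw): TestCase
    {
        return new TestCase(
            uuid: $raw['uuid'],
            description: $raw['description'],
            property: $raw['property'],
            input: $raw['input'],
            expected: $raw['expected'],
            reimplements: $raw['reimplements'] ?? null,
            scenarios: $raw['scenarios'] ?? [],
            // Unknown fields survive, to be rendered as comments for the contributor
            extra: array_diff_key($raw, array_flip(self::KNOWN_CASE_KEYS)),
        );
    }
}
```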
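
The matching "one fits all" template, sketched as plain PHP string building precisely because every contributor knows that technique (a template engine would work equally well). Groups are flattened into the test method name, so a single method shape covers every supported structure:

```php
final class OneFitsAllTemplate
{
    // $prefix carries the flattened Group descriptions down the recursion
    public function renderTestCase(TestCase $case, string $class, string $prefix = ''): string
    {
        $name = $this->methodName($prefix . ' ' . $case->description);
        $args = implode(', ', array_map(
            static fn (mixed $arg): string => var_export($arg, true),
            array_values($case->input),
        ));
        $expected = var_export($case->expected, true);

        return <<<CODE
            public function {$name}(): void
            {
                \$this->assertSame({$expected}, (new {$class}())->{$case->property}({$args}));
            }

            CODE;
    }

    private function methodName(string $description): string
    {
        return 'test' . str_replace(' ', '', ucwords(
            preg_replace('/[^a-z0-9]+/i', ' ', $description)
        ));
    }
}
```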

"for updates"

  • Identify all kinds of test cases to support
    • I think this will be done incrementally
    • There must be extensibility for the templating engine (maybe per exercise?)
    • We already have student interfaces which add to the required "kinds of test cases" (e.g. allergies; see the state-asserting example after this list)
  • Identify the data structure variations around test cases to support
    • This overlaps with "from scratch", but there are differences
    • There is documentation in the problem-specifications repository
    • I think the structure consists of
      • The outermost object CanonicalData
        • required
      • An array of Item objects I called InnerGroup
        • There are instances of Group and TestCase in the same array (anyOf, not oneOf in JSON schema terms)
        • required
      • A recursively occurring Group (implements Item) with an InnerGroup and additional data
        • 0 - 3 nesting levels discovered so far
        • required
      • A TestCase (implements Item) with variations
        • "single function call with arguments, assert on return value"
        • "error"
        • "multiple function calls with arguments, assert on state / return value"
        • All must be supported, maybe per exercise
    • The structure also contains scenarios as a virtual grouping
      • May be ignored if tests.toml (or another source of information) is used for ignoring test cases
        • Must be supported to ensure no test case with a scenario is rendered accidentally
    • TestCases use the field reimplements for test case replacement
      • Must support replacement
      • There are different strategies like "replace always", "replace when forced to", "use tests.toml to ignore replaced test cases" (works like a baseline for known test issues; see the filter sketch after this list)
  • Implement a parser to enable a "per exercise" template
    • Extract the data required to render the structure to support
    • Extract the data required to render the test cases to support
    • Warn about or silently drop any other data (command line flag for this? see the flag sketch after this list)
  • Implement the "per exercise" templates
    • There could be a library of template variations to select from
    • These templates will change whenever the data contains new things to support (like adding "error" test cases when there were none before)
    • Use a technique every contributor should know well
    • Templates are developed by comparison with the original test code; there should not be tests per exercise, but per possible template variation (see the variation test sketch after this list)
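
The additional kind, "multiple function calls with arguments, assert on state / return value", needs one shared object across several calls in a single test method. A hypothetical rendering (the `CircularBuffer` names are illustrative, not taken from this issue):

```php
public function testEachItemMayOnlyBeReadOnce(): void
{
    // Several calls on the same instance, asserting on
    // intermediate return values and on the resulting state
    $buffer = new CircularBuffer(1);
    $buffer->write(1);
    $this->assertSame(1, $buffer->read());

    $this->expectException(BufferEmptyException::class);
    $buffer->read();
}
```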
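
The reimplements handling could be a pre-processing pass over the flattened test cases, with the replacement strategy injected. A sketch of the three strategies named above (the enum and the `tests.toml` plumbing are assumptions):

```php
enum ReplaceStrategy
{
    case Always;
    case WhenForced;
    case ViaTestsToml;
}

final class ReimplementsFilter
{
    /**
     * @param list<TestCase> $cases         Flattened test cases
     * @param list<string>   $excludedUuids UUIDs marked `include = false` in tests.toml
     * @return list<TestCase>
     */
    public function filter(array $cases, ReplaceStrategy $strategy, array $excludedUuids = []): array
    {
        $replacedUuids = array_filter(array_map(
            static fn (TestCase $case): ?string => $case->reimplements,
            $cases,
        ));

        return array_values(array_filter(
            $cases,
            fn (TestCase $case): bool => match ($strategy) {
                // "replace always": drop every case a newer one reimplements
                ReplaceStrategy::Always => !in_array($case->uuid, $replacedUuids, true),
                // "replace when forced to": keep both, a maintainer resolves by hand
                ReplaceStrategy::WhenForced => true,
                // "use tests.toml": works like a baseline for known test issues
                ReplaceStrategy::ViaTestsToml => !in_array($case->uuid, $excludedUuids, true),
            },
        ));
    }
}
```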
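
The "warn about or silently drop" decision could be a single switch on the parser, fed by a command line flag (the flag and all names here are made up):

```php
final class PerExerciseParser
{
    // e.g. set from a hypothetical --warn-unknown-data command line flag
    public function __construct(private readonly bool $warnOnUnknownData = true)
    {
    }

    private function handleUnknown(string $uuid, array $extra): void
    {
        if ($this->warnOnUnknownData && $extra !== []) {
            fwrite(STDERR, sprintf(
                "Test case %s contains unsupported data: %s\n",
                $uuid,
                implode(', ', array_keys($extra)),
            ));
        }
        // otherwise: silently drop
    }
}
```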
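
Finally, testing per template variation instead of per exercise could mean one fixture pair per variation, compared against a snapshot of the expected rendering (all class names and fixture paths are assumptions):

```php
final class TemplateVariationTest extends PHPUnit\Framework\TestCase
{
    /** One fixture per template variation, not one per exercise */
    public static function variationProvider(): array
    {
        return [
            'single call, assert on return value' => ['single-call'],
            'error' => ['error'],
            'multiple calls, assert on state / return value' => ['multiple-calls'],
        ];
    }

    /** @dataProvider variationProvider */
    public function testRendersVariation(string $variation): void
    {
        $canonicalData = file_get_contents(__DIR__ . "/fixtures/{$variation}.json");
        $expected = file_get_contents(__DIR__ . "/fixtures/{$variation}.expected");

        $rendered = (new PerExerciseTemplate())->render(
            (new PerExerciseParser())->parse($canonicalData),
        );

        $this->assertSame($expected, $rendered);
    }
}
```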
