"System tests" on SpineOpt #848
LouisFliche started this conversation in Show and tell
-
Sounds amazing (and the final part even sounds a bit familiar ;) ), I'm looking forward to trying this out some day. Good work!
-
Great! Converting this to a Show & Tell discussion. :)
We'd like to run some tests on SpineOpt to check that everything works properly. More specifically, in addition to the existing tests that check the consistency and correct behaviour of the code, we'd like to run tests that examine the results of an actual model run in the light of physical and technical considerations.
To do that, we took a simple 6-unit system translated from Backbone. More info on this system here. The idea was to have a system that was both as simple as possible and included many different features.
I have carried out a few tests by way of example. This branch contains the code of these tests (test/system_tests.jl); the explanations for each of them can be found in the comments of the code. There are essentially two types of test (although the boundary between the two is not well defined), namely:
It should be noted that both types of test assume that you know what result to expect. In the future, and for potentially more complex test systems, we would also like to compare the results of SpineOpt with those obtained with another model. We'll come back to this third type of test in a moment.
To simplify things, given that the architecture of these tests is always similar, I've coded a generic function that makes it easy to run them. It takes three parameters: the "inputs", which specify the system parameters we modify; the "outputs", which specify which parameters we look at in the system's output; and the "tests", which are simply evaluated and are therefore the checks actually carried out on the outputs.
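As a rough, self-contained sketch of this idea (the names `run_system_test` and `mock_run` are invented here, and the model run is mocked; the real function in test/system_tests.jl loads the 6-unit system and runs SpineOpt):

```julia
# Illustrative sketch only: `run_model` stands in for the real SpineOpt run.
# inputs:  parameters of the system that we modify
# outputs: which result parameters to look at after execution
# tests:   predicates evaluated on those outputs
function run_system_test(inputs, outputs, tests; run_model)
    results = run_model(inputs)                        # run the (mocked) model
    observed = Dict(k => results[k] for k in outputs)  # keep requested outputs
    return all(t(observed) for t in tests)             # every check must pass
end

# Trivial stand-in model: it just doubles every input value.
mock_run(inputs) = Dict(k => 2v for (k, v) in inputs)

ok = run_system_test(
    Dict(:unit_capacity => 100),          # modified system parameter
    [:unit_capacity],                     # outputs to inspect
    [obs -> obs[:unit_capacity] == 200];  # checks on those outputs
    run_model = mock_run,
)
```

With the stand-in model, `ok` is `true`; swapping the check for a wrong expected value makes the same call return `false`.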
To be more precise, until the notion of entity is fully implemented, the function takes two distinct input parameters, one for objects and one for relationships. However, the code is designed to make an eventual move to a structure with only "entities" as simple as possible.
When executed, the function loads the data of the 6-unit test system, adds the parameters for the objects or relationships, runs SpineOpt, stores the results, reads those specified by the outputs, and evaluates the tests from "tests" on them. The following example gives an idea of the data format used.
Example:
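The original example is not reproduced here; the following hypothetical snippet (all class, object, and parameter names are invented for illustration, not taken from the actual test system) gives a flavour of what such input data could look like:

```julia
# Hypothetical illustration of the data format: objects and relationships
# are given separately (until the "entity" notion is fully implemented),
# together with the outputs to read and the tests to evaluate.
# All names below are invented.
object_inputs = [
    (class = "unit", object = "gas_turbine",
     parameter = "unit_capacity", value = 150.0),
]
relationship_inputs = [
    (class = "unit__to_node", objects = ["gas_turbine", "elec"],
     parameter = "vom_cost", value = 12.5),
]
outputs = ["unit_flow"]
tests = ["all(unit_flow .<= 150.0)"]  # entered as strings, evaluated later
```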
This generic function makes it possible to create multiple tests and run them simply by iterating it over the data you create. To simplify the creation of these tests still further, a database could be created outside the code and imported. Going further, we could import directly from a database in SpineOpt format, in which each scenario would correspond to a different test. This remains to be done.
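As a sketch, iterating over a collection of test cases might look like the following (the generic function described above is stubbed out here, and the field names are invented):

```julia
# Stub standing in for the generic test function described above.
run_test_case(inputs, outputs, tests) = all(tests)

# Each entry is one test case; in practice these could come from an
# external database, or from scenarios in a SpineOpt-format database.
test_cases = [
    (inputs = Dict(:capacity => 100), outputs = [:flow], tests = [true]),
    (inputs = Dict(:capacity => 50),  outputs = [:flow], tests = [true, false]),
]

results = [run_test_case(c.inputs, c.outputs, c.tests) for c in test_cases]
# With this stub, the first case passes and the second fails.
```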
Important note about the current function: it directly evaluates the data that is entered (in string form). I've run into scoping problems with this evaluation: as things are currently coded, the evaluated expressions cannot access local variables. For the moment I've worked around this with global variables, but that could in theory conflict with the rest of the SpineOpt code and prove dangerous. This is an important issue to resolve before these tests can be used in the actual SpineOpt code.
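To make the scoping issue concrete: in Julia, `eval` always runs in the module's global scope, so an evaluated string cannot see a function's local variables. Below is a self-contained illustration, together with one possible workaround (not necessarily the fix that will be used here) that wraps the parsed expression in a generated `let` block instead of creating globals:

```julia
# `eval` runs in global scope: inside a function, a string expression
# cannot see the function's locals.
function locals_invisible()
    x = 1
    try
        eval(Meta.parse("x + 1"))  # UndefVarError: there is no global `x`
        return false
    catch err
        return err isa UndefVarError
    end
end

# Workaround sketch: bind the needed values in a `let` block wrapped
# around the parsed expression, so no global variables are created.
function eval_with_locals(str, locals)
    bindings = Expr(:block, (:($k = $v) for (k, v) in locals)...)
    return eval(Expr(:let, bindings, Meta.parse(str)))
end
```

For example, `eval_with_locals("x + 1", Dict(:x => 1))` evaluates the string with `x` bound to 1, without touching the global namespace.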
Ideally, we'd like to be able to compare automatically with the results of another model, Backbone for example. For the moment, I've set up a workflow in Spine Toolbox to compare the results of a system previously translated into the SpineOpt and Backbone data formats (see here).
However, if we want to automate this process so as to compare the results of the two models on different system configurations (typically, as in the tests described above, adding or modifying certain parameters and checking that the Backbone and SpineOpt results remain equal), we would need to be able to create the correct inputs for Backbone. The rest wouldn't be a problem: just run SpineOpt and Backbone independently, then compare the results, for example with the comparison tool I've coded in this repository (which doesn't, in itself, require Spine Toolbox and can be run in a workflow entirely in code). On the other hand, giving the right inputs to Backbone and SpineOpt raises translation problems between these two models (as described in the repository mentioned above). In this case, the solution could be a generic data format: the inputs would be written in this format, then translated into the Backbone and SpineOpt formats, which would make it possible to compare the results of the two models on the various tests.
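For the final comparison step, a minimal sketch, assuming both models' results have already been exported to dictionaries with matching keys (the key names and the tolerance are arbitrary placeholders):

```julia
# Sketch: numeric comparison of two result sets with a relative tolerance,
# assuming both models' results are dictionaries keyed by the same names.
function results_match(a::Dict, b::Dict; rtol = 1e-6)
    keys(a) == keys(b) || return false           # same set of result names
    return all(isapprox(a[k], b[k]; rtol = rtol) for k in keys(a))
end

# Invented example values, purely for illustration.
spineopt_results = Dict("total_cost" => 1234.5,          "co2" => 10.0)
backbone_results = Dict("total_cost" => 1234.5000001,    "co2" => 10.0)
match = results_match(spineopt_results, backbone_results)
```

A tolerance is needed because the two solvers will rarely agree to the last digit even on equivalent systems.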