Create a function that generates a report comparing results (e.g. between SCT versions) #1603
Comments
My feedback:
@zougloub Thank you for your feedback. I would like to stress that this issue reflects my intention to offer a quick, incremental solution for comparing our methods on large-scale testing with older/other versions of the software, based on existing tools in SCT (such as `sct_pipeline`). Specific answers to your comments:
Here is an example of comparison results that were generated using this approach. As you can see, it is fairly easy to assess which method is better than the other when comparing two versions of the software. The objective of this issue is simply to generate a PDF report that provides this kind of comparison.
@benjamindeleener I like the overall approach you describe; however, I would not integrate the result management/visualisation inside `sct_pipeline`. My suggestion would be to create a third-party function, which takes as input the pandas structure (output of `sct_pipeline`). So, something like:
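A minimal sketch of what such a standalone function might look like, assuming the results arrive as a pandas DataFrame with one row per (subject, version) and a numeric metric column; the function name, column names, and output files are hypothetical, not part of SCT:

```python
import pandas as pd
import matplotlib.pyplot as plt

def generate_report(results: pd.DataFrame, output_prefix: str = "report"):
    """Hypothetical standalone report generator.

    Assumes `results` holds one row per (subject, version) with a numeric
    'dice' column; none of these names come from SCT itself.
    """
    # Per-version summary table (mean/std across subjects)
    summary = results.groupby("version")["dice"].agg(["mean", "std", "count"])
    summary.to_csv(f"{output_prefix}_summary.csv")

    # Quick visual comparison of the metric distributions
    fig, ax = plt.subplots()
    results.boxplot(column="dice", by="version", ax=ax)
    ax.set_ylabel("Dice coefficient")
    fig.savefig(f"{output_prefix}_comparison.png")
    return summary
```

Because such a function would only depend on the DataFrame, it could be called from `sct_pipeline`, from a batch script, or re-run later on previously saved results.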
This approach would also enable possible use of this function for other purposes, not necessarily specific to `sct_pipeline`. Another advantage is that we could generate the report independently from running `sct_pipeline`.
@jcohenadad Agreed. Let's do this!
I concur. As for "let's do this!", the thing is that in order to do something that's not a future liability, it would be good to start off from a solid base.
When it comes to generating the actual report, once we have gathered the figures/KPIs/values from the magical meta-data structure filled by the process execution, I'd be more inclined to piggy-back on docutils, i.e. generating reStructuredText and compiling that into PDF (or another format), than, say, using reportlab.
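To make the docutils route concrete, here is a small sketch (the table content, column names, and file names are placeholders) that renders a results DataFrame as reStructuredText and converts it with `docutils.core.publish_string`; producing a PDF would then be a matter of feeding the generated `.tex` (or `.rst`) to a LaTeX compiler or a tool such as `rst2pdf`:

```python
import pandas as pd
from docutils.core import publish_string

def dataframe_to_rst(df: pd.DataFrame, title: str) -> str:
    """Render a DataFrame as a reStructuredText document with a csv-table."""
    lines = [
        title,
        "=" * len(title),
        "",
        ".. csv-table:: Results per subject",
        "   :header: " + ", ".join(df.columns),
        "",
    ]
    for _, row in df.iterrows():
        lines.append("   " + ", ".join(str(v) for v in row))
    return "\n".join(lines) + "\n"

# Placeholder results for two hypothetical SCT versions (illustration only).
df = pd.DataFrame({"subject": ["sub-01", "sub-02"],
                   "dice_old": [0.91, 0.88],
                   "dice_new": [0.93, 0.90]})

rst_source = dataframe_to_rst(df, "SCT comparison report")
# Compile to LaTeX with docutils; PDF generation happens downstream.
latex = publish_string(rst_source, writer_name="latex")
with open("report.tex", "wb") as f:
    f.write(latex)
```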
@benjamindeleener do you have a working branch on this? If not, I can take care of it, as we need it soon (#1757, #1746).
I see that many of you are using `sct_pipeline`.
@jcohenadad: Indeed.
It seems like we implemented this feature for `sct_pipeline`. (However, we then transitioned away from `sct_pipeline`.) So, in the context of current SCT, this issue is actually more akin to "add database creation and results pickling" to the tool that replaced it.
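As a rough illustration of what "results pickling" could mean in practice (the file layout and column names are assumptions, not SCT's actual format), each batch run could dump its per-subject metrics tagged with the SCT version, so that a later report simply concatenates the pickles:

```python
import pandas as pd

def save_run_results(results: pd.DataFrame, sct_version: str, path: str) -> None:
    """Persist one run's per-subject metrics, tagged with the SCT version (hypothetical layout)."""
    results.assign(version=sct_version).to_pickle(path)

def load_runs(paths):
    """Concatenate several pickled runs into one DataFrame for reporting."""
    return pd.concat([pd.read_pickle(p) for p in paths], ignore_index=True)
```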
Description
When there is a new release or a new function, it would be good to generate a PDF report when using `sct_pipeline` that would allow comparing the results of the current functions with a specific release or set of results.

For example, if one wants to test the results of a new version of `sct_deepseg_sc` and compare them to an old version (let's say master), we can currently run `sct_pipeline` with both versions of SCT and gather and compare the results (using the pickle files that are generated). This new tool would provide an easy way to generate a PDF report that compares two versions of SCT, simply by adding a parameter to `sct_pipeline`.

The report should contain tables with the results for each subject as well as graphs that provide a quick visual assessment of the results (like violin plots), as in the sketch below.
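A hedged sketch of how the comparison itself might be assembled from two pickled result sets (the paths, the 'subject' and 'dice' columns, and the seaborn dependency are assumptions for illustration; the actual pickle layout produced by `sct_pipeline` may differ):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the pickled results produced by two separate runs (hypothetical paths).
old = pd.read_pickle("results_master.pickle").assign(version="master")
new = pd.read_pickle("results_new.pickle").assign(version="new")
both = pd.concat([old, new], ignore_index=True)

# Per-subject table, one column per version.
table = both.pivot(index="subject", columns="version", values="dice")
print(table)

# Violin plot for a quick visual assessment of the two distributions.
ax = sns.violinplot(data=both, x="version", y="dice")
ax.set_ylabel("Dice coefficient")
plt.savefig("comparison_violin.png")
```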
Points to discuss:
- Should this feature be integrated into `sct_pipeline`, or should we create a new function?