Notice that the build failure is probably due to the fact that it's not finding the test data directory. Let's please put all the test data in timestreamlib so we can avoid this extra step. Additionally, the failure can also come from the dependencies in the pipeline test. This PR adds testing for the whole pipeline, so everything gets exercised: opencv, skimage, numpy, scipy, and so on. If any one of these is not correctly installed on the Travis test machine, it will blow up.
@@ -62,7 +62,7 @@ def genConfig(opts):
         tsConfPath = os.path.join(plConf.general.inputRootPath,
                                   '_data', 'timestream.yml')
         if os.path.isfile(tsConfPath):
-            plConf.append(tsConfPath, 1)
+            plConf.append(tsConfPath, 2)
Could you add a comment as to what this magic number is? Ta
Sure, I'll add it. FYI, it's the depth of the configuration file. After 2 levels, PCFGConfig stops trying to make configuration sections and just handles everything as a dict or list.
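To make that concrete, here is a minimal sketch of depth-limited section creation under that assumption; `Section`, `build`, and `load` are illustrative names, not the actual PCFGConfig API:

```python
class Section(object):
    """Attribute-style access to one level of a config tree."""
    def __init__(self, values):
        self._values = values

    def __getattr__(self, name):
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

def build(node, depth):
    # Convert up to `depth` nested dict levels into Sections; deeper
    # nodes (and anything non-dict) stay as plain dicts/lists.
    if depth <= 0 or not isinstance(node, dict):
        return node
    return Section({k: build(v, depth - 1) for k, v in node.items()})

def load(values, depth):
    # The root is always a Section; `depth` controls how far below it
    # attribute-style sections are created.
    return Section({k: build(v, depth) for k, v in values.items()})

cfg = load({"general": {"metas": {"id": 7}}}, depth=2)
print(cfg.general.metas.id)   # -> 7 (metas is still a Section)
# With depth=1, `metas` would be a plain dict and the attribute
# access above would raise AttributeError.
```

That is why depth = 2 is needed to reach general.metas.id.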
This looks awesome man!! Cheers. Some small comments, but nothing major.
* util/derandui.py (DerandomizeGUI._addTS): Raise RuntimeError if ts has fewer than 10 images.
* util/derandui.py (DerandomizeGUI._addCsv): We cancel the CSV upload if we find repeated headers (see the header-check sketch after these entries).
* timestream/manipulate/configuration.py (PCFGConfig.argNames): The dictionary in the configuration file will be translated into a PCFGSection.
* timestream/manipulate/configuration.py (PCFGConfig.autocomplete): When we don't have enough data to autocomplete date and time, we error out. This happens when the user specifies {start,end}Date or {start,end}RangeHour and is missing one component (see the autocomplete sketch after these entries).
This is in order to get to general.metas.id.
* scripts/run-pipeline.py (genConfig): Call append with depth = 2.
* timestream/manipulate/configuration.py (PCFGConfig.createSection): Include a section as a dictionary only when the dictionary is indexed entirely by strings. Create PCFGListSection for pipeline and outstreams only.
* scripts/run-pipeline.py (genContext): Subject.
* timestream/manipulate/pipecomponents.py (ResultFeatureWriter.__init__): Subject. We add a "-" separator in the CSV output as it is no longer in the suffix.
* scripts/run-pipeline.py (genInputTimestream): We allow the user to specify None for the start and end time specifications. This means all times are considered.
* timestream/parse/__init__.py (read_image): Add WindowsError to the possible errors that can occur when reading an image. On a platform other than Windows we use None (see the read_image sketch after these entries).
* scripts/run_pipeline.py: Rename run-pipeline.py to run_pipeline.py to allow `from run_pipeline import maincli`.
* scripts/run_pipeline.ui: Rename run-pipeline.ui for consistency.
* tests/helpers.py: Add the directory in tests/data that contains the test images.
* tests/test_pipeline.py: New test. We:
  1. Run the pipeline.
  2. Create a corrected Timestream.
  3. Create a segmented Timestream.
  4. Create CSV feature output.
  5. Check creation of directories and images.
  6. Check creation of pickle files.
  7. Check internals from pickle files.
* tests/data: We include the new test directories from the data repository.
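On the _addCsv entry above, a minimal sketch of a repeated-header check; the helper name `has_repeated_headers` is hypothetical, not the actual derandui.py code:

```python
import csv

def has_repeated_headers(csv_path):
    # Hypothetical helper: read the header row and report whether any
    # column name appears more than once, in which case the upload
    # would be cancelled.
    with open(csv_path) as fh:
        headers = next(csv.reader(fh))
    return len(headers) != len(set(headers))
```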
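For the PCFGConfig.autocomplete entry, here is one plausible reading of the missing-component check as a hedged sketch; the function name, option keys, and structure are illustrative, not the actual configuration.py code:

```python
def check_time_range(opts):
    # Hypothetical validation: each of these settings only makes sense
    # as a complete pair, so specifying one half without the other
    # leaves us without enough data to autocomplete, and we error out.
    pairs = [("startDate", "endDate"), ("startRangeHour", "endRangeHour")]
    for first, second in pairs:
        if (opts.get(first) is None) != (opts.get(second) is None):
            raise ValueError(
                "Cannot autocomplete date/time: %s given without %s"
                % (first, second))
```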
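And for the read_image entry: WindowsError only exists as a builtin on Windows, so it has to be guarded. A sketch of the usual pattern, not necessarily the exact timestream/parse code:

```python
import skimage.io

# WindowsError is only defined on Windows; elsewhere we fall back to
# None so the tuple of catchable errors can be built conditionally.
try:
    _WINDOWS_ERROR = WindowsError
except NameError:
    _WINDOWS_ERROR = None

_READ_ERRORS = tuple(e for e in (IOError, OSError, _WINDOWS_ERROR)
                     if e is not None)

def read_image(path):
    """Return the image at `path`, or None if it cannot be read."""
    try:
        return skimage.io.imread(path)
    except _READ_ERRORS:
        return None
```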
This PR now has the tests/data update.
One more thing we need: add pyQT4 to the dependencies. See http://biojenkins.anu.edu.au/job/timestreamlib/default/98/console
OK, since Joel's gone, I'm going to merge this to a different branch and continue the PR there, so I can add commits to fix things.
The main push is for the pipeline test cases; it's pretty self-explanatory in the commit messages. We still need to address the fact that we can't push to tests/data because it's another git repository.
I suggest we put all the stuff in tests/data in this (timestreamlib) repo. The workflow for pushing a commit that touches these two repositories is just crazy: I have to fork the data repo, create a PR, and get it reviewed (separately, though they are linked), then merge in data. Only when that happens can I really finish the PR in timestreamlib by updating the data submodule pointer. Additionally, the reviews (of the same logic) happen in two different places, making it difficult to keep track of what is going on.