This repository has been archived by the owner on May 4, 2021. It is now read-only.

Pipeline test and Misc fixes #150

Closed
wants to merge 10 commits from Joelgranados:next into borevitzlab:next
Conversation

Joelgranados
Contributor

Main push is for pipeline test cases. Pretty self-explanatory in the commit messages. We still need to address the fact that we can't push to tests/data because it's another git repository.

I suggest we put all the stuff in tests/data in this (timestreamlib) repo. The workflow for pushing a commit that touches both repositories is just crazy: I have to fork the data repo, create a PR, get it reviewed (separately, though they are linked), then merge in data. Only when that happens can I really finish the PR in timestreamlib by updating the data submodule pointer. Additionally, the reviews (of the same logic) happen in two different places, making it difficult to keep track of what is going on.

@coveralls

Coverage Status

Coverage increased (+6.78%) when pulling 21f78de on Joelgranados:next into 6adeb64 on borevitzlab:next.

@Joelgranados
Contributor Author

Notice that the build failure is probably due to the fact that it's not finding the data test directory. Let's please put all the test data in timestreamlib so we can avoid this extra step.

Additionally, the failure can also come from the dependencies in the pipeline test. This PR adds testing for the whole pipeline, so everything is used: opencv, skimage, numpy, scipy... If one of these is not correctly installed on the Travis test machine, it will blow up.

@@ -62,7 +62,7 @@ def genConfig(opts):
     tsConfPath = os.path.join(plConf.general.inputRootPath,
                               '_data', 'timestream.yml')
     if os.path.isfile(tsConfPath):
-        plConf.append(tsConfPath, 1)
+        plConf.append(tsConfPath, 2)
Member


Could you add a comment as to what this magic number is? Ta

Contributor Author


Sure, I'll add it. FYI, it's the depth of the configuration file. After 2 levels, PCFGConfig stops trying to make configuration sections and just handles everything as a dict or list.
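That depth behaviour can be sketched roughly as below. This is a hypothetical illustration, not PCFGConfig's actual code: the `Section` class and `build_section` function are made-up stand-ins for PCFGSection and the real parsing logic.

```python
class Section:
    """Minimal stand-in for a configuration section (not the real PCFGSection)."""
    def __init__(self, mapping):
        self._values = mapping

def build_section(node, depth):
    # Wrap nested dicts into Section objects only down to `depth` levels;
    # anything deeper is kept as a plain dict (or list, or scalar).
    if depth <= 0 or not isinstance(node, dict):
        return node
    return Section({k: build_section(v, depth - 1) for k, v in node.items()})

conf = {"general": {"metas": {"id": 1}}}
top = build_section(conf, 2)
# With depth = 2, `top` and top._values["general"] are Section objects,
# but the innermost {"id": 1} is left as a plain dict.
```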

@kdm9
Member

kdm9 commented Dec 3, 2014

This looks awesome man!! cheers.

Some small comments, but nothing major.

* util/derandui.py (DerandomizeGUI._addTS): Raise RuntimeError if ts has less
than 10 images.
* util/derandui.py (DerandomizeGUI._addCsv): We cancel the CSV upload if we find
repeated headers.
* timestream/manipulate/configuration.py (PCFGConfig.argNames): The dictionary
in the configuration file will be translated into a PCFGSection.
* timestream/manipulate/configuration.py (PCFGConfig.autocomplete): When we
don't have enough data to autocomplete date and time, we error out. This happens
when the user specifies {start,end}Date or {start,end}RangeHour and is missing
one component.
This is in order to be able to get to general.metas.id.
* scripts/run-pipeline.py (genConfig): Call append with depth = 2.
* timestream/manipulate/configuration.py (PCFGConfig.createSection): Include a
section as a dictionary only when the dictionary is indexed entirely by strings.
Create PCFGListSection for pipeline and outstreams only.
* scripts/run-pipeline.py (genContext): Subject.
* timestream/manipulate/pipecomponents.py (ResultFeatureWriter.__init__):
Subject. We add a "-" separator in the csv output as it is no longer in the
suffix.
* scripts/run-pipeline.py (genInputTimestream): Allow the user to specify
None for the start and end time specifications; this means all times are considered.
* timestream/parse/__init__.py (read_image): Add WindowsError to the possible
errors that can occur when reading an image. On platforms other than
Windows we use None.
* scripts/run_pipeline.py: Rename run-pipeline.py to run_pipeline.py to allow
`from run_pipeline import maincli`.
* scripts/run_pipeline.ui: Rename run-pipeline.ui for consistency.
* tests/helpers.py: Add directory in tests/data that contains the test images.
* tests/test_pipeline.py: New test. We:
    1. Run the pipeline
    2. Create a corrected Timestream
    3. Create a segmented Timestream
    4. Create CSV Feature output
    5. Check creation of directories and images
    6. Check creation of pickle files
    7. Check internals from pickle files.
* tests/data: We include the new test directories from the data repository.
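The read_image entry above describes a common cross-platform pattern: WindowsError only exists on Windows, so on other platforms a None placeholder is used and filtered out of the except clause. A minimal sketch of that idea follows; the function body and names are illustrative, not timestreamlib's actual implementation.

```python
# On non-Windows platforms the name WindowsError is undefined; fall back
# to None, as the changelog entry describes.
try:
    _win_error = WindowsError
except NameError:
    _win_error = None

# Build the except tuple, dropping the None placeholder on non-Windows.
_READ_ERRORS = tuple(exc for exc in (IOError, OSError, _win_error)
                     if exc is not None)

def read_image(path):
    """Return raw image bytes, or None if the file cannot be read."""
    try:
        with open(path, "rb") as fh:
            return fh.read()
    except _READ_ERRORS:
        return None
```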
@Joelgranados
Contributor Author

This PR now has the tests/data update

@coveralls

Coverage Status

Coverage increased (+6.78%) when pulling 7e735a4 on Joelgranados:next into 799b900 on borevitzlab:next.

@kdm9
Member

kdm9 commented Dec 8, 2014

One more thing we need: add PyQt4 to the requirements.txt file.

See http://biojenkins.anu.edu.au/job/timestreamlib/default/98/console

@kdm9
Member

kdm9 commented Dec 8, 2014

OK, since Joel's gone, I'm going to merge this to a different branch, and continue the PR there, so I can add commits to fix things.

@kdm9 kdm9 closed this Dec 8, 2014
@kdm9 kdm9 mentioned this pull request Dec 9, 2014