
Write doctests for muscle_model #36

Open · 2 tasks
travs opened this issue Jun 21, 2015 · 16 comments
travs commented Jun 21, 2015

  • Make sure examples in each of the README files exist and are being tested
  • Integrate with TravisCI

cheelee commented Jun 21, 2015

All the tests in this repository are currently non-Python invocations, dependent on two executables: pynml and jnml (Java). The question is how we should approach this:

  1. Wrap the non-Python test calls in Python (this may not work, because the jnml tests actually open a Java GUI window on which they draw an image).
  2. Write a consistent interface for testing that works with Python doctest as well as the non-Python invocations (a sketch follows below).
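
A minimal sketch of option 2, assuming pynml supports headless runs (the helper name and the doctest +SKIP guard are mine, not anything in the repo):

import subprocess

def run_tool(*args):
    """Run an external command and return its exit code.

    The doctest is skipped by default since it needs pynml on the
    PATH; the LEMS file name is just an example from this repo.

    >>> run_tool('pynml', '-nogui', 'LEMS_Figure2A.xml')  # doctest: +SKIP
    0
    """
    return subprocess.call(list(args))

if __name__ == '__main__':
    import doctest
    doctest.testmod()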


cheelee commented Jun 22, 2015

The installation instructions in both muscle_model/README.md and muscle_model/NeuroML2/README.md, covering the binaries needed to run the examples in the repository, have now been completed and verified on my Mac OS X Yosemite machine.

Points of note:

  1. On Yosemite (this needs testing on other platforms), the instruction "pip install lxml" should be "STATIC_DEPS=true pip install lxml".
  2. My prior comment on testing infrastructure via Python doctest stands. In particular, muscle_model/NeuroML2/analyse_k_fast.sh will generate 6 (?) different images for its single experiment. This raises the question of how we can automate checking that each of these images is "correct" with respect to the experiment parameters encapsulated by the script (one fragile option is sketched below).
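
For illustration only, the crudest possible check, assuming reference images are committed to the repository (the function and paths are hypothetical): a byte-level hash comparison. It flags any regression, but it also breaks whenever the plotting backend changes its output, which hints at why image "correctness" is hard to automate.

import hashlib

def same_image(generated, reference):
    """Compare two image files byte-for-byte via MD5 digests.

    Fragile: any change in fonts, compression, or metadata makes
    this fail even when the plot is scientifically identical.
    """
    def digest(path):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()
    return digest(generated) == digest(reference)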


travs commented Jun 23, 2015

@cheelee
Thanks for that; I've also noted some installation issues with lxml, and as this subproject matures a unified installation script will become more necessary.

#29 deals with section 2 of the README, and I believe @net239 took a shot at this once. This could certainly be used/adapted to fit into a unified script as well.

As for the doctest issue, what you say is true. Perhaps we could test that the script has an exit code of 0? I think comparing the plots would be unnecessarily difficult at this stage.


travs commented Jul 3, 2015

@cheelee Relative to the discussion we're having over here: if we change those jnml calls to pynml, we can pass pynml -nogui and the command will not produce GUI windows, but will still run.

I think we can then do something like

import subprocess

def a_test():
    # Run the LEMS file headlessly and check for a clean exit.
    exit_code = subprocess.call(['pynml', '-nogui', 'LEMS_Figure2A.xml'])
    assert exit_code == 0

We just have to be sure that the arguments in that call are updated dynamically from the README, rather than hard-coded into the testing script; a rough sketch of that idea follows.
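
A sketch of what "dynamically from the README" might look like, assuming the commands live in fenced code blocks of README.md (the path, the fence convention, and both function names are assumptions):

import re
import subprocess

def commands_from_readme(path='README.md'):
    """Pull shell commands out of fenced code blocks in a README."""
    with open(path) as f:
        text = f.read()
    blocks = re.findall(r'```(?:bash|sh)?\n(.*?)```', text, re.DOTALL)
    return [line.strip()
            for block in blocks
            for line in block.splitlines()
            if line.strip() and not line.strip().startswith('#')]

def test_readme_commands():
    for cmd in commands_from_readme():
        assert subprocess.call(cmd.split()) == 0, 'failed: %s' % cmd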

There is also this module which is more fine-grained than what I am suggesting above.

What do you think of these approaches, and would you be comfortable taking on either?


cheelee commented Jul 4, 2015

@travs Sure thing. I should be able to try some of that out and see what I find before the Hackathon Sunday.


pgleeson commented Jul 7, 2015

Do have a look at the testing that's already being run with OMV.

This framework installs jNeuroML on Travis and runs the tests in .test.* files. For example, .testA.omt runs LEMS_Figure2A.xml with jnml; .test.validate.omt validates the NML files; and .test.nm.omt runs LEMS_NeuronMuscle.xml and checks that the recorded voltage traces have spike times as laid out in .test.nm.mep (a rough sketch of the .omt format follows).
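
For context, OMV's .omt files are short YAML descriptions of a test. A sketch of what .testA.omt might contain, reconstructed from the description above rather than copied from the repository (treat the keys as assumptions and check the OMV docs):

# Sketch of .testA.omt -- not the actual file
target: LEMS_Figure2A.xml
engine: jNeuroML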


cheelee commented Jul 9, 2015

@pgleeson Cool, I'll check this out! My current script encapsulates the java command in a Python script, but it is a little clunky, and the generated output includes timing information from the subprocess call, which makes it hard to validate by output comparison alone.

Nymeria:NeuroML2 cheelee$ cat pynml_test.py
import subprocess
import sys

# Defaults: run pynml headlessly on the given input file.
var_command = 'pynml'
var_gui = '-nogui'
var_input = ''

def a_test(avar_command, avar_gui, avar_input):
    # Invoke the external tool and assert that it exits cleanly.
    if avar_gui == '':
        exit_code = subprocess.call([avar_command, avar_input])
    else:
        exit_code = subprocess.call([avar_command, avar_gui, avar_input])
    assert exit_code == 0

if len(sys.argv) < 2:
    print('Usage: ' + sys.argv[0] + ' <input file> [ withgui ] [ <alt exec tool> ]\n')
    sys.exit(-1)
else:
    var_input = str(sys.argv[1])
    if len(sys.argv) > 2:
        var_gui = ''  # any second argument enables the GUI
    if len(sys.argv) > 3:
        var_command = str(sys.argv[3])
    a_test(var_command, var_gui, var_input)
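
For reference, the script above runs as, e.g., python pynml_test.py LEMS_Figure2A.xml for a headless check; any second argument (e.g. withgui) lets the GUI open, and a third argument substitutes an alternative executable for pynml.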


travs commented Jul 26, 2015

@cheelee Let's chat about this one at the hackathon tomorrow!


cheelee commented Jul 26, 2015

Hey sure thing! I had forgotten about this! This is definitely hackathon material. Sorry about that!


slarson commented Jan 13, 2016

Hi @cheelee and @travs -- we were having a chat with @brijeshm39 and @VahidGh today about next steps on the muscle model after integrating #55 and this issue came up as a logical next step. If you guys have any desire to keep moving on this, reply here. Otherwise we're going to see what we can do next to move this forward. Thanks!


cheelee commented Jan 14, 2016

Hi @slarson I still have an interest in contributing to the project again, but I'm also knee-deep in trying to get my divorce settled for good and in interviewing for a teaching position with SUNY. So, as far as I am concerned, you guys should go ahead and move forward on this issue, and I'll keep my eyes on it and try to keep up. Thanks!

I'm so sorry about all of this, but my ability to focus has been severely limited over the last few months dealing with these issues and the associated bouts of depression that I have to fight through at the same time.

@souravsingh

@cheelee I am interested in working on the issue.


cheelee commented Nov 7, 2016

@souravsingh You are most welcome to! I have lost track of this issue's context after leaving it alone for a while, but I'll see if I can package it in a way that helps you get started.

Are you already a contributor? If not, you can get in touch by filling out the form here - http://docs.openworm.org/en/0.9/#contributing-to-openworm and we can help you get hooked up. Thanks!

@souravsingh

Thanks @cheelee. I have filled out the form, but at the end I was asked to schedule a meeting with @slarson and I couldn't find a day for the meeting.


cheelee commented Nov 7, 2016

@souravsingh Not a problem! I'm in touch with him regularly, we'll work something out and then get back to you real soon! Stay tuned and thanks!

@pgleeson

@souravsingh The scripts in the NeuroML2 folder are well tested at the moment, with OMV tests and additional scripts run from the .travis.yml file as part of the NON_OMV_TESTS.

Note that many of the other directories contain obsolete code (see their READMEs), kept for information purposes only.

To finish this off, what's required is to add some tests for the BoyleCohen2008 code. This would be:
