
Sciunit judge no longer works with static Models. How can I opt out of using the ProtocolToFeaturesTest? #235

Open
russelljjarvis opened this issue Feb 13, 2020 · 8 comments


@russelljjarvis
Contributor

russelljjarvis commented Feb 13, 2020

from neuronunit.optimisation.optimization_management import OptMan, TSD, dtc_to_rheo
from neuronunit.optimisation.data_transport_container import DataTC
from neuronunit.optimisation import get_neab
import dask.bag as db

test_frame = get_neab.process_all_cells()
tt = test_frame['Neocortex pyramidal cell layer 5-6']
tt.judge(SM)  # SM is a StaticModel instance built earlier in the notebook

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
6 test_frame = get_neab.process_all_cells()
7 tt = test_frame['Neocortex pyramidal cell layer 5-6']#
----> 8 tt.judge(SM)
9 #tt = switch_logic(tt)
10 #not_suite = TSD({t.name:t for t in tt.tests})

~/git/sciunit/sciunit/suites.py in judge(self, models, skip_incapable, stop_on_error, deep_error, parallel, log_norm)
161 for test in self.tests:
162 score = self.judge_one(model, test, sm, skip_incapable,
--> 163 stop_on_error, deep_error)
164 if log_norm:
165 if score.get_raw() != 0:

~/git/sciunit/sciunit/suites.py in judge_one(self, model, test, sm, skip_incapable, stop_on_error, deep_error)
202 score = test.judge(model, skip_incapable=skip_incapable,
203 stop_on_error=stop_on_error,
--> 204 deep_error=deep_error)
205 log('Score is ' % score.color()
206 + '%s' % score)

~/git/sciunit/sciunit/tests.py in judge(self, model, skip_incapable, stop_on_error, deep_error)
330 score.test = self
331 if isinstance(score, ErrorScore) and stop_on_error:
--> 332 raise score.score # An exception.
333 return score
334

~/git/sciunit/sciunit/tests.py in judge(self, model, skip_incapable, stop_on_error, deep_error)
319 else:
320 try:
--> 321 score = self._judge(model, skip_incapable=skip_incapable)
322 except CapabilityError as e:
323 score = NAScore(str(e))

~/git/sciunit/sciunit/tests.py in _judge(self, model, skip_incapable)
259
260 # 2.
--> 261 prediction = self.generate_prediction(model)
262 self.check_prediction(prediction)
263 self.last_model = model

~/git/sciunit/sciunit/tests.py in generate_prediction(self, model)
624 run_method = getattr(model, "run", None)
625 assert callable(run_method),
--> 626 "Model must have a run method to use a ProtocolToFeaturesTest"
627 self.setup_protocol(model)
628 result = self.get_result(model)

AssertionError: Model must have a run method to use a ProtocolToFeaturesTest
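
One way to opt out of ProtocolToFeaturesTest for a static model, as a sketch only, assuming ProtocolToFeaturesTest can be imported from sciunit.tests (the module shown in the traceback above) and reusing tt and SM from the snippet at the top:

from sciunit.tests import ProtocolToFeaturesTest

can_run = callable(getattr(SM, "run", None))  # the attribute the assertion checks for
scores = {}
for t in tt.tests:
    if isinstance(t, ProtocolToFeaturesTest) and not can_run:
        continue  # skip tests that need a runnable model
    scores[t.name] = t.judge(SM)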

@rgerkin
Contributor

rgerkin commented Feb 13, 2020

@russelljjarvis
I've just fixed this in the backend-refactor branch (the one with the open PR). So now I can do:

import quantities as pq
from neuronunit.models.static import StaticModel           # import paths assumed
from neuronunit.tests.passive import InputResistanceTest
m = StaticModel(vm)  # Make a static model based on this response
t = InputResistanceTest({'mean': 450*pq.MOhm, 'std': 50*pq.MOhm})
s = t.judge(m)

and get the expected score for a given vm, in the backend-refactor branch of this fork.
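
For reference, a minimal sketch of building such a vm, assuming StaticModel accepts a neo AnalogSignal (the values are illustrative only):

import numpy as np
import quantities as pq
from neo.core import AnalogSignal

# A flat -65 mV trace standing in for a recorded membrane-potential response.
vm = AnalogSignal(np.ones(10000) * -65.0, units=pq.mV, sampling_rate=10 * pq.kHz)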

Origin of the problem and its solution:
Yes, ProtocolToFeaturesTest requires a run method, and the easy way to provide one is to make StaticModel inherit from sciunit.models.runnable.RunnableModel instead of just sciunit.Model. But the other issue is that the test itself (e.g. InputResistanceTest) needs to call methods of the backend (like set_stop_time). Most of the backends have these, but of course the whole point of StaticModel is to not need any actual backend. I also noticed that methods like set_stop_time were defined in all of the backends without a reference Backend class defining their signatures. So I created an empty Backend class that StaticModel can use, which doesn't do anything at all (those methods are implemented as pass), consistent with the nature of StaticModel.
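
A minimal sketch of that idea (not the actual sciunit code; only the methods mentioned above are shown):

class EmptyBackend:
    """Do-nothing backend so StaticModel can satisfy the runnable-model interface."""
    def set_run_params(self, **run_params):
        pass  # nothing to configure; the response is already stored on the model
    def set_stop_time(self, t_stop):
        pass  # there is no simulation to stop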

So if this is a blocking issue for you, I propose that we merge that branch today in our meeting, and then address any new issues that arise from that.

@russelljjarvis
Contributor Author

With the newer development version of sciunit installed, I get:

RheobaseTest
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~/git/sciunit/sciunit/models/base.py in __getattr__(self, attr)
    108         try:
--> 109             result = super(Model, self).__getattribute__(attr)
    110         except AttributeError:

AttributeError: 'StaticModel' object has no attribute 'run_params'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
~/git/sciunit/sciunit/models/base.py in __getattr__(self, attr)
    111             try:
--> 112                 result = self._backend.__getattribute__(attr)
    113             except:

AttributeError: 'EmptyBackend' object has no attribute 'run_params'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-7-9eeb01b46f3a> in <module>
      8 for t in tt:
      9     print(t.name)
---> 10     score = t.judge(SM)
     11 #tt = switch_logic(tt)
     12 #not_suite = TSD({t.name:t for t in tt.tests})

~/git/sciunit/sciunit/tests.py in judge(self, model, skip_incapable, stop_on_error, deep_error)
    330                 score.test = self
    331         if isinstance(score, ErrorScore) and stop_on_error:
--> 332             raise score.score  # An exception.
    333         return score
    334 

~/git/sciunit/sciunit/tests.py in judge(self, model, skip_incapable, stop_on_error, deep_error)
    319         else:
    320             try:
--> 321                 score = self._judge(model, skip_incapable=skip_incapable)
    322             except CapabilityError as e:
    323                 score = NAScore(str(e))

~/git/sciunit/sciunit/tests.py in _judge(self, model, skip_incapable)
    259 
    260         # 2.
--> 261         prediction = self.generate_prediction(model)
    262         self.check_prediction(prediction)
    263         self.last_model = model

~/safe2/neuronunit/neuronunit/tests/fi.py in generate_prediction(self, model)
    103         # Method implementation guaranteed by
    104         # ProducesActionPotentials capability.
--> 105         self.condition_model(model)
    106         prediction = {'value': None}
    107         model.rerun = True

~/safe2/neuronunit/neuronunit/tests/fi.py in condition_model(self, model)
     97             self.params['tmax'] = 2000.0*pq.ms
     98         else:
---> 99             model.set_run_params(t_stop=self.params['tmax'])
    100 
    101     def generate_prediction(self, model):

~/git/sciunit/sciunit/models/runnable.py in set_run_params(self, **run_params)
     84     def set_run_params(self, **run_params):
     85         """Set run-time parameters, e.g. the somatic current to inject."""
---> 86         self.run_params.update(run_params)
     87         self.check_run_params()
     88         self._backend.set_run_params(**run_params)

~/git/sciunit/sciunit/models/base.py in __getattr__(self, attr)
    113             except:
    114                 raise AttributeError("Model %s has no attribute %s"
--> 115                                      % (self, attr))
    116         return result
    117 

AttributeError: Model None has no attribute run_params

@rgerkin
Contributor

rgerkin commented Feb 18, 2020

@russelljjarvis I don't know what you are doing to get this because you printed the traceback but not the code that produced it.

@rgerkin
Contributor

rgerkin commented Feb 18, 2020

@russelljjarvis OK, I think I figured it out based on the traceback. RheobaseTest does not inherit from TestPulseTest, so it needed a few bug fixes to work. I applied them in the rick branch of this repository, so check that out and it should work (or at least you shouldn't get that bug). For example, I can now run a RheobaseTest on a static model (whereas it resulted in an error before). Be aware that if you are running a RheobaseTest on a static model, you'll need a cached waveform for every value of current injected -- not sure that is actually possible?

I am assuming you were running a RheobaseTest since that is what was printed at the very top of the stack trace.
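
For context, a hedged sketch of running a RheobaseTest against a static model on that branch (the import path follows the traceback above; the observation values are made up):

import quantities as pq
from neuronunit.tests.fi import RheobaseTest

rt = RheobaseTest(observation={'mean': 130 * pq.pA, 'std': 20 * pq.pA})
score = rt.judge(SM)  # SM is the StaticModel instance from the earlier snippets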

@russelljjarvis
Contributor Author

@rgerkin
Contributor

rgerkin commented Feb 18, 2020

@russelljjarvis Does the fix in the rick branch work for you? I am able to run most of your notebook above with that branch, although I needed to make a few changes in the notebook:

Specific to me:

  • import os; os.environ['NEURON_HOME'] = '/opt/conda' # The path to my NEURON installation
  • In terminal: cd ~/neuronunit/neuronunit/models/NeuroML2; nrnivmodl

General

  • Replace all calls to set_params(var) with set_params(**var) (set_params takes key-value pairs, so a dict must be passed with **).
  • Replace all calls to model.inject_square_current(iparams) with model.inject_square_current(iparams['injected_square_current']) (inject_square_current expects just the injected_square_current dict, not the whole dict of all params). Both fixes are sketched below, after this list.
  • Not sure where model.results['sim_time'] is coming from, so I just got rid of all of those.
  • Several cells need to be reordered since earlier cells use variables created in later cells.
  • Fixes required in get_neab.
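
A hedged sketch of the two general fixes above (model, params, and iparams stand in for the objects already built in the notebook; the values are illustrative only):

import quantities as pq

params = {'C': 89.7, 'vr': -65.0}   # illustrative parameter dict
model.set_params(**params)          # was: model.set_params(params)

iparams = {'injected_square_current': {'amplitude': 100 * pq.pA,
                                       'delay': 100 * pq.ms,
                                       'duration': 500 * pq.ms}}
model.inject_square_current(iparams['injected_square_current'])  # was: ...(iparams)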

I'll push a new version of the notebook to my branch soon.

@rgerkin
Contributor

rgerkin commented Feb 18, 2020

Actually, I cannot finish because of an issue in get_neab. You wouldn't have encountered it because you have the all_tests.p pickle file, but I cannot reconstruct it because of this line, which I don't understand, and which raises an Exception (you are trying to call a dictionary as a function). I see that you are trying to build a list of observations and tests. I get the observations (lists of NeuroElectroSummary objects for each cell), but I'm not sure whether you are actually trying to instantiate tests here or what.

In any case, if you want to move forward, you can apply the same fixes as above in my last message, on the rick branch (which has all of your commits from master) and see if it works for you, since you already have all_tests.p.

@russelljjarvis
Contributor Author

The method get_neab.process_all_cells() seems to be the only method that runs without failing. Even then, it can't access every cell data set it was designed to obtain.

I put in a link to the wrong file earlier; the correct notebook is:
https://github.com/russelljjarvis/NeuronunitOpt/blob/master/neuronunit/unit_test/working/NeuroML2_HH.ipynb

I will test out the rick branch later tonight.
