Preliminary support for multi-objective optimization #38
Conversation
Do you want feedback from me? If so, November OK?
Thanks, but I think @romainbrette will have a look, it's a feature that he needs :)
It looks nice, but we need to be able to use `refine` (gradient descent, right?). It would seem OK to combine them into a single error, actually, don't you think? (e.g. just have an L2 metric that is a weighted sum of the errors on both variables.) Then another issue is that the behavioral variable is measured at 30 Hz while the electrophysiology is at 40 kHz. A simple trick, of course, is to upsample the behavioral variable; that would work to some extent but it's not ideal. I suppose it's a matter of defining a metric that is applied on a series of time points (the trigger times of the camera). Does that make it complicated for the gradient descent, maybe?
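The two ideas above can be sketched in plain NumPy. This is a hypothetical illustration, not code from the PR: the sampling rates match the comment, but the signal shapes, the `combined_l2` helper, and the weights are made up for the example.

```python
import numpy as np

# (1) Upsample a slow behavioural trace to the electrophysiology time
# base (the "simple trick"), and (2) combine the per-variable errors
# into a single weighted L2 error.

fs_ephys = 40_000          # electrophysiology sampling rate (Hz)
fs_cam = 30                # camera sampling rate (Hz)
duration = 1.0             # seconds

t_ephys = np.arange(0, duration, 1 / fs_ephys)
t_cam = np.arange(0, duration, 1 / fs_cam)

v_target = np.sin(2 * np.pi * 5 * t_ephys)      # stand-in for the membrane potential
behav_target = np.cos(2 * np.pi * 1 * t_cam)    # stand-in behavioural trace

# Linear interpolation onto the fine grid
behav_upsampled = np.interp(t_ephys, t_cam, behav_target)

def combined_l2(v_sim, behav_sim, w_v=1.0, w_b=1.0):
    """Weighted sum of the mean squared errors on both variables."""
    err_v = np.mean((v_sim - v_target) ** 2)
    err_b = np.mean((behav_sim - behav_upsampled) ** 2)
    return w_v * err_v + w_b * err_b
```

A perfect fit (`combined_l2(v_target, behav_upsampled)`) gives an error of zero; any deviation in either variable increases the combined error, which is what makes a single-valued metric usable by the optimizer.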
I feel like there are a few different issues here: yes, I want to make `refine` work with multiple variables as well.

Regarding the sampling: this is a bit trickier. Currently, there is only a global `dt`, both for the simulation and the input/output variables. This is already too restrictive even for single variables, e.g. you might want to simulate with a smaller time step than your recordings for numerical reasons. The limitation that all `dt`s are the same is currently hardcoded, but you are right that we could leave it up to the metric to deal with this.
Yes, it was indeed for the gradient descent with symbolic calculation, that's the one that really works well.
It's still a bit rough and needs more error checking, testing, etc.
Great! I'll try it soon.
Each objective should use its own metric with a normalization
As discussed with @romainbrette, each objective now uses its own metric with a normalization, e.g.:

```python
metric_v = MSEMetric(t_start=5*ms, normalization=10*mV)
metric_m = MSEMetric(t_start=5*ms, normalization=0.1)
```

I think this is fairly intuitive: it means that a 10 mV difference in the membrane potential should be comparable to a 0.1 difference in the gating variable.

Another recent change: the individual errors of each objective are now accessible as part of the data structures returned by `fit`.

From my side, this finishes the basic work on the features for this PR. If no one runs into further issues, I'll clean up the code and add tests and documentation.
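A quick numeric check of the normalization idea. This is my own illustration, not code from brian2modelfitting: the variable names and the convention of dividing by the normalization before squaring are assumptions made for the example.

```python
# Dividing each deviation by its normalization before squaring makes a
# 10 mV error in v and a 0.1 error in m contribute equally to the
# total error, so the two objectives become directly comparable.

norm_v = 10e-3   # normalization for v: 10 mV, expressed in volts
norm_m = 0.1     # normalization for the dimensionless gating variable m

dev_v = 10e-3    # a constant 10 mV deviation of the membrane potential
dev_m = 0.1      # a constant 0.1 deviation of the gating variable

err_v = (dev_v / norm_v) ** 2   # normalized squared error for v
err_m = (dev_m / norm_m) ** 2   # normalized squared error for m
```

Both `err_v` and `err_m` come out as 1.0 here, which is the point: after normalization the units disappear and the errors live on a common scale.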
It's nice. I wonder whether it would then make sense to use dictionaries instead of lists, e.g. in `TraceFitter` we have `output_var` and `output`; it could simply be `output = {'v': ..., 'v2': ...}`. In `fit()`, `metrics` is a list.
I had the same thought, but wondered whether there are maybe some use cases like using both an `MSEMetric` and a `FeatureMetric` for the same variable.
Ah, I didn't think about that. Wouldn't it be better in this case to have a combined metric class instead? (`CombinedMetric(MSEMetric, FeatureMetric)`, or even `MSEMetric + FeatureMetric` with operator overloading.)
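The operator-overloading idea could look something like the following. This is a hypothetical sketch, not the actual brian2modelfitting API: the `Metric` base class, the `calc` signature, and `MaxDeviationMetric` are all invented for the illustration.

```python
class Metric:
    """Minimal metric interface; real metrics would take traces/targets
    with units and richer options."""

    def calc(self, traces, target):
        raise NotImplementedError

    def __add__(self, other):
        # metric_a + metric_b builds a CombinedMetric summing both errors
        return CombinedMetric(self, other)


class CombinedMetric(Metric):
    def __init__(self, *metrics):
        self.metrics = metrics

    def calc(self, traces, target):
        return sum(m.calc(traces, target) for m in self.metrics)


class MSEMetric(Metric):
    def calc(self, traces, target):
        return sum((t - g) ** 2 for t, g in zip(traces, target)) / len(traces)


class MaxDeviationMetric(Metric):
    def calc(self, traces, target):
        return max(abs(t - g) for t, g in zip(traces, target))


metric = MSEMetric() + MaxDeviationMetric()   # operator overloading
error = metric.calc([1.0, 2.0], [1.0, 3.0])   # MSE 0.5 + max deviation 1.0
```

One design question this leaves open is weighting: `__add__` here sums the errors with equal weight, whereas a `CombinedMetric(..., weights=...)` constructor could expose per-metric weights explicitly.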
Yes, I think something like this would make more sense.
Instead, use dictionaries
# Conflicts:
#	brian2modelfitting/fitter.py
[ci skip]
I think this is good enough to merge as it is now. @romainbrette I hope I did not break anything for you with my latest changes? Anything else (related to multiobjective fitting) that I forgot to implement? |
It runs! |
Great, thanks for checking. Your current code probably gets a warning if you use a list of metrics; this should now be a dictionary (like `output`).
The details of the syntax are still unclear, but here's a first attempt to get multi-objective optimization working, i.e. to make it possible to fit several variables at the same time (e.g. the membrane potential and a behavioural variable). This comes with a lot of restrictions at the moment, e.g. both variables have to be recorded with the same time step, but it should basically work. The main approach is the following (see examples/multiobjective.py for a full example):

- In `TraceFitter`, specify a list of fitted variables (`output_var`) and a list of target traces (`output`), instead of a single one.
- In `TraceFitter.fit`, provide either a single `Metric` (which will be used for both traces) or a list of `Metric` objects (this would allow you for example to ignore different parts of the respective traces by specifying `t_start` or `t_weights`), and a list/array of `metric_weights` that combines the two errors into a single one.

The combination of the errors currently ignores units, I don't quite know what an elegant way of handling this would be. Also note that:

- `TraceFitter.refine` does not work with more than one fitted variable

There seems to be some preliminary/work-in-progress support for multi-objective optimization in Nevergrad itself (see its documentation), but it also simply seems to combine the error into a single error, so not sure that this is very helpful.
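The dictionary-based interface that the discussion converged on can be sketched without brian2 at all. This is a hypothetical reduction to plain NumPy: the `fit_errors` helper and the toy traces are mine, only the idea of dictionaries for `output`, `metrics`, and per-variable weights comes from the thread.

```python
import numpy as np

# Target traces, one entry per fitted variable
output = {
    'v': np.array([0.0, 1.0, 0.0]),
    'behaviour': np.array([1.0, 1.0, 1.0]),
}

# One metric per variable (here: plain MSE for both)
metrics = {
    'v': lambda sim, tgt: np.mean((sim - tgt) ** 2),
    'behaviour': lambda sim, tgt: np.mean((sim - tgt) ** 2),
}

# Per-variable weights combining the errors into a single scalar
metric_weights = {'v': 1.0, 'behaviour': 0.5}

def fit_errors(simulated):
    """Per-variable errors plus the weighted total seen by the optimizer."""
    errors = {name: metrics[name](simulated[name], output[name])
              for name in output}
    total = sum(metric_weights[name] * errors[name] for name in output)
    return errors, total

sim = {'v': np.array([0.0, 1.0, 0.0]),
       'behaviour': np.array([0.0, 0.0, 0.0])}
errors, total = fit_errors(sim)
```

Returning both the per-variable `errors` and the weighted `total` mirrors the change mentioned above, where the individual errors of each objective became accessible alongside the combined error used for optimization.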