Vectorizing particle filter and supporting changes. #76
Conversation
…filter example. See PR documentation for details.
tests/test_predictors.py
Outdated
    states = [list(range(10)), list(range(1, 11)), list(range(-1, 9))]
    p = UnweightedSamplesPrediction(times, states)
    p = Prediction(times, states)
    p = Prediction(times, states)
    p = UnweightedSamplesPrediction(times, states)
Here we'll want to use UnweightedSamplesPrediction to take advantage of the latest classes (it differentiates it from MultivariateNormalDistPrediction).
state_estimator_template.py
Outdated
    @@ -40,6 +40,7 @@ def estimate(self, t, u, z):
        e.g., z = {'t':12.4, 'v':3.3} given inputs = ['t', 'v']
        """
        # REPLACE WITH UPDATE STATE ESTIMATION
        pass
    pass
Minor thing: pass isn't needed here and was removed to keep CodeFactor from complaining.
Changes suggested in comments Co-authored-by: Christopher Teubert <christopher.a.teubert@nasa.gov>
There's some strange behavior I'm investigating when using the updated PF. In the basic example, using the PF, after only one iteration of the estimate function all the samples are identical. I've confirmed by looking at the underlying data that every sample is identical. @MikeAndSpencer Do you see this behavior as well?
This is great, thank you all so much! I completed my initial review.
I'm still looking into the strange behavior I observed in PF. I'm also readying a PR into this branch with some changes in UnweightedSamples to allow it to accept your new format for particles.
Let me know if you have any questions on my comments.
Thanks again!
    self.default_parameters = {
What was your reason for moving the default_parameters to be a property of an instance instead of the class?
The pattern that we're using here is that default_parameters are set as a property of the class, and if you want to change the parameters for an instance you use filt.parameters (which is set in the parent (StateEstimator) initializer). This pattern is also used for models and predictors.
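The class-attribute pattern described here can be sketched roughly as follows. This is a simplified illustration, not the actual prog_algs code: the merge logic is condensed and 'num_particles' is a made-up parameter name.

```python
# Rough sketch of the described pattern; the real classes have more machinery.
class StateEstimator:
    default_parameters = {}  # class-level defaults, shared by all instances

    def __init__(self, **kwargs):
        # Per-instance overrides are merged on top of the class defaults,
        # so writing to self.parameters never mutates the shared defaults.
        self.parameters = {**self.default_parameters, **kwargs}

class ParticleFilter(StateEstimator):
    default_parameters = {'num_particles': 100}

pf = ParticleFilter(num_particles=250)
print(pf.parameters['num_particles'])                  # 250
print(ParticleFilter.default_parameters['num_particles'])  # 100, untouched
```

The benefit is that the defaults live in one discoverable place per class, while each instance still gets its own independent parameter dict.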
From my understanding, we moved the default_parameters into the instance because we wanted the defaults for 'n' and 'x0_uncertainty' to be the particle filter's process noise automatically, rather than having the same value for all the states.
Follow-up: I think it's fine if we move default_parameters back to a property of the class. I just think having to specify the process noise parameters explicitly adds unnecessary complexity, because the default for the particle filter should be the model process noise.
Spencer
You can disregard these comments. Getting rid of 'n' and moving the default parameters up to a property of the class fixes the problem we ran into.
    if isinstance(self.parameters['x0_uncertainty'], Number):
        # Build array inplace
        x = array(list(x0.values()))
    if isinstance(self.parameters['x0_uncertainty'], dict):
Wow, good catch! I didn't realize we weren't supporting the dict x0_uncertainty format. I'd better add a test to make sure this doesn't happen again.
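A minimal sketch of supporting both formats, for reference. build_x0_scale is a hypothetical helper (not the actual prog_algs code), and the state names 'tb'/'qb' are made up for illustration:

```python
from numbers import Number
import numpy as np

def build_x0_scale(x0, x0_uncertainty):
    """Hypothetical helper: expand x0_uncertainty to one value per state.

    Accepts either a single Number applied uniformly to every state, or
    a dict keyed by state name, mirroring the two isinstance branches.
    """
    if isinstance(x0_uncertainty, Number):
        return np.full(len(x0), float(x0_uncertainty))
    if isinstance(x0_uncertainty, dict):
        return np.array([float(x0_uncertainty[key]) for key in x0])
    raise TypeError("x0_uncertainty must be a Number or a dict")

x0 = {'tb': 292.1, 'qb': 7856.3}
print(build_x0_scale(x0, 0.5))                       # uniform: [0.5 0.5]
print(build_x0_scale(x0, {'tb': 1.0, 'qb': 100.0}))  # per-state: [1. 100.]
```

A regression test would just assert both branches return the same shape.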
> There's some strange behavior I'm investigating when using the updated PF. In the basic example, using the PF, after only one iteration of the estimate function all the samples are identical. I've confirmed by looking at the underlying data that every sample is identical. @MikeAndSpencer Do you see this behavior as well?
Good question. This doesn't happen when I run the particle filter for the electrochemical battery in the particle_filter_battery_example.py script, because there I set the measurement noise to defined values.

The issue you're running into in the basic example is that the measurement noise is 0 (the default value given in prog_models). When the measurement noise is 0, the log weights go to -inf for any particle that isn't exactly on the mean value. I was able to fix the problem in the basic example by adding default parameters for the measurement noise in the battery model, like so:

    # Measurement noise
    R_vars = {
        't': 2,
        'v': 0.02
    }
    batt = Battery(measurement_noise = R_vars)
I'll include that in a commit.
Spencer
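To illustrate the failure mode Spencer describes, here's a standalone sketch (not code from the PR) of a 1-D Gaussian log-likelihood, which is what drives a particle's log-weight toward -inf as the measurement noise shrinks toward 0:

```python
import numpy as np

def gaussian_log_likelihood(z, predicted, sigma):
    # Log-density of a 1-D Gaussian measurement model. As sigma -> 0,
    # this diverges to -inf for any particle whose predicted output
    # differs from the measurement z, so only exact matches survive
    # resampling -- the "all samples identical" symptom.
    return -0.5 * ((z - predicted) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# With a realistic noise level, an off-mean particle keeps a finite weight:
print(gaussian_log_likelihood(3.30, 3.31, 0.02))
# As sigma shrinks, the same particle's log-weight plummets, and exp() of
# it underflows to exactly 0:
print(gaussian_log_likelihood(3.30, 3.31, 1e-6))
```

With sigma exactly 0 the division is undefined, which is why the fix above sets a small but nonzero measurement noise instead.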
    self.default_parameters = {
        'n': model.parameters['measurement_noise'], # Sensor Noise
    'n': model.parameters['measurement_noise'], # Sensor Noise
I would rather remove this completely and require the user to set the measurement noise in the model
Below we would also have to replace self.parameters['n'] with self.model.parameters['measurement_noise']
Yeah, I agree with that. I think it makes more sense to get rid of 'n' here. That means we would also get rid of the checks for 'n' in the init.
Spencer
    num_measurements = len(model.outputs)
    points = kalman.MerweScaledSigmaPoints(num_states, alpha=self.parameters['alpha'], beta=self.parameters['beta'], kappa=self.parameters['kappa'])
    self.filter = kalman.UnscentedKalmanFilter(num_states, num_measurements, self.parameters['dt'], measure, state_transition, points)
    self.filter.x = array(list(x0.values()))
    self.filter.x = array(list(self.x0.values())).ravel()
    self.filter.P = self.parameters['Q'] / 10
What's the basis for this definition of P as 1/10 of Q?
Also, could you help me understand the reason for calculating the weights in the log domain? Someone on my team shared a paper where this was done as well, but I can't say I follow the full reasoning behind it. Thanks,
OK, my team explained the log domain, but I don't understand the point of the shift (subtracting max(log_weights)). Could you explain that for me? Thanks,
Hey Chris, good question. We subtract the max log weight for numerical stability. Sometimes the log weights are large negative values, and when you exponentiate them the computer rounds most of the weights (sometimes all of them) to 0, which causes problems when trying to sample from the particles. Shifting them up by the max log weight (essentially making the max log weight 0) helps avoid that problem. When we normalize the weights by dividing by the sum of all the weights, that constant cancels out. I can show you the math in our call if you want to see it canceled out. I can't find the paper off the top of my head, but we followed a method used in other particle filter libraries. Thanks, Spencer
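The cancellation Spencer describes can be shown in a few lines. This is a generic numpy sketch of the max-shift trick, not the PR's actual implementation:

```python
import numpy as np

def normalize_log_weights(log_weights):
    # Shift so the largest log-weight becomes 0 before exponentiating.
    # The shift multiplies every weight by the same constant exp(-max),
    # which cancels when we divide by the sum during normalization.
    shifted = log_weights - np.max(log_weights)
    weights = np.exp(shifted)
    return weights / np.sum(weights)

log_w = np.array([-1000.0, -1001.0, -1002.0])

# Naive exponentiation underflows every weight to 0 -- nothing to sample:
print(np.exp(log_w).sum())  # 0.0

# The shifted version recovers the correct relative weights
# (consecutive ratios of e, since the log-weights differ by 1):
print(normalize_log_weights(log_w))
```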
Code review changes for particle filter Co-authored-by: Christopher Teubert <christopher.a.teubert@nasa.gov>
…the class, and fix basic_example measurement noise
UKF suggested changes Co-authored-by: Christopher Teubert <christopher.a.teubert@nasa.gov>
Updated UnweightedSamples to support updated particles format
Note for the record: /test-prog-models-released are expected to fail, as the dev branch of prog_algs is not compatible with the currently released version (i.e., they include breaking changes).
Description
particle_filter_battery_example.py:
particle_filter.py
unscented_kalman_filter.py