
Vectorizing particle filter and supporting changes. #76

Merged
merged 13 commits into dev from ng_to_nasa on Nov 5, 2021
Conversation

MikeAndSpencer (Collaborator)

Description

particle_filter_battery_example.py:

  • Added an example script that runs the particle filter for each time step on simulated battery data and returns a plot of the estimated particles for each time step

particle_filter.py

  • Added numpy modules to allow vectorization
  • Put default parameters inside the class
  • Included if isinstance of x0uncertainty being a dictionary
  • Calculated sum of log pdfs
  • Added numerical stability for calculating weights by subtracting the max log weight
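The vectorized weight update described above can be sketched as follows. This is an illustrative sketch only, not the actual prog_algs implementation; the function name and the `predicted`/`z`/`std` dict arguments are hypothetical stand-ins:

```python
import numpy as np

def update_weights(predicted: dict, z: dict, std: dict) -> np.ndarray:
    """Hypothetical sketch: predicted maps each output name to an array of
    per-particle predictions; z is the measurement; std is sensor noise."""
    n_particles = len(next(iter(predicted.values())))
    log_w = np.zeros(n_particles)
    for key in z:
        # Gaussian log-pdf per particle, summed across outputs
        # (equivalent to multiplying pdfs, but stays in the log domain)
        log_w += -0.5 * np.log(2 * np.pi * std[key] ** 2) \
                 - (predicted[key] - z[key]) ** 2 / (2 * std[key] ** 2)
    log_w -= log_w.max()   # shift so the largest log weight is 0
    w = np.exp(log_w)
    return w / w.sum()     # the shift cancels in the normalization
```

Because the whole update operates on numpy arrays, no per-particle Python loop is needed, and the max-subtraction keeps `np.exp` from underflowing to zero for all particles.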

unscented_kalman_filter.py

  • Included self.x0 inside the class

…filter example. See PR documentation for details.
@MikeAndSpencer MikeAndSpencer changed the title Vectorizing particle filter and supporting changes. Added a particle … Vectorizing particle filter and supporting changes. Oct 18, 2021
states = [list(range(10)), list(range(1, 11)), list(range(-1, 9))]
p = UnweightedSamplesPrediction(times, states)
p = Prediction(times, states)
Collaborator

Suggested change
p = Prediction(times, states)
p = UnweightedSamplesPrediction(times, states)

Here we'll want to use UnweightedSamplesPrediction to take advantage of the latest classes (it differentiates from MultivariateNormalDistPrediction).

@@ -40,6 +40,7 @@ def estimate(self, t, u, z):
e.g., z = {'t':12.4, 'v':3.3} given inputs = ['t', 'v']
"""
# REPLACE WITH UPDATE STATE ESTIMATION
pass
Collaborator

Suggested change
pass

Minor thing: Pass isn't needed here and was removed to keep CodeFactor from complaining.

@teubert teubert linked an issue Oct 21, 2021 that may be closed by this pull request
@teubert (Collaborator)

teubert commented Oct 28, 2021

There's some strange behavior I'm investigating when using the updated PF.

In the basic example, after only one iteration of the estimate function, all the samples are identical when using the PF. I've confirmed, looking at the underlying data, that every sample is identical. @MikeAndSpencer Do you see this behavior as well?

[Screenshot: 2021-10-28 4:05 PM]

@teubert left a comment (Collaborator)

This is great, thank you all so much! I completed my initial review.

I'm still looking into the strange behavior I observed in PF. I'm also readying a PR into this branch with some changes in UnweightedSamples to allow it to accept your new format for particles.

Let me know if you have any questions on my comments.

Thanks again!


self.default_parameters = {
Collaborator

What was your reason for moving the default_parameters to be a property of an instance instead of the class?

The pattern that we're using here is that default_parameters are set as a property of the class, and if you want to change the parameters for an instance you use filt.parameters (which is set in the parent (StateEstimator) initializer. This pattern is also used for models and predictors

Collaborator Author

From my understanding, we moved the default_parameters into the instance because we wanted the default parameters 'n' and 'x0_uncertainty' to be the process noise for the particle filter automatically, rather than having it as the same value for all the states.

Collaborator Author

Follow up. I think it's fine if we move default parameters back to a property of the class. I just think it adds some unnecessary complexity having to specify the process noise parameters explicitly because the default for the particle filter should be the model process noise.
Spencer

Collaborator Author

You can disregard these comments. Getting rid of 'n' and moving the default parameters up to a property of the class fixes the problem we ran into.

src/prog_algs/state_estimators/particle_filter.py (outdated comments resolved)
if isinstance(self.parameters['x0_uncertainty'], Number):
# Build array inplace
x = array(list(x0.values()))
if isinstance(self.parameters['x0_uncertainty'], dict):
Collaborator

Wow, good catch! I didn't realize we weren't supporting the dict x0_uncertainty format. I'd better add a test to make sure this doesn't happen again.
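The scalar-vs-dict handling under discussion can be sketched like this. The helper name `sample_x0` and its arguments are hypothetical, not the actual prog_algs code; it just illustrates the two branches:

```python
import numpy as np

def sample_x0(x0: dict, x0_uncertainty, n_particles: int, seed=None) -> dict:
    """Hypothetical sketch: draw initial particles around the mean state x0,
    where x0_uncertainty is either a single std (Number) or a per-state dict."""
    rng = np.random.default_rng(seed)
    keys = list(x0)
    means = np.array([x0[k] for k in keys], dtype=float)
    if isinstance(x0_uncertainty, dict):
        # Per-state standard deviation
        stds = np.array([x0_uncertainty[k] for k in keys], dtype=float)
    else:
        # A single Number applies the same std to every state
        stds = np.full(len(keys), float(x0_uncertainty))
    samples = rng.normal(means, stds, size=(n_particles, len(keys)))
    return {k: samples[:, i] for i, k in enumerate(keys)}
```

Under this shape, a test covering the dict branch only needs to pass per-state uncertainties and check the sampled spread per state.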

Collaborator Author

There's some strange behavior I'm investigating when using the updated PF.

In the basic example, after only one iteration of the estimate function, all the samples are identical when using the PF. I've confirmed, looking at the underlying data, that every sample is identical. @MikeAndSpencer Do you see this behavior as well?

[Screenshot: 2021-10-28 4:05 PM]

Good question. This doesn't happen when I run the particle filter for the electrochemical battery in the particle_filter_battery_example.py script, because there I set the measurement noise to defined values. The issue you are running into in the basic example is that the measurement noise is 0 (the default value given in prog_models). When the measurement noise is 0, the log weights go to -inf whenever a particle isn't exactly on the mean value. I was able to fix the problem in the basic example by adding default parameters for the measurement noise in the battery model, like so:

    # Measurement noise
    R_vars = {
        't': 2,
        'v': 0.02
    }
    batt = Battery(measurement_noise = R_vars)

I'll include that in a commit.
Spencer


self.default_parameters = {
'n': model.parameters['measurement_noise'], # Sensor Noise
Collaborator

Suggested change
'n': model.parameters['measurement_noise'], # Sensor Noise

I would rather remove this completely and require the user to set the measurement noise in the model

Collaborator

Below we would also have to replace self.parameters['n'] with self.model.parameters['measurement_noise']

Collaborator Author

Yeah, I agree with that. I think it makes more sense to get rid of 'n' here. That means we would also get rid of the checks for 'n' in the init.
Spencer

num_measurements = len(model.outputs)
points = kalman.MerweScaledSigmaPoints(num_states, alpha=self.parameters['alpha'], beta=self.parameters['beta'], kappa=self.parameters['kappa'])
self.filter = kalman.UnscentedKalmanFilter(num_states, num_measurements, self.parameters['dt'], measure, state_transition, points)
self.filter.x = array(list(x0.values()))
self.filter.x = array(list(self.x0.values())).ravel()
self.filter.P = self.parameters['Q'] / 10
Collaborator

What's the basis for this definition of P as 1/10 of Q?

src/prog_algs/state_estimators/unscented_kalman_filter.py (outdated comments resolved)
@teubert (Collaborator)

teubert commented Oct 29, 2021

Also, could you help me understand the reason for calculating the weights in the log-domain? Someone on my team shared a paper where this was done as well, but I can't say I follow the full reasoning behind it.

Thanks,
Chris

@teubert (Collaborator)

teubert commented Oct 29, 2021

Also, could you help me understand the reason for calculating the weights in the log-domain? Someone on my team shared a paper where this was done as well, but I can't say I follow the full reasoning behind it.

OK, my team explained the log-domain, but I don't understand the point of the shift (-max(log_weights). Could you explain that for me?

Thanks,
Chris

@teubert teubert linked an issue Oct 29, 2021 that may be closed by this pull request
@MikeAndSpencer (Collaborator Author)

Also, could you help me understand the reason for calculating the weights in the log-domain? Someone on my team shared a paper where this was done as well, but I can't say I follow the full reasoning behind it.

OK, my team explained the log-domain, but I don't understand the point of the shift (-max(log_weights). Could you explain that for me?

Thanks, Chris

Hey Chris, good question. We subtract the max log weight for numerical stability. Sometimes the log weights can be large negative values, and when you exponentiate such a value the computer rounds the result to 0 for most of the weights (sometimes all of them), which causes problems when trying to sample from the particles. We shift them up by the max log weight (essentially making the max log weight 0) to help avoid that problem. When we normalize the weights by dividing by the sum of all the weights, that constant cancels out. I can show you the math on our call if you want to see it cancel out. I can't find the paper off the top of my head, but we followed a method used in other particle filter libraries.
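The underflow being described here is easy to reproduce in a toy sketch (the values below are illustrative, not from the actual filter):

```python
import numpy as np

# Two particles with very negative log weights, one of them the best fit
log_w = np.array([-1000.0, -1001.0])

# Naive exponentiation: both underflow to exactly 0.0, so normalizing
# would divide by zero and resampling has nothing to work with
naive = np.exp(log_w)

# Shifted exponentiation: the max log weight becomes 0, so at least one
# weight is exactly 1.0 before normalization
shifted = np.exp(log_w - log_w.max())
weights = shifted / shifted.sum()

# The subtracted constant cancels in the normalization, so `weights`
# are the true normalized weights (~0.731 and ~0.269 here)
```

This is the same idea as the standard log-sum-exp trick: any constant added to every log weight multiplies every unnormalized weight by the same factor, which the normalization divides back out.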

Thanks, Spencer

SpencerREbert and others added 7 commits November 5, 2021 00:46
Code review changes for particle filter

Co-authored-by: Christopher Teubert <christopher.a.teubert@nasa.gov>
…the class, and fix basic_example measurement noise
UKF suggested changes

Co-authored-by: Christopher Teubert <christopher.a.teubert@nasa.gov>
Updated UnweightedSamples to support updated particles format
@teubert (Collaborator)

teubert commented Nov 5, 2021

Note for the record: /test-prog-models-released are expected to fail, as the dev branch of prog_algs is not compatible with the currently released version (i.e., they include breaking changes).

@teubert teubert merged commit 127d85c into dev Nov 5, 2021
@teubert teubert deleted the ng_to_nasa branch November 5, 2021 18:34

Successfully merging this pull request may close these issues.

  • Vectorize Particle Filter
  • Update particle filter to use measurement_noise from model
2 participants