Importance sampling paper


Overleaf

Story

We address a ubiquitous problem with machine learning (ML) models in high-accuracy inference applications in science. ML methods based on deep neural networks are often far more efficient than their traditional (likelihood-based) counterparts, but their failure modes are less predictable and their accuracy is hard to monitor. We combine a likelihood-free inference approach with importance sampling to obtain exact inference: this corrects potentially inaccurate results and provides comprehensive performance metrics. Applied to GW inference, this gives us the best of both worlds: the efficiency and speed of ML methods together with the accuracy and interpretable diagnostics of classical methods.
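A minimal sketch of the core reweighting step, assuming hypothetical callables log_likelihood(theta) and log_prior(theta) and a neural proposal q with sample() / log_prob() methods (none of these names are from the actual codebase):

```python
# Sketch only: reweight samples from an ML proposal q(theta|d) with the
# exact likelihood-based target p(d|theta) p(theta).
import numpy as np

def importance_sample(q, log_likelihood, log_prior, n_samples=10_000):
    theta = q.sample(n_samples)                      # proposal samples
    log_w = (log_likelihood(theta) + log_prior(theta)
             - q.log_prob(theta))                    # unnormalized log weights

    # Normalize the weights in a numerically stable way.
    log_w -= log_w.max()
    w = np.exp(log_w)
    w /= w.sum()

    # Effective sample size: the main diagnostic for proposal quality.
    ess = 1.0 / np.sum(w ** 2)
    return theta, w, ess / n_samples                 # ESS as a fraction
```

The normalized weights turn the proposal samples into weighted posterior samples, and the ESS fraction is the headline diagnostic quoted under Results.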

Contributions

  • Conceptual contribution (see above; possibly also a generic method to improve/evaluate MCMC samples?)
  • GW inference: bridge the gap between ML and classical methods; a big step towards using ML in practice
  • First low-latency demonstration of GW inference with IMRPhenomXPHM and SEOBNRv4PHM (takes 6 months with MCMC!)
  • Evidence estimates with much smaller uncertainty than classic methods (see the sketch after this list)
  • Potentially better coverage of multimodal posteriors
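
As a rough illustration of why the importance-sampling evidence comes with a quantifiable (and typically small) uncertainty, here is a minimal sketch using the same unnormalized log weights as above (all names are illustrative, not from the codebase):

```python
# Sketch only: evidence estimate Z ~ mean(w) and its relative standard
# error, computed from unnormalized log importance weights.
import numpy as np

def log_evidence(log_w):
    n = len(log_w)
    log_w_max = log_w.max()
    w = np.exp(log_w - log_w_max)                # rescale for stability

    log_Z = log_w_max + np.log(w.mean())         # log of the IS evidence estimate
    ess = w.sum() ** 2 / np.sum(w ** 2)          # effective sample size
    rel_std = np.sqrt(1.0 / ess - 1.0 / n)       # sigma_Z / Z for i.i.d. proposal samples
    return log_Z, rel_std
```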

Results (with IMRPhenomXPHM and SEOBNRv4PHM)

  • O1 -- done (ESS 10%-30%)
  • O2 -- figure out issue with ESS
  • O3 -- evaluate current models, start runs with larger kernel
  • GW190521 -- evaluate XPHM model, start EOB training run

Evaluation

  • bilby runs for at least a few events, as a reference for the posterior + evidence
  • potentially apply importance sampling to the bilby samples, for an event where we suspect bilby to be off