
JOSS review: sinusoid examples and plots in paper #298

Merged · 6 commits · Apr 4, 2024

Conversation

@lm2612 (Collaborator) commented Apr 3, 2024

Purpose

Addresses some remaining points from JOSS review issues #294 and #293

To-do

Edits to example pages (#294)

  • For comparison it would be better if the color scale would cover the same range from figure to figure, especially when plotting the errors.
  • Maybe add a table with the numeric results, like in the Gaussian process section. This would help to compare the results from Gaussian process and random features.

Edits to paper (#293)

p. 2 L72
"ABC can be used in more general contexts than CES, but suffers greater
approximation error and more stringent assumptions, especially in
multi-dimensional problems."

  • Can you provide a reference for this statement? Or is this something that is immediately clear to a statistician? Does it not also depend on the chosen emulator in CES?

Figure 1

  • Rather a cosmetic problem: the legend says true amplitude = 6.99, but the text in the paper says 7.0 (seems to be an artifact of the signal being made up of a finite number of samples, examples/Sinusoid/sinusoid_setup.jl:61).
    OK, the legend shows the mean and the text shows the vertical shift, so perhaps one could consider this a property of the model (or of the finite number of "measurements"), but it is a bit confusing.

  • I think it would make more sense if the observed range (blue double arrow) would be centered with respect to the observed mean (blue dashed line). But your version can be compared to the true values more easily, so feel free to ignore.
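The 6.99-versus-7.0 point above can be illustrated with a small sketch: the sample mean of a finitely sampled sinusoid differs slightly from its vertical shift whenever the sampling grid does not cover a whole number of periods. The grid, shift, and amplitude below are hypothetical, not the actual values in sinusoid_setup.jl.

```python
import numpy as np

# Finite sampling grid that does not span a whole number of periods
t = np.arange(0, 10, 0.13)

# Sinusoid with vertical shift 7.0 and amplitude 1 (illustrative values)
signal = 7.0 + np.sin(t)

# The sample mean is close to, but not exactly, the vertical shift 7.0,
# because the leftover partial period does not average out to zero.
m = signal.mean()
```

With a grid covering exactly whole periods (and no noise), the mean and the shift would coincide; the small discrepancy is purely a finite-sampling effect, matching the reviewer's reading of the legend.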

Figure 4, caption: "... trained on the re-used calibration pairs"

  • The points (scatter plots) do not seem to be the same as in Figure 3. Should they be the same?

See docs edits here


  • I have read and checked the items on the review checklist.


codecov bot commented Apr 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 88.26%. Comparing base (acf1051) to head (ff7990c).
Report is 7 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #298      +/-   ##
==========================================
+ Coverage   88.17%   88.26%   +0.08%     
==========================================
  Files           8        8              
  Lines        1184     1184              
==========================================
+ Hits         1044     1045       +1     
+ Misses        140      139       -1     


… range adjusted to be centered on the observed mean following reviewer comments
….0 in legend, blue observed range moved to be centered on the observed mean); Contour and error plots all fixed to have the same contour limits
…aim. Instead we just comment on some limitations of ABC and CES and we also include a reference to the ABC Handbook
@lm2612 lm2612 marked this pull request as ready for review April 4, 2024 19:43
@lm2612 lm2612 requested a review from odunbar April 4, 2024 19:44
@odunbar (Collaborator) left a comment

Hi @lm2612

Thanks for this! Sinusoid changes look great,
I left two tiny comments on the ABC stuff, please merge after you have taken a look/addressed this.

paper.md Outdated
@@ -67,7 +67,7 @@ Computationally expensive computer codes for predictive modelling are ubiquitous

In Julia there are a few tools for performing non-accelerated uncertainty quantification, from classical sensitivity analysis approaches, e.g., [UncertaintyQuantification.jl](https://zenodo.org/records/10149017), GlobalSensitivity.jl [@Dixit:2022], and MCMC, e.g., [Mamba.jl](https://github.com/brian-j-smith/Mamba.jl) or [Turing.jl](https://turinglang.org/). For computational efficiency, ensemble methods also provide approximate sampling (e.g., the Ensemble Kalman Sampler [@Garbuno-Inigo:2020b;@Dunbar:2022a]) though these only provide Gaussian approximations of the posterior.

Accelerated uncertainty quantification tools also exist for the related approach of Approximate Bayesian Computation (ABC), e.g., GpABC [@Tankhilevich:2020] or [ApproxBayes.jl](https://github.com/marcjwilliams1/ApproxBayes.jl?tab=readme-ov-file); these tools both approximately sample from the posterior distribution. In ABC, this approximation comes from bypassing the likelihood that is usually required in sampling methods, such as MCMC. Instead, the goal of ABC is to replace the likelihood with a scalar-valued sampling objective that compares model and data. In CES, the approximation comes from learning the parameter-to-data map, then following this it calculates an explicit likelihood and uses exact sampling via MCMC. Some ABC algorithms also make use of statistical emulators to further accelerate sampling (GpABC). ABC can be used in more general contexts than CES, but suffers greater approximation error and more stringent assumptions, especially in multi-dimensional problems.
Accelerated uncertainty quantification tools also exist for the related approach of Approximate Bayesian Computation (ABC), e.g., GpABC [@Tankhilevich:2020] or [ApproxBayes.jl](https://github.com/marcjwilliams1/ApproxBayes.jl?tab=readme-ov-file); these tools both approximately sample from the posterior distribution. In ABC, this approximation comes from bypassing the likelihood that is usually required in sampling methods, such as MCMC. Instead, the goal of ABC is to replace the likelihood with a scalar-valued sampling objective that compares model and data. In CES, the approximation comes from learning the parameter-to-data map, then following this it calculates an explicit likelihood and uses exact sampling via MCMC. Some ABC algorithms also make use of statistical emulators to further accelerate sampling (GpABC). ABC encounters challenges due to the subjective selection of summary statistics and distance metrics, as well as the risk of approximation errors, particularly in high-dimensional settings [@Sisson:2018]. CES addresses these issues by employing direct sampling using an emulator, although is restricted to an explicit Gaussian likelihood, unlike in ABC.
  • As @Sisson:2018 is a book, could you perhaps point to a specific section/subsection that might help?

  • Suggested paragraph - feel free to ignore

Accelerated uncertainty quantification tools also exist for the related approach of Approximate Bayesian Computation (ABC), e.g., GpABC [@Tankhilevich:2020] or ApproxBayes.jl; these tools both approximately sample from the posterior distribution. In ABC, this approximation comes from bypassing the likelihood that is usually required in sampling methods like MCMC. Instead, the likelihood is replaced with sampling objectives based on chosen summary statistics to compare model and data; statistical emulators accelerate this sampling (GpABC). Though flexible, ABC encounters challenges due to the subjectivity of summary statistics and distance metrics, which may lead to approximation errors, particularly in high-dimensional settings [@Sisson:2018]. CES is more restrictive due to its use of an explicit Gaussian likelihood, but also leverages this structure to deal with high-dimensional data.
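The ABC mechanism described in the paragraph above (replacing the likelihood with a distance on summary statistics) can be sketched as a minimal rejection sampler. All names, the prior, the forward model, and the threshold below are hypothetical choices for illustration, not the algorithms used by GpABC or ApproxBayes.jl.

```python
import numpy as np

rng = np.random.default_rng(0)

observed = 2.0  # observed summary statistic (e.g. a sample mean); hypothetical

def simulate(theta, n=50):
    # Hypothetical forward model: n noisy observations around theta
    return rng.normal(theta, 1.0, n)

accepted = []
for _ in range(5000):
    theta = rng.uniform(-5.0, 5.0)   # draw a candidate from the prior
    s = simulate(theta).mean()       # summary statistic of simulated data
    if abs(s - observed) < 0.2:      # distance threshold replaces the likelihood
        accepted.append(theta)

# The accepted draws approximate the posterior over theta
posterior_mean = np.mean(accepted)
```

The subjectivity the reviewers discuss enters through the choice of summary statistic, distance, and threshold; in higher dimensions informative summaries are harder to choose, which is the source of the approximation error the paragraph refers to.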

… 2018 for Chapter 8 on high-dimensional ABC of the Sisson book.
@lm2612 lm2612 merged commit 2827e71 into main Apr 4, 2024
9 of 10 checks passed

2 participants