
vignettes failed that require INLA #11

Closed
matteodelucchi opened this issue Mar 16, 2024 · 3 comments · Fixed by #14
@matteodelucchi (Contributor) opened this issue:

Think about:

  1. Don't run the vignettes that need INLA.
  2. Precompute the results and build the vignettes from the precomputed output.
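Option 1 could also be applied per chunk rather than per vignette, by guarding each INLA-dependent chunk with a conditional `eval` option. A sketch (the chunk label and the `fit_model()` call are hypothetical, not from the package):

````markdown
```{r fit_model, eval = requireNamespace("INLA", quietly = TRUE)}
# This chunk runs only when INLA is installed; on CRAN it is skipped silently.
fit <- fit_model(data)
```
````

The drawback is that readers without INLA see code but no output.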
Flavor: r-devel-linux-x86_64-debian-gcc
Check: re-building of vignette outputs, Result: ERROR
  Error(s) in re-building vignettes:
    ...
  --- re-building 'data_simulation.Rmd' using rmarkdown
  
  Quitting from lines 29-58 [fit_model] (data_simulation.Rmd)
  Error: processing vignette 'data_simulation.Rmd' failed with diagnostics:
  there is no package called 'INLA'
  --- failed re-building 'data_simulation.Rmd'
  
  --- re-building 'mixed_effect_BN_model.Rmd' using rmarkdown
  --- finished re-building 'mixed_effect_BN_model.Rmd'
  
  --- re-building 'model_specification.Rmd' using rmarkdown
  --- finished re-building 'model_specification.Rmd'
  
  --- re-building 'multiprocessing.Rmd' using rmarkdown
  
  Quitting from lines 88-130 [benchmarking] (multiprocessing.Rmd)
  Error: processing vignette 'multiprocessing.Rmd' failed with diagnostics:
  worker initialization failed: there is no package called 'INLA'
  --- failed re-building 'multiprocessing.Rmd'
  
  --- re-building 'paper.Rmd' using rmarkdown
  --- finished re-building 'paper.Rmd'
  
  --- re-building 'parameter_learning.Rmd' using rmarkdown
  
  Quitting from lines 67-72 [unnamed-chunk-3] (parameter_learning.Rmd)
  Error: processing vignette 'parameter_learning.Rmd' failed with diagnostics:
  there is no package called 'INLA'
  --- failed re-building 'parameter_learning.Rmd'
  
  --- re-building 'quick_start_example.Rmd' using rmarkdown
  --- finished re-building 'quick_start_example.Rmd'
  
  --- re-building 'structure_learning.Rmd' using rmarkdown
  --- finished re-building 'structure_learning.Rmd'
  
  SUMMARY: processing the following files failed:
    'data_simulation.Rmd' 'multiprocessing.Rmd' 'parameter_learning.Rmd'
  
  Error: Vignette re-building failed.
  Execution halted
@matteodelucchi matteodelucchi added the bug Something isn't working label Mar 16, 2024
@matteodelucchi matteodelucchi added this to the CRAN submission 3.0.6 milestone Mar 16, 2024
@matteodelucchi matteodelucchi self-assigned this Mar 16, 2024
@matteodelucchi (Contributor, Author) commented:

> 1. Don't run vignettes that need INLA.

Defeats the purpose of a vignette.

> 2. Precompute and compile vignettes based on the precomputed output.

For option 2, several approaches are possible:

a) Compute the data sets when INLA is available and store them in /data/. The issues here are:

  1. It clutters the package with data, and we're already at the upper end of the recommended size limit.
  2. The vignette code becomes cluttered with `if ... else if ... else ...` branches, which makes it hard for a general user or beginner to understand what's going on.
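The kind of branching meant in point 2 might look like this inside a vignette chunk (a sketch; `fit_model()` and the file name are hypothetical):

```r
if (requireNamespace("INLA", quietly = TRUE)) {
  # INLA is available: run the real computation
  fit <- fit_model(data)
} else if (file.exists("precomputed_fit.rds")) {
  # fall back to a precomputed result shipped with the package
  fit <- readRDS("precomputed_fit.rds")
} else {
  message("Neither INLA nor precomputed data available; skipping this example.")
}
```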

b) Precompile the vignettes with knitr. This generates vignettes with their output "hardcoded" (see here).

  1. This requires a workflow to rebuild the vignettes regularly.
  2. Plots are a bit tricky (see blog post).
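The knitr approach typically keeps an `.Rmd.orig` source under version control and knits it locally, where INLA is installed, into the `.Rmd` that ships with the package. A sketch under those assumptions (the file names are illustrative):

```r
# Run locally with INLA installed, not on CRAN:
# knitting executes the chunks and hardcodes their output into the .Rmd.
knitr::knit("vignettes/data_simulation.Rmd.orig",
            output = "vignettes/data_simulation.Rmd")
```

CRAN then only renders the resulting `.Rmd`, which contains no live INLA calls.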

@j-i-l (Collaborator) commented Mar 16, 2024:

> a) Compute the data sets when INLA is available and store them in /data/. The issues here are:
>
> 1. It clutters the package with data, and we're already at the upper end of the recommended size limit.
> 2. The vignette code becomes cluttered with `if ... else if ... else ...` branches, which makes it hard for a general user or beginner to understand what's going on.

I agree, the data should not be part of the repository. One option might be to move the computation into a GitHub Action and keep the data in an artifact. In the vignette we would then fetch the data from a URL; this should be doable in one or two lines that readers can digest if we add a comment like: "Here we get the pre-computed data from GitHub".

Since I haven't used knitr so far, I cannot judge which option is better, but using artifacts seems doable to me.
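Fetching such an artifact in the vignette could indeed stay short. A sketch (the URL is a placeholder for wherever the GitHub Action would publish the artifact, and `precomputed_fit.rds` is a hypothetical file name):

```r
# Here we get the pre-computed data from GitHub
data_url <- "https://github.com/<org>/<repo>/releases/download/<tag>/precomputed_fit.rds"
tmp <- tempfile(fileext = ".rds")
download.file(data_url, tmp, mode = "wb")  # mode = "wb" for binary .rds files
fit <- readRDS(tmp)
```

One caveat: CRAN checks must not fail when the URL is unreachable, so the download would still need a guard or graceful fallback.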

@matteodelucchi (Contributor, Author) commented:

> I agree, the data should not be part of the repository. One option might be to move the computation into a GitHub Action and keep the data in an artifact. In the vignette we would then fetch the data from a URL; this should be doable in one or two lines. [...]
>
> Since I haven't used knitr so far, I cannot judge which option is better, but using artifacts seems doable to me.

With efe9302, I implemented option (b) from above. This requires running the precompile.R script, which generates the .Rmd files with their static output included. CRAN then no longer has to compute anything; it just renders the .Rmd into HTML. In #7, I made a note to consider this in the implementation of the workflow that publishes the site with GH Actions.
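A script of this kind might, under these assumptions, loop over the `.Rmd.orig` sources and knit each one (a sketch, not the actual precompile.R from efe9302):

```r
# precompile.R -- knit each .Rmd.orig vignette source into a static .Rmd,
# so that CRAN only needs to render the result to HTML.
sources <- list.files("vignettes", pattern = "\\.Rmd\\.orig$", full.names = TRUE)
for (src in sources) {
  knitr::knit(src, output = sub("\\.orig$", "", src))
}
```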

@matteodelucchi matteodelucchi linked a pull request Mar 16, 2024 that will close this issue