Releases: jenfb/bkmr

bkmr 0.2.1

04 Mar 18:15

Bug fixes

  • Allowable starting values for the r[m] parameters have been updated:

    • they are no longer truncated to a single value (when varsel = FALSE and rmethod = "varying")

    • they can now be equal to 0 (when varsel = TRUE)

  • An error is no longer generated when the starting values for h.hat are not positive

  • When checking the class of an object, inherits() is now used instead of class()
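
The inherits() change follows standard R practice: class() returns the full class vector, so comparing it with == can yield a multi-element logical and misbehave inside an if(). A minimal base-R illustration (the "myfit" class name is invented for the example):

```r
# An object with more than one class, as S3 objects often have
obj <- structure(list(), class = c("myfit", "lm"))

class(obj) == "lm"    # c(FALSE, TRUE): ambiguous in an if() condition
inherits(obj, "lm")   # TRUE: the recommended way to test class membership
```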

bkmr 0.2.0

24 Mar 20:59

Major changes

  • Added ability to have binomial outcome family by implementing probit regression within kmbayes()

  • Changed the default settings of kmbayes() to speed up computation by removing the computation of the subject-specific effects h[i], as these are not always needed and greatly slow down model fitting

    • They can still be computed during fitting by setting the option est.h = TRUE in the kmbayes() call

    • Posterior samples of h[i] can now be obtained via the post-processing SamplePred() function; alternatively, posterior summaries (mean, variance) can be obtained via the post-processing ComputePostmeanHnew() function

  • Added ability to use exact estimates of the posterior mean and variance by specifying the argument method = 'exact' within the post-processing functions (e.g., OverallRiskSummaries(), PredictorResponseUnivar())
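
The new options above can be combined in a single workflow. This is a sketch, not a definitive recipe: it uses the package's SimData() helper, and the binary outcome is invented here for illustration by dichotomizing the simulated continuous outcome.

```r
library(bkmr)
set.seed(111)

dat <- SimData(n = 100, M = 4)

# Hypothetical binary outcome for the probit example
ybin <- as.numeric(dat$y > median(dat$y))

# Probit regression for a binomial outcome (new in 0.2.0);
# est.h = TRUE restores the subject-specific h[i] computation
# that is now off by default
fit <- kmbayes(y = ybin, Z = dat$Z, X = dat$X, iter = 1000,
               family = "binomial", est.h = TRUE, varsel = TRUE)

# Alternatively, posterior summaries of h[i] via post-processing,
# using the new exact estimates
h_summ <- ComputePostmeanHnew(fit, method = "exact")

# Exact posterior mean and variance in the risk summaries
risks <- OverallRiskSummaries(fit = fit, qs = seq(0.25, 0.75, by = 0.25),
                              q.fixed = 0.5, method = "exact")
```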

Bug fixes

  • Fixed PredictorResponseBivarLevels() when argument both_pairs = TRUE (#4)

Initial CRAN release

01 Jul 15:49

Provides initial functionality for:

  • fitting the BKMR model using the main kmbayes function
  • post-processing functions to visualize cross-sections of the exposure-response function
  • post-processing functions to generate summary statistics of the exposure-response function

Implementations of BKMR via the main kmbayes function:

  • normally distributed (Gaussian) outcome data
  • Gaussian kernel function
  • model fitting with or without variable selection
  • allows for component-wise or hierarchical (grouped) variable selection
  • can include a random intercept in the model
  • can use a Gaussian predictive process to speed up the computation (after supplying a matrix of knots)
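
The fitting options above can be sketched in one call. This example assumes the package's SimData() helper and uses fields::cover.design() to choose knots, as in the package vignette; the group assignments are arbitrary illustrations.

```r
library(bkmr)
set.seed(123)

dat <- SimData(n = 100, M = 5)

# Knots for the Gaussian predictive process approximation,
# selected with a space-filling design over the exposure matrix
knots <- fields::cover.design(dat$Z, nd = 50)$design

# Hierarchical (grouped) variable selection plus the GPP speed-up;
# the groups vector assigns each of the 5 exposures to a group
fit <- kmbayes(y = dat$y, Z = dat$Z, X = dat$X, iter = 1000,
               varsel = TRUE, groups = c(1, 1, 2, 2, 3),
               knots = knots)

# Cross-section of the exposure-response function for plotting
pred_univar <- PredictorResponseUnivar(fit = fit)
```

A random intercept can be added by passing an id vector to kmbayes() for clustered or repeated-measures data.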