
Add Cox Proportional Hazards models #230

Closed
David-Hervas opened this issue Jun 27, 2017 · 38 comments


@David-Hervas

Hi, any chance of adding Cox Proportional Hazards models? They are widely used for survival analysis in medical research and would be a nice addition to the package. Thanks!

@paul-buerkner
Owner

This issue is something of a duplicate of #175. Are you aware of any nice implementation of Cox proportional hazards models in Stan? That would really help in getting them implemented.

@David-Hervas
Author

David-Hervas commented Jun 27, 2017

Hi, sorry for the duplicate. There is an example model at github.com/stan-dev

@paul-buerkner
Owner

Thanks for the example! I have one follow-up question: what is the "response" variable of such a model? That is, what should be returned by methods such as predict, fitted, etc.?

@David-Hervas
Author

This is from predict.coxph in the survival package (frequentist), but I think the explanation could help with the expected return values of the predict and fitted functions.
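For concreteness, predict.coxph already distinguishes several prediction types via its `type` argument; a quick illustration (using the leukemia data shipped with survival) of the candidates a Bayesian predict/fitted could return:

```r
library(survival)

# The `type` argument of predict.coxph selects what is returned;
# these are natural candidates for predict()/fitted() in a Bayesian port.
fit <- coxph(Surv(time, status) ~ x, data = leukemia)
head(predict(fit, type = "lp"))        # linear predictor
head(predict(fit, type = "risk"))      # relative risk, exp(lp)
head(predict(fit, type = "expected"))  # expected number of events
```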

@crsh

crsh commented Jun 28, 2017

Maybe looking at SurvivalStan is informative.

@paul-buerkner
Owner

paul-buerkner commented Jun 28, 2017

After reading a bit about Bayesian survival regression, it seems we have two ways of doing it:

  1. By directly modelling the (possibly censored) time to event as the outcome variable. This is already possible in brms.

  2. By modelling the hazard function. Here, the baseline hazard function is what mainly complicates things.

Cox regression appears to fall into category 2, which is yet to be implemented in brms. Do I understand the situation correctly?

@charlesmalpas

Hi Paul,

From what I know of Cox regression, your understanding is correct. If I understand correctly myself, implementing (1) in brms would be an accelerated failure time model. Being able to model proportional hazards as in (2) would be fantastic!

Many thanks!

Charles

@HansTierens

Hi Paul,

Maybe the need for a dedicated Cox proportional hazards model can be 'circumvented'.

I often use the following approach:
First, organize the survival data in the long-format (splitting the time horizon by unique failure times or even unique observation times).
Then fit a Poisson regression with the failure status as a dependent variable and 'interval length' as an offset variable.
The baseline hazard can be flexibly modelled using a smoothing spline of time.

The results are very comparable or even identical to the coxph results.
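A minimal sketch of Hans's recipe, assuming the leukemia data from survival; the cut points and the smoother s(time, k = 5) are illustrative choices, not his exact code:

```r
library(survival)
library(brms)

# 1) Long format: split each subject's follow-up at the unique event times,
#    so each row is one subject-interval with its event indicator.
cuts <- sort(unique(leukemia$time[leukemia$status == 1]))
d <- survSplit(Surv(time, status) ~ x, data = leukemia, cut = cuts)
d$length <- d$time - d$tstart  # interval length

# 2) Poisson regression with log(interval length) as offset and a smooth
#    spline of time as the baseline hazard; exp() of the x coefficient
#    then approximates the Cox hazard ratio.
fit <- brm(status ~ x + s(time, k = 5) + offset(log(length)),
           data = d, family = poisson())
summary(fit)
```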

@paul-buerkner
Owner

paul-buerkner commented Oct 17, 2017 via email

@charlesmalpas

Hi Hans,

This sounds like a fantastic approach! I would be very interested in seeing your code for this, if you are happy to share.

Kind regards,
Charles

@HansTierens

Hi Charles,

I quickly composed an R Markdown file explaining the approach.
Forgive me if it doesn't look extremely nice (I am still playing around with it (read: fiddling)).

Cox_as_Poisson_markdown.zip

Kind regards,
Hans

@charlesmalpas

Thanks Hans,

That is very helpful - thanks for taking the time to send it through!

Kind regards,

Charles

@paul-buerkner
Owner

Given that proportional hazards models can be fitted using Poisson regression as explained by @HansTierens above (see also http://discourse.mc-stan.org/t/using-stan-jm-for-parametric-proportional-hazards-regression-only/3931/3), and that brms now allows custom families if one wants to use a different representation of the model, I think this issue can be closed.

@paul-buerkner
Owner

Reopening this issue following some discussions with users. Maybe we can get proportional hazards models working in brms at some point.

@EmmanuelCharpentier

A possibly relevant contribution on Discourse...

@paul-buerkner
Owner

The Cox model is now implemented and can be accessed via brmsfamily("cox"). The implementation is still somewhat experimental and not yet officially supported (hence also not documented), but here is a simple example for users to try out:

library(brms)
library(simsurv)
library(survival)
set.seed(1234)

# simulated data from the rstanarm::stan_surv example
covs <- data.frame(id = 1:200, trt = stats::rbinom(200, 1L, 0.5))
d1 <- simsurv(lambdas = 0.1,
              gammas  = 1.5,
              betas   = c(trt = -0.5),
              x       = covs,
              maxt    = 5)
d1 <- merge(d1, covs)

fit_coxph <- coxph(Surv(eventtime, status) ~ trt, data = d1)
summary(fit_coxph)

fit_brm <- brm(eventtime | cens(1 - status) ~ 1 + trt,
               data = d1, family = brmsfamily("cox"))
summary(fit_brm)

@EmmanuelCharpentier

The Cox model is now implemented and can be accessed via brmsfamily("cox"). The implementation is still somewhat experimental and not yet officially supported (hence also not documented),

Nice! Thank you very much!

but here is a simple example for users to try out:

I have to disagree with the results:

> library(survival)
> data(leukemia)
> library(brms)
Loading required package: Rcpp
Registered S3 method overwritten by 'xts':
  method     from
  as.zoo.xts zoo 
Loading 'brms' package (version 2.9.3). Useful instructions
can be found by typing help('brms'). A more detailed introduction
to the package is available through vignette('brms_overview').

Attaching package: ‘brms’

The following object is masked from ‘package:survival’:

    kidney

> B1 <- brm(time|cens(1-status)~1+x, data=leukemia, family=brmsfamily("cox"))
Compiling the C++ model
Start sampling

SAMPLING FOR MODEL '45bf4e8879a40f68ff7ae2b00ff636c3' NOW (CHAIN 1).

[ Yadda, yadda.... : Snip ]

Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.118257 seconds (Warm-up)
Chain 4:                0.118097 seconds (Sampling)
Chain 4:                0.236354 seconds (Total)
Chain 4: 
> summary(B1)
 Family: cox 
  Links: mu = log 
Formula: time | cens(1 - status) ~ 1 + x 
   Data: leukemia (Number of observations: 23) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
               Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept          0.17      0.58    -1.02     1.28       1862 1.00
xNonmaintained    -1.09      0.53    -2.14    -0.12       2937 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
is a crude measure of effective sample size, and Rhat is the potential 
scale reduction factor on split chains (at convergence, Rhat = 1).
> summary(coxph(Surv(time, status)~x, data=leukemia))
Call:
coxph(formula = Surv(time, status) ~ x, data = leukemia)

  n= 23, number of events= 18 

                 coef exp(coef) se(coef)     z Pr(>|z|)  
xNonmaintained 0.9155    2.4981   0.5119 1.788   0.0737 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

               exp(coef) exp(-coef) lower .95 upper .95
xNonmaintained     2.498     0.4003    0.9159     6.813

Concordance= 0.619  (se = 0.063 )
Likelihood ratio test= 3.38  on 1 df,   p=0.07
Wald test            = 3.2  on 1 df,   p=0.07
Score (logrank) test = 3.42  on 1 df,   p=0.06

A sign error somewhere? Or something more worrying?

Questions:

  • Did you implement the "long" format (Surv(tstart, time, status)) used by Terry Therneau to handle time-dependent covariates/effects, recurrent events, etc.?

  • In that case, did you implement "by unit" estimation?

@paul-buerkner
Owner

paul-buerkner commented Jun 24, 2019 via email

@EmmanuelCharpentier

EmmanuelCharpentier commented Jun 25, 2019

This is not a sign error but intentional, for consistency with other families in brms: higher values = longer survival times. Sorry, I should have said this.

Interesting rationale. It just goes against half a century of relative risk estimation, disturbing readers' (and reviewers') habits... at least in medicine, these habits are deeply ingrained. Could that be controlled by an option to the cox family?

I did not implement time-dependent covariates on purpose, as this costs so much extra work that I currently cannot afford.

Would you be open to a proposal (in the form of a custom family, for simplicity's sake...) in about 2-4 weeks?

But rstanarm::stan_surv (https://github.com/stan-dev/rstanarm/tree/feature/survival) can do this.

I'm more interested in brms, if only for the extreme ease of expressing complex relationships between variables offered by:

Mod <- brm(bf(...) + bf(...) + ..., ...)

which is essential in real-world medical problems.

@paul-buerkner
Owner

paul-buerkner commented Jun 25, 2019

Interesting rationale. It just goes against half a century of relative risk estimation, disturbing readers' (and reviewers') habits... at least in medicine, these habits are deeply ingrained. Could that be controlled by an option to the cox family?

I will think about this. Since the family is not officially supported yet, we can make changes at any point if we like. In any case, this will be clearly documented and explained.

Would you be open to a proposal (in the form of a customfamily, for simplicity's sake...) in about 2-4 weeks ?

Of course! The only problem I currently see is that we would have to numerically integrate the (now predictor-dependent) baseline hazard in this case, as we have no general analytic solution to integrate it, as we have with M-splines and I-splines. But I may be mistaken.

@EmmanuelCharpentier

The only problem I currently see is that we would have to numerically integrate the (now predictor-dependent) baseline hazard in this case, as we have no general analytic solution to integrate it, as we have with M-splines and I-splines. But I may be mistaken.

The whole point of Cox's proportional hazards model was to integrate out the (unknown) baseline hazard function.

  • One way to do this is to postulate a piecewise-constant hazard and concentrate on the risk proportions. One example of this approach translates very well to the "long data" format; a further refinement is to re-express the probability of an event as a Bernoulli variable (I'm strongly uncomfortable with using a pmf (Poisson) with values in N to model an RV with values in {0, 1}...).

  • A second way is to approximate the unknown h(t) by a spline (which we can integrate), as you do in your proposal. I'm not (yet) convinced that we do not commit the capital sin of "using the future to estimate the past", as underscored in Terry Therneau's vignette on time-dependent covariates (see pp. 2-4). Your solution looks fine, is very elegant, and should allow for plotting nice curves (e.g. for displaying their variability as estimated by MCMC sampling), but I'm not yet convinced that the estimates of the risk ratios (postulated constant) are really independent of the h(t) estimates. I'll need a bit of time to work out the maths (I'm very rusty at that...).

In any case, I think that, in a PH modelling problem, the shape of h(t) can only be a secondary goal, the main goal being the relative risk estimation(s); if this shape becomes the main goal, other, more parametric models should be used.
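The Bernoulli refinement mentioned in the first bullet is the discrete-time hazard model; with a complementary log-log link, the coefficients stay on the log-hazard-ratio scale. A sketch under the long-format setup (the leukemia data from survival and the smoother s(time, k = 5) are illustrative choices):

```r
library(survival)
library(brms)

# Person-interval data: one row per subject-interval, 0/1 event indicator.
cuts <- sort(unique(leukemia$time[leukemia$status == 1]))
d <- survSplit(Surv(time, status) ~ x, data = leukemia, cut = cuts)

# A Bernoulli likelihood with a cloglog link yields a grouped-time
# proportional hazards model, addressing the {0, 1}-support objection.
fit <- brm(status ~ x + s(time, k = 5),
           data = d, family = bernoulli(link = "cloglog"))
summary(fit)
```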

@paul-buerkner
Owner

The way I see the Cox model from a brms perspective is as inducing a probability distribution via
h(t, eta) S(t, eta), where eta is our linear predictor term. As we cannot just "remove" the baseline hazard as we do in classical estimation methods, its shape is relevant: a baseline hazard function that (1) is not flexible enough will affect the estimates we may be interested in, and/or (2) one that is hard to sample from will hurt sampling efficiency and convergence very strongly.

The piecewise-constant hazard looks like something that will likely have problems with (2), but I haven't tried it out myself thoroughly enough to be sure about this.
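Spelled out, the induced distribution gives, for observations (t_i, d_i) with event indicator d_i and linear predictor eta_i, the standard right-censored log-likelihood:

```latex
\log L = \sum_i \left[ d_i \log h(t_i \mid \eta_i) + \log S(t_i \mid \eta_i) \right],
\qquad
h(t \mid \eta) = h_0(t)\, e^{\eta},
\qquad
S(t \mid \eta) = \exp\!\left( -H_0(t)\, e^{\eta} \right),
```

so the cumulative baseline hazard H_0(t) = \int_0^t h_0(u) du appears in every term, which is why a baseline hazard whose integral is available in closed form matters so much for both estimation and sampling.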

@paul-buerkner
Owner

I just reverted the Cox parameterization back to the standard one, that is, higher values of eta imply a higher hazard and thus lower survival times. The improved consistency with other families in brms is likely not worth the confusion caused when users compare the brms results to the results of other packages and find the signs inverted.

@EmmanuelCharpentier

I just reverted the Cox parameterization back to the standard one, that is, higher values of eta imply a higher hazard and thus lower survival times. The improved consistency with other families in brms is likely not worth the confusion caused when users compare the brms results to the results of other packages and find the signs inverted.

Indeed. The inconsistency isn't yours: for half a century, the risk analysis that the Cox model (and AFT models, BTW) embodies has been taught under the name of "survival analysis", which might be the misnomer of the century... OTOH, Kaplan-Meier estimation is part of survival analysis.

Piecewise-constant vs. spline estimation of risk as a function of time: I think this is but a marginal issue. I used the piecewise-constant model because I understood it. I have yet to wrap my mind around your clever use of "auto-integrating" splines (which indeed fulfill the need of estimating h(t) and using its integral H = \int_{t_0}^{t_1} h(t) dt). I'm reading about this kind of possibility.

The fundamental issue is that, in your present implementation, the only information available about one observational unit (one subject) is

  • the observation duration,
  • the status at end of observation, and
  • the associated covariate values,

which fits well into the "general scheme" of regression analysis: (scalar observation) ~ (vector of covariates)(vector of coefficients) or, equivalently, (vector of observations) ~ (design matrix)(coefficient vector). In this scheme, one observation <==> one line of input <==> one increment to the log-likelihood.

This cannot stand with the "extensions to Cox models" such as time-varying covariates or multistage models: these models are, IMHO, examples of what I'll call "history analysis", where the observational unit is a subject, and the observation is a (varying-length) time-ordered collection of records. The likelihood of any one of these records has no meaning in itself; what is pertinent to the analysis is the likelihood of the collection of records pertaining to one subject.

Using Terry Therneau's presentation of {Id (start, end] status} records, it is relatively easy to come up with a Stan program estimating the various elements of a complicated model. The log-likelihood is best computed in a "transformed parameters" section, where a subject-wise array of log-likelihoods is updated by the likelihood of the individual records.
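A sketch of that record-wise accumulation in R (all names are hypothetical; h0 and H0 stand for the baseline hazard and its integral, under a counting-process likelihood):

```r
# Hypothetical helper: per-subject log-likelihood from counting-process
# records (id, (start, stop], status), baseline hazard h0(), cumulative
# baseline hazard H0(), and linear predictors eta.
subject_loglik <- function(id, start, stop, status, eta, h0, H0) {
  # each record contributes an event term (if status == 1) and the
  # survival term for its own interval only
  rec_ll <- status * (log(h0(stop)) + eta) -
    (H0(stop) - H0(start)) * exp(eta)
  tapply(rec_ll, id, sum)  # one log-likelihood per subject
}
```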

What is difficult is to find a way to wrap the various elements indicating to the program:

  • what is the "Id" variable,
  • what are the starting and stopping times,
  • what is the final (and possibly the initial) status(es),

without overcomplicating the presentation.

Multistage models are but another complication, since one does not try to estimate a single (scalar) risk, but a matrix of state transition risks. Furthermore, in most cases, supplementary info should be passed to the program, if only to indicate possible and impossible transition (e. g. transitioning from "dead" to "pregnant" is impossible...). That is the kind of book-keeping that computers are supposed to excel at...

So I think that the "regression presentation" of brms may or may not be the "best" solution for encompassing these extensions:

  • It is currently the best presentation for the Cox model stricto sensu (i. e. no time varying covariates or effects).
  • It might be adaptable to time-varying covariates or effects, if we can pass subject id, starting and stopping times, and status(es) to the underlying program.
  • It will probably break for full "history analysis", where the amount of relevant information (such as possible transition matrices, correlation of risks between state transitions, etc.) becomes too large for a one-liner.

The possibility of passing a complicated program "in pieces", as brms does in specifying a whole DAG by specifying one regression per edge, should be kept, somehow...

Your thoughts ?

@paul-buerkner
Owner

I agree with you. brms functionality is amenable to time-varying covariates, for instance in the way rstanarm::stan_surv does it, that is, by basically wrapping time-dependent covariates inside tde(). Syntactically, this is simple to implement, but since we also need to integrate over these time-dependent effects, we need numerical integration inside Stan (again, see how rstanarm::stan_surv does it). This is a lot of code to write, which I don't plan to invest in for the foreseeable future.

What you call history analysis is out of scope of brms I believe and might actually be strongly related to what rstanarm::stan_jm does.

Both rstanarm and brms are part of the Stan-supported interfaces, and I don't plan to duplicate much of what rstanarm can already do very well via stan_jm and stan_surv. The reason I implemented the basic Cox model is that it is so common and users have been asking for it multiple times.

All the advanced stuff is nice but likely goes into a very specific direction beyond the scope of brms.

@Tafex

Tafex commented Jul 22, 2019

Hello, everyone in this group.
What are the assumptions of the Bayesian Cox proportional hazards model? If you have examples of these models as R code, please share them with me for model checking.
Also, how does it differ from the classical approach? In the classical setting, checking can be done with the global test and graphical methods.

@paul-buerkner
Owner

You can see an example of the Cox model as implemented in brms in the posts above. Please keep in mind that the implementation is experimental and some things remain to be implemented, for instance survival curves.

@Tafex

Tafex commented Aug 2, 2019

OK, thanks paul-buerkner.
But I need the brinla package; it's not on CRAN.
Please share it with me.

@Tafex

Tafex commented Aug 2, 2019

Can the brms package do Bayesian parametric survival models? If yes, how?

@paul-buerkner
Owner

The Cox proportional hazards model is now officially supported via family cox.

@EmmanuelCharpentier

The Cox proportional hazards model is now officially supported via family cox.

Extremely nice.

But, as of brms 2.13.3 (the last version available from CRAN), the cox brmsfamily is not yet documented.

Is there an upcoming update?

@paul-buerkner
Owner

You have to install brms from GitHub for now to make use of the latest (exported) version of the feature.

@EmmanuelCharpentier

The results obtained when analyzing survival data via `brm` and the `cox` family seem extremely sensitive to the baseline hazard parametrization.

This can be illustrated by the analysis of the extremely simple `leukemia` dataset from `survival`:

library(survival)
data(leukemia, package="survival")
print(head(leukemia))
length(table(leukemia$time))
  time status          x
1    9      1 Maintained
2   13      1 Maintained
3   13      0 Maintained
4   18      1 Maintained
5   23      1 Maintained
6   28      0 Maintained
[1] 18

A benchmark is given by the frequentist analysis:

CPH <- coxph(Surv(time,status)~x, data=leukemia)
summary(CPH)
Call:
coxph(formula = Surv(time, status) ~ x, data = leukemia)

  n= 23, number of events= 18 

                 coef exp(coef) se(coef)     z Pr(>|z|)  
xNonmaintained 0.9155    2.4981   0.5119 1.788   0.0737 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

               exp(coef) exp(-coef) lower .95 upper .95
xNonmaintained     2.498     0.4003    0.9159     6.813

Concordance= 0.619  (se = 0.063 )
Likelihood ratio test= 3.38  on 1 df,   p=0.07
Wald test            = 3.2  on 1 df,   p=0.07
Score (logrank) test = 3.42  on 1 df,   p=0.06

Bayesian analysis via `brms` with the default parameters gives consistent results:

invisible({
  library(brms)
  options(mc.cores = parallel::detectCores())
  rstan::rstan_options(auto_write = TRUE) })
B1 <-brm(time|cens(Cens)~x,
   data=within(leukemia, Cens <- factor(c("none","right")[2-status])),
   family=cox, iter=10000, silent=TRUE, open_progress=FALSE, refresh=0)
summary(B1)
Compiling Stan program...
Start sampling
 Family: cox 
  Links: mu = log 
Formula: time | cens(Cens) ~ x 
   Data: within(leukemia, Cens <- factor(c("none", "right") (Number of observations: 23) 
Samples: 4 chains, each with iter = 10000; warmup = 5000; thin = 1;
         total post-warmup samples = 20000

Population-Level Effects: 
               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept          0.88      0.46    -0.08     1.70 1.00    15191    13820
xNonmaintained     1.03      0.52     0.05     2.09 1.00    15638    12852

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

However, these results seem extremely sensitive to the parametrization of the baseline hazard function, whose only control, as far as I can tell, is the number of knots ("degrees of freedom") of the spline:

B2 <- do.call(
  rbind,
  lapply(
    3*(2:6),
    function(d)
      data.frame(DF=d,
       as.data.frame(
         summary(
           brm(
             time|cens(Cens)~x,
             data=within(
           leukemia,
           Cens <- factor(c("none","right")[2-status])),
             family=cox(bhaz=list(df=d)),
             iter=10000, silent=TRUE,
             open_progress=FALSE, refresh=0))$fixed))))
format(B2, digits=3)
                DF Estimate Est.Error l.95..CI u.95..CI Rhat Bulk_ESS Tail_ESS
Intercept        6    0.776     0.453  -0.1681     1.61    1    14628    13662
xNonmaintained   6    0.961     0.514  -0.0195     2.01    1    16931    12917
Intercept1       9    0.623     0.436  -0.2804     1.44    1    15308    14213
xNonmaintained1  9    0.887     0.506  -0.0801     1.9     1    19772    14191
Intercept2      12    0.457     0.419  -0.4052     1.22    1    15428    13217
xNonmaintained2 12    0.806     0.495  -0.1449     1.8     1    18909    14460
Intercept3      15    0.368     0.413  -0.506      1.13    1    16508    14117
xNonmaintained3 15    0.78      0.495  -0.1639     1.78    1    19049    14199
Intercept4      18    0.308     0.412  -0.554      1.05    1    16193    12958
xNonmaintained4 18    0.762     0.495  -0.1829     1.74    1    19249    14439

@paul-buerkner
Owner

Interesting. Could you try what happens if you exclude the spline intercept via intercept = FALSE in the bhaz list?

@EmmanuelCharpentier

Interesting. Could you try what happens if you exclude the spline intercept via intercept = FALSE in the bhaz list?

About the same:

B3 <- do.call(
  rbind,
  lapply(
    3*(2:6),
    function(d)
      data.frame(DF=d,
       as.data.frame(
         summary(
           brm(
             time|cens(Cens)~x,
             data=within(
           leukemia,
           Cens <- factor(c("none","right")[2-status])),
             family=cox(bhaz=list(df=d, intercept=FALSE)),
             iter=10000, silent=TRUE,
             open_progress=FALSE, refresh=0))$fixed))))
format(B3, digits=3)
                DF Estimate Est.Error l.95..CI u.95..CI Rhat Bulk_ESS Tail_ESS
Intercept        6    0.788     0.447 -0.13826     1.61    1    14917    13787
xNonmaintained   6    0.972     0.513 -0.00878     2.01    1    18426    14671
Intercept1       9    0.618     0.439 -0.2916      1.43    1    16373    15113
xNonmaintained1  9    0.877     0.506 -0.1008      1.9     1    19112    13780
Intercept2      12    0.47      0.43  -0.43147     1.25    1    18377    13308
xNonmaintained2 12    0.811     0.505 -0.17212     1.83    1    20843    14589
Intercept3      15    0.391     0.417 -0.48124     1.15    1    17796    13493
xNonmaintained3 15    0.783     0.5   -0.1728      1.79    1    20058    13139
Intercept4      18    0.325     0.414 -0.55221     1.09    1    18709    13672
xNonmaintained4 18    0.759     0.496 -0.20065     1.76    1    20576    13218

HTH,

@EmmanuelCharpentier

BTW: the way the baseline hazard is modelled should be documented (along with a pointer to a basic introduction to M-splines): I still do not understand it, even after reading the generated Stan code...
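For reference, the baseline hazard is built from M-splines precisely because their integrals, the I-splines, are available in closed form; the splines2 package exposes both bases (a small sketch with an arbitrary df; whether brms constructs the basis exactly this way is an assumption):

```r
library(splines2)

# A non-negative combination of M-splines is a valid hazard h0(t);
# the same weights applied to I-splines give its integral H0(t),
# so the cumulative hazard needs no numerical integration.
t  <- seq(0.01, 5, by = 0.01)
M  <- mSpline(t, df = 5, degree = 3)  # M-spline basis (columns >= 0)
I  <- iSpline(t, df = 5, degree = 3)  # I-splines = integrated M-splines
w  <- c(0.2, 0.5, 0.1, 0.4, 0.3)      # arbitrary non-negative weights
h0 <- as.vector(M %*% w)              # baseline hazard on the grid
H0 <- as.vector(I %*% w)              # cumulative baseline hazard
```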

A vignette illustrating the way the time-to-event analysis is implemented in brms, along with a rerun of analyses of "classical" data sets, would also be useful.

I still think that survival analysis "in full" (including time-dependent covariates and, possibly, accelerated failure time modelling) is a valid target for brms: the many possibilities brms offers over rstanarm's "canned" models provide an easy way to estimate Bayesian models for "real life" data, without having to dive into the complexities and minutiae of Stan; the importance of time-to-event modelling in many applications (medicine, but also physics, biology, engineering...) would make such an extension a very worthwhile one...

OTOH, further reflections on the possible extensions to multistage/multistate models (what I called "history analysis" earlier in this thread) led me to think that this is a fundamentally different (more complex) problem, harder to fit into the regression straitjacket. I still have to think further about this.

HTH,

@paul-buerkner
Owner

paul-buerkner commented Jul 26, 2020

Isn't most of this already implemented in rstanarm::stan_surv and described in https://arxiv.org/abs/2002.09633?

I am not sure whether adding all of these features to brms is worthwhile, because it requires a lot of special-case code.

Edit: there is now some quick documentation in the brms_families vignette.

@HansTierens

I (recently) published an article which fits time-dependent covariates (and mentions an approach to extend this towards time-dependent effects too). My approach would still be the Poisson approach. I think the strength of the Poisson approach lies in the easier use of an offset (when time intervals have unequal lengths) and in its interpretation.
I think, though, that the Bernoulli and the Poisson approaches can co-exist, and either might fit the PH model better in some cases (see e.g., here).

The article uses an organizational (labor-market) data example, but the code can easily be extended to many more applications in medicine, physics, biology, engineering, ... Its main contribution to the organizational sciences is the multiple-membership setting, which I absolutely love, since brms has the most straightforward implementation and easiest-to-use approach in R so far!

I hope this might be somewhat helpful.
