
instrumental variables: can they play a role in dynamite? #81

Open
palmierieugenio opened this issue Apr 18, 2024 · 8 comments
@palmierieugenio

In other packages for dynamic panels, based on GMM or similar tools, instrumental variables often play a central role, and I wonder whether they could also play a role in this framework. Simultaneous-equations approaches like the one in dynamite seem a more transparent way of dealing with endogeneity than GMM, since they make it possible to express an explicit equation for each endogenous variable, but I am not sure whether these methods address the same sources of endogeneity, or whether the inclusion of instrumental variables could improve dynamite.

I also think that a better understanding of the similarities and differences between these approaches could shed light on the assumptions each one implies, make for better comparisons between the models, and help in choosing between them based on the context and on the set of assumptions that seems most realistic.

Some examples of other packages for dynamic panels that rely on GMM or GMM-inspired models:

I have to mention that in my experience GMM (I am referring to pgmm) tends to be difficult to use in practice, since it depends on many hyperparameters and on the chosen set of instruments, which can alter the results substantially: it seems very difficult to choose between different specifications of the model. It is also not clear to me whether the existing implementation of pgmm supports categorical variables. Most importantly, GMM seems to me a less transparent way of dealing with endogeneity than explicitly modelling each endogenous variable, as in dynamite.
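To make this sensitivity concrete, here is a minimal sketch (an illustration, not a recommendation for any particular dataset) using the EmplUK data that ships with plm: two pgmm calls that differ only in how many lags of the dependent variable enter the instrument set, so their coefficients can be compared side by side. The simplified specification is my own, not from the thread.

```r
library(plm)
data("EmplUK", package = "plm")

# Two Arellano-Bond-style specifications that differ only in the instrument
# set: lags 2-3 of log(emp) versus all available lags (2-99).
m_few <- pgmm(
  log(emp) ~ lag(log(emp), 1) + lag(log(wage), 0:1) | lag(log(emp), 2:3),
  data = EmplUK, effect = "twoways", model = "onestep"
)
m_many <- pgmm(
  log(emp) ~ lag(log(emp), 1) + lag(log(wage), 0:1) | lag(log(emp), 2:99),
  data = EmplUK, effect = "twoways", model = "onestep"
)
cbind(few = coef(m_few), many = coef(m_many))
```

Comparing the two columns shows how much the point estimates move when only the instrument count changes, with everything else held fixed.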

@santikka
Collaborator

This is an interesting question. I'm not sure how much we can do with IV in dynamite outside of the classical linear-Gaussian setting, as we would most likely have to model the correlation structure (similar to what brms does with set_rescor). I just pushed a commit that allows for contemporaneous dependencies across different components in multivariate models to try this out, and it seems to work OK:

library("dynamite")
set.seed(0)

# Classic IV setting with Z -> X -> Y and a latent confounder X <- U -> Y
n <- 100
t <- 30
x <- matrix(0, n, t)
y <- matrix(0, n, t)
z <- matrix(0, n, t)
R <- chol(matrix(c(1, 0.5, 0.5, 1), ncol = 2))
for (i in seq_len(t)) {
  u <- t(R) %*% matrix(rnorm(n * 2), 2)
  z[, i] <- rnorm(n)
  x[, i] <- 0.8 * z[, i] + u[1, ]
  y[, i] <- 0.6 * x[, i] + u[2, ]
}

d <- data.frame(
  y = c(y), x = c(x), z = c(z),
  time = rep(seq_len(t), each = n),
  id = rep(seq_len(n), t)
)

fit <- dynamite(
  obs(c(x, y) ~ -1 + z | -1 + x, family = "mvgaussian"),
  data = d,
  time = "time",
  group = "id"
)

summary(fit, parameters = "beta_y_x")
# A tibble: 1 × 10
  parameter  mean     sd    q5   q95  time group category response type 
  <chr>     <dbl>  <dbl> <dbl> <dbl> <int> <int> <chr>    <chr>    <chr>
1 beta_y_x  0.617 0.0233 0.579 0.655    NA    NA NA       y        beta 

@palmierieugenio
Author

palmierieugenio commented Apr 23, 2024

Thank you very much, I will try it.

Anderson-Hsiao, Arellano-Bond and system GMM estimators are often used for models with lagged dependent variables (for example: y_t ~ y_t-1 + x) and a small number of time points T, to avoid bias in the estimates. It is possible that dynamite could show a similar bias without some comparable correction for small T; perhaps this can be checked by simulation, varying T in models with the lagged outcome as a covariate.
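As a reference point for such a simulation, the Anderson-Hsiao idea itself can be sketched in a few lines of base R (a hedged illustration, not dynamite or plm code; the data-generating process and sample sizes are my own): first-differencing removes the individual effect, and the second lag of y in levels instruments the endogenous differenced lag, giving a just-identified IV estimator.

```r
# Sketch of the Anderson-Hsiao estimator. The DGP is illustrative:
# y_it = rho * y_i,t-1 + beta * x_it + u_i + e_it, with fixed effect u_i.
set.seed(1)
n <- 200; t <- 6; rho <- 0.5; beta <- 0.6
u <- rnorm(n)                              # individual fixed effect
x <- matrix(0, n, t)
y <- matrix(0, n, t)
for (i in seq(2, t)) {
  x[, i] <- rnorm(n)
  y[, i] <- rho * y[, i - 1] + beta * x[, i] + u + rnorm(n)
}
dy  <- y[, 3:t] - y[, 2:(t - 1)]           # response: delta y_t
dyl <- y[, 2:(t - 1)] - y[, 1:(t - 2)]     # endogenous: delta y_{t-1}
dx  <- x[, 3:t] - x[, 2:(t - 1)]           # exogenous: delta x_t
iv  <- y[, 1:(t - 2)]                      # instrument: y_{t-2} in levels
Y <- c(dy)
X <- cbind(c(dyl), c(dx))
Z <- cbind(c(iv), c(dx))
theta <- solve(crossprod(Z, X), crossprod(Z, Y))  # just-identified IV
theta  # should be close to c(rho, beta) = c(0.5, 0.6)
```

Differencing removes u_i, and y_{t-2} is a valid instrument for the differenced lag as long as the errors e_it are not serially correlated.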

In principle the two approaches (instrumental-variable-type estimators, such as Anderson-Hsiao and GMM, and simultaneous systems of equations, such as dynamite) seem more complementary than alternative, and I agree that the natural place to start checking is with normal responses (univariate and multivariate). The implementations of these estimators that I have found also focus on normal outcomes, so this could become a normal-only feature of dynamite, or a separate function, if it turns out to reduce the bias in small-T panels with lagged outcomes as covariates.

@santikka
Collaborator

santikka commented Apr 24, 2024

I'm not sure if the following is what you had in mind, but I did a small simulation experiment to try this out, and there does not seem to be much bias (although this should be repeated multiple times with different data, which would take time).

library("dynamite")
set.seed(0)

n <- 100
t <- 30
x <- matrix(0, n, t)
y <- matrix(0, n, t)
for (i in seq(2, t)) {
  x[, i] <- rnorm(n)
  y[, i] <- 0.5 * y[, i - 1] + 0.6 * x[, i] + rnorm(n)
}

d <- data.frame(
  y = c(y), x = c(x),
  time = rep(seq_len(t), each = n),
  id = rep(seq_len(n), t)
)

bias <- matrix(0, 4000, 10)
for (i in seq_len(10)) {
  d_small <- d[d$time <= 3 * i, ]
  fit <- dynamite(
    dformula = obs(y ~ lag(y) + x, family = "gaussian"),
    data = d_small,
    time = "time",
    group = "id",
    refresh = 0
  )
  bias[, i] <- as_draws(fit)$beta_y_y_lag1 - 0.5
}

d_bias <- data.frame(
  bias = c(bias),
  t = rep(seq(3, 30, by = 3), each = 4000)
)

d_bias |>
  dplyr::group_by(t) |>
  dplyr::summarise(
    mean = mean(bias),
    median = median(bias),
    sd = sd(bias),
    lwr = quantile(bias, 0.025),
    upr = quantile(bias, 0.975)
  )
# A tibble: 10 × 6
       t     mean   median     sd     lwr    upr
   <dbl>    <dbl>    <dbl>  <dbl>   <dbl>  <dbl>
 1     3  0.0346   0.0347  0.0815 -0.126  0.190 
 2     6 -0.0403  -0.0406  0.0390 -0.115  0.0362
 3     9 -0.0132  -0.0130  0.0299 -0.0727 0.0454
 4    12  0.00450  0.00440 0.0240 -0.0416 0.0517
 5    15  0.0148   0.0151  0.0203 -0.0260 0.0545
 6    18  0.0172   0.0171  0.0191 -0.0201 0.0549
 7    21  0.0141   0.0139  0.0165 -0.0180 0.0467
 8    24  0.0152   0.0152  0.0158 -0.0156 0.0471
 9    27  0.0143   0.0142  0.0149 -0.0148 0.0431
10    30  0.00720  0.00700 0.0138 -0.0200 0.0346

@santikka
Collaborator

A better simulation is below. With small N and small T there is indeed bias, which diminishes as N increases.

library("dynamite")
set.seed(0)

simulate_data <- function(n, t) {
  x <- matrix(0, n, t)
  y <- matrix(0, n, t)
  for(i in seq(2, t)) {
    x[, i] <- rnorm(n)
    y[, i] <- 0.5 * y[, i - 1] + 0.6 * x[, i] + rnorm(n)
  }
  data.frame(
    y = c(y), x = c(x),
    time = rep(seq_len(t), each = n),
    id = rep(seq_len(n), t)
  )
}

estimate_coefs <- function(n, t, fit) {
  d <- simulate_data(n, t)
  fit <- update(fit, data = d)
  coef(fit)$mean
}

d <- simulate_data(10, 3)
fit <- dynamite(
  dformula = obs(y ~ lag(y) + x, family = "gaussian"),
  data = d,
  time = "time",
  group = "id",
  refresh = 0
)

out1 <- replicate(100, estimate_coefs(n = 10, t = 3, fit))
mean(out1[2, ] - 0.5)
# [1] -0.09835761

out2 <- replicate(100, estimate_coefs(n = 25, t = 3, fit))
mean(out2[2, ] - 0.5)
# [1] -0.03820989

out3 <- replicate(100, estimate_coefs(n = 100, t = 3, fit))
mean(out3[2, ] - 0.5)
# [1] -0.002235641

@palmierieugenio
Author

I agree repeated simulations are needed to draw conclusions, but these results are promising.

I want to point out that I am not sure whether simulate_data is the right function to replicate the kinds of endogeneity problems that GMM and other instrumental-variables approaches are trying to solve. I am trying to research the topic further, but I have not yet found another example of a data-generating process; this one seems the most intuitive, and it is what I would have done too, but maybe we are missing something.
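One candidate ingredient that may be missing from simulate_data is feedback from past outcomes into the covariate: in the GMM literature, x being "predetermined" (correlated with past shocks to y through the lagged outcome) is a standard motivation for Arellano-Bond-style instruments. A hypothetical variant of the data-generating process, just as a sketch (the function name and the feedback strength phi are illustrative, not from the thread):

```r
# Hypothetical variant of simulate_data in which x is predetermined: it
# reacts to the lagged outcome, so x_t is correlated with past shocks to y.
simulate_predetermined <- function(n, t, rho = 0.5, beta = 0.6, phi = 0.3) {
  x <- matrix(0, n, t)
  y <- matrix(0, n, t)
  for (i in seq(2, t)) {
    x[, i] <- phi * y[, i - 1] + rnorm(n)  # feedback from the past outcome
    y[, i] <- rho * y[, i - 1] + beta * x[, i] + rnorm(n)
  }
  d <- data.frame(
    y = c(y), x = c(x),
    time = rep(seq_len(t), each = n),
    id = rep(seq_len(n), t)
  )
  d[d$time != 1, ]  # drop the first time point, which is all zeroes
}
```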

I think a comparison of the estimates of dynamite without an instrumental-variable correction, dynamite with an instrumental-variable correction, and pgmm from the plm package could give more insight into the issue.

I have tried this data-generating process with pgmm too, and it actually seems more biased than dynamite, even in this scenario with very small T, which was designed to show the merits of GMM. This could be a sign that there are no issues even for small T, or, more likely, that we are missing something important in the simulate_data function.

library(dynamite)
library(plm)

set.seed(0)
true_pam <- c(0.5, 0.6)

simulate_data <- function(n, t, pam) {
  x <- matrix(0, n, t)
  y <- matrix(0, n, t)
  for (i in seq(2, t)) {
    x[, i] <- rnorm(n)
    y[, i] <- pam[1] * y[, i - 1] + pam[2] * x[, i] + rnorm(n)
  }
  data <- data.frame(
    y = c(y), x = c(x),
    time = rep(seq_len(t), each = n),
    id = rep(seq_len(n), t)
  )
  data[data$time != 1, ] # drop the first time point, which is all zeroes
}

simulation <- function(iter, n = 100, t = 4, param = true_pam) {
  bias_dynamite <- bias_gmm <- NULL
  for (i in 1:iter) {
    d <- simulate_data(n, t, param)
    print(i)
    # DYNAMITE
    fit_dynamite <- dynamite(
      dformula = obs(y ~ lag(y) + x, family = "gaussian"),
      data = d,
      time = "time",
      group = "id",
      refresh = 0
    )

    # GMM
    fit_gmm <- pgmm(y ~ lag(y, 1) + x | lag(y, 2), index = c("id", "time"),
                    data = d)

    bias_dynamite <- rbind(bias_dynamite, coef(fit_dynamite)$mean[2:3] - param)
    bias_gmm <- rbind(bias_gmm, coef(fit_gmm)[1:2] - param)
  }
  res <- cbind(bias_dynamite = bias_dynamite, bias_gmm = bias_gmm)
  colnames(res) <- c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm")
  res[, c("y_lag_dynamite", "y_lag_gmm", "x_dynamite", "x_gmm")]
}

res <- simulation(100)
colMeans(res)
y_lag_dynamite      y_lag_gmm     x_dynamite          x_gmm 
  -0.006960073    0.081687357   -0.003165146    0.022966538 

@palmierieugenio
Author

palmierieugenio commented May 3, 2024

I've tried to repeat the experiment with a fixed effect u for each individual, and the results of dynamite are indeed biased for small T.

They are closer to the estimates of OLS than to those of a within estimator. I also report an OLS estimator and a within estimator, because typically a consistent estimator such as GMM lies between the two, as the OLS and within estimators have biases of opposite sign (Panel Data Econometrics with R, Yves Croissant and Giovanni Millo, chapter 7):

library(dynamite)
library(plm)

set.seed(0)
true_pam<-c(0.3, -10, 10)

simulate_data <- function(n, t, pam) {
    x <- matrix(0, n, t)
    y <- matrix(0, n, t)
    u<-rnorm(n,0, pam[3])
    for(i in seq(2, t)) {
        x[, i] <- rnorm(n)
        y[, i] <- pam[1] * y[, i - 1] + pam[2] * x[, i] + rnorm(n)+u
    }
    data<-data.frame(
        y = c(y), x = c(x),
        time = rep(seq_len(t), each = n),
        id = rep(seq_len(n), t)
    )
    data[data$time != 1, ] # drop the first time point, which is all zeroes
}

simulation<-function(iter, n=100, t=4, param=true_pam){
    par_gmm<-par_dynamite<-bias_dynamite<-bias_gmm<-NULL
    par_ols<-par_within<-bias_ols<-bias_within<-NULL
    for(i in 1:iter){
        d <- simulate_data(n, t, param)
        print(i)
        # DYNAMITE 
        fit_dynamite <- dynamite(
            dformula = obs(y ~ lag(y) + x, family = "gaussian"),
            data = d,
            time = "time",
            group = "id",
            refresh = 0
        )
        
        # GMM
        fit_gmm<-pgmm(y ~lag(y, 1)+x | lag(y, 2), index = c("id","time"),
                      data=d)
        
        # ols
        ols_fit <- plm(y ~ lag(y) +x,
                       data = d,
                       index = c("id","time"),
                       model = "within",
                       effect = "time")
        # within
        within_fit <- update(ols_fit, effect = "twoways")

        
        par_dynamite<-rbind(par_dynamite, coef(fit_dynamite)$mean[2:3])
        par_gmm<-rbind(par_gmm, coef(fit_gmm)[1:2])
        par_ols<-rbind(par_ols, coef(ols_fit)[1:2])
        par_within<-rbind(par_within, coef(within_fit)[1:2])
        
        bias_dynamite<-rbind(bias_dynamite, coef(fit_dynamite)$mean[2:3]-param[1:2])
        bias_gmm<-rbind(bias_gmm, coef(fit_gmm)[1:2]-param[1:2])
        bias_ols<-rbind(bias_ols, coef(ols_fit)[1:2]-param[1:2])
        bias_within<-rbind(bias_within, coef(within_fit)[1:2]-param[1:2])
        
    }
    res_bias<-cbind(bias_dynamite=bias_dynamite, bias_gmm=bias_gmm,
                    bias_ols=bias_ols, bias_within=bias_within)
    colnames(res_bias)<-c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm",
                          "y_lag_ols", "x_ols", "y_lag_within", "x_within")
    res_bias<-res_bias[, c("y_lag_dynamite",  "y_lag_gmm","y_lag_ols", "y_lag_within",
                           "x_dynamite", "x_gmm", "x_ols", "x_within")]
    
    res_pam<-cbind(par_dynamite=par_dynamite, par_gmm=par_gmm,
                   par_ols=par_ols, par_within=par_within)
    colnames(res_pam)<-c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm",
                         "y_lag_ols", "x_ols", "y_lag_within", "x_within")
    res_pam<-res_pam[, c("y_lag_dynamite",  "y_lag_gmm","y_lag_ols", "y_lag_within",
                         "x_dynamite", "x_gmm", "x_ols", "x_within")]
    
    res<-list(bias=res_bias, pam=res_pam)
    res
}

res<-simulation(100,100)
round( cbind(sapply(res, colMeans), mae=colMeans(abs(res$bias)),
              mse=colMeans((res$bias)^2)), 3)

                 bias     pam   mae   mse
y_lag_dynamite  0.473   0.773 0.473 0.225
y_lag_gmm       0.002   0.302 0.051 0.007
y_lag_ols       0.474   0.774 0.474 0.226
y_lag_within   -0.007   0.293 0.013 0.000
x_dynamite     -0.044 -10.044 0.407 0.270
x_gmm          -0.012 -10.012 0.254 0.144
x_ols          -0.031 -10.031 0.413 0.274
x_within        0.030  -9.970 0.093 0.014

Interestingly, the bias in this context does not seem to propagate to the other parameter x: only the autoregressive parameter y_lag appears biased under this data-generating process.

@palmierieugenio
Author

If there is a correlation between u and x, the estimates for the other covariates will also be biased:

library(dynamite)
library(plm)

set.seed(0)
true_pam<-c(0.3, -10, 10)

simulate_data <- function(n, t, pam) {
    x <- matrix(0, n, t)
    y <- matrix(0, n, t)
    u<-rnorm(n,0, pam[3])
    for(i in seq(2, t)) {
        x[, i] <-2*u+ rnorm(n)
        y[, i] <- pam[1] * y[, i - 1] + pam[2] * x[, i] + rnorm(n)+u
    }
    data<-data.frame(
        y = c(y), x = c(x),
        time = rep(seq_len(t), each = n),
        id = rep(seq_len(n), t)
    )
    data[data$time != 1, ] # drop the first time point, which is all zeroes
}

simulation<-function(iter, n=100, t=4, param=true_pam){
    par_gmm<-par_dynamite<-bias_dynamite<-bias_gmm<-NULL
    par_ols<-par_within<-bias_ols<-bias_within<-NULL
    for(i in 1:iter){
        d <- simulate_data(n, t, param)
        print(i)
        # DYNAMITE 
        fit_dynamite <- dynamite(
            dformula = obs(y ~ lag(y) + x, family = "gaussian"),
            data = d,
            time = "time",
            group = "id",
            refresh = 0
        )
        
        # GMM
        fit_gmm<-pgmm(y ~lag(y, 1)+x | lag(y, 2), index = c("id","time"),
                      data=d)
        
        # ols
        ols_fit <- plm(y ~ lag(y) +x,
                       data = d,
                       index = c("id","time"),
                       model = "within",
                       effect = "time")
        # within
        within_fit <- update(ols_fit, effect = "twoways")
        
        
        par_dynamite<-rbind(par_dynamite, coef(fit_dynamite)$mean[2:3])
        par_gmm<-rbind(par_gmm, coef(fit_gmm)[1:2])
        par_ols<-rbind(par_ols, coef(ols_fit)[1:2])
        par_within<-rbind(par_within, coef(within_fit)[1:2])
        
        bias_dynamite<-rbind(bias_dynamite, coef(fit_dynamite)$mean[2:3]-param[1:2])
        bias_gmm<-rbind(bias_gmm, coef(fit_gmm)[1:2]-param[1:2])
        bias_ols<-rbind(bias_ols, coef(ols_fit)[1:2]-param[1:2])
        bias_within<-rbind(bias_within, coef(within_fit)[1:2]-param[1:2])
        
    }
    res_bias<-cbind(bias_dynamite=bias_dynamite, bias_gmm=bias_gmm,
                    bias_ols=bias_ols, bias_within=bias_within)
    colnames(res_bias)<-c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm",
                          "y_lag_ols", "x_ols", "y_lag_within", "x_within")
    res_bias<-res_bias[, c("y_lag_dynamite",  "y_lag_gmm","y_lag_ols", "y_lag_within",
                           "x_dynamite", "x_gmm", "x_ols", "x_within")]
    
    res_pam<-cbind(par_dynamite=par_dynamite, par_gmm=par_gmm,
                   par_ols=par_ols, par_within=par_within)
    colnames(res_pam)<-c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm",
                         "y_lag_ols", "x_ols", "y_lag_within", "x_within")
    res_pam<-res_pam[, c("y_lag_dynamite",  "y_lag_gmm","y_lag_ols", "y_lag_within",
                         "x_dynamite", "x_gmm", "x_ols", "x_within")]
    
    res<-list(bias=res_bias, pam=res_pam)
    res
}

res<-simulation(100,100)
round( cbind(sapply(res, colMeans), mae=colMeans(abs(res$bias)),
             mse=colMeans((res$bias)^2)), 3)

                 bias     pam   mae   mse
y_lag_dynamite -0.005   0.295 0.005 0.000
y_lag_gmm       0.000   0.300 0.002 0.000
y_lag_ols      -0.005   0.295 0.005 0.000
y_lag_within    0.000   0.300 0.002 0.000
x_dynamite      0.441  -9.559 0.441 0.195
x_gmm          -0.004 -10.004 0.077 0.010
x_ols           0.441  -9.559 0.441 0.195
x_within       -0.003 -10.003 0.076 0.010

I think the bias in y_lag for the OLS and dynamite estimates shrinks here because in my naive simulation process x[, i] <- 2 * u + rnorm(n), so x is almost a linear combination of u.

@santikka
Collaborator

santikka commented May 3, 2024

Thanks for testing things out. In both of these cases the issue is that the dynamite model is no longer correctly specified: we need to account for the group-specific effect by including a group-specific random intercept, adding the + random(~1) term to the model formula. With this, the bias is much diminished (below is a modification of your code, with slight tuning of the priors and of the estimation-algorithm parameters, also using the update method for dynamite):

library("dynamite")
library("plm")

set.seed(0)
true_pam<-c(0.3, -10, 10)

simulate_data <- function(n, t, pam) {
  x <- matrix(0, n, t)
  y <- matrix(0, n, t)
  u<-rnorm(n,0, pam[3])
  for(i in seq(2, t)) {
    x[, i] <- rnorm(n)
    y[, i] <- pam[1] * y[, i - 1] + pam[2] * x[, i] + rnorm(n)+u
  }
  data<-data.frame(
    y = c(y), x = c(x),
    time = rep(seq_len(t), each = n),
    id = rep(seq_len(n), t)
  )
  data[data$time != 1, ] # drop the first time point, which is all zeroes
}

# Run once to get the compiled model
d <- simulate_data(100, 4, true_pam)
p <- get_priors(
  obs(y ~ lag(y) + x + random(~1), family = "gaussian"),
  data = d,
  time = "time",
  group = "id"
)
p$prior[] <- "normal(0, 2)"
fit_dynamite <- dynamite(
  dformula = obs(y ~ lag(y) + x + random(~1), family = "gaussian"),
  data = d,
  time = "time",
  group = "id",
  priors = p,
  refresh = 0,
  control = list(adapt_delta = 0.99), cores = 4, chains = 4, iter = 5000
)

simulation<-function(iter, n=100, t=4, param=true_pam){
  par_gmm<-par_dynamite<-bias_dynamite<-bias_gmm<-NULL
  par_ols<-par_within<-bias_ols<-bias_within<-NULL
  for(i in 1:iter){
    d <- simulate_data(n, t, param)
    print(i)
    # DYNAMITE 
    fit_dynamite <- update(fit_dynamite, data = d)
    
    # GMM
    fit_gmm<-pgmm(y ~lag(y, 1)+x | lag(y, 2), index = c("id","time"),
                  data=d)
    
    # ols
    ols_fit <- plm(y ~ lag(y) +x,
                   data = d,
                   index = c("id","time"),
                   model = "within",
                   effect = "time")
    # within
    within_fit <- update(ols_fit, effect = "twoways")
    
    
    par_dynamite<-rbind(par_dynamite, coef(fit_dynamite)$mean[2:3])
    par_gmm<-rbind(par_gmm, coef(fit_gmm)[1:2])
    par_ols<-rbind(par_ols, coef(ols_fit)[1:2])
    par_within<-rbind(par_within, coef(within_fit)[1:2])
    
    bias_dynamite<-rbind(bias_dynamite, coef(fit_dynamite)$mean[2:3]-param[1:2])
    bias_gmm<-rbind(bias_gmm, coef(fit_gmm)[1:2]-param[1:2])
    bias_ols<-rbind(bias_ols, coef(ols_fit)[1:2]-param[1:2])
    bias_within<-rbind(bias_within, coef(within_fit)[1:2]-param[1:2])
    
  }
  res_bias<-cbind(bias_dynamite=bias_dynamite, bias_gmm=bias_gmm,
                  bias_ols=bias_ols, bias_within=bias_within)
  colnames(res_bias)<-c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm",
                        "y_lag_ols", "x_ols", "y_lag_within", "x_within")
  res_bias<-res_bias[, c("y_lag_dynamite",  "y_lag_gmm","y_lag_ols", "y_lag_within",
                         "x_dynamite", "x_gmm", "x_ols", "x_within")]
  
  res_pam<-cbind(par_dynamite=par_dynamite, par_gmm=par_gmm,
                 par_ols=par_ols, par_within=par_within)
  colnames(res_pam)<-c("y_lag_dynamite", "x_dynamite", "y_lag_gmm", "x_gmm",
                       "y_lag_ols", "x_ols", "y_lag_within", "x_within")
  res_pam<-res_pam[, c("y_lag_dynamite",  "y_lag_gmm","y_lag_ols", "y_lag_within",
                       "x_dynamite", "x_gmm", "x_ols", "x_within")]
  
  res<-list(bias=res_bias, pam=res_pam)
  res
}

res<-simulation(100,100)
round( cbind(sapply(res, colMeans), mae=colMeans(abs(res$bias)),
             mse=colMeans((res$bias)^2)), 3)
                 bias     pam   mae   mse
y_lag_dynamite  0.016   0.316 0.019 0.001
y_lag_gmm      -0.005   0.295 0.056 0.014
y_lag_ols       0.475   0.775 0.475 0.227
y_lag_within   -0.009   0.291 0.013 0.000
x_dynamite     -0.039 -10.039 0.096 0.014
x_gmm           0.031  -9.969 0.277 0.310
x_ols           0.000 -10.000 0.455 0.305
x_within        0.060  -9.940 0.101 0.015
