Is the bootstrap really drawing randomly? #138
I guess I also find it very strange how off-center the bootstrap distribution is relative to the point estimate... seems weird |
I don't yet have an explanation for the off-center bootstraps, but the draws do appear to be random.

LP solver: Gurobi ('gurobi')
Obtaining propensity scores...
Generating target moments...
Integrating terms for control group...
Integrating terms for treated group...
Generating IV-like moments...
Moment 1...
Moment 2...
Moment 3...
Moment 4...
Point estimate of the target parameter: -0.09162913
Bootstrap iteration 1...
worked hours morekids samesex yob
24878 0 0 0 0 50
176122 1 48 0 0 49
69889 0 0 0 1 48
79158 1 35 1 0 50
197251 1 10 0 0 45
164615 0 0 0 1 46
Point estimate:-0.133147
Bootstrap iteration 2...
worked hours morekids samesex yob
51217 0 0 1 0 47
11771 1 99 0 0 47
40525 1 40 0 1 52
140642 0 0 0 0 44
164804 1 6 1 0 45
83796 0 0 1 0 46
Point estimate:-0.1327468
Bootstrap iteration 3...
worked hours morekids samesex yob
59999 1 40 0 1 49
184270 0 0 0 1 44
75519 0 0 1 0 45
128428 0 0 1 1 47
133152 0 0 1 1 47
175270 0 0 0 1 48
Point estimate:-0.1292102
Bootstrap iteration 4...
worked hours morekids samesex yob
139550 1 12 1 1 46
47927 1 30 0 1 46
110541 1 28 0 0 44
132399 0 0 0 0 48
49826 1 40 1 1 44
41545 1 24 0 0 48
Point estimate:-0.1326532
Bootstrap iteration 5...
worked hours morekids samesex yob
149038 1 24 0 1 49
42803 1 5 0 0 45
57340 1 20 0 1 49
16556 0 0 1 1 49
50123 1 2 0 0 53
160977 1 26 0 0 54
Point estimate:-0.1258843
Bootstrap iteration 6...
worked hours morekids samesex yob
95693 0 0 0 1 52
67412 0 0 0 0 50
159955 1 20 0 1 47
46488 1 46 0 0 45
13576 0 0 1 1 47
145078 1 15 0 0 47
Point estimate:-0.1325626
Bootstrap iteration 7...
worked hours morekids samesex yob
2388 1 23 1 1 45
12314 0 0 0 1 54
67216 0 0 0 1 49
86487 1 24 1 0 52
158633 1 35 0 1 48
105872 0 0 1 1 50
Point estimate:-0.1418057
Bootstrap iteration 8...
worked hours morekids samesex yob
68627 0 0 0 1 50
49733 0 0 1 1 52
20330 0 0 1 0 47
126841 0 0 1 1 51
138118 0 0 0 0 48
5507 1 35 0 0 46
Point estimate:-0.09150538
Bootstrap iteration 9...
worked hours morekids samesex yob
45793 0 0 1 1 51
128487 1 40 1 1 51
67497 1 20 1 1 45
56861 1 40 0 1 52
39611 0 0 0 1 48
166813 1 35 0 1 44
Point estimate:-0.09137915
Bootstrap iteration 10...
worked hours morekids samesex yob
58487 1 24 0 0 44
202918 0 0 0 1 52
156206 0 0 0 1 44
127119 0 0 1 1 47
58985 1 3 1 1 44
73676 1 40 0 0 45
Point estimate:-0.09069669
Bootstrap iteration 11...
worked hours morekids samesex yob
185595 0 0 0 1 45
134794 0 0 0 1 49
172585 0 0 0 1 48
198815 0 0 0 0 51
2213 1 40 0 1 46
181054 0 0 0 1 50
Point estimate:-0.1506921
Bootstrap iteration 12...
worked hours morekids samesex yob
132069 1 40 0 1 49
1108 0 0 1 0 45
37698 0 0 0 0 56
140701 1 24 0 0 47
108864 0 0 1 0 48
69530 1 40 0 0 50
Point estimate:-0.1328115
Bootstrap iteration 13...
worked hours morekids samesex yob
6889 0 0 1 0 49
8737 0 0 0 0 47
190877 1 20 0 1 44
125023 1 40 0 1 53
52278 0 0 1 1 46
116206 1 40 0 1 44
Point estimate:-0.1032961
Bootstrap iteration 14...
worked hours morekids samesex yob
2692 0 0 0 1 52
11749 0 0 0 0 46
145764 1 32 0 0 47
70416 0 0 1 1 48
190341 0 0 1 0 50
20891 0 0 0 1 48
Point estimate:-0.09230597
Bootstrap iteration 15...
worked hours morekids samesex yob
157770 1 20 0 1 49
190004 0 0 1 1 55
28442 0 0 0 0 51
155215 0 0 0 1 47
33004 0 0 0 1 49
59551 1 20 1 1 50
Point estimate:-0.08989487
Bootstrap iteration 16...
worked hours morekids samesex yob
50947 0 0 0 0 47
138564 1 38 0 1 51
2850 1 40 0 1 48
149978 1 40 0 1 50
47416 1 40 0 0 47
78418 0 0 1 1 45
Point estimate:-0.0909519
Bootstrap iteration 17...
worked hours morekids samesex yob
14508 0 0 0 0 46
94489 0 0 0 0 49
34328 0 0 0 1 50
196918 1 20 0 1 47
107993 1 2 0 0 53
74663 1 40 0 0 49
Point estimate:-0.1372912
Bootstrap iteration 18...
worked hours morekids samesex yob
191236 1 26 1 0 53
40886 1 34 0 0 47
68994 1 40 0 0 50
66318 1 10 0 1 46
188208 0 0 1 1 46
71492 0 0 1 0 47
Point estimate:-0.1328116
Bootstrap iteration 19...
worked hours morekids samesex yob
112070 1 30 0 0 46
33518 1 20 0 1 46
27855 0 0 0 1 45
189567 1 35 0 1 47
11277 0 0 0 1 48
57139 0 0 0 1 49
Point estimate:-0.1327473
Bootstrap iteration 20...
worked hours morekids samesex yob
116822 0 0 0 0 51
195976 1 40 0 0 45
14198 0 0 1 1 53
86404 1 37 0 1 51
34785 0 0 1 0 46
135760 0 0 0 0 48
Point estimate:-0.132745
--------------------------------------------------
Results
--------------------------------------------------
Point estimate of the target parameter: -0.09162913
Number of bootstraps: 20
Bootstrapped confidence intervals (nonparametric):
90%: [-0.1506921, -0.09069669]
95%: [-0.1506921, -0.08989487]
99%: [-0.1506921, -0.08989487]
p-value: 0 (No J-test output since we are just identified) |
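The nonparametric intervals reported above can be reproduced directly from the 20 bootstrap point estimates printed in the log, using empirical (percentile) quantiles. This is a sketch, not ivmte's internals; in particular, the use of `type = 1` (inverse empirical CDF) is an assumption about how the quantiles are picked.

```r
# The 20 bootstrap point estimates printed in the log above.
boot_est <- c(-0.133147,  -0.1327468, -0.1292102, -0.1326532, -0.1258843,
              -0.1325626, -0.1418057, -0.09150538, -0.09137915, -0.09069669,
              -0.1506921, -0.1328115, -0.1032961, -0.09230597, -0.08989487,
              -0.0909519, -0.1372912, -0.1328116, -0.1327473, -0.132745)
# Percentile bootstrap CI: empirical quantiles of the bootstrap estimates.
percentile_ci <- function(est, level) {
  alpha <- 1 - level
  unname(quantile(est, probs = c(alpha / 2, 1 - alpha / 2), type = 1))
}
percentile_ci(boot_est, 0.90)  # matches the 90% interval reported above
```

With only 20 draws, the lower endpoint of the 90%, 95%, and 99% intervals is the sample minimum, which is why all three intervals share the same left endpoint in the log.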
How are the random draws being done? |
I'm just using the code at lines 1631 to 1636 in ce1045d
|
Hmmm...can you try a couple of different things just to see whether things look different?
|
All three methods generate different results, but they do exhibit runs of similar estimates. First, change the overall starting seed from which I set the seed of the function.

LP solver: Gurobi ('gurobi')
Obtaining propensity scores...
Generating target moments...
Integrating terms for control group...
Integrating terms for treated group...
Generating IV-like moments...
Moment 1...
Moment 2...
Moment 3...
Moment 4...
Point estimate of the target parameter: -0.09162913
Bootstrap iteration 1...
Point estimate:-0.1327445
Bootstrap iteration 2...
Point estimate:-0.1351167
Bootstrap iteration 3...
Point estimate:-0.1293295
Bootstrap iteration 4...
Point estimate:-0.09226189
Bootstrap iteration 5...
Point estimate:-0.09095696
Bootstrap iteration 6...
Point estimate:-0.09225218
Bootstrap iteration 7...
Point estimate:-0.1371488
Bootstrap iteration 8...
Point estimate:-0.1327441
Bootstrap iteration 9...
Point estimate:-0.1501948
Bootstrap iteration 10...
Point estimate:-0.09098966
Bootstrap iteration 11...
Point estimate:-0.1325365
Bootstrap iteration 12...
Point estimate:-0.0922624
Bootstrap iteration 13...
Point estimate:-0.09161383
Bootstrap iteration 14...
Point estimate:-0.1164453
Bootstrap iteration 15...
Point estimate:-0.1125589
Bootstrap iteration 16...
Point estimate:-0.0907676
Bootstrap iteration 17...
Point estimate:-0.1424088
Bootstrap iteration 18...
Point estimate:-0.09137832
Bootstrap iteration 19...
Point estimate:-0.1310918
Bootstrap iteration 20...
Point estimate:-0.1357936
--------------------------------------------------
Results
--------------------------------------------------
Point estimate of the target parameter: -0.09162913
Number of bootstraps: 20
Bootstrapped confidence intervals (nonparametric):
90%: [-0.1501948, -0.09095696]
95%: [-0.1501948, -0.0907676]
99%: [-0.1501948, -0.0907676]
p-value: 0
Instead of using

> r <- do.call(ivmte, args)
LP solver: Gurobi ('gurobi')
Obtaining propensity scores...
Generating target moments...
Integrating terms for control group...
Integrating terms for treated group...
Generating IV-like moments...
Moment 1...
Moment 2...
Moment 3...
Moment 4...
Point estimate of the target parameter: -0.09162913
Bootstrap iteration 1...
Point estimate:-0.1292164
Bootstrap iteration 2...
Point estimate:-0.09175264
Bootstrap iteration 3...
Point estimate:-0.09090694
Bootstrap iteration 4...
Point estimate:-0.1501942
Bootstrap iteration 5...
Point estimate:-0.09188939
Bootstrap iteration 6...
Point estimate:-0.1498179
Bootstrap iteration 7...
Point estimate:-0.09158588
Bootstrap iteration 8...
Point estimate:-0.1291656
Bootstrap iteration 9...
Point estimate:-0.09163886
Bootstrap iteration 10...
Point estimate:-0.1365796
Bootstrap iteration 11...
Point estimate:-0.1326229
Bootstrap iteration 12...
Point estimate:-0.132833
Bootstrap iteration 13...
Point estimate:-0.1327524
Bootstrap iteration 14...
Point estimate:-0.1294383
Bootstrap iteration 15...
Point estimate:-0.132808
Bootstrap iteration 16...
Point estimate:-0.09226911
Bootstrap iteration 17...
Point estimate:-0.1313839
Bootstrap iteration 18...
Point estimate:-0.09254063
Bootstrap iteration 19...
Point estimate:-0.1328078
Bootstrap iteration 20...
Point estimate:-0.09096197
--------------------------------------------------
Results
--------------------------------------------------
Point estimate of the target parameter: -0.09162913
Number of bootstraps: 20
Bootstrapped confidence intervals (nonparametric):
90%: [-0.1501942, -0.09096197]
95%: [-0.1501942, -0.09090694]
99%: [-0.1501942, -0.09090694]
p-value: 0

Instead of using sample, just draw some integers.

LP solver: Gurobi ('gurobi')
Obtaining propensity scores...
Generating target moments...
Integrating terms for control group...
Integrating terms for treated group...
Generating IV-like moments...
Moment 1...
Moment 2...
Moment 3...
Moment 4...
Point estimate of the target parameter: -0.09162913
Bootstrap iteration 1...
Point estimate:-0.1504226
Bootstrap iteration 2...
Point estimate:-0.09181055
Bootstrap iteration 3...
Point estimate:-0.09153195
Bootstrap iteration 4...
Point estimate:-0.09222055
Bootstrap iteration 5...
Point estimate:-0.09122626
Bootstrap iteration 6...
Point estimate:-0.1261849
Bootstrap iteration 7...
Point estimate:-0.1258882
Bootstrap iteration 8...
Point estimate:-0.09180798
Bootstrap iteration 9...
Point estimate:-0.1349838
Bootstrap iteration 10...
Point estimate:-0.09159996
Bootstrap iteration 11...
Point estimate:-0.1325613
Bootstrap iteration 12...
Point estimate:-0.09158434
Bootstrap iteration 13...
Point estimate:-0.09225638
Bootstrap iteration 14...
Point estimate:-0.0913753
Bootstrap iteration 15...
Point estimate:-0.1288676
Bootstrap iteration 16...
Point estimate:-0.09243477
Bootstrap iteration 17...
Point estimate:-0.1258887
Bootstrap iteration 18...
Point estimate:-0.09138309
Bootstrap iteration 19...
Point estimate:-0.1372906
Bootstrap iteration 20...
Point estimate:-0.129436
--------------------------------------------------
Results
--------------------------------------------------
Point estimate of the target parameter: -0.09162913
Number of bootstraps: 20
Bootstrapped confidence intervals (nonparametric):
90%: [-0.1504226, -0.0913753]
95%: [-0.1504226, -0.09122626]
99%: [-0.1504226, -0.09122626]
p-value: 0 |
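The exact code behind the three seeding/drawing variants tried above is not shown in this thread, so the following base-R sketch is only illustrative of the kinds of index draws being compared:

```r
# Three ways to draw bootstrap indices in base R (illustrative only;
# not the package's actual code).
n <- 10
set.seed(1)
idx1 <- sample(seq_len(n), size = n, replace = TRUE)  # sample() on a vector
set.seed(1)
idx2 <- sample.int(n, size = n, replace = TRUE)       # direct integer sampling
set.seed(1)
idx3 <- ceiling(runif(n) * n)                         # "just draw some integers"
# idx1 and idx2 coincide for the same seed; idx3 consumes the generator's
# stream differently, so it generally differs.
```

If all three index-drawing schemes still produce runs of similar estimates, that points to something downstream of the RNG (e.g. the estimation step) rather than the draws themselves.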
The fact that the bootstrap distribution is bimodal really doesn't make sense to me. This is a linear MTR case. I did the following experiment:
Lo and behold: it looks almost perfectly normal. I think something is quite seriously wrong in either the bootstrap or the point identification procedure. Here is my code and the histogram.

rm(list = ls())
library("ivmte")
# Compute the ATT using my worked out calculations
att <- function(df) {
p1 <- mean(df[df$samesex == 1, "morekids"])
p0 <- mean(df[df$samesex == 0, "morekids"])
ey1gd1 <- mean(df[df$morekids == 1, "worked"])
beta0 <- 2*( mean(df[(df$morekids == 0) & (df$samesex == 1), "worked"])
- mean(df[(df$morekids == 0) & (df$samesex == 0), "worked"])
) / (p1 - p0)
alpha0 <- mean(df[(df$morekids == 0) & (df$samesex == 1), "worked"]) -
beta0/2 - (beta0/2)*p1
prz1gd1 <- mean(df[df$morekids == 1, "samesex"])
ey0gd1 <- alpha0 + (beta0/2)*(p1*prz1gd1 + p0*(1 - prz1gd1))
att <- ey1gd1 - ey0gd1
return(att)
}
# Compute the ATT using regressions as in BMW
att_bmw <- function(df) {
lrres <- lm(data = df, morekids ~ samesex)
df$p <- predict(lrres, type = "response")
lrd1 <- lm(data = df, worked ~ p, (morekids == 1))
lrd0 <- lm(data = df, worked ~ p, (morekids == 0))
beta0bmw <- 2*lrd0$coefficients["p"]
alpha0bmw <- lrd0$coefficients[1] - beta0bmw/2
p1 <- mean(df[df$samesex == 1, "morekids"])
p0 <- mean(df[df$samesex == 0, "morekids"])
ey0gd1bmw <- alpha0bmw + (beta0bmw/2)*(p1*prz1gd1 + p0*(1 - prz1gd1))
attbmw <- ey1gd1 - ey0gd1bmw
}
ae <- AE
args <- list(data = ae,
target = "att",
m0 = ~ u,
m1 = ~ u,
ivlike = worked ~ morekids + samesex + morekids*samesex,
propensity = morekids ~ samesex,
point = TRUE)
res <- do.call(ivmte, args)
# EVERYTHING MATCHES -- GREAT
print(res$pointestimate)
print(att(ae))
print(att_bmw(ae))
# NOW BOOTSTRAP -- SHOULD GET TWO MODES?
B <- 500
attbs <- rep(NA_real_, B)
for (b in 1:B) {
set.seed(b)
ids <- sample(x = seq(1, nrow(ae)), size = nrow(ae), replace = TRUE)
bdata <- ae[ids, ]
attbs[b] <- att(bdata)
}
hist(attbs, breaks = 20) # Looks pretty normal to me |
I wonder if maybe the propensity score is not being recalculated on each bootstrap draw? |
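The check being suggested can be sketched on simulated data: a correct bootstrap re-estimates the propensity model inside the loop, on each resampled data set. The data and variable names below are fake (they only mimic the AE columns); this is not the package's internals.

```r
# Simulate a toy data set with the same column names as the example.
set.seed(42)
n <- 500
fake <- data.frame(samesex = rbinom(n, 1, 0.5))
fake$morekids <- rbinom(n, 1, plogis(-0.5 + fake$samesex))
# Bootstrap the propensity-model slope, refitting INSIDE the loop.
pscore_slope <- replicate(200, {
  bdata <- fake[sample.int(n, n, replace = TRUE), ]
  coef(lm(morekids ~ samesex, data = bdata))["samesex"]
})
```

If the propensity score were not being recalculated on each draw, the bootstrap distribution of any quantity that depends only on it would be degenerate at the full-sample value instead of varying across draws.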
It looks like it's the optimization. Below, I estimate the treatment effect in three ways: (1) OLS on the MTR coefficients, (2) GMM, and (3) AT's hand-rolled code.
The treatment effect estimates from (1) and (3) always match.

> r <- do.call(ivmte, args)
LP solver: Gurobi ('gurobi')
Obtaining propensity scores...
Generating target moments...
Integrating terms for control group...
Integrating terms for treated group...
Generating IV-like moments...
Moment 1...
Moment 2...
Moment 3...
Moment 4...
[1] "OLS MTR COEFFICIENTS"
[,1] [,2] [,3] [,4]
[1,] 0.5145152 0.1017559 0.4160259 0.1429137
[1] "TE ESTIMATE USING OLS COEFFICIENTS"
[1] -0.09160436
[1] "GMM MTR COEFFICIENT ESTIMATES"
Method
One step GMM with W = identity
Objective function value: 3.663935e-08
Theta[1] Theta[2] Theta[3] Theta[4]
0.51473 0.10135 0.41630 0.14197
Convergence code = 0
[1] "TE ESTIMATE USING GMM COEFFICIENTS"
[1] -0.09162913
Point estimate of the target parameter: -0.09162913
[1] "AT's code:"
[1] -0.09160436
Bootstrap iteration 1...
[1] "OLS MTR COEFFICIENTS"
[,1] [,2] [,3] [,4]
[1,] 0.5331028 0.07443749 0.405718 0.2137902
[1] "TE ESTIMATE USING OLS COEFFICIENTS"
[1] -0.1040566
[1] "GMM MTR COEFFICIENT ESTIMATES"
Method
One step GMM with W = identity
Objective function value: 6.744737e-06
Theta[1] Theta[2] Theta[3] Theta[4]
0.567508 0.020094 0.432941 0.030981
Convergence code = 0
[1] "TE ESTIMATE USING GMM COEFFICIENTS"
[1] -0.1327445
[1] "AT's code:"
[1] -0.1040566
Point estimate:-0.1327445
Bootstrap iteration 2...
[1] "OLS MTR COEFFICIENTS"
[,1] [,2] [,3] [,4]
[1,] 0.6234279 -0.06156888 0.3655344 0.4308433
[1] "TE ESTIMATE USING OLS COEFFICIENTS"
[1] -0.1752616
[1] "GMM MTR COEFFICIENT ESTIMATES"
Method
One step GMM with W = identity
Objective function value: 6.537698e-06
Theta[1] Theta[2] Theta[3] Theta[4]
0.571073 0.015019 0.434586 0.023184
Convergence code = 0
[1] "TE ESTIMATE USING GMM COEFFICIENTS"
[1] -0.1351167
[1] "AT's code:"
[1] -0.1752616
Point estimate:-0.1351167
Bootstrap iteration 3...
[1] "OLS MTR COEFFICIENTS"
[,1] [,2] [,3] [,4]
[1,] 0.4166834 0.2471149 0.4151487 0.1530698
[1] "TE ESTIMATE USING OLS COEFFICIENTS"
[1] -0.01721885
[1] "GMM MTR COEFFICIENT ESTIMATES"
Method
One step GMM with W = identity
Objective function value: 5.652358e-06
Theta[1] Theta[2] Theta[3] Theta[4]
0.563034 0.026957 0.431246 0.041694
Convergence code = 0
[1] "TE ESTIMATE USING GMM COEFFICIENTS"
[1] -0.1293295
[1] "AT's code:"
[1] -0.01721885
Point estimate:-0.1293295

Since we're not using the |
Ok, great work finding the problem! But it's really concerning. I think this means we need to just ditch it. Could you write me a PDF of how you are setting this up as OLS? My recollection (and correct me if I am wrong) is that initially we tried to do the point identified case by hand and ran into some problems. Do you remember what those problems were? |
I guess the issue is the overidentified case...we won't be able to set that up as OLS. |
Here's the GMM formulation |
Sorry for being slow on this. |
No worries. I do not remember what went wrong last time...but presumably it had to do with inverting something. After that, the next culprit is the efficient weighting matrix. But again, I think that should be fine, since it's constructed by taking the average of an outer product. So I don't really see what the problem was on the first try. |
I have it set up for the specific example above, and everything runs just fine. For my own peace of mind, I'm going to try to figure out what went wrong previously. |
Okay, we were running into problems in the past because we weren't running GMM, but FGLS. This comment also indicates I was inverting a matrix for every observation, which explains why it previously took so long. Fortunately, it doesn't look like we have that anymore. |
Ok, great. The G matrices are basically the same as the "gamma" terms in the linear programming formulation, aren't they? So it shouldn't take long to construct them. What about the optimally weighted version? Does that work too? |
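The linear-moments GMM being discussed has a closed form, which may help fix ideas. This is a hedged sketch with illustrative names, not ivmte's implementation: with moments gbar(theta) = (1/n) S'(y - G theta), writing A = S'G/n and b = S'y/n, the weighted GMM estimator is theta = (A'WA)^{-1} A'Wb. W = identity gives the one-step estimator; the optimally weighted version just passes a different W.

```r
# Closed-form weighted GMM for linear moment conditions E[S(y - G theta)] = 0.
# Illustrative sketch; names do not correspond to package internals.
gmm_linear <- function(S, G, y, W = diag(ncol(S))) {
  A <- crossprod(S, G) / nrow(S)   # Jacobian of the moments (up to sign)
  b <- crossprod(S, y) / nrow(S)
  solve(t(A) %*% W %*% A, t(A) %*% W %*% b)
}
```

In the just-identified case the choice of W drops out and the estimator reduces to ordinary IV/OLS, which is a convenient sanity check.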
Yep, everything looks to be working.

--------------------------------------------------
Results
--------------------------------------------------
Point estimate of the target parameter: -0.09143571
Number of bootstraps: 50
Bootstrapped confidence intervals (nonparametric):
90%: [-0.15318, -0.01884546]
95%: [-0.1619731, -0.01628884]
99%: [-0.1815772, -0.01391194]
p-value: 0
Bootstrapped J-test p-value: 0

The bootstrap distribution certainly looks more normal, but with only 50 draws it's hard to be sure. I'm not quite sure of the best way to verify that the bootstrapped J-test is working properly. |
Why does it take a long time to run? The bootstrapped J-test can be implemented the same way as what we talked about before in #66. |
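For reference, the statistic underlying the test can be sketched generically (the bootstrap version from #66 additionally recenters the moments; names here are illustrative, not the package's): with k moments and p parameters, J = n * gbar' W gbar is asymptotically chi-squared with k - p degrees of freedom under optimal weighting.

```r
# Generic J-statistic for linear moment conditions (illustrative sketch).
j_stat <- function(S, G, y, theta, W) {
  n <- nrow(S)
  gbar <- crossprod(S, y - G %*% theta) / n
  as.numeric(n * t(gbar) %*% W %*% gbar)
}
```

In a just-identified model the moments can be set exactly to zero at the estimate, so J is numerically zero and there is no J-test output, consistent with the logs above.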
Hm, why does shaping the matrices take time? Is that some R quirk? |
I would say the quirk is mine, i.e., inefficiency. I've rewritten the code so the matrices are now shaped via matrix multiplication. |
Ok, that makes a lot of sense. |
The Monte Carlo simulations seem to check out. For N = 1000, we reject 1% of the time at the alpha = 0.05 level. For added detail: each simulation had 2,500 iterations and 500 bootstraps.

args <- list(data = dt,
             target = "att",
             m0 = ~ 1,
             m1 = ~ 1,
             propensity = d ~ 1 + factor(z),
             link = "linear",
             ivlike = c(y ~ d,
                        y ~ d | z,
                        y ~ d | factor(z),
                        y ~ z,
                        y ~ factor(z)),
             point = TRUE,
             noisy = FALSE,
             bootstraps = bootstraps) |
Hmm, seems strange but could be the case. |
Oh, and how many support points does z have in this example? |
Ah, just 2.
I just realized I forgot to set it. Nevertheless, I follow #66:
|
Ok, but you should definitely investigate why the two step doesn't work in this DGP. It should work generally, so this may be a problem with how it is implemented. I'll expand my note to cover the bootstrap procedure so we can be sure we are on the same page. I think now that we are solving this explicitly, we might as well use the recentered criterion for the estimator as well. Will try to put that up later today. |
If I understand things correctly, the problem with inverting matrices stems from collinear S-functions. We can probably remove all the redundant components for the user and display a message informing them of this. |
Thanks for the document -- that's perfect. The structure of what you have written is like an IV model with S as a vector of instruments for \tilde{G}. Is there a way to extend that logic to specifications with D in them? I had tried last weekend to set up the GMM problem like an IV problem, but couldn't figure out how to accommodate such specifications. Would be great if that were possible, since then we could just have one option that does TSLS with S as the instrument for G. We could use the AER package for that and life would be much easier. Regarding your example above, I guess the solution then is just to change |
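For the cases where the IV framing does apply, the TSLS idea (S as instruments for G) can be written out by hand so the sketch is self-contained; in practice AER::ivreg could do this, as suggested. Names below are illustrative.

```r
# Two-stage least squares with instrument matrix S for regressor matrix G.
# Illustrative sketch of the TSLS idea discussed above.
tsls <- function(y, G, S) {
  Ghat <- S %*% solve(crossprod(S), crossprod(S, G))  # first stage: project G on S
  solve(crossprod(Ghat, G), crossprod(Ghat, y))       # second stage
}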
I was trying to do this, but I don't think it's possible.
Yep, that did the trick; another simulation with two-step GMM is being run now. |
That was my reasoning too. Anyway, what is your proposal for checking whether the moments are redundant? Also, here is an expanded writeup that works out the bootstrap estimator and the J-test. You should check it before implementing to make sure it makes sense (and that I didn't make an algebra mistake)! |
Yes I agree.
Perhaps I'm wrong, but I was thinking of just checking for collinearity when constructing the weight matrix. |
Wouldn't we want to drop the redundant moments before doing the first step estimation, though? Or this doesn't depend on \theta? |
Ah you're right, this doesn't depend on theta. |
Or better yet, decompose H and look at what part is the problem |
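One concrete way to "decompose H and look at what part is the problem" is a rank-revealing QR with column pivoting, which flags which columns (moment components) are numerically redundant. H below is an illustrative stand-in, not the package's actual matrix.

```r
# Identify numerically redundant columns via pivoted QR.
drop_collinear <- function(H, tol = 1e-8) {
  qrH <- qr(H, tol = tol)
  keep <- sort(qrH$pivot[seq_len(qrH$rank)])
  list(keep = keep, drop = setdiff(seq_len(ncol(H)), keep))
}
# Example: the third column is the sum of the first two, so exactly one
# column is flagged as redundant.
H <- cbind(a = c(1, 0, 1, 2), b = c(0, 1, 1, 1))
H <- cbind(H, c = H[, "a"] + H[, "b"])
```

Note that pivoting decides which of the collinear columns survives, so reporting the dropped components to the user (as in the warning message below) is important.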
Yes, I think that's nice, so I gave it a shot.
Hm, I thought I needed H'H, since I care about the errors? But currently, the function does the following.
The output looks as follows: LP solver: Gurobi ('gurobi')
Obtaining propensity scores...
Generating target moments...
Integrating terms for control group...
Integrating terms for treated group...
Generating IV-like moments...
Moment 1...
Moment 2...
Moment 3...
Moment 4...
Moment 5...
Moment 6...
Moment 7...
Moment 8...
Moment 9...
Point estimate of the target parameter: 6.007976
Warning message:
The following components have been dropped due to collinearity:
IV-like Spec. Component
----------------------------------------------
2 factor(z)2
3 (Intercept)
3 d
4 (Intercept)
4 d
4 z

And the simulation results suggest the bootstrap J-test for GMM (now using optimal weighting) is working as it should, although we still need large samples for the rejection rates to look correct. |
…moment conditions. However, the bootstrap is now failing when dropping moments---this is being investigated.
Yeah I guess you do need the whole matrix H. And just to check -- now we are using the recentered moments also for computing the bootstrapped estimates, right? I will try to play around with this in simulations soon just to make sure I can't break it. |
Ah, sorry, not yet. I had forgotten about that; I will put it in. But I am curious: I thought recentering was not necessary for the parameter estimates, but only for the J-test. I also suspect I'm being too aggressive when dropping redundant moments... |
Okay, everything has been implemented. |
Yes, it's true that it is not necessary to recenter the moments for the bootstrapped parameter estimates to be consistent. |
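The recentering itself is a small operation and can be sketched with illustrative names (not the package's): each bootstrap moment is shifted by the full-sample moment evaluated at the full-sample estimate, so the bootstrap J-statistic is computed under the null even when the model is misspecified.

```r
# Sample moments for linear moment conditions, and their recentered
# bootstrap counterpart. Illustrative sketch only.
gbar <- function(S, G, y, theta) crossprod(S, y - G %*% theta) / nrow(S)
recentered <- function(S_b, G_b, y_b, theta, g_full) {
  gbar(S_b, G_b, y_b, theta) - g_full
}
```

A quick sanity check: when the "bootstrap" sample is the full sample itself, the recentered moments vanish at theta by construction.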
Haven't been able to break this, so let's call it done! |
MWE:
The output looks suspiciously non-random to me.
For example, iterations 8,9, and 10 are all similar and much larger than the other ones.
Then so are 14, 15 and 16.
Maybe I'm just seeing patterns where there aren't any, but this should be double checked...