summary() should not print anything #1007
Comments
Should work like `summary.lm()`. Checklist:
I started working on in
Fixing As far as we use generic
If we make
That sounds great. This will make the methods consistent with other predict methods, which is what I had intended all along. We can use the same sorts of arguments as in `predict.lm()`. But it would be good to have a
OK. I will change
Sounds good. I can review that before we proceed to the others.
On the S4 question above, the main reason was inheritance. But if we revert the predicted objects to a single list-style structure, the inheritance arguments may not be as compelling. The move may very well break some users' old code, however. Accessing the slots in an S4 object is something we would as a principle discourage, but for some functions this has been the only way to extract the desired quantities (at least until #108 is resolved). I'm happy to go with whatever works most consistently with the other methods. Suggest we call the fitted variant classes the same name as the
I designed summary objects as a list of small classed objects. I removed prediction objects, and predictions are either a named vector or a list. I am not sure if we should include the CI, because the user can calculate it from the SE.
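On the CI point: if predictions return fitted values and standard errors, a user can indeed construct a confidence interval directly. A minimal sketch, assuming normal-theory intervals; the `fit` and `se` values here are made up for illustration, not output from any actual textmodel:

```r
# Recovering a confidence interval from fitted values and standard errors.
fit <- c(0.5, -0.2)   # hypothetical predicted values
se  <- c(0.1, 0.3)    # hypothetical standard errors

level <- 0.95
z <- qnorm(1 - (1 - level) / 2)  # ~1.96 for a 95% interval

ci <- cbind(lwr = fit - z * se, upr = fit + z * se)
ci
```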
Let's say the following, which is consistent with what you have outlined above (and largely just summarizes what you have been proposing): fitted, summary, and predicted variants.

```r
## S3 method for class 'textmodel_wordscores'
predict(object, newdata, se.fit = FALSE,
        interval = c("none", "confidence"), level = 0.95,
        rescaling = c("none", "lbg", "mv"), ...)
```

Return: a predicted object, following the model of `predict.lm()`:

```r
x <- rnorm(5)
y <- x + rnorm(5)
predict(lm(y ~ x))
#          1          2          3          4          5
# -0.8375490 -0.8598590 -1.0842958 -0.2708943 -1.5676246
predict(lm(y ~ x), interval = "confidence")
#          fit       lwr       upr
# 1 -0.8375490 -2.216833 0.5417345
# 2 -0.8598590 -2.226422 0.5067040
# 3 -1.0842958 -2.530503 0.3619111
# 4 -0.2708943 -2.772334 2.2305456
# 5 -1.5676246 -4.043157 0.9079075
class(predict(lm(y ~ x), interval = "confidence"))
# [1] "matrix"
predict(lm(y ~ x), se.fit = TRUE)
# $fit
#          1          2          3          4          5
# -0.8375490 -0.8598590 -1.0842958 -0.2708943 -1.5676246
#
# $se.fit
# [1] 0.4334036 0.4294065 0.4544324 0.7860116 0.7778708
#
# $df
# [1] 3
#
# $residual.scale
# [1] 0.949114
```

Not perhaps an ideal design, but it's an established model. In addition to the above, we would also class the predicted objects beyond their base class, so that we could add a print method.
It's more similar to
Should
Coefficients are in a matrix in
Yes. Great stuff on the new formatting.
It is
coef.textmodel can return a named vector if the textmodel has just one set of coefficients, or a matrix if (similar to lm) it has multiple coefficients for the same dimension (features, in the wordscores case). For wordfish, a list is probably the natural output, since the coefficients are different lengths depending on whether they are document or feature coefficients.
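To make the three return shapes concrete, here is a small sketch; the object names and values are hypothetical, not the actual quanteda classes or output:

```r
# 1. A named vector, when there is a single set of coefficients:
coef_vec <- c(feature1 = 0.42, feature2 = -1.10)

# 2. A matrix, when several quantities (e.g. estimate and SE) exist
#    for the same dimension, similar to coef(summary(lm(...))):
coef_mat <- cbind(estimate = c(0.42, -1.10), se = c(0.05, 0.21))
rownames(coef_mat) <- c("feature1", "feature2")

# 3. A list, when components have different lengths, as for the
#    wordfish document vs feature parameters:
coef_lst <- list(
  documents = c(doc1 = -0.8, doc2 = 0.3, doc3 = 0.5),
  features  = c(feature1 = 1.2, feature2 = -0.4)
)
```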
Let's specify the detail as we move onto other functions. Shall we tackle wordfish next?
Agreed, the natural choices for output formats will become more apparent once we do a few more formats. CA and wordfish will be very similar to one another. I can do NB and affinity if you tackle the others.
After working on

Suggest the following for wordscores and wordfish:
Just like the older version of `head.dfm()`, `summary.textmodel_wordfish_fitted()` prints model parameters. It should only return values, just like `summary.lm()`. `summary.textmodel_wordscores_fitted()` is also wrong in the same way.
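The requested behaviour is the standard S3 pattern: `summary()` assembles and returns a classed object, and all display is delegated to a `print()` method for that class. A minimal sketch with a hypothetical `mymodel` class, not the actual quanteda implementation:

```r
# summary() builds and returns an object; it prints nothing itself.
summary.mymodel <- function(object, ...) {
  out <- list(coefficients = object$coefficients,
              nobs = object$nobs)
  class(out) <- "summary.mymodel"
  out
}

# All printing lives in the print method for the summary class.
print.summary.mymodel <- function(x, ...) {
  cat("Model with", x$nobs, "observations\n")
  print(x$coefficients)
  invisible(x)
}

fit <- structure(list(coefficients = c(a = 1.5, b = -0.3), nobs = 10),
                 class = "mymodel")
s <- summary(fit)  # assigned, so nothing is printed here
class(s)
# [1] "summary.mymodel"
```

At the top level an unassigned `summary(fit)` still displays nicely, because R auto-prints the returned object via `print.summary.mymodel()`; assigning the result suppresses output entirely, which is exactly what `summary.lm()` does.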