
srr stats updates #149

mpadge opened this issue Mar 4, 2024 · 3 comments

mpadge commented Mar 4, 2024

Sorry @s3alfisc that this has taken a while, but here would be my minor TODO tasks to finalise your standards compliance:


You've documented the following with srrstats tags, but these should all be moved to srrstatsNA (a sketch of the srrstatsNA block format is included below the list):

  • G2.6
  • G2.7
  • G2.10
  • G2.11
  • G2.12
  • G2.14
  • RE1.1
  • RE1.4
  • RE6.1 - because your plot method is a generic, and that standard only applies to packages without generic plot methods.
  • RE6.2
  • RE6.3

You've also still got one srrstatsTODO.

You currently comply with 126 / 175 standards; that will reduce to 115, which is still 65% compliance.
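
For reference, the moved tags in srr-stats-standards.R would then sit in the NA_standards block roughly like this (a sketch only; the justification text is a placeholder to be replaced with your actual reasoning):

```r
#' NA_standards
#'
#' @srrstatsNA {G2.6} Brief explanation of why this standard does not apply
#'   to this package.
#' @srrstatsNA {RE6.1} The package provides a generic plot method, and this
#'   standard only applies to packages without one.
#' @noRd
NULL
```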


It's okay to have some general responses within the srr-stats-standards.R file. In your case, these would include:

  • G3.0
  • G3.1
  • G3.1a
  • G5.2a
  • RE1.4

Please just update the wording of those to general statements of compliance.
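
For instance, a reworded general statement in that file could read something like the following (illustrative wording only; adapt it to how the package actually complies):

```r
# Illustrative wording only; replace with the package's actual approach.
#' srr_stats
#'
#' @srrstats {G3.0} Numeric equality comparisons throughout the package use
#'   appropriate tolerances rather than exact equality.
#' @noRd
NULL
```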


The following statements in srr-stats-standards.R might be better moved elsewhere:

  • RE1.4 - to a vignette, though it could also be left repeated in srr-stats-standards.R.
  • RE4.17 - to methods.R, which you've already done, so it can be removed from srr-stats-standards.R.

Additional comments:

```r
#' @srrstats {RE5.0} *Scaling relationships between sizes of input data
#'  (numbers of observations, with potential extension to numbers of
#'  variables/columns) and speed of algorithm.* I don't really understand
#' this requirement.
```

In your case, it would mean adding some kind of documentation or test to show how, for example, the time taken by the boottest() function scales with increasing numbers of rows of input data. Your examples use the voters data, which has 300 rows. How does execution time increase for 3,000 rows? For 30,000 rows? Compliance here can be as simple as a statement of how execution time scales (ideally with an estimate of the coefficient); in that case, such a statement should be clearly given in the main function documentation. Or you can have some kind of test that expects supra-linear scaling, or ... how you comply is up to you, but there should be some form of compliance here.
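
As one possibility, a minimal sketch of such a scaling check might look like the following (assuming boottest() is fwildclusterboot::boottest(); the argument names used here are taken from the package examples and may need adjusting, and the simulated data and sizes are placeholders):

```r
# Sketch of an RE5.0 scaling check: time boottest() on simulated data of
# increasing size and estimate the empirical scaling exponent.
library(fwildclusterboot)

time_boottest <- function(n) {
  dat <- data.frame(
    y = rnorm(n),
    x = rnorm(n),
    cluster = sample(seq_len(max(2, n %/% 50)), n, replace = TRUE)
  )
  fit <- lm(y ~ x, data = dat)
  system.time(
    # param, B and clustid are assumed argument names; see ?boottest
    boottest(fit, param = "x", B = 9999, clustid = "cluster")
  )[["elapsed"]]
}

sizes <- c(300, 3000, 30000)
times <- vapply(sizes, time_boottest, numeric(1))

# Slope of log(time) ~ log(n): a value near 1 suggests roughly linear
# scaling in the number of rows.
coef(lm(log(times) ~ log(sizes)))[[2]]
```

Whether you then report the estimated exponent in the boottest() documentation or assert the expected behaviour in a unit test is entirely up to you.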


Hope that helps!


s3alfisc commented Mar 4, 2024

Awesome, thank you @mpadge! I'll start working on this tomorrow and am very positive that I'll be able to address all your points by the end of the weekend!

s3alfisc commented

Hi @mpadge,

  • I have moved all non-applicable standards to the NA section: b80f9c7
  • I have moved RE1.4 and RE4.17 into the suggested spots: 517b46e
  • I have added details on the computational complexity of the most costly operation of the algorithm: 523bae4

Once all unit tests (via GHA) pass, I will merge version 0.15.0 into main =)


mpadge commented Apr 17, 2024

Great news - just ping over in the review thread when that's done, and it'll hopefully get moving again. Thanks!
