submission: awardFindR #432
Hello @mccallc, thanks for submitting. We were discussing this submission with the Associate Editors, but we need more information. In particular, we want to know "how much this package solves the problem". Say, does it cover all possible sources? The top 10 sources? The most prestigious sources? Also, it would be nice to know not just that other packages cover some sources, but more precisely what proportion of the covered sources is already provided by other packages. If all sources are already covered, maybe the benefit of this package is that it provides a consistent interface across sources? Please edit your answer on the submission post, and let me know once it is done.

Hello @melvidoni, thank you so much for your consideration. I believe I have now addressed your questions in the original post.

Thanks @mccallc. @annakrystalli will be your Handling Editor. She will run the initial checks soon.

@ropensci-review-bot assign @annakrystalli as editor

Assigned! @annakrystalli is now the editor.
Editor checks:
Editor comments

Dear @mccallc, thanks for submitting this super interesting package! I've now completed initial editor checks and there are a few issues to sort out before we proceed.

goodpractice

There are a number of issues returned from running the `goodpractice` checks. It is good practice to …

Tests

A bit more detail on the single failing test when running …

The test failure means I can't run …

Spellcheck

Only a single typo thrown up by the spell check: …

Documentation question

In your docs, you only demonstrate the use of the …

@mccallc, could you also please add the rOpenSci under-review badge to your README?
Hello @annakrystalli, I believe I have addressed each of your points regarding good practice, tests, spellcheck, documentation, and adding the rOpenSci badge. These are all reflected in the latest commit to the repository.
Hello @annakrystalli, hope I'm not bothering, but I'm not sure if I messed up any automatic processes when I accidentally closed this ticket prematurely. I believe I addressed your previous comments, and I wanted to indicate I'm still interested in continuing the review process.

Hello @mccallc! Thanks for confirming and for addressing the initial comments. 99% coverage is absolutely fine. I'll start looking for reviewers.

@ropensci-review-bot seeking reviewers
Please add this badge to the README of your package repository:

[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/432_status.svg)](https://github.com/ropensci/software-review/issues/432)

Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news
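As a sketch of the requested NEWS.md (the version number comes from the submission metadata elsewhere in this thread; the bullet text is a placeholder, not the package's actual changelog):

```
# awardFindR 0.1.0

* Submitted to rOpenSci software peer review; changes made during the
  review will be recorded here.
```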
@ropensci-review-bot add @karawoo to reviewers

@karawoo added to the reviewers list. Review due date is 2021-06-01. Thanks @karawoo for accepting to review! Please refer to our reviewer guide.

@ropensci-review-bot add @zambujo to reviewers

@zambujo added to the reviewers list. Review due date is 2021-06-01. Thanks @zambujo for accepting to review! Please refer to our reviewer guide.
Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.

Documentation

The package includes all the following forms of documentation:
Functionality
Estimated hours spent reviewing: 3
Review Comments

This is a really nice and well put together package. It was very easy to get started with, and the functions are quite intuitive. I thought the contributing guide was also particularly well written. While testing the examples I found that one example threw an error, and a few others didn't error but returned different output than I expected:

```r
library("awardFindR")

## Throws an error
specific <- awardFindR(keywords = c("ethnography", "case studies"))
#> ... [removed long curl output here] ...
#> Error in data.frame(grantee = info[2], date, amount, program = info[4], :
#>   arguments imply differing number of rows: 1, 0

## Returns NULL
(arnold <- arnold_get("qualitative data", 2016, 2017))
#> POST https://pyj9b8sltv-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20JavaScript%20(4.5.1)%3B%20Browser%20(lite)%3B%20instantsearch.js%20(4.8.3)%3B%20JS%20Helper%20(3.2.2)&x-algolia-api-key=bec9ead5977b11ae383f4df272c2f106&x-algolia-application-id=PYJ9B8SLTV ... OK (HTTP 200).
#> NULL

## Succeeds with a warning
sloan <- sloan_get(c("qualitative data", "case studies"), 2018, 2020)
#> GET https://sloan.org/grants-database?dynamic=1&order_by=approved_at&order_by_direction=desc&limit=3000 ... OK (HTTP 200).
#> Warning in grepl(keyword, descriptions, ignore.case = TRUE): argument 'pattern'
#> has length > 1 and only the first element will be used

## Returns a list instead of a data frame
nsf <- nsf_get("ethnography", "2020-01-01", "2020-02-01")
#> GET https://api.nsf.gov/services/v1/awards.json?keyword="ethnography"&printFields=id,date,startDate,expDate,title,awardeeName,piFirstName,piLastName,piEmail,cfdaNumber,fundsObligatedAmt,fundProgramName,abstractText,awardeeCounty&dateStart=01/01/2020&dateEnd=02/01/2020&offset=1 ... OK (HTTP 200).

class(nsf)
#> [1] "list"
```

Created on 2021-05-28 by the reprex package (v0.3.0.9001)

I noticed that the package sets `options(stringsAsFactors = FALSE)`; options changed this way should be restored when the function exits:

```r
opt <- options(stringsAsFactors = FALSE)
on.exit(options(opt))
```

In general the functions in this package are quite consistent, however some functions (…)

Other minor comments

A few thoughts/ideas that I'll leave up to the package author whether they want to incorporate:
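For the `sloan_get()` warning above, one possible fix is to apply `grepl()` once per keyword and combine the logical results, since `grepl()` only uses the first element of a multi-element pattern. This is a sketch with illustrative data, not the package's actual internals:

```r
# grepl() warns and uses only the first pattern when given several keywords,
# so match each keyword separately and OR the logical vectors together.
keywords <- c("qualitative data", "case studies")
descriptions <- c("Case studies of X", "Quantitative work", "Qualitative data reuse")

hits <- Reduce(`|`, lapply(keywords, grepl, x = descriptions, ignore.case = TRUE))
descriptions[hits]  # the descriptions matching at least one keyword
```

This keeps the existing case-insensitive substring semantics while silencing the warning and actually honouring every keyword.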
Package Review
Documentation

The package includes all the following forms of documentation:
Functionality
Estimated hours spent reviewing: 3
Review Comments
I could not find any issues other than those pointed out in the previous reviews. I will nevertheless add a few minor points, mostly about design and styling, whose implementation is not required and left to the consideration of the authors.
Hello! Thank you very much @karawoo and @zambujo for your reviews and extensive comments. I really appreciate them. I've implemented changes that should address each individual comment, including:

Based on review from @karawoo: …

Based on review from @zambujo: …

Again, thank you all so much. These comments really helped push this package in the right direction. Let me know what's next in the process, @annakrystalli!
All of my comments have been addressed and I don't have any more to add. Nice work! |
All comments have been addressed from my side as well. No further comments other than: well done! And apologies for the delayed reply.
@ropensci-review-bot approve |
Approved! Thanks @mccallc for submitting and @karawoo, @zambujo for your reviews! 😁 To-dos:
Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them `"rev"`-type contributors in the `Authors@R` field (with their consent).

Welcome aboard! We'd love to host a post about your package: either a short introduction to it with an example for a technical audience, or a longer post with some narrative about its development or something you learned, and an example of its use for a broader readership. If you are interested, consult the blog guide and tag @stefaniebutland in your reply. She will get in touch about timing and can answer any questions.

We've put together an online book with our best practice and tips; this chapter starts the third section, which gives guidance for after onboarding. Please tell us what could be improved; the corresponding repo is here. Last but not least, you can volunteer as a reviewer by filling in a short form.
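For reference, reviewer credit in DESCRIPTION uses the `"rev"` role code in `Authors@R`. A sketch (the reviewer handles come from this thread; the names and comment text are placeholders):

```r
# Illustrative Authors@R value; role = "rev" marks a package reviewer.
authors <- c(
  person("Michael", "McCall", role = c("aut", "cre")),
  person("Kara", "Woo", role = "rev",
         comment = "Reviewed the package for rOpenSci"),
  person("zambujo", role = "rev",
         comment = "Reviewed the package for rOpenSci")
)
format(authors)
```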
Submitting Author: Michael McCall (@mccallc)
Due date for @karawoo: 2021-06-01
Other Authors: Sebastian Karcher (@adam3smith)
Repository: https://github.com/PESData/awardFindR
Version submitted: 0.1.0
Editor: @annakrystalli
Reviewers: @karawoo, @zambujo
Due date for @zambujo: 2021-06-01
Archive: TBD
Version accepted: TBD
Scope
Please indicate which category or categories from our package fit policies this package falls under (please check an appropriate box below; if you are unsure, we suggest you make a pre-submission inquiry):
Explain how and why the package falls under these categories (briefly, 1-2 sentences):
Acts as an interface to web-based APIs and search engines.
Anyone interested in trends in academic funding or researchers looking for a suitable funder based on previously funded research. The package can be used for scientometrics more generally, or more narrowly field-specific lines of inquiry.
The target audience is primarily US-based researchers, and the sources of governmental research funding are all covered to reasonable expectations through the combination of NSF, NEH, NIH, Federal RePORTER and USAspending. Private funders are more difficult to evaluate, but the sources included are the most prominent in the social sciences.
Nothing that is functionally similar. Some source-specific packages exist, but none that aggregate across sources.
nihexporter provides search functionality for the NIH grant database, but is limited to an existing scrape of NIH data from 2016. awardsBot dynamically scrapes NSF awards, but lacks search functionality and is primarily designed for tracking and emailing PIs regarding their obligations. fedreporter is not actively maintained and only offers search functionality for an antiquated and limited version of the Federal RePORTER API we implement here.
This package offers a unified, simple search functionality across these federal grant databases, and can include many private funders as well that are not covered by any existing packages.
Yes
Technical checks
Confirm each of the following by checking the box.
This package:
Publication options
Do you intend for this package to go on CRAN?
Do you intend for this package to go on Bioconductor?
Do you wish to submit an Applications Article about your package to Methods in Ecology and Evolution? If so:
MEE Options
Code of conduct