
comtradr: API wrapper for UN Comtrade data #613

Closed
13 of 29 tasks
datapumpernickel opened this issue Oct 15, 2023 · 49 comments

@datapumpernickel

datapumpernickel commented Oct 15, 2023

Date accepted: 2024-01-09
Submitting Author Name: Paul Bochtler
Submitting Author Github Handle: @datapumpernickel
Repository: https://github.com/ropensci/comtradr
Version submitted: 0.4.0 (not yet released)
Submission type: Standard
Editor: @noamross
Reviewers: @ernestguevarra, @potterzot

Archive: TBD
Version accepted: TBD
Language: en


  • Paste the full DESCRIPTION file inside a code block below:
Package: comtradr
Title: Interface with the United Nations Comtrade API
Version: 0.4.0.0
Authors@R: c(
            person("Paul", "Bochtler", 
              email = "paulbochtler.gh@gmail.com", 
              role = c("aut", "cre","cph"),
              comment = c(ORCID = "0000-0002-9146-6185")), 
            person("Harriet", "Goers", 
              email = "hgoers@umd.edu", 
              role = c("aut")), 
            person("Chris", "Muir", 
              email = "chrismuirRVA@gmail.com", 
              role = c("aut")), 
            person("Alicia", "Schep", 
                  role = "rev", 
                  comment = c(ORCID = "0000-0002-3915-0618", 
                              "Alicia reviewed the package for rOpenSci, 
                              see https://github.com/ropensci/onboarding/issues/141")), 
            person("Rafael", "Hellwig", 
                  role = "rev", 
                  comment = c(ORCID = "0000-0002-3092-3493", 
                              "Rafael reviewed the package for rOpenSci, 
                              see https://github.com/ropensci/onboarding/issues/141")),
            person("Juergen", "Amann", role=c("ctb")))
Description: Interface with and extract data from the United Nations Comtrade 
  API <https://comtradeplus.un.org/>. Comtrade provides country level shipping 
  data for a variety of commodities, these functions allow for easy API query 
  and data returned as a tidy data frame.
Depends: R (>= 4.1.0)
License: GPL-3
Encoding: UTF-8
LazyData: true
Imports: 
    lifecycle,
    fs,
    readr,
    askpass,
    cli,
    httr2,
    rlang,
    stringr,
    poorman,
    lubridate,
    purrr
RoxygenNote: 7.2.3
URL: https://docs.ropensci.org/comtradr/, https://github.com/ropensci/comtradr
BugReports: https://github.com/ropensci/comtradr/issues
NeedsCompilation: no
Maintainer: Paul Bochtler <paulbochtler.gh@gmail.com>
Suggests: 
    covr,
    dplyr,
    ggplot2,
    httptest2,
    knitr,
    rmarkdown,
    testthat (>= 3.0.0),
    vcr
VignetteBuilder: knitr
Roxygen: list(markdown = TRUE)
Config/testthat/edition: 3

Scope

  • Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):

    • data retrieval
    • data extraction
    • data munging
    • data deposition
    • data validation and testing
    • workflow automation
    • version control
    • citation management and bibliometrics
    • scientific software wrappers
    • field and lab reproducibility tools
    • database software bindings
    • geospatial data
    • text analysis
  • Explain how and why the package falls under these categories (briefly, 1-2 sentences):
    The package leverages httr2 to extract trade data from the UN Comtrade API, which is one of the standard sources for trade data worldwide. See here for more info on the API.

  • Who is the target audience and what are scientific applications of this package?
    The package can be used for econometric research into countries' trade relations, dependencies between countries, and many other use cases for trade data. The target audience is students, scientists, and practitioners in econometrics, political science, and related fields.

  • Are there other R packages that accomplish the same thing? If so, how does yours differ or meet our criteria for best-in-category?

As far as I am aware, there is no other package that wraps the UN Comtrade API.

The package has previously been reviewed by rOpenSci and is already part of the rOpenSci suite. However, since the API underwent fundamental changes, most of the package's functions changed as well. We therefore asked whether we could get another review; this inquiry was answered positively here: ropensci/software-review-meta#100

  • Explain reasons for any pkgcheck items which your package is unable to pass.
    As far as I am aware, the package currently passes all the pkgcheck checks.

Technical checks

Confirm each of the following by checking the box.

This package:

Publication options

  • Do you intend for this package to go on CRAN?

  • Do you intend for this package to go on Bioconductor?

  • Do you wish to submit an Applications Article about your package to Methods in Ecology and Evolution? If so:

MEE Options
  • The package is novel and will be of interest to the broad readership of the journal.
  • The manuscript describing the package is no longer than 3000 words.
  • You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see MEE's Policy on Publishing Code)
  • (Scope: Do consider MEE's Aims and Scope for your manuscript. We make no guarantee that your manuscript will be within MEE scope.)
  • (Although not required, we strongly recommend having a full manuscript prepared when you submit here.)
  • (Please do not submit your package separately to Methods in Ecology and Evolution)

Code of conduct

@ropensci-review-bot
Collaborator

Thanks for submitting to rOpenSci, our editors and @ropensci-review-bot will reply soon. Type @ropensci-review-bot help for help.

@ropensci-review-bot
Collaborator

🚀

Editor check started

👋

@ropensci-review-bot
Collaborator

Checks for comtradr (v0.4.0.0)

git hash: e82abc78

  • ✔️ Package name is available
  • ✔️ has a 'codemeta.json' file.
  • ✔️ has a 'contributing' file.
  • ✔️ uses 'roxygen2'.
  • ✔️ 'DESCRIPTION' has a URL field.
  • ✔️ 'DESCRIPTION' has a BugReports field.
  • ✔️ Package has at least one HTML vignette
  • ✔️ All functions have examples.
  • ✔️ Package has continuous integration checks.
  • ✔️ Package coverage is 79.6%.
  • ✔️ R CMD check found no errors.
  • ✔️ R CMD check found no warnings.

Package License: GPL-3


1. Package Dependencies

Details of Package Dependency Usage (click to open)

The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.

type package ncalls
internal base 78
internal comtradr 45
internal stats 15
internal graphics 7
internal methods 5
internal utils 4
imports cli 36
imports httr2 14
imports rlang 4
imports purrr 4
imports fs 3
imports lubridate 2
imports stringr 1
imports lifecycle NA
imports readr NA
imports askpass NA
imports poorman NA
suggests covr NA
suggests dplyr NA
suggests ggplot2 NA
suggests httptest2 NA
suggests knitr NA
suggests rmarkdown NA
suggests testthat NA
suggests vcr NA
linking_to NA NA

Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running 's <- pkgstats::pkgstats(<path/to/repo>)', and examining the 'external_calls' table.

base

c (44), list (4), body (3), paste0 (3), colnames (2), data.frame (2), get (2), if (2), return (2), switch (2), as.numeric (1), by (1), length (1), message (1), names (1), new.env (1), paste (1), seq.Date (1), Sys.getenv (1), tolower (1), which (1), with (1)

comtradr

ct_get_ref_table (8), check_date (2), comtrade_after (2), comtrade_error_body (2), comtrade_is_transient (2), ct_build_request (2), ct_check_params (2), ct_download_ref_table (2), ct_perform_request (2), ct_process_response (2), is_year (2), check_clCode (1), check_customsCode (1), check_flowCode (1), check_freq (1), check_motCode (1), check_partner2Code (1), check_partnerCode (1), check_reporterCode (1), check_type (1), convert_to_date (1), ct_commodity_db_type (1), ct_commodity_lookup (1), ct_country_lookup (1), ct_get_data (1), ct_get_remaining_hourly_queries (1), ct_get_reset_time (1), ct_register_token (1)

cli

cli_inform (34), cli_warn (2)

stats

update (9), frequency (5), time (1)

httr2

resp_body_json (4), resp_header (4), request (2), req_error (1), req_headers (1), req_retry (1), req_throttle (1)

graphics

text (7)

methods

isGroup (4), new (1)

purrr

map (2), map_chr (1), map_int (1)

rlang

arg_match (4)

utils

data (4)

fs

path_package (3)

lubridate

year (2)

stringr

str_extract (1)

NOTE: Some imported packages appear to have no associated function calls; please ensure with author that these 'Imports' are listed appropriately.


2. Statistical Properties

This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.

Details of statistical properties (click to open)

The package has:

  • code in R (100% in 10 files) and
  • 3 authors
  • 1 vignette
  • 2 internal data files
  • 11 imported packages
  • 16 exported functions (median 5 lines of code)
  • 81 non-exported functions in R (median 5 lines of code)

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages
The following terminology is used:

  • loc = "Lines of Code"
  • fn = "function"
  • exp/not_exp = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by the checks_to_markdown() function

The final measure (fn_call_network_size) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

measure value percentile noteworthy
files_R 10 59.0
files_vignettes 1 68.4
files_tests 10 90.7
loc_R 742 59.6
loc_vignettes 232 54.7
loc_tests 316 65.4
num_vignettes 1 64.8
data_size_total 6739 68.8
data_size_median 3369 72.0
n_fns_r 97 75.7
n_fns_r_exported 16 60.6
n_fns_r_not_exported 81 79.5
n_fns_per_file_r 8 81.5
num_params_per_fn 1 1.6 TRUE
loc_per_fn_r 5 8.1
loc_per_fn_r_exp 5 7.3
loc_per_fn_r_not_exp 5 9.7
rel_whitespace_R 21 64.9
rel_whitespace_vignettes 37 58.2
rel_whitespace_tests 15 55.5
doclines_per_fn_exp 16 7.0
doclines_per_fn_not_exp 0 0.0 TRUE
fn_call_network_size 35 58.7

2a. Network visualisation

Click to see the interactive network visualisation of calls between objects in package


3. goodpractice and other checks

Details of goodpractice checks (click to open)

3a. Continuous Integration Badges

R-CMD-check.yaml

GitHub Workflow Results

id name conclusion sha run_number date
6522795714 pkgcheck success e82abc 5 2023-10-15
6522795715 R-CMD-check NA e82abc 50 2023-10-15
6522795709 test-coverage success e82abc 30 2023-10-15

3b. goodpractice results

R CMD check with rcmdcheck

R CMD check generated the following note:

  1. checking data for non-ASCII characters ... NOTE
    Note: found 11 marked UTF-8 strings

R CMD check generated the following check_fail:

  1. rcmdcheck_non_ascii_characters_in_data

Test coverage with covr

Package coverage: 79.64

Cyclocomplexity with cyclocomp

No functions have cyclocomplexity >= 15

Static code analyses with lintr

lintr found the following 183 potential issues:

message number of times
Avoid library() and require() calls in packages 14
Lines should not be more than 80 characters. 169


Package Versions

package version
pkgstats 0.1.3.9
pkgcheck 0.1.2.9


Editor-in-Chief Instructions:

This package is in top shape and may be passed on to a handling editor

@noamross
Contributor

@ropensci-review-bot assign @noamross as editor

@ropensci-review-bot
Collaborator

Assigned! @noamross is now the editor

@noamross
Contributor

@ropensci-review-bot seeking reviewers

@ropensci-review-bot
Collaborator

Please add this badge to the README of your package repository:

[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/613_status.svg)](https://github.com/ropensci/software-review/issues/613)

Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news

@noamross
Contributor

Thanks @datapumpernickel for your submission (and your work taking over maintenance and bringing this package up to speed). As the bot says, this package is in nice shape, and I'll be seeking reviewers.

@datapumpernickel
Author

Thank you, I am looking forward to the review and feedback!

@noamross
Contributor

@ropensci-review-bot assign @ernestguevarra as reviewer

@ropensci-review-bot
Collaborator

@ernestguevarra added to the reviewers list. Review due date is 2023-11-06. Thanks @ernestguevarra for accepting to review! Please refer to our reviewer guide.

rOpenSci’s community is our best asset. We aim for reviews to be open, non-adversarial, and focused on improving software quality. Be respectful and kind! See our reviewers guide and code of conduct for more.

@ropensci-review-bot
Collaborator

@ernestguevarra: If you haven't done so, please fill this form for us to update our reviewers records.

@ropensci-review-bot
Collaborator

📆 @ernestguevarra you have 2 days left before the due date for your review (2023-11-06).

@ernestguevarra

Hi @datapumpernickel. Just a quick note to say that I am almost done with a comprehensive pass at reviewing your package! Sorry that I have been delayed. When I accepted the review role, I forgot where I should look to see the review process and my GitHub notifications were getting archived on my email.

Otherwise, I will be posting my review notes here in the next couple of hours.

@datapumpernickel
Author

Hi @ernestguevarra, no worries. You are just in time! Looking forward to the review. Thank you for the work to look into everything already!

@noamross
Contributor

noamross commented Nov 6, 2023

@ropensci-review-bot assign @potterzot as reviewer

@ropensci-review-bot
Collaborator

@potterzot added to the reviewers list. Review due date is 2023-11-27. Thanks @potterzot for accepting to review! Please refer to our reviewer guide.

rOpenSci’s community is our best asset. We aim for reviews to be open, non-adversarial, and focused on improving software quality. Be respectful and kind! See our reviewers guide and code of conduct for more.

@ropensci-review-bot
Collaborator

@potterzot: If you haven't done so, please fill this form for us to update our reviewers records.

@ernestguevarra

ernestguevarra commented Nov 6, 2023

@datapumpernickel, here is my review:

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • Briefly describe any working relationship you have (had) with the package authors.
  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (if you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need: clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s): demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions
  • Examples: (that run successfully locally) for all exported functions
  • Community guidelines: including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines.

Estimated hours spent reviewing:

  • Should the author(s) deem it appropriate, I agree to be acknowledged as a package reviewer ("rev" role) in the package DESCRIPTION file.

Review Comments

Test installation

Both local install and remote install (via GitHub) of the package proceeded without issues and as expected/documented.

Check package integrity

  • No issues on {devtools} checks. Check results were:
── R CMD check results ──────────────────────────────────────────────────────────────────────────── comtradr 0.4.0.0 ────
Duration: 19.1s

0 errors| 0 warnings| 0 notes
  • No issues on {devtools} test. Test results were:
══ Results ══════════════════════════════════════════════════════════════════════════════════════════════════════════════
Duration: 2.0 s

[ FAIL 0 | WARN 0 | SKIP 0 | PASS 78 ]
  • The following issues were raised by the {goodpractice} check:
── GP comtradr ──────────────────────────────────────────────────────────────────────────────────────────────────────────

It is good practice to

  • write unit tests for all functions, and all package code in general. 79% of
    code lines are covered by test cases.

    R/ct_build_request.R:52:NA
    R/ct_check_params.R:296:NA
    R/ct_check_params.R:300:NA
    R/ct_check_params.R:301:NA
    R/ct_check_params.R:302:NA
    ... and 97 more lines

  • avoid long code lines, it is bad for readability. Also, many people prefer
    editor windows that are about 80 characters wide. Try make your lines
    shorter than 80 characters

    data-raw/cmd_codes.R:7:81
    data-raw/country_codes.R:7:81
    data-raw/country_codes.R:22:81
    data-raw/country_codes.R:53:81
    data-raw/DATASET.R:11:81
    ... and 164 more lines

  • fix this R CMD check NOTE: Note: found 11 marked UTF-8 strings
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 
  • On unit tests for all functions: as per the rOpenSci Packaging Guide documentation on tests, a minimum coverage of 75% is required, so this should be OK. I looked at a few of the uncovered lines and they tend to be ones that are hard to test with CRAN and rhub in mind. For the sake of thoroughness, a final pass over the non-covered lines could check whether some of them can be covered with small adjustments to the existing test suites and/or a few straightforward additional tests.

  • On avoiding long lines, a number of these come from the scripts inside the data-raw directory that process the reference data used in the package. I am not 100% sure about this, but for those I think strict linting may not be necessary; the data-raw directory could simply be skipped in the lint checks. For the others, the offending lines tend to be long URLs and long messages that guide the user through a warning or an error. These are always challenging: URLs can't really be split across lines, and wrapped error/warning messages may not show up tidily on the console. For these, I might suggest adding # nolint at the end of the long line to quiet the good practice check (a rough sketch follows after this list).

  • On the 11 marked UTF-8 strings: this NOTE doesn't come up on devtools::check(), so maybe it only appears on rhub checks? I think these likely come from the datasets included in the package, particularly country names with accented characters. I am not fully certain what best practice is here. Ideally the actual names provided by Comtrade would be kept in the dataset, but to quiet down this NOTE it might be necessary to convert the text to ASCII; stringi::stri_enc_toascii() might be useful for this.
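
A rough illustration of both suggestions (the URL, object name, and the country column below are made up for the example, not taken from the package):

# Silence a single unavoidable long line for lintr / {goodpractice}; an
# exclusion such as `exclusions: list("data-raw")` in the .lintr file can
# skip the data-raw scripts entirely.
comtrade_guide_url <-
  "https://example.org/a/very/long/link/that/cannot/reasonably/be/wrapped" # nolint

# If converting the bundled reference data to ASCII is acceptable, the
# suggested helper could be applied to the affected columns:
country_codes$country <- stringi::stri_enc_toascii(country_codes$country)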

Check package metadata files

  • on devtools::spell_check(), I see:
  WORD          FOUND IN
                README.md:49,102,124
              README.Rmd:40,83,106
accomodate    README.md:28
              README.Rmd:30
arg           ct_commodity_lookup.Rd:40
              NEWS.md:26,56,57,58
              comtradr.Rmd:98,194,211
args          ct_commodity_lookup.Rd:45
              NEWS.md:50
ASEAN         country_codes.Rd:16
CMD           README.md:10
              README.Rmd:18
Codecov       README.md:12
              README.Rmd:19
comtrade      ct_build_request.Rd:10
              ct_perform_request.Rd:10,17
              get_primary_comtrade_key.Rd:5,10
              set_primary_comtrade_key.Rd:10,13
              README.md:91
              README.Rmd:75
              comtradr.Rmd:61
Comtrade      comtradr-package.Rd:7,11
              country_codes.Rd:17,18,28
              ct_commodity_lookup.Rd:5,52,54,57
              ct_get_data.Rd:5,53,69,77
              ct_perform_request.Rd:5,20
              get_primary_comtrade_key.Rd:13
              set_primary_comtrade_key.Rd:5,16
              title:1
              description:1,2
              NEWS.md:29,54,77
              README.md:15,16,24
              README.Rmd:22,22,28
              comtradr.Rmd:29,29,119,183,203,211
COMTRADE      get_primary_comtrade_key.Rd:10,13
              set_primary_comtrade_key.Rd:16
env           get_primary_comtrade_key.Rd:13
              set_primary_comtrade_key.Rd:16
func          NEWS.md:26,32,34,36
github        comtradr-package.Rd:33,34
https         comtradr-package.Rd:33,34
httr          ct_build_request.Rd:17,20
              ct_get_data.Rd:47,66
              ct_process_response.Rd:10,20
ℹ️        README.md:95
              README.Rmd:77
              comtradr.Rmd:63
iso           ct_country_lookup.Rd:16
              ct_process_response.Rd:20
json          ct_perform_request.Rd:17
              ct_process_response.Rd:20
              NEWS.md:34
natively      comtradr.Rmd:192
onboarding    comtradr-package.Rd:33,34
ORCID         comtradr-package.Rd:23,33,34
Readme        NEWS.md:4
repo          README.md:51
              README.Rmd:42
ropensci      comtradr-package.Rd:33,34
              README.md:150
              README.Rmd:130
rOpenSci      comtradr-package.Rd:33,34
              README.md:9
              README.Rmd:17
stringified   NEWS.md:36
testthat      NEWS.md:83,99
twi           ct_pretty_cols.Rd:9
useability    README.md:36
              README.Rmd:32
  • For the README, please address the genuine misspellings among the results produced by devtools::spell_check(). After correcting them, please run usethis::use_spell_check(), which lets you record the words that were false-positively flagged as misspelled in a WORDLIST kept in the inst folder. Once they are on this list, they will no longer be flagged as misspelled. Please note that when running usethis::use_spell_check(), the language will default to en-US and will be added to your DESCRIPTION file, and the spelling package will be added to Suggests (a short sketch of this workflow follows after this list).

  • For the README, in the first sentence of the first paragraph it might be good to match the text used in the DESCRIPTION file, i.e. something like "R package for interfacing with and extracting data from ...", so the two are consistent and coherent with each other. Otherwise, the README clearly states the problems the software is designed to solve and its target audience.

  • Maybe organise the entries in the DESCRIPTION file more coherently. For example, move the Maintainer line right after the Authors@R block and put the Suggests entries right after the Imports entries. The DESCRIPTION file works fine without these changes, but it will read better with them.

  • No issues with the NAMESPACE
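
A minimal sketch of that spell-check workflow (just the {spelling}/{usethis} calls mentioned above; nothing package-specific is assumed):

# list potentially misspelled words in the package documentation
spelling::spell_check_package()

# after fixing genuine typos, set up the WORDLIST in inst/, add
# Language: en-US to DESCRIPTION, and add spelling to Suggests
usethis::use_spell_check()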

Check documentation

  • Key functions of the package have help files that are generally appropriate and describe in fair detail what the function does and what it needs to produce such output.

  • Supporting or utility functions, however, tend to be less specific in the description of their functionality. In most cases these are functions that users will likely never call directly, as they are used internally by the main functions. It might be good to note this in each such function's description.

  • Some other functions are legacy functions that are lined up for deprecation or have been superseded. For these, it might be good to have text in the description that indicates what the deprecation plan is for these functions.

  • In the package website built with {pkgdown}, the reference page listing the function help files might benefit from some organisation of the functions. Groupings will depend on how you want to arrange them, but they are generally based on related functionality; this is also a good way to group data objects, legacy functions, and functions slated for deprecation. This can be done by creating a _pkgdown.yml file with usethis::use_pkgdown(). Once that file is created, you can edit/customise it in many ways, including grouping the reference pages for each function. To see how this is done, look at the _pkgdown.yml file of the {pkgdown} package website - https://github.com/r-lib/pkgdown/blob/main/pkgdown/_pkgdown.yml (a loose sketch follows after this list).

  • The package currently has one vignette, which is rendered as the Getting Started vignette. It expands the content of the README with more focused examples, which is good as it re-emphasises the main use case of the {comtradr} package.

  • It might be good to consider adding at least one other vignette aimed at those who are familiar with the earlier iteration of the package, contrasting the previous approach with the current one through a specific use case (i.e., retrieving data). Links to the UN's new Comtrade data structure would be helpful here, as they give the user an idea of which new fields to expect in the output of the data-retrieval function.

  • Another vignette that might be useful, specifically for those who used the previous version to bulk download data from Comtrade, is one about what users need to consider when doing a bulk download and which features of the new package help with this (limits on the number of years, limits on the number of characters in API calls, throttling of the API calls). These points are scattered across the documentation but would be good to pull together in a vignette with a worked bulk-download example (i.e., multiple years, lots of commodities).
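
As a loose sketch of the grouping idea (the group titles and the assignment of functions to groups below are invented for illustration; the function names are the package's exported ones), the reference section of _pkgdown.yml could look roughly like this:

reference:
- title: Getting trade data
  contents:
  - ct_get_data
  - ct_commodity_lookup
  - ct_country_lookup
- title: API key and rate limits
  contents:
  - ct_register_token
  - ct_get_remaining_hourly_queries
  - ct_get_reset_time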

Test functionality

I have been using the new {comtradr} package since June of this year for a project I am involved in that downloads data on 963 commodities for all countries from 2000 to 2023. The team I am working with had previous code that used the {tradestatistics} package to do this at the time of the old API specification for UN Comtrade. When I joined the team for the second phase of the project, the API changed, and both the {tradestatistics} package and the legacy {comtradr} package underwent changes to accommodate the new API specifications. For our purposes (bulk download), the updated {comtradr} package was a good fit: its main data-retrieval function factors in all the rate limits the new API has put in place and also throttles API calls (which I think is very well thought out and well implemented). Whilst there are still occasional 500 errors due to the UN Comtrade server timing out (probably under heavy query loads), the {comtradr} package has been able to perform its single most important job of retrieving data from UN Comtrade.

So, based on this experience since June, I can say that I've been able to push {comtradr}'s functionality to its limits, and given the restrictions of a free API account, we are happy that we are able to retrieve the data we need using this package.

A few things that may be considered by the author/maintainer (probably as new features in upcoming updates) are:

  • A better mechanism to catch the infrequent 500 errors that pop up when the UN Comtrade server times out, with an appropriate error message that explains what happened. Because of how well {comtradr} deals with throttling, the 500 errors don't come up for a reasonably sized bulk download, and the waiting-time messaging on the console is informative enough. But for cases like the one I described above, it would be useful to have a bit more informative messaging when a 500 error does come up (a rough sketch follows after this list).

  • Maybe consider adding an argument that allows the user to specify how much to throttle the API calls. From what I can see in the package's functions, the throttling is fixed at 6 calls per minute. In our use case above, we would have liked to be able to throttle the calls a little more.

  • Maybe consider functions that deal with the additional parameters available to paid accounts. This is not easy to implement, as it would mean having a paid account yourself, so it may be low on the priority list. But it would be good to have these in the future, particularly for the bulk-download capabilities of a paid account.
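
A hedged sketch of the kind of 500-specific messaging meant in the first point above, using the httr2 verbs the package already imports (the endpoint URL and the wording are illustrative only):

library(httr2)

req <- request("https://comtradeapi.un.org/data/v1/get/C/A/HS") |> # illustrative endpoint
  req_retry(
    max_tries = 3,
    # treat server time-outs (HTTP 500) as transient and retry them
    is_transient = function(resp) resp_status(resp) == 500
  ) |>
  req_error(
    # show a clearer message once the retries are exhausted
    body = function(resp) {
      "UN Comtrade returned a 500 error; the server may be overloaded. Retrying later usually helps."
    }
  )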

@ernestguevarra

@datapumpernickel you may notice that a big chunk of my review has been on the documentation. As I was doing the review, I forked your package and have been making edits on the docs/help files/vignettes as I was reviewing them. I am happy to make a pull request with my edits so you can have a look at what these changes may look like. Let me know if you think this would be helpful.

Otherwise, thank you for your good work on getting {comtradr} revived. It's great work that you have put in and I think we can polish it even more! Well done!

@datapumpernickel
Author

Hi @ernestguevarra!

first of all, thank you again for your thorough review and helpful comments - this is great! I am also really happy to hear that you were able to already put the package to use and found it useful.

  1. Yes, I would be very happy to have a pull request with your edits and corrections and merge it into the package. Maybe you could pull into the dev branch. Thanks!

  2. I will subsequently work through your comments and either make the respective changes or add them to the list of future features! Already I can say that allowing the user to control throttling is a great idea and easy to implement (a rough httr2 sketch of what this could look like follows below). I was hesitant to add more arguments to the already long list, but I totally see why it is necessary. Paid features are indeed on the list for future development; I hope to have premium access next year.
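
Not the package's actual code, just a sketch of how a user-facing rate argument could be wired into the existing httr2 throttling (the function and argument names are hypothetical):

build_comtrade_request <- function(url, requests_per_minute = 6) {
  httr2::request(url) |>
    # req_throttle() expects a rate in requests per second
    httr2::req_throttle(rate = requests_per_minute / 60)
}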

@ernestguevarra

@datapumpernickel I will finalise my edits today on the package (mainly documentation) and will make a PR to dev as recommended.

Also, you can just use the PR as a reference for what I am suggesting to change and you can implement the changes yourself as you update the package. I think sometimes that might be easier so you have full control of the changes. So, do with it as you deem best and most efficient for your workflow.

@noamross
Contributor

👋 Hello @potterzot, just a friendly reminder your review is due.

@potterzot

@ropensci-review-bot check package

@ropensci-review-bot
Collaborator

Logged review for potterzot (hours: 4)

@ropensci deleted a comment from ropensci-review-bot Dec 11, 2023
@noamross
Contributor

@ropensci-review-bot submit review #613 (comment) time 4

@ropensci-review-bot
Collaborator

Logged review for ernestguevarra (hours: 4)

@noamross
Contributor

Thanks for your review @potterzot! @datapumpernickel, now both reviews are in. Let us know when you've implemented and/or have a response to the requested changes

@datapumpernickel
Author

datapumpernickel commented Dec 12, 2023

Hi @noamross @ernestguevarra and @potterzot,

thanks to all again for your valuable time and the thorough review! Your comments, additions and tests have already been of great help!

I have made an issue with all the requested changes and am tracking which ones I have already implemented there: ropensci/comtradr#69

Some of the larger documentation tasks (new vignettes for transitioning from older iteration and on how to loop for bulk data) might take me a little longer, but some I might get done in the next week. I will report back once I am done with all the changes. Most I have no further comments on, as I find them sensible and helpful!

@ernestguevarra Would you mind submitting that pull request you wanted to do? It would also be helpful to me, if you have any example code that led you to get 500 Errors! Thanks!

There are two issues where I think I need some more input:

Question 1

Currently, I use ... to allow users to add arbitrary parameters to the URL call, to retain some flexibility should the API add new parameters that I have not implemented yet. These are passed on as-is, without any checking. Using ... has been very easy to implement, but at least one user ran into the following issue:
Upon specifying a call like:

comtradr::ct_get_data(reporter = "DEU", 
                      patner ='ARG', 
                      start_date = 2020, 
                      end_date = 2020)

The API did not work as expected. Instead of returning partner Argentina, it returned partner "World" without an error, because the mistyped patner argument was silently caught by the .... So this seems very error-prone.
I am now tending towards removing the ... entirely, because the main functionality is covered, and should a really urgent new parameter appear, I am fairly confident we can implement it relatively quickly.

However, do you have any other idea on how to allow passing arbitrary parameters to the function, e.g. an argument that takes a named list? I am a bit out of my depth here.

Question 2 - Confusion around NULL, "all" and default values.

As @potterzot noted, there is some confusion around this. Here is what's happening:

Argument left blank
comtradr::ct_get_data(reporter = "DEU", 
                      start_date = 2020, 
                      end_date = 2020)

--> This returns Germanys trade with "World", as the argument is unspecified, it defaults to "World".

Argument set to NULL
comtradr::ct_get_data(reporter = "DEU",
                      partner = NULL,
                      start_date = 2020, 
                      end_date = 2020)

--> This returns Germany's trade with all possible partners, notably including World!

Argument set to 'all'
comtradr::ct_get_data(reporter = "DEU",
                      partner ='all',
                      start_date = 2020, 
                      end_date = 2020)

This returns Germany's trade with all partner countries that are not groups (e.g. ASEAN is not included, World is not included). This is what you would use if you wanted to aggregate the World values yourself.

When I implemented this, it seemed clear to me, but I can now see how it confuses people. I am thinking about changing the default from "World" to NULL, because in the documentation it is more intuitive to think of leaving the argument blank as equivalent to setting it to NULL. However, for the parameters custom_code and mode_of_transport it is quite confusing when, all of a sudden, one trade relationship returns a bunch of odd entries with different modes of transport and customs codes, when in 99% of all cases what you want is only the total (this data is relatively new in Comtrade and not that reliable yet either). Hence the default "total".

Do you have any ideas/suggestions on how to best solve this?

Maybe I should just explain it somewhere in the documentation as clearly as I did here?

Thank you again for your time, really highly appreciated!

@datapumpernickel
Author

datapumpernickel commented Dec 12, 2023

@noamross One more question for you: since this package has been reviewed in the past, there are two people already listed as reviewers in the Contributors file. I would just add the two new reviewers to this list, right? We could also keep the old reviewers in, although technically they have not reviewed the package as it is now, because we rewrote virtually 90% of the functions. I am not sure what the policy is here; if there is none, I would just keep everybody in the list.
Thanks!

@noamross
Contributor

@datapumpernickel This is a new one for us, but I would say yes, keep the previous reviewers, just as you retain the original author. Their contribution to the lineage of the package remains, even if much of the code they reviewed does not.

@noamross
Contributor

Q1: I think an extra_params argument that takes a list of additional parameters would serve. I think having this option is useful from the perspective of reducing future maintainer burden.
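
A minimal sketch of that idea (the function body, parameter names, and the example parameter below are hypothetical, not comtradr's actual implementation):

ct_get_data_sketch <- function(reporter, partner = "World", extra_params = NULL) {
  # query built from the checked, documented arguments
  query <- list(reporterCode = reporter, partnerCode = partner)

  # extras must be an explicitly named list, so a typo such as `patner`
  # can no longer be swallowed silently
  if (!is.null(extra_params)) {
    if (!is.list(extra_params) || is.null(names(extra_params))) {
      stop("`extra_params` must be a named list, e.g. list(someNewParam = 'x').")
    }
    query <- utils::modifyList(query, extra_params)
  }
  query
}

ct_get_data_sketch("DEU", "ARG", extra_params = list(someNewParam = "x"))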

Q2: I agree that NULL is a common default value. If NULL is an option, I would have it be the default, and not an uncommonly used option. I would suggest (a) the default return smaller data, and (b) that all the options be equivalent. So in this case my suggestion is to retain 'world' as default and remove NULL as an option. Instead come up with another descriptive named option like 'everything' that does the equivalent of NULL.

@potterzot

@datapumpernickel just a couple of thoughts on Q1 and Q2:

  • Q1: I have run into this as a user of packages before as well. It can be tricky to debug because it's easy for your eyes to gloss over the misspelling of partner. In the rnassqs package, we handle this by checking all parameters for validity before querying; that way a misspelled parameter returns an error that essentially says "Error: patner is not a valid parameter."
#' Expand a list handling null list as well
expand_list <- function(...){
  x <- list(...)
  if(length(x) == 0) { NULL } else { if(is.null(names(x))) x[[1]] else x }
}

#' Check validity of functions
parameter_is_valid <- function(param) {
  valid_params <- toupper(c(nassqs_params(), "param"))
  param2 <- gsub("__LE|__LT|__GT|__GE|__LIKE|__NOT_LIKE|__NE", "", toupper(param))
  
  if(!param2 %in% valid_params) {
    stop("Parameter '", param, "' is not a valid parameter. Use `nassqs_params()`
    for a list of valid parameters.")
  }
  return(invisible())
}

#' Make the query, checking parameter validity beforehand
nassqs_GET <- function(...,
                       api_path = c("api_GET",
                                    "get_param_values",
                                    "get_counts"),
                       progress_bar = TRUE,
                       format = c("csv", "json", "xml")) {
  
  # match args
  api_path <- match.arg(api_path)
  format <- match.arg(format)

  params <- expand_list(...)

  # Check that names of the parameters are in the valid parameter list
  for(x in names(params)) { parameter_is_valid(x) }

  ...
}

Here are links to the above functions to see the code in entirety: expand_list(), parameter_is_valid(), nassqs_GET

There is also a helper function that simply returns all valid parameters and their descriptions, which helps with usability as well.

  • Q2: My inclination would be one of two options:
    • (1) set the default argument for all these parameters to total and have the function return an error if the value is NULL: total would return the totals, all would return all subcategories but not the total, and NULL would return an error;
    • (2) set the default argument to NULL, which returns the total by default, and allow an all option that would return all values

In general, as a user I prefer a default behavior that fetches less data, so would not prefer the option of having a default that fetches all subcategory values.

@yabellini
Member

Hi @datapumpernickel. rOpenSci Community Manager here. If you want to join our Slack, I can send you an invitation. Please let me know - you can write to me at yabellini@ropensci.org.

@datapumpernickel
Author

datapumpernickel commented Dec 23, 2023

Dear @ernestguevarra, @potterzot and @noamross,

I have now implemented all changes to the best of my abilities. Thank you also for the helpful feedback on my additional questions. This introduced a minor breaking change, as all is no longer available. But since the package is not on CRAN yet and the change is beneficial, I think that is OK.

I have basically kept the defaults as restrictive as before, so as not to return too much data. There is no NULL option anymore; instead you can pass everything, and in the case of reporter and partner you can also pass all_countries (formerly all), which makes it clear that only countries, not country groups, are returned.
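
To illustrate (calls written against the revised interface as summarised above, not independently verified):

# default: most restrictive, e.g. Germany's trade with "World" only
comtradr::ct_get_data(reporter = "DEU", start_date = 2020, end_date = 2020)

# every individual partner country, but no aggregates such as World or ASEAN
comtradr::ct_get_data(reporter = "DEU", partner = "all_countries",
                      start_date = 2020, end_date = 2020)

# "everything" returns all available partner entries, aggregates included
comtradr::ct_get_data(reporter = "DEU", partner = "everything",
                      start_date = 2020, end_date = 2020)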

I have also gotten rid of ... in favor of extra_params which now makes the package more robust for misspelled arguments - thanks also for your input potterzot!

Implemented changes are listed here: ropensci/comtradr#69

As a next step, I would publish to CRAN, I suppose.

All the best and thank you again to everyone for the helpful comments and your time!

@ropensci-review-bot submit response #613 (comment)

@datapumpernickel
Author

Dear @yabellini,

I am already on the Slack channel, but thanks for the invite!

@ropensci-review-bot
Collaborator

@datapumpernickel: please post your response with @ropensci-review-bot submit response <url to issue comment> if you haven't done so already (this is an automatic reminder).

Here's the author guide for response. https://devguide.ropensci.org/authors-guide.html

@ropensci deleted a comment from ropensci-review-bot Dec 27, 2023
@noamross
Contributor

Thank you for your follow-up, @datapumpernickel! @potterzot and @ernestguevarra, please let us know if you agree that the changes address the comments in your reviews.

And Happy Holidays to all!

@potterzot

@noamross I agree that the changes address my comments and think the package is ready. Thanks!

@ernestguevarra

@datapumpernickel sorry for dropping the ball on this.

Thank you for all the progress achieved since my review. I agree with the changes made in relation to my review and also those made in response to @potterzot's and @noamross's feedback.

I've worked with the latest version a couple of days ago and specifically tried the functions to which changes have been made. All looks good and I agree that the package is ready!

I really appreciate the effort you have put into this. I think a lot of people will benefit from it!

@noamross
Contributor

noamross commented Jan 9, 2024

@ropensci-review-bot approve comtradr

@ropensci-review-bot
Collaborator

Approved! Thanks @datapumpernickel for submitting and @ernestguevarra, @potterzot for your reviews! 😁

To-dos:

  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You will need to enable two-factor authentication for your GitHub account.
    This invitation will expire after one week. If it happens write a comment @ropensci-review-bot invite me to ropensci/<package-name> which will re-send an invitation.
  • After transfer write a comment @ropensci-review-bot finalize transfer of <package-name> where <package-name> is the repo/package name. This will give you admin access back.
  • Fix all links to the GitHub repo to point to the repo under the ropensci organization.
  • Delete your current code of conduct file if you had one since rOpenSci's default one will apply, see https://devguide.ropensci.org/collaboration.html#coc-file
  • If you already had a pkgdown website and are ok relying only on rOpenSci central docs building and branding,
    • deactivate the automatic deployment you might have set up
    • remove styling tweaks from your pkgdown config but keep that config file
    • replace the whole current pkgdown website with a redirecting page
    • replace your package docs URL with https://docs.ropensci.org/package_name
    • In addition, in your DESCRIPTION file, include the docs link in the URL field alongside the link to the GitHub repository, e.g.: URL: https://docs.ropensci.org/foobar, https://github.com/ropensci/foobar
  • Skim the docs of the pkgdown automatic deployment, in particular if your website needs MathJax.
  • Fix any links in badges for CI and coverage to point to the new repository URL.
  • Increment the package version to reflect the changes you made during review. In NEWS.md, add a heading for the new version and one bullet for each user-facing change, and each developer-facing change that you think is relevant.
  • We're starting to roll out software metadata files to all rOpenSci packages via the Codemeta initiative, see https://docs.ropensci.org/codemetar/ for how to include it in your package, after installing the package - should be easy as running codemetar::write_codemeta() in the root of your package.
  • You can add this installation method to your package README install.packages("<package-name>", repos = "https://ropensci.r-universe.dev") thanks to R-universe.

Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them "rev"-type contributors in the Authors@R field (with their consent).

Welcome aboard! We'd love to host a post about your package - either a short introduction to it with an example for a technical audience or a longer post with some narrative about its development or something you learned, and an example of its use for a broader readership. If you are interested, consult the blog guide, and tag @ropensci/blog-editors in your reply. They will get in touch about timing and can answer any questions.

We maintain an online book with our best practices and tips; this chapter starts the 3rd section, which covers guidance for after onboarding (with advice on releases, package marketing, GitHub grooming); the guide also features CRAN gotchas. Please tell us what could be improved.

Last but not least, you can volunteer as a reviewer via filling a short form.

@noamross
Contributor

noamross commented Jan 9, 2024

Thank you @datapumpernickel for bringing this package to this point and putting it through review after taking over maintenance! Of course most of the steps above don't apply for a package already in our suite.

I do think that the story of this package and review would make a great blog post. Taking over maintenance is a big thing to tackle and the more stories we have of it the better we are able to guide people through it! Please let us know if you are interested in writing one.

@datapumpernickel
Author

Hi all!

Really excited to see this package fully approved back in the fold of rOpenSci! Thanks to everybody for the input and your time.
I am generally interested in writing a blog article, but I am not sure when I will find the time to do so.

Currently I am waiting for some feedback on the caching ability I am implementing. Then I would prioritize getting the package on CRAN. As soon as that is done I will be in touch about writing up the process! :)
