Testthat #40

Closed
wants to merge 71 commits into from

Conversation

GregorDeCillia
Contributor

Just a draft pull request so gh-actions gets triggered by this branch.

{pillar} and {vctrs} are the backbone for
customizing tibbles. They are dependencies
of the {tibble} package and therefore
"free" once {tibble} is used as a dependency
package of {STATcubeR}
try this class only with sc_table_saved_list()
for now
make sure the objects of class
<sc_table_uri> are compatible with
sc_table_saved()
- don't import {tibble} since currently,
  only {vctrs} and {pillar} are used
- export as.character() for sc_schema_uri
- re-roxygenize
this is now handled as in sc_table(), od_table()
and so on
if this package is roxygenized inside the
STAT firewall, the documentation links generated
by sc_browse*() will point to the internal server

re-roxygenize from the outside

TODO: find a way to avoid this in the future. Maybe
write a wrapper-function around devtools::document()
which temporarily sets the env-var STATCUBER_IN_STAT
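A minimal sketch of such a wrapper, assuming {withr} is available; the function name and the value assigned to the env-var are placeholders:

document_from_outside <- function() {
  # temporarily override STATCUBER_IN_STAT while roxygen runs
  withr::with_envvar(
    c(STATCUBER_IN_STAT = "false"),
    devtools::document()
  )
}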
another tweak for cli::style_hyperlink(). Hopefully,
this will get easier once these features mature
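For reference, the underlying call has the form cli::style_hyperlink(text, url); the label and url below are placeholders:

cli::style_hyperlink("open in STATcube", "https://example.org/table")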
add some notes that instead of VALUE and VALUESET
it is also possible to use uris for COUNT
resources in the "measures" parameter of
sc_table_custom()
this error was overlooked when the
error handling vignette was first written

fortunately, the API does a good job
of explaining the error in the json
body of the response so the error
handlers do not need an upgrade

[ci skip]
the sc_table article now showcases the
print methods for all the example
datasets in German

[skip ci]
add those entries to the metadata.
NOTE: columns 5 and 7 are not used
in data.csv according to the OGD standard
but some internal datasets provide these
columns and therefore they are
imported as the description of the
measure/classification
add a patch release since the
additional metadata are needed for a
deployment

NEWS for 0.5.0.1 and 0.5.1 will be
merged when 0.5.1 is released
* since json-downloads require a login,
  link to the login page
* link to the documentation page instead
  of the manual
- remove @keywords internal
- add documentation for missing params

[skip ci]
first attempt to resolve #33. Recodes
can now be defined with an additional
parameter. However, type-checking is
very minimal.

TODO:
- better error handling when the
  request is constructed. This way
  users get quick and useful error
  messages - at least for semantic
  errors such as invalid usage of
  parameters
- with this implementation, users will
  have to make sure that the parameters
  "recodes" and "dimensions" are
  consistent (see the sketch after this
  list). Maybe simplify the usage
- The naming sc_recode is almost
  conflicting with the class
  sc_recoder. Possibly rename this
  function
- extend the custom tables article to
  showcase some use cases for recodes
  and add a short discussion about
  usage limits
- maybe add sc_filter which only allows
  filter-type recodes and performs
  stricter type-checks?
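A purely illustrative call combining both parameters. All uris are placeholders, the first argument name (db) is an assumption, and further arguments of sc_recode() are omitted here:

sc_table_custom(
  db         = "str:database:placeholder",
  measures   = "str:measure:placeholder",
  dimensions = "str:field:placeholder",
  recodes    = sc_recode("str:field:placeholder")  # must stay consistent with "dimensions"
)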
showcase the usage of sc_recode in the
web documentation.
there are now several checks in place
that throw warnings if inputs in
sc_table_custom() or sc_recode() are
of the wrong schema-type or if other
inconsistencies are suspected. See
the section called "error handling"
in ?sc_table_custom for more details

some of those warnings might be
replaced with errors in the future

part of #33
add a minimum version requirement on
pillar (the version from 2021-02-22)
to make sure the S3 generic
format_tbl_footer() is available
don't use the .onLoad hook with
base::registerS3method but use the
import via NAMESPACE (roxygen)
instead

[skip ci]
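As a sketch, the roxygen-based registration replaces the .onLoad hook roughly like this; the class name is a placeholder and the method body is elided:

# before: .onLoad <- function(libname, pkgname)
#   registerS3method("format_tbl_footer", "sc_placeholder", format_tbl_footer.sc_placeholder)
# after: let roxygen write the S3method() directive into NAMESPACE
#' @importFrom pillar format_tbl_footer
#' @export
format_tbl_footer.sc_placeholder <- function(x, ...) {
  character()  # placeholder body
}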
reimplements #36 with a slightly
different approach with regard to
naming
links to cache files are now clickable and
last_modified and cached will be
abbreviated if there is not enough horizontal
space
the resource uris are now displayed similarly
to sc_schema()
re-sync the roxygen-generated files
add a new parameter `dry_run` to
sc_table_custom() which makes it possible to see what
request is generated without actually sending
it to the API

with this option, all type-checks are still
applied
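Sketched usage; the uris are placeholders and the db argument name is an assumption:

req <- sc_table_custom(
  db         = "str:database:placeholder",
  measures   = "str:measure:placeholder",
  dimensions = "str:field:placeholder",
  dry_run    = TRUE
)
# `req` can be inspected; no request is sent to the API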
add a new helper function data_frame()
which generates data frame objects
and automatically takes care of some
common points

- to avoid problems with stringsAsFactors,
  use the vctrs constructor
- add the "tbl" class to enable printing
  the data frames similar to tibbles

this makes it possible to skip setting
stringsAsFactors repeatedly and also
avoids some places where `class<-` was used
previously
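A minimal sketch of what such a helper might look like:

data_frame <- function(...) {
  res <- vctrs::data_frame(...)          # no stringsAsFactors surprises
  class(res) <- c("tbl", "data.frame")   # print like a tibble via {pillar}
  res
}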
use another examplesIf clause to make
sure devtools::check() can be run
without an API key
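For illustration, such a clause might look like this in a roxygen header; the env-var name is a placeholder:

#' @examplesIf nzchar(Sys.getenv("STATCUBE_API_KEY"))
#' sc_table_saved_list()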
if map is passed as a named vector,
the names are never supported;
therefore, drop them. This can be
useful for subsetting schema objects
via single square brackets
dropping those levels causes hierarchy
information to become unavailable.
some new client code now makes use
of the hierarchies. The new behavior
will generate the fields as factor
columns with unused factor levels.
if json files are downloaded, use
cli::cli_progress_along() to show how many
json files are still remaining
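The pattern is roughly the following; download_json() stands in for the actual download step:

for (i in cli::cli_progress_along(urls, name = "downloading json")) {
  download_json(urls[[i]])
}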
the (internal) function for parsing
open data datasets used to have
a parameter to check if levels
were dropped. This is no longer
necessary since a629296

[ci skip]
don't use registerS3method for derived
classes of data.frame since it does not
seem to make any difference
the current palette of colors for
sc_schema is only suited for dark
editors. For now, add a light theme
for the pkgdown reference pages

TODO: once the annotation printing
is implemented, think about adding
a way to customize the color scheme
in all of STATcubeR. This could also
take care of cli_theme_pkgdown()

[ci skip]
make sure the links to STATcube in the
schema vignette open in a new tab

TODO: do this globally in R/df_print.R
and also use target=_blank for OGD links

[ci skip]
assume that the environment where document()
runs can be characterized by the NOT_CRAN
flag being present (not necessarily FALSE or
TRUE). This is to avoid having links to the
editing server in the manpages and on pkgdown

this will only matter if the docs are built
inside the STAT firewall. If at some point,
the docs are built via gh-actions, this can be
reverted
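In code, that assumption boils down to a check like this (the helper name is a placeholder):

not_cran_is_set <- function() {
  # TRUE whenever NOT_CRAN is present, regardless of its value
  !is.na(Sys.getenv("NOT_CRAN", unset = NA))
}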
the following expression will now show all
folders from the catalogue

sc_schema_catalogue() %>%
   sc_schema_flatten("FOLDER")

previously, this would only have returned
one entry containing the root folder because
the recursion was stopped as soon as the
appropriate schema type was detected

this change also affects the schema types
GROUP, MEASURE, FIELD and VALUESET in
sc_schema_db() which can also have child nodes

[ci skip]
check the argument against the list of
available schema types. the argument is
now also coerced via toupper() because
the spelling in schema uris uses lowercase
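A sketch of such a check; the list of schema types below only covers the ones mentioned in this PR and is not necessarily complete:

check_schema_type <- function(type) {
  match.arg(toupper(type), c("FOLDER", "GROUP", "MEASURE", "FIELD",
                             "VALUESET", "VALUE", "COUNT"))
}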
the NACE classification in this database was
updated. Reflect this in the example request

[ci skip]
cli_text uses the message channel to
generate the visible console outputs

this is not what one expects from a
print method, which should always feed
into stdout

cli_text() is also used in other places
of STATcubeR but always wrapped into
cli_fmt() which means that output
channels do not matter in those
circumstances because the outputs are
captured to be formatted elsewhere
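That pattern looks roughly like this:

# capture the cli formatting and write it to stdout from the print method
lines <- cli::cli_fmt(cli::cli_text("some {.strong styled} output"))
cat(lines, sep = "\n")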
include another link to GitHub in the
DESCRIPTION metadata. this is common
practice for packages on CRAN

[ci skip]
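One way to add such fields is via {usethis}, although the PR itself may have edited DESCRIPTION by hand:

# populates the URL and BugReports fields from the GitHub remote
usethis::use_github_links()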
first iteration of some unit tests.
for now, the goal is just to get a
bigger picture about hot/cold code
and redundancy

TODO:
- figure out how to get a reasonable
  test coverage without an API key
- set up a gh-actions job without an
  API key to make sure that the tests
  run successfully without one
  (currently, they require a key)
instead of getting the language from the
response headers, this is now an explicit
argument of sc_table_class$new()

the reason for this is to support {httptest}
which will be used in this branch to
test the package based on cached API responses

the caching in httptest does not store
headers by default. This means that in order
to use the mock responses, the parsers
need to be working even if the headers only
contain a status code and a content type
there are now some additional tests which
were inspired by vignettes/sc_last_error.Rmd
and check whether the error-types are as
expected

those tests use httptest. The contents of
tests/testthat/statcube* are cached API
responses which will be used inside
the new tests.

the only modification of the caches prior to
versioning them was that the support id was
removed from some responses.
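As a sketch, such a test might look like this; the expectation and the exact call are placeholders, not taken from the actual test files:

library(testthat)

httptest::with_mock_api({
  test_that("cached responses can be parsed without an API key", {
    x <- sc_table_saved_list()
    expect_true(is.data.frame(x))  # placeholder expectation
  })
})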