diff --git a/sensors/01-meteorological-data.Rmd b/sensors/01-meteorological-data.Rmd
index e25f36b..718c27a 100644
--- a/sensors/01-meteorological-data.Rmd
+++ b/sensors/01-meteorological-data.Rmd
@@ -48,7 +48,7 @@ The key is to process each type of met data (site, reanalysis, forecast, climate

### Using the API to get data

In order to access the data, we need to construct a URL that links to where the
-data is located on [Clowder](https://terraref.ncsa.illinois.edu/clowder). The data is
+data is located on [Clowder](https://terraref.org/clowder). The data is
then pulled down using the API, which ["receives requests and sends responses"](https://medium.freecodecamp.org/what-is-an-api-in-english-please-b880a3214a82), for Clowder.
@@ -72,7 +72,7 @@ observations taken every second.

### Creating the URLs for all data table types

All URLs have the same beginning
-(https://terraref.ncsa.illinois.edu/clowder/api/geostreams),
+(https://terraref.org/clowder/api/geostreams),
then additional information is added for each type of data table as shown below.

* Station: /sensors/sensor_name=[name]
@@ -85,9 +85,9 @@ For example, below are the URLs for the particular data being used in this
vignette. These can be pasted into a browser to see how the data is stored as
text using JSON.

-* Station: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=UA-MAC+AZMET+Weather+Station
-* Sensor: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors/438/streams
-* Datapoints: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
+* Station: https://terraref.org/clowder/api/geostreams/sensors?sensor_name=UA-MAC+AZMET+Weather+Station
+* Sensor: https://terraref.org/clowder/api/geostreams/sensors/438/streams
+* Datapoints: https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31

Possible sensor numbers for a station are found on the page for that station
under "id:", and then datapoint numbers are found on the sensor page under
@@ -150,7 +150,7 @@ following is typed into the command line, it will download the datapoints data
that we're interested in as a file which we have chosen to call `spectra.json`.
```{sh eval=FALSE}
-curl -o spectra.json -X GET https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
+curl -o spectra.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31"
```

#### Using R
@@ -176,7 +176,7 @@ library(ncdf.tools)
```

```{r get-weather-fromJSON}
-weather_all <- fromJSON('https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-04-01&until=2018-08-01', flatten = FALSE)
+weather_all <- fromJSON('https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-04-01&until=2018-08-01', flatten = FALSE)
```

The `geometries` dataframe is then pulled out from these data, which contains
@@ -210,7 +210,7 @@ Here we will download the files using the Clowder API, but note that if you have

```{r met-setup2}
knitr::opts_chunk$set(eval = FALSE)
-api_url <- "https://terraref.ncsa.illinois.edu/clowder/api"
+api_url <- "https://terraref.org/clowder/api"
output_dir <- file.path(tempdir(), "downloads")
dir.create(output_dir, showWarnings = FALSE, recursive = TRUE)
```
diff --git a/sensors/01.2-pecan-met-utilities.Rmd b/sensors/01.2-pecan-met-utilities.Rmd
index 4b13a43..f081d08 100644
--- a/sensors/01.2-pecan-met-utilities.Rmd
+++ b/sensors/01.2-pecan-met-utilities.Rmd
@@ -30,7 +30,7 @@ source("https://raw.githubusercontent.com/PecanProject/pecan/develop/models/bioc
writeLines("
-  terraref.ncsa.illinois.edu
+  terraref.org
  user@illinois.edu
  ask
diff --git a/sensors/02-sensor-metadata.Rmd b/sensors/02-sensor-metadata.Rmd
index c157b76..543b5f4 100644
--- a/sensors/02-sensor-metadata.Rmd
+++ b/sensors/02-sensor-metadata.Rmd
@@ -18,7 +18,7 @@ theme_set(theme_bw())

# Introduction

-This tutorial will demonstrate how to access sensor metadata from within R. All of the sensor metadata is public, and can be queried via the API using the url `https://terraref.ncsa.illinois.edu/clowder/api/datasets//metadata.jsonld`.
+This tutorial will demonstrate how to access sensor metadata from within R. All of the sensor metadata is public, and can be queried via the API using the url `https://terraref.org/clowder/api/datasets/<dataset id>/metadata.jsonld`.

For further information about sensor metadata see the [Sensor Data Standards](/sensor-data-standards.md) section.
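For instance, a minimal curl sketch of such a metadata query (this uses the PAR dataset id that appears in the examples below; any dataset id you can access works the same way):

```{sh eval=FALSE}
# Fetch the sensor metadata for one dataset as JSON-LD
curl -o par_metadata.jsonld \
  "https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld"
```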
@@ -27,16 +27,16 @@ For further information about sensor metadata see the [Sensor Data Standards](/s

### Example: RSR curves for PAR, PSII and NDVI

-* par: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld
-* pri: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a9174f0cad7d8131b09a/metadata.jsonld
-* ndvi: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld
+* par: https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld
+* pri: https://terraref.org/clowder/api/datasets/5873a9174f0cad7d8131b09a/metadata.jsonld
+* ndvi: https://terraref.org/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld

### PAR sensor metadata

```{r}
-par_metadata <- jsonlite::fromJSON("https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld")
+par_metadata <- jsonlite::fromJSON("https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld")

print(par_metadata$content)
knitr::kable(par_metadata$content$rsr)
@@ -57,7 +57,7 @@ ggplot(data = par_rsr, aes(x = wavelength, y = response), alpha = 0.4) +

```{r}
-ndvi_metadata <- jsonlite::fromJSON("https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld")
+ndvi_metadata <- jsonlite::fromJSON("https://terraref.org/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld")

knitr::kable(t(ndvi_metadata$content[-21]), col.names = '')
```
diff --git a/sensors/06-list-datasets-by-plot.Rmd b/sensors/06-list-datasets-by-plot.Rmd
index 920249c..dfda528 100644
--- a/sensors/06-list-datasets-by-plot.Rmd
+++ b/sensors/06-list-datasets-by-plot.Rmd
@@ -3,7 +3,7 @@
## Pre-requisites:

* if you have not already done so, you will need to 1) sign up for the [beta user program](https://terraref.org/beta) and 2)
-sign up and be approved for access to the the [sensor data portal](https://terraref.ncsa.illinois.edu/clowder) in order to get
+sign up and be approved for access to the [sensor data portal](https://terraref.org/clowder) in order to get
the API key that will be used in this tutorial.

The terrautils python package has a new `products` module that aids in connecting
@@ -23,7 +23,7 @@ from terrautils.products import get_file_listing, extract_file_paths

The `get_sensor_list` and `get_file_listing` functions both require the
*connection*, *url*, and *key* parameters. The *connection* can be 'None'. The *url* (called host in the
-code) should be something like `https://terraref.ncsa.illinois.edu/clowder/`.
+code) should be something like `https://terraref.org/clowder/`.
The *key* is a unique access key for the Clowder API.

## Getting the sensor list
@@ -36,10 +36,10 @@ sensor list and provides a list of names suitable for use in the
`get_file_listing` function.

To use this tutorial you will need to sign up for Clowder, have your
-account approved, and then get an API key from the [Clowder web interface](https://terraref.ncsa.illinois.edu/clowder).
+account approved, and then get an API key from the [Clowder web interface](https://terraref.org/clowder).

```{python eval = FALSE}
-url = 'https://terraref.ncsa.illinois.edu/clowder/'
+url = 'https://terraref.org/clowder/'
key = 'ENTER YOUR KEY HERE'
```

@@ -94,13 +94,13 @@ TODO: move this to a separate tutorial page focused on using curl

The source files behind the data are available for downloading through the API.
By executing a series of requests against the API it's possible to determine the files of interest and then download them.
-Each of the API URL's have the same beginning (https://terraref.ncsa.illinois.edu/clowder/api),
+Each of the API URLs has the same beginning (https://terraref.org/clowder/api),
followed by the data needed for a specific request. As we step through the process you will be able to see how
the end of the URL changes depending upon the request.

Below is what the API looks like as a URL. Try pasting it into your browser.

-[https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W](https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W)
+[https://terraref.org/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W](https://terraref.org/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W)

This will return data for the requested plot including its id. This id (or identifier) can then be used for additional queries against the API.
@@ -127,7 +127,7 @@ we use the variable name of SENSOR_NAME to indicate the name of the plot.

``` {sh eval=FALSE}
SENSOR_NAME="MAC Field Scanner Season 1 Field Plot 101 W"
-curl -o plot.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=${SENSOR_NAME}"
+curl -o plot.json -X GET "https://terraref.org/clowder/api/geostreams/sensors?sensor_name=${SENSOR_NAME}"
```

This creates a file named *plot.json* containing the JSON object returned by the API. The JSON object has an
@@ -141,7 +141,7 @@ the stream id. The names of streams are formatted as " Dataset

``` {sh eval=FALSE}
SENSOR_ID=3355
STREAM_NAME="Thermal IR GeoTIFFs Datasets (${SENSOR_ID})"
-curl -o stream.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/streams?stream_name=${STREAM_NAME}"
+curl -o stream.json -X GET "https://terraref.org/clowder/api/geostreams/streams?stream_name=${STREAM_NAME}"
```

A file named *stream.json* will be created containing the returned JSON object. This JSON object has an 'id' parameter that
@@ -153,7 +153,7 @@ We now have a stream ID that we can use to list our datasets. The datasets in tu

``` {sh eval=FALSE}
STREAM_ID=11586
-curl -o datasets.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}"
+curl -o datasets.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}"
```

After the call succeeds, a file named *datasets.json* is created containing the returned JSON object. As part of the
@@ -162,7 +162,7 @@ JSON object there are one or more `properties` fields containing *source_dataset

```{javascript eval=FALSE}
properties: {
    dataset_name: "Thermal IR GeoTIFFs - 2016-05-09__12-07-57-990",
-   source_dataset: "https://terraref.ncsa.illinois.edu/clowder/datasets/59fc9e7d4f0c3383c73d2905"
+   source_dataset: "https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"
},
```

@@ -171,7 +171,7 @@ The URL of each **source_dataset** can be used to view the dataset in Clowder.

The datasets can also be filtered by date. The following filters out datasets that are outside of the range of January 2, 2017 through June 10, 2017.
``` {sh eval=FALSE}
-curl -o datasets.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}&since=2017-01-02&until=2017-06-10"
+curl -o datasets.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}&since=2017-01-02&until=2017-06-10"
```

### Getting file paths from dataset
@@ -181,7 +181,7 @@ Now that we know what the dataset URLs are, we can use the URLs to query the API
Note that the URL has changed from our previous examples now that we're using the URLs returned by the previous call.

``` {sh eval=FALSE}
-SOURCE_DATASET="https://terraref.ncsa.illinois.edu/clowder/datasets/59fc9e7d4f0c3383c73d2905"
+SOURCE_DATASET="https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"
curl -o files.json -X GET "${SOURCE_DATASET}/files"
```

@@ -219,7 +219,7 @@ For each file to be retrieved, the unique file ID is needed on the URL.

``` {sh eval=FALSE}
FILE_NAME="ir_geotiff_L1_ua-mac_2016-05-09__12-07-57-990.tif"
FILE_ID=59fc9e844f0c3383c73d2980
-curl -o "${FILE_NAME}" -X GET "https://terraref.ncsa.illinois.edu/clowder/api/files/${FILE_ID}"
+curl -o "${FILE_NAME}" -X GET "https://terraref.org/clowder/api/files/${FILE_ID}"
```

This call will cause the server to return the contents of the file identified in the URL. This file is then stored locally in *ir_geotiff_L1_ua-mac_2016-05-09__12-07-57-990.tif*.
diff --git a/traits/00-BETYdb-getting-started.Rmd b/traits/00-BETYdb-getting-started.Rmd
index 5baea47..f8e90d7 100644
--- a/traits/00-BETYdb-getting-started.Rmd
+++ b/traits/00-BETYdb-getting-started.Rmd
@@ -10,7 +10,7 @@ It contains trait (phenotype) data at the plot or plant level as well as meta data

### Introduction to BETYdb

-The TERRA REF trait database (terraref.ncsa.illinois.edu/bety) uses the BETYdb data schema (structure) and web application.
+The TERRA REF trait database (terraref.org/bety) uses the BETYdb data schema (structure) and web application.
The BETYdb software is actively used and developed by the [TERRA Reference](http://terraref.org) program as well as by the [PEcAn project](http://pecanproject.org).

For more information about BETYdb, see the following:
diff --git a/traits/01-web-access.Rmd b/traits/01-web-access.Rmd
index c46ee5a..2512da7 100644
--- a/traits/01-web-access.Rmd
+++ b/traits/01-web-access.Rmd
@@ -4,10 +4,10 @@

### Web interface

-* Sign up for an account at https://terraref.ncsa.illinois.edu/bety
+* Sign up for an account at https://terraref.org/bety
* Sign up for the TERRA REF [beta user program](https://docs.google.com/forms/d/e/1FAIpQLScIUJL_OSL9BvBOdlczErds3aOg5Lwz4NIdNQnUiXdsLsYdhw/viewform)
* Wait for database access to be granted
-* Your API key will be sent in the email. It can also be found - and regenerated - by navigating to the Users page (data --> [users](https://terraref.ncsa.illinois.edu/bety/users)) in the web interface.
+* Your API key will be sent in the email. It can also be found - and regenerated - by navigating to the Users page (data --> [users](https://terraref.org/bety/users)) in the web interface.

TODO add signup info from handout

@@ -18,7 +18,7 @@ On the Welcome page there is a search option for trait and yield data.
This tool

### Download search results as a csv file from the web interface

-* Point your browser to https://terraref.ncsa.illinois.edu/bety/
+* Point your browser to https://terraref.org/bety/
* login
* enter "NDVI" in the search box
* on the next page you will see the results of this search
diff --git a/traits/02-betydb-api-access.Rmd b/traits/02-betydb-api-access.Rmd
index 129ff67..bbee679 100644
--- a/traits/02-betydb-api-access.Rmd
+++ b/traits/02-betydb-api-access.Rmd
@@ -45,7 +45,7 @@ An API key is not needed to access public data, which includes sample datasets and meta

### Components of a URL query

-* Base url: `terraref.ncsa.illinois.edu/bety`
+* Base url: `terraref.org/bety`
* Path to the api: `/api/v1`
* API endpoint: `/search` or `traits` or `sites`. For BETYdb, these are the names of database tables.
* Query parameters: `genus=Sorghum`
@@ -57,23 +57,23 @@ An API key is not needed to access public data, which includes sample datasets and meta

First, let's construct a query by putting together a URL.

-1. start with the database url: `terraref.ncsa.illinois.edu/bety`
+1. start with the database url: `terraref.org/bety`
   * this url brings you to the home page
2. Add the path to the API, `/api/v1`
-   * now we have terraref.ncsa.illinois.edu/bety/api/v1, which points to the API documentation for additional detail on available options
+   * now we have terraref.org/bety/api/v1, which points to the API documentation for additional detail on available options
3. Add the name of the table you want to query. Let's start with `variables`
-   * terraref.ncsa.illinois.edu/bety/api/v1/variables
+   * terraref.org/bety/api/v1/variables
4. Add query terms by appending a `?` and combining with `&`. These can be done in any order. For example:
   * `type=trait` where the variable type is 'trait'
   * `name=~height` where the variable name contains 'height'
5. Assembling all of this, you have a complete query:
-   * `terraref.ncsa.illinois.edu/bety/api/v1/variables?type=trait&name=~height`
+   * `terraref.org/bety/api/v1/variables?type=trait&name=~height`
   * This will query all trait variables that have 'height' in their name.
   * Does it return the expected values? There should be two.

## Your Turn

-> What will the URL https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum return?
+> What will the URL https://terraref.org/bety/api/v1/species?genus=Sorghum return?

> Write a URL that will query the database for sites with "Field Scanner" in the name field.
Hint: combine two terms with a `+` as in `Field+Scanner`

@@ -86,14 +86,14 @@ Type the following command into a bash shell (the `-o` option names the output f

```sh
curl -o sorghum.json \
-     "https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum"
+     "https://terraref.org/bety/api/v1/species?genus=Sorghum"
```

If you want to write the query without exposing the key in plain text, you can construct it like this:

```sh
curl -o sorghum.json \
-     "https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum"
+     "https://terraref.org/bety/api/v1/species?genus=Sorghum&key=$(cat .betykey)"
```

## Using the R jsonlite package to access the API with a URL query

@@ -107,7 +107,7 @@ library(jsonlite)
```

```{r text-api, warning = FALSE}
sorghum.json <- readLines(
-  paste0("https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum&key=",
+  paste0("https://terraref.org/bety/api/v1/species?genus=Sorghum&key=",
         readLines('.betykey')))

## print(sorghum.json)
diff --git a/traits/03-access-r-traits.Rmd b/traits/03-access-r-traits.Rmd
index f7af5cc..d061346 100644
--- a/traits/03-access-r-traits.Rmd
+++ b/traits/03-access-r-traits.Rmd
@@ -62,7 +62,7 @@ sorghum_info <- betydb_query(table = 'species',
                             genus = "Sorghum",
                             api_version = 'v1',
                             limit = 'none',
-                             betyurl = "https://terraref.ncsa.illinois.edu/bety/")
+                             betyurl = "https://terraref.org/bety/")
```

@@ -71,7 +71,7 @@ sorghum_info <- betydb_query(table = 'species',

Notice all of the arguments that the `betydb_query` function requires? We can change this by setting the default connection options thus:

```{r 03-set-up, echo = TRUE}
-options(betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+options(betydb_url = "https://terraref.org/bety/",
        betydb_api_version = 'v1')
```

diff --git a/traits/04-danforth-indoor-phenotyping-facility.Rmd b/traits/04-danforth-indoor-phenotyping-facility.Rmd
index a049a81..25ab2ac 100644
--- a/traits/04-danforth-indoor-phenotyping-facility.Rmd
+++ b/traits/04-danforth-indoor-phenotyping-facility.Rmd
@@ -19,12 +19,12 @@ library(traits)

## Connect to the TERRA REF Trait database

-Unlike the first two tutorials, now we will be querying real data from the public TERRA REF database. So we will use a new URL, https://terraref.ncsa.illinois.edu/bety/, and we will need to use our own private key.
+Unlike the first two tutorials, now we will be querying real data from the public TERRA REF database. So we will use a new URL, https://terraref.org/bety/, and we will need to use our own private key.
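The chunk below reads the key from a `.betykey` file in the working directory. If you have not saved yours yet, one way to do so (a minimal sketch, assuming a Unix shell; substitute your actual key for the placeholder) is:

```{sh eval=FALSE}
# Store your BETYdb API key in a file so it never appears in the code itself
echo "YOUR_API_KEY" > .betykey
```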
```{r terraref-connect-options}
options(betydb_key = readLines('.betykey', warn = FALSE),
-        betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+        betydb_url = "https://terraref.org/bety/",
        betydb_api_version = 'v1')
```

### Query data from the Danforth Phenotyping Facility
diff --git a/traits/05-maricopa-field-scanner.Rmd b/traits/05-maricopa-field-scanner.Rmd
index 37d6da9..ee489cd 100644
--- a/traits/05-maricopa-field-scanner.Rmd
+++ b/traits/05-maricopa-field-scanner.Rmd
@@ -13,7 +13,7 @@ library(leaflet)

options(betydb_key = readLines('.betykey', warn = FALSE),
-        betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+        betydb_url = "https://terraref.org/bety/",
        betydb_api_version = 'v1')
```

diff --git a/traits/06-agronomic-metadata.Rmd b/traits/06-agronomic-metadata.Rmd
index f90f445..c39e48d 100644
--- a/traits/06-agronomic-metadata.Rmd
+++ b/traits/06-agronomic-metadata.Rmd
@@ -1,6 +1,6 @@
# Querying Agronomic Meta-data

In previous tutorials you have learned how to query trait data using a variety of different methods, including the web interface, an API, and the R traits package. Here you will continue to use the R traits package, and learn how to access meta-data from other tables in the database.

While the basic search query that we have used in previous sections provides the key information that you may need for an analysis (the genotype name, the location, date, and method), there are other tables that contain more specific metadata.

@@ -14,7 +14,7 @@ While the main search results provide the latitude and longitude of the center o

![](https://raw.githubusercontent.com/ebimodeling/betydb_manuscript/master/figures/gcbb12420-fig-0001.png)

-An interactive schema can be found at [terraref.ncsa.illinois.edu/schemas](https://terraref.ncsa.illinois.edu/schemas)
+An interactive schema can be found at [terraref.org/schemas](https://terraref.org/schemas)

### Tables

@@ -82,7 +82,7 @@ library(leaflet)
year <- lubridate::year

options(betydb_key = readLines('.betykey', warn = FALSE),
-        betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+        betydb_url = "https://terraref.org/bety/",
        betydb_api_version = 'v1')
```

diff --git a/traits/07-betydb-sql-access.Rmd b/traits/07-betydb-sql-access.Rmd
index c51b2db..672554e 100644
--- a/traits/07-betydb-sql-access.Rmd
+++ b/traits/07-betydb-sql-access.Rmd
@@ -5,9 +5,9 @@ will be derived from https://github.com/pi4-uiuc/2017-bootcamp/blob/master/conte

## Using PostgreSQL Studio

-Lets connect to the terraref instance of betydb. Until now we have been accessing betydb.org. Now we will access (a copy of) the database behind `terraref.ncsa.illinois.edu/bety`
+Let's connect to the terraref instance of betydb. Until now we have been accessing betydb.org. Now we will access (a copy of) the database behind `terraref.org/bety`.

-This connection is only available on the local *ncsa.illinois.edu network, and can be accessed through the NDS Labs workbench.
+This connection is only available on the local network, so it will require either installing a local copy of the database or SSH access.
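For the SSH route, a port-forwarding sketch (the gateway host here is hypothetical; the database host and port are the ones listed below):

```{sh eval=FALSE}
# Forward local port 5432 to the BETYdb Postgres server through a hypothetical SSH gateway
ssh -L 5432:bety6.ncsa.illinois.edu:5432 username@gateway.example.edu
```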
```
Host: bety6.ncsa.illinois.edu
diff --git a/traits/08-brapi_r_tutorial.Rmd b/traits/08-brapi_r_tutorial.Rmd
new file mode 100644
index 0000000..20392c0
--- /dev/null
+++ b/traits/08-brapi_r_tutorial.Rmd
@@ -0,0 +1,85 @@
+---
+title: "Accessing TERRA REF data using the brapi R package"
+author: "Reinhard Simon and David LeBauer"
+date: "`r Sys.Date()`"
+output: rmarkdown::html_vignette
+---
+
+# Objective
+
+Here we demonstrate the use of the brapi R package to query data from the TERRA REF traits database (terraref.org/bety) using the Breeding API (BrAPI).
+
+What is BrAPI? BrAPI is a standardized RESTful web service specification for exchanging plant breeding, genotype, and phenotype data between databases and tools.
+
+Data are public and no login credentials are needed.
+
+```{r, message=TRUE, warning=TRUE}
+
+library(brapi)
+terraref <- ba_db()$terraref
+print(terraref)
+
+# show verbose server feedback
+ba_show_info(TRUE)
+
+```
+
+## Listing available calls
+
+The BrAPI specification does not require all endpoints to be implemented, and TERRA REF provides a subset of endpoints focused on genotypes, experimental metadata, and phenotypes. The `ba_calls()` function lists the functionality supported by the server.
+
+```{r}
+z <- ba_calls(terraref)
+```
+
+## Function Arguments
+
+* `con`: Always the first argument, provides database connection information as a list. To query terraref, we will use `con = terraref`, which we returned above from `ba_db()$terraref` (try `print(ba_db())` to see some of the other crop databases that you can query).
+* `rclass`: the last argument is always the class of object returned. The default type is a 'tibble', although you can also request `data.frame`, `json`, or `list`.
+* Other parameters are of class 'character'. Exceptions are: the `con` parameter is always a list; the parameters 'page' and 'pageSize', if applicable, are integers. For details see individual functions.
+
+## Getting phenotypic data
+
+BrAPI models trial data in a three-layer hierarchy: a) a breeding program, which has b) trials, which c) may consist of one or more studies at one or more locations. A study at one location is also often referred to as a fieldbook.
+
+### Which breeding programs are there?
+
+```{r}
+ba_crops(terraref)
+```
+
+### Which studies are there?
+
+```{r}
+ba_studies_search(terraref, programDbId = "140")
+```
+
+### Get a study (or fieldbook)
+
+```{r, message=FALSE, warning=FALSE}
+# Currently not working!!!
+#dt = ba_studies_table(terraref,
+#                      studyDbId = "151")
+```
+
+```{r, echo=FALSE}
+#library(DT)
+#datatable(
+#  dt,
+#  options=list(pageLength = 5, scrollX = TRUE)
+#  )
+```
diff --git a/videos/first_walkthrough.Rmd b/videos/first_walkthrough.Rmd
index a1de7b7..111d84e 100644
--- a/videos/first_walkthrough.Rmd
+++ b/videos/first_walkthrough.Rmd
@@ -67,9 +67,8 @@ Specify:

- Using public API key to access data, will show later how to access your own API key to get to more data

```{r}
-options(betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
-        betydb_api_version = 'beta',
-        betydb_key = '9999999999999999999999999999999999999999')
+options(betydb_url = "https://terraref.org/bety/",
+        betydb_api_version = 'v1')
```

Function from `traits` is `betydb_query`. Getting first 1000 rows of data from fourth season. Can get all by not setting `limit` argument, but it takes a while.
@@ -176,7 +175,7 @@ First specify URL where data comes from. Parts are from NCSA Clowder, using API

Then read into `fromJSON`. This gets the available weather data for January 2017.
```{r}
-weather_url <- "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31"
+weather_url <- "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31"
weather <- fromJSON(weather_url, flatten = FALSE)
```

@@ -238,7 +237,7 @@ Now pull down different weather and trait data, combine them, and model their re

First pull down weather data for the entire year of 2018. Create new correctly formatted date column like before too.

```{r}
-weather_2018_url <- "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-01-01&until=2018-12-31"
+weather_2018_url <- "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-01-01&until=2018-12-31"
weather_2018 <- fromJSON(weather_2018_url, flatten = FALSE)

weather_all <- weather_2018$properties %>%
diff --git a/videos/second_walkthrough.Rmd b/videos/second_walkthrough.Rmd
index 0e481e3..4a50228 100644
--- a/videos/second_walkthrough.Rmd
+++ b/videos/second_walkthrough.Rmd
@@ -11,7 +11,7 @@ The purpose of this walkthrough is to review the experimental design of TERRA RE

## Video 1: Explore Data with Traitvis Web App

-Go to [website](https://traitvis.workbench.terraref.org/), which takes a minute to load. Displays plots of various data across collection time.
+Go to [website](https://terraref.org/traitvis), which takes a minute to load. Displays plots of various data across collection time.

As example, we'll look at data from Season 6, which we looked at last week, by going to "MAC Season 6" tab.

@@ -74,7 +74,7 @@ Globus is good for downloading a bunch of images, from a particular date and/or

## Video 4: Downloading RGB Files from Clowder

-Second platform is Clowder, which is better for browsing through files. Website is [here](https://terraref.ncsa.illinois.edu/clowder/). You can follow along, these are publicly accessible files. Clowder is an interface on top of Globus.
+Second platform is Clowder, which is better for browsing through files. Website is [here](https://terraref.org/clowder/). You can follow along, these are publicly accessible files. Clowder is an interface on top of Globus.

All data are organized in several ways. In spaces, collections, or datasets. We can find same RGB tif from before. Under "Explore" tab, select "Collections". Look at "Season 6 (2018)", which takes a minute to load. Then "RGB Camera Data (Season6 Samples)". Scroll down to third file, can see it's the one from the same date and time as before.

@@ -109,7 +109,7 @@ Can see if it works by printing the object; should return a 200 response.

```{python, eval=FALSE}
import requests

-file_url = 'https://terraref.ncsa.illinois.edu/clowder/files/5c5488fa4f0c4b0cbe7af98a/blob'
+file_url = 'https://terraref.org/clowder/files/5c5488fa4f0c4b0cbe7af98a/blob'
api_key = {'key': ''}
file_request = requests.get(file_url, api_key)
file_request
diff --git a/videos/third_walkthrough.Rmd b/videos/third_walkthrough.Rmd
index 43b8973..da8e920 100644
--- a/videos/third_walkthrough.Rmd
+++ b/videos/third_walkthrough.Rmd
@@ -25,7 +25,7 @@ plot(single_RGB)

That's just a single image taken in that particular plot on the first of May. If we want all the images from a day combined, there are processed data files that hold that in Clowder.

-Go to [https://terraref.ncsa.illinois.edu/clowder/](https://terraref.ncsa.illinois.edu/clowder/) to access Clowder. We're using all public images so do not need to log in. Navigate to a file by going Spaces -> Sample Data 2019 -> (may have to "View All Datasets") Season 6 May 2018 full field RGB Geotiffs -> click on first one.
+Go to [https://terraref.org/clowder/](https://terraref.org/clowder/) to access Clowder. We're using all public images so do not need to log in. Navigate to a file by going Spaces -> Sample Data 2019 -> (may have to "View All Datasets") Season 6 May 2018 full field RGB Geotiffs -> click on first one.

This is called a mosaic. Combines all the images from that day, even if they aren't continuous. This is a lower resolution version of all images; it would be a much larger file if not. The file we just plotted is within this.

@@ -37,7 +37,7 @@ Go to Vice app. In command line (Terminal tab), start up Python. First set up co

python3
import requests

-file_url = 'https://terraref.ncsa.illinois.edu/clowder/api/files/5c81a03e4f0c0ca8052b2635?dataset=5c81709a4f0c78f6486d686c&space=5c50512a4f0c436195b9ad67'
+file_url = 'https://terraref.org/clowder/api/files/5c81a03e4f0c0ca8052b2635?dataset=5c81709a4f0c78f6486d686c&space=5c50512a4f0c436195b9ad67'
api_key = {'key': ''}
file_request = requests.get(file_url, api_key)
file_request

@@ -73,9 +73,8 @@ First get plot vector. Now pulling site info from Bety, like we did for trait da

```{r, eval=FALSE}
library(traits)

-options(betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
-        betydb_api_version = 'beta',
-        betydb_key = '9999999999999999999999999999999999999999')
+options(betydb_url = "https://terraref.org/bety/",
+        betydb_api_version = 'v1')

plot_data <- betydb_query(table = "sites",
                          sitename = "MAC Field Scanner Season 6 Range 19 Column 10")
@@ -163,7 +162,7 @@ This will take a minute because it's doing what we did before for three files.

```{python, eval=FALSE}
for file in files:
-    file_request = requests.get('https://terraref.ncsa.illinois.edu/clowder/api/files/' + file["id"], api_key)
+    file_request = requests.get('https://terraref.org/clowder/api/files/' + file["id"], api_key)
    with open(file["filename"], 'wb') as object:
        for chunk in file_request.iter_content(chunk_size=2014):
            if chunk:
diff --git a/vignettes/01-get-trait-data-R.Rmd b/vignettes/01-get-trait-data-R.Rmd
index 8566586..edf361e 100644
--- a/vignettes/01-get-trait-data-R.Rmd
+++ b/vignettes/01-get-trait-data-R.Rmd
@@ -39,9 +39,8 @@ library(knitr)

The function that is used to query BETYdb is called `betydb_query`. To reduce the number of arguments that need to be passed into this function, we can set some global options using `options`. In this case, we will set the URL used in the query, and the API version.
```{r traits-vig-bety-opt} -options(betydb_url = "https://terraref.ncsa.illinois.edu/bety/", - betydb_api_version = 'beta', - betydb_key = '9999999999999999999999999999999999999999') +options(betydb_url = "https://terraref.org/bety/", + betydb_api_version = 'v1') ``` diff --git a/vignettes/02-get-weather-data-R.Rmd b/vignettes/02-get-weather-data-R.Rmd index f238ddd..b131a77 100644 --- a/vignettes/02-get-weather-data-R.Rmd +++ b/vignettes/02-get-weather-data-R.Rmd @@ -22,7 +22,7 @@ library(jsonlite) library(lubridate) library(tidyr) -weather_all <- fromJSON('https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31', flatten = FALSE) +weather_all <- fromJSON('https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31', flatten = FALSE) ``` The `geometries` dataframe is then pulled out from these data, which contains the datapoints from this stream.
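A minimal sketch of that extraction step (assuming the JSON parses to a dataframe with a `geometries` column, as described above):

```{r}
# Pull out the geometries dataframe, which holds the coordinates of each datapoint
geometries <- weather_all$geometries
str(geometries)
```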