Merged
16 changes: 8 additions & 8 deletions sensors/01-meteorological-data.Rmd
@@ -48,7 +48,7 @@ The key is to process each type of met data (site, reanalysis, forecast, climate
### Using the API to get data

In order to access the data, we need to construct a URL that links to where the
-data is located on [Clowder](https://terraref.ncsa.illinois.edu/clowder). The data is
+data is located on [Clowder](https://terraref.org/clowder). The data is
then pulled down using the API, which ["receives requests and sends responses"](https://medium.freecodecamp.org/what-is-an-api-in-english-please-b880a3214a82)
, for Clowder.

@@ -72,7 +72,7 @@ observations taken every second.
### Creating the URLs for all data table types

All URLs have the same beginning
-(https://terraref.ncsa.illinois.edu/clowder/api/geostreams),
+(https://terraref.org/clowder/api/geostreams),
then additional information is added for each type of data table as shown below.

* Station: /sensors/sensor_name=[name]
@@ -85,9 +85,9 @@ For example, below are the URLs for the particular data being used in this
vignette. These can be pasted into a browser to see how the data is stored as
text using JSON.

-* Station: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=UA-MAC+AZMET+Weather+Station
-* Sensor: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors/438/streams
-* Datapoints: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
+* Station: https://terraref.org/clowder/api/geostreams/sensors?sensor_name=UA-MAC+AZMET+Weather+Station
+* Sensor: https://terraref.org/clowder/api/geostreams/sensors/438/streams
+* Datapoints: https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
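The pattern — one shared base plus a per-table suffix — can be sketched in Python. The helper functions below are illustrative only (they are not part of any tutorial package), using the ids from this vignette:

```python
BASE = "https://terraref.org/clowder/api/geostreams"

def station_url(name):
    # Spaces in the station name become "+" in the query string.
    return BASE + "/sensors?sensor_name=" + name.replace(" ", "+")

def sensor_url(sensor_id):
    return BASE + "/sensors/%d/streams" % sensor_id

def datapoints_url(stream_id, since, until):
    return BASE + "/datapoints?stream_id=%d&since=%s&until=%s" % (stream_id, since, until)

print(datapoints_url(46431, "2017-01-02", "2017-01-31"))
# → https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
```

Pasting any of these assembled URLs into a browser shows the JSON that each endpoint returns.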

Possible sensor numbers for a station are found on the page for that station
under "id:", and then datapoints numbers are found on the sensor page under
@@ -150,7 +150,7 @@ following is typed into the command line, it will download the datapoints data
that we're interested in as a file which we have chosen to call `spectra.json`.

```{sh eval=FALSE}
-curl -o spectra.json -X GET https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
+curl -o spectra.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31"
```

#### Using R
@@ -176,7 +176,7 @@ library(ncdf.tools)
```

```{r get-weather-fromJSON}
-weather_all <- fromJSON('https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-04-01&until=2018-08-01', flatten = FALSE)
+weather_all <- fromJSON('https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-04-01&until=2018-08-01', flatten = FALSE)
```

The `geometries` dataframe is then pulled out from these data, which contains
@@ -210,7 +210,7 @@ Here we will download the files using the Clowder API, but note that if you have

```{r met-setup2}
knitr::opts_chunk$set(eval = FALSE)
-api_url <- "https://terraref.ncsa.illinois.edu/clowder/api"
+api_url <- "https://terraref.org/clowder/api"
output_dir <- file.path(tempdir(), "downloads")
dir.create(output_dir, showWarnings = FALSE, recursive = TRUE)
```
2 changes: 1 addition & 1 deletion sensors/01.2-pecan-met-utilities.Rmd
@@ -30,7 +30,7 @@ source("https://raw.githubusercontent.com/PecanProject/pecan/develop/models/bioc
writeLines("
<pecan>
<clowder>
-    <hostname>terraref.ncsa.illinois.edu</hostname>
+    <hostname>terraref.org</hostname>
<user>user@illinois.edu</user>
<password>ask</password>
</clowder>
12 changes: 6 additions & 6 deletions sensors/02-sensor-metadata.Rmd
@@ -18,7 +18,7 @@ theme_set(theme_bw())

# Introduction

-This tutorial will demonstrate how to access sensor metadata from within R. All of the sensor metadata is public, and can be queried via the API using the url `https://terraref.ncsa.illinois.edu/clowder/api/datasets/<id>/metadata.jsonld`.
+This tutorial will demonstrate how to access sensor metadata from within R. All of the sensor metadata is public, and can be queried via the API using the url `https://terraref.org/clowder/api/datasets/<id>/metadata.jsonld`.

For further information about sensor metadata see the [Sensor Data Standards](/sensor-data-standards.md) section.

@@ -27,16 +27,16 @@ For further information about sensor metadata see the [Sensor Data Standards](/s

### Example: RSR curves for PAR, PSII and NDVI

-* par: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld
-* pri: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a9174f0cad7d8131b09a/metadata.jsonld
-* ndvi: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld
+* par: https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld
+* pri: https://terraref.org/clowder/api/datasets/5873a9174f0cad7d8131b09a/metadata.jsonld
+* ndvi: https://terraref.org/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld


### PAR sensor metadata

```{r}

-par_metadata <- jsonlite::fromJSON("https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld")
+par_metadata <- jsonlite::fromJSON("https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld")
print(par_metadata$content)
knitr::kable(par_metadata$content$rsr)

@@ -57,7 +57,7 @@ ggplot(data = par_rsr, aes(x = wavelength, y = response), alpha = 0.4) +

```{r}

-ndvi_metadata <- jsonlite::fromJSON("https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld")
+ndvi_metadata <- jsonlite::fromJSON("https://terraref.org/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld")
knitr::kable(t(ndvi_metadata$content[-21]), col.names = '')

```
26 changes: 13 additions & 13 deletions sensors/06-list-datasets-by-plot.Rmd
@@ -3,7 +3,7 @@
## Pre-requisites:

* if you have not already done so, you will need to 1) sign up for the [beta user program](https://terraref.org/beta) and 2)
-sign up and be approved for access to the the [sensor data portal](https://terraref.ncsa.illinois.edu/clowder) in order to get
+sign up and be approved for access to the [sensor data portal](https://terraref.org/clowder) in order to get
the API key that will be used in this tutorial.

The terrautils python package has a new `products` module that aids in connecting
@@ -23,7 +23,7 @@ from terrautils.products import get_file_listing, extract_file_paths

The `get_sensor_list` and `get_file_listing` functions both require the *connection*,
*url*, and *key* parameters. The *connection* can be 'None'. The *url* (called host in the
-code) should be something like `https://terraref.ncsa.illinois.edu/clowder/`.
+code) should be something like `https://terraref.org/clowder/`.
The *key* is a unique access key for the Clowder API.

## Getting the sensor list
@@ -36,10 +36,10 @@ sensor list and provides a list of names suitable for use in the
`get_file_listing` function.

To use this tutorial you will need to sign up for Clowder, have your
-account approved, and then get an API key from the [Clowder web interface](https://terraref.ncsa.illinois.edu/clowder).
+account approved, and then get an API key from the [Clowder web interface](https://terraref.org/clowder).

```{python eval = FALSE}
-url = 'https://terraref.ncsa.illinois.edu/clowder/'
+url = 'https://terraref.org/clowder/'
key = 'ENTER YOUR KEY HERE'
```

@@ -94,13 +94,13 @@ TODO: move this to a separate tutorial page focused on using curl
The source files behind the data are available for downloading through the API. By executing a series
of requests against the API it's possible to determine the files of interest and then download them.

-Each of the API URL's have the same beginning (https://terraref.ncsa.illinois.edu/clowder/api),
+Each of the API URLs has the same beginning (https://terraref.org/clowder/api),
followed by the data needed for a specific request. As we step through the process you will be able
to see how the end of the URL changes depending upon the request.

Below is what the API looks like as a URL. Try pasting it into your browser.

-[https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W](https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W)
+[https://terraref.org/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W](https://terraref.org/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W)

This will return data for the requested plot including its id. This id (or identifier) can then be used for
additional queries against the API.
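A minimal Python sketch of pulling that id back out of the response; the single-element array shape and sample fields here are assumptions for illustration (the real object carries more fields):

```python
import json

# Hypothetical response excerpt; a real response carries more fields.
plot_json = '[{"id": 3355, "name": "MAC Field Scanner Season 1 Field Plot 101 W"}]'

sensor_id = json.loads(plot_json)[0]["id"]
print(sensor_id)  # → 3355
```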
@@ -127,7 +127,7 @@ we use the variable name of SENSOR_NAME to indicate the name of the plot.

``` {sh eval=FALSE}
SENSOR_NAME="MAC Field Scanner Season 1 Field Plot 101 W"
-curl -o plot.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=${SENSOR_NAME}"
+curl -o plot.json -X GET "https://terraref.org/clowder/api/geostreams/sensors?sensor_name=${SENSOR_NAME}"
```

This creates a file named *plot.json* containing the JSON object returned by the API. The JSON object has an
@@ -141,7 +141,7 @@ the stream id. The names of streams are formatted as "<Sensor Group> Dataset
``` {sh eval=FALSE}
SENSOR_ID=3355
STREAM_NAME="Thermal IR GeoTIFFs Datasets (${SENSOR_ID})"
-curl -o stream.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/streams?stream_name=${STREAM_NAME}"
+curl -o stream.json -X GET "https://terraref.org/clowder/api/geostreams/streams?stream_name=${STREAM_NAME}"
```

A file named *stream.json* will be created containing the returned JSON object. This JSON object has an 'id' parameter that
@@ -153,7 +153,7 @@ We now have a stream ID that we can use to list our datasets. The datasets in tu

``` {sh eval=FALSE}
STREAM_ID=11586
-curl -o datasets.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}"
+curl -o datasets.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}"
```

After the call succeeds, a file named *datasets.json* is created containing the returned JSON object. As part of the
@@ -162,7 +162,7 @@ JSON object there are one or more `properties` fields containing *source_dataset
```{javascript eval=FALSE}
properties: {
dataset_name: "Thermal IR GeoTIFFs - 2016-05-09__12-07-57-990",
-    source_dataset: "https://terraref.ncsa.illinois.edu/clowder/datasets/59fc9e7d4f0c3383c73d2905"
+    source_dataset: "https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"
},
```

@@ -171,7 +171,7 @@ The URL of each **source_dataset** can be used to view the dataset in Clowder.
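Collecting every `source_dataset` URL from *datasets.json* can be sketched with the Python standard library. The sample string below mimics the `properties` excerpt shown earlier; a real response holds many datapoints, and the top-level array shape is an assumption:

```python
import json

# Sample shaped like the datapoints response; a real datasets.json holds
# many entries, each with a "properties" object.
datasets_json = """[
  {"properties": {"dataset_name": "Thermal IR GeoTIFFs - 2016-05-09__12-07-57-990",
                  "source_dataset": "https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"}}
]"""

datapoints = json.loads(datasets_json)
urls = [d["properties"]["source_dataset"] for d in datapoints]
print(urls)
```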
The datasets can also be filtered by date. The following filters out datasets that are outside of the range of January 2, 2017 through June 20, 2017.

``` {sh eval=FALSE}
-curl -o datasets.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}&since=2017-01-02&until=2017-06-10"
+curl -o datasets.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}&since=2017-01-02&until=2017-06-10"
```

### Getting file paths from dataset
@@ -181,7 +181,7 @@ Now that we know what the dataset URLs are, we can use the URLs to query the API
Note that the URL has changed from our previous examples now that we're using the URLs returned by the previous call.

``` {sh eval=FALSE}
-SOURCE_DATASET="https://terraref.ncsa.illinois.edu/clowder/datasets/59fc9e7d4f0c3383c73d2905"
+SOURCE_DATASET="https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"
curl -o files.json -X GET "${SOURCE_DATASET}/files"
```

@@ -219,7 +219,7 @@ For each file to be retrieved, the unique file ID is needed on the URL.
``` {sh eval=FALSE}
FILE_NAME="ir_geotiff_L1_ua-mac_2016-05-09__12-07-57-990.tif"
FILE_ID=59fc9e844f0c3383c73d2980
-curl -o "${FILE_NAME}" -X GET "https://terraref.ncsa.illinois.edu/clowder/api/files/${FILE_ID}"
+curl -o "${FILE_NAME}" -X GET "https://terraref.org/clowder/api/files/${FILE_ID}"
```

This call will cause the server to return the contents of the file identified in the URL. This file is then stored locally in *ir_geotiff_L1_ua-mac_2016-05-09__12-07-57-990.tif*.
2 changes: 1 addition & 1 deletion traits/00-BETYdb-getting-started.Rmd
@@ -10,7 +10,7 @@ It contains trait (phenotype) data at the plot or plant level as well as meta da

### Introduction to BETYdb

-The TERRA REF trait database (terraref.ncsa.illinois.edu/bety) uses the BETYdb data schema (structure) and web application.
+The TERRA REF trait database (terraref.org/bety) uses the BETYdb data schema (structure) and web application.
The BETYdb software is actively used and developed by the [TERRA Reference](http://terraref.org) program as well as by the [PEcAn project](http://pecanproject.org).

For more information about BETYdb, see the following:
6 changes: 3 additions & 3 deletions traits/01-web-access.Rmd
@@ -4,10 +4,10 @@

### Web interface

-* Sign up for an account at https://terraref.ncsa.illinois.edu/bety
+* Sign up for an account at https://terraref.org/bety
* Sign up for the TERRA REF [beta user program](https://docs.google.com/forms/d/e/1FAIpQLScIUJL_OSL9BvBOdlczErds3aOg5Lwz4NIdNQnUiXdsLsYdhw/viewform)
* Wait for database access to be granted
-* Your API key will be sent in the email. It can also be found - and regenerated - by navigating to the Users page (data --> [users](https://terraref.ncsa.illinois.edu/bety/users)) in the web interface.
+* Your API key will be sent in the email. It can also be found - and regenerated - by navigating to the Users page (data --> [users](https://terraref.org/bety/users)) in the web interface.

TODO add signup info from handout

@@ -18,7 +18,7 @@ On the Welcome page there is a search option for trait and yield data. This tool

### Download search results as a csv file from the web interface

-* Point your browser to https://terraref.ncsa.illinois.edu/bety/
+* Point your browser to https://terraref.org/bety/
* login
* enter "NDVI" in the search box
* on the next page you will see the results of this search
18 changes: 9 additions & 9 deletions traits/02-betydb-api-access.Rmd
@@ -45,7 +45,7 @@ An API key is not needed to access public data, which includes sample datasets and meta

### Components of a URL query

-* Base url: `terraref.ncsa.illinois.edu/bety`
+* Base url: `terraref.org/bety`
* Path to the api: `/api/v1`
* API endpoint: `/search` or `traits` or `sites`. For BETYdb, these are the names of database tables.
* Query parameters: `genus=Sorghum`
@@ -57,23 +57,23 @@ An API key is not needed to access public data, which includes sample datasets and meta

First, let's construct a query by putting together a URL.

-1. start with the database url: `terraref.ncsa.illinois.edu/bety`
+1. Start with the database url: `terraref.org/bety`
* this url brings you to the home page
2. Add the path to the API, `/api/v1`
-   * now we have terraref.ncsa.illinois.edu/bety/api/v1, which points to the API documentation for additional detail on available options
+   * now we have terraref.org/bety/api/v1, which points to the API documentation for additional detail on available options
3. Add the name of the table you want to query. Let's start with `variables`
-   * terraref.ncsa.illinois.edu/bety/api/v1/variables
+   * terraref.org/bety/api/v1/variables
4. Add query terms by appending a `?` and combining with `&`. These can be done in any order. For example:
* `type=trait` where the variable type is 'trait'
* `name=~height` where the variable name contains 'height'
5. Assembling all of this, you have a complete query:
-   * `terraref.ncsa.illinois.edu/bety/api/v1/variables?type=trait&name=~height`
+   * `terraref.org/bety/api/v1/variables?type=trait&name=~height`
* This will query all trait variables that have 'height' in their name.
* Does it return the expected values? There should be two.
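The five steps reduce to simple string assembly; here is a sketch in Python just to make the pieces explicit:

```python
base = "terraref.org/bety"               # 1. database url
api = "/api/v1"                          # 2. path to the API
table = "/variables"                     # 3. table to query
params = "?type=trait&name=~height"      # 4. query terms joined with "&"

url = "https://" + base + api + table + params   # 5. the complete query
print(url)  # → https://terraref.org/bety/api/v1/variables?type=trait&name=~height
```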

## Your Turn

-> What will the URL https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum return?
+> What will the URL https://terraref.org/bety/api/v1/species?genus=Sorghum return?


> Write a URL that will query the database for sites with "Field Scanner" in the name field. Hint: combine two terms with a `+` as in `Field+Scanner`
@@ -86,14 +86,14 @@ Type the following command into a bash shell (the `-o` option names the output f

```sh
curl -o sorghum.json \
"https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum"
"https://terraref.org/bety/api/v1/species?genus=Sorghum"
```

If you want to write the query without exposing the key in plain text, you can construct it like this:

```sh
curl -o sorghum.json \
"https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum"
"https://terraref.org/bety/api/v1/species?genus=Sorghum"
```

## Using the R jsonlite package to access the API with a URL query
@@ -107,7 +107,7 @@ library(jsonlite)

```{r text-api, warning = FALSE}
sorghum.json <- readLines(
-  paste0("https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum&key=",
+  paste0("https://terraref.org/bety/api/v1/species?genus=Sorghum&key=",
readLines('.betykey')))

## print(sorghum.json)
4 changes: 2 additions & 2 deletions traits/03-access-r-traits.Rmd
@@ -62,7 +62,7 @@ sorghum_info <- betydb_query(table = 'species',
genus = "Sorghum",
api_version = 'v1',
limit = 'none',
-                            betyurl = "https://terraref.ncsa.illinois.edu/bety/")
+                            betyurl = "https://terraref.org/bety/")

```

@@ -71,7 +71,7 @@ sorghum_info <- betydb_query(table = 'species',
Notice all of the arguments that the `betydb_query` function requires? We can change this by setting the default connection options thus:

```{r 03-set-up, echo = TRUE}
-options(betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+options(betydb_url = "https://terraref.org/bety/",
betydb_api_version = 'v1')
```

4 changes: 2 additions & 2 deletions traits/04-danforth-indoor-phenotyping-facility.Rmd
@@ -19,12 +19,12 @@ library(traits)

## Connect to the TERRA REF Trait database

-Unlike the first two tutorials, now we will be querying real data from the public TERRA REF database. So we will use a new URL, https://terraref.ncsa.illinois.edu/bety/, and we will need to use our own private key.
+Unlike the first two tutorials, now we will be querying real data from the public TERRA REF database. So we will use a new URL, https://terraref.org/bety/, and we will need to use our own private key.

```{r terraref-connect-options}

options(betydb_key = readLines('.betykey', warn = FALSE),
-        betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+        betydb_url = "https://terraref.org/bety/",
betydb_api_version = 'v1')
```
### Query data from the Danforth Phenotyping Facility
2 changes: 1 addition & 1 deletion traits/05-maricopa-field-scanner.Rmd
@@ -13,7 +13,7 @@ library(leaflet)


options(betydb_key = readLines('.betykey', warn = FALSE),
-        betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+        betydb_url = "https://terraref.org/bety/",
betydb_api_version = 'v1')
```
