HAPI Server Front-End

A generic HAPI front-end server.

Contents

  1. About
  2. Installation
  3. Examples
  4. Usage
  5. Server Configuration
  6. Metadata
  7. Development
  8. Contact

1. About

The intended use case for this server is for a data provider that has

  1. HAPI metadata, in one of a variety of forms, for a collection of datasets, and
  2. a command line program that produces at least headerless HAPI CSV for all parameters in a dataset over the full time range of available data. Optionally, the command line program can take as inputs a start and stop time, a list of one or more parameters to output, and an output format.

This server handles

  1. HAPI metadata validation,
  2. request validation and error responses,
  3. logging and alerts,
  4. time and parameter subsetting (as needed), and
  5. generation of HAPI JSON or HAPI binary (as needed).

A list of catalogs that are served using this software is given at http://hapi-server.org/servers.

2. Installation

Binary packages are available for OS-X x64, Linux x64, and Linux ARMv7l (e.g., Raspberry Pi).

A Docker image is also available.

Installation and startup commands for the binary packages and the Docker image are given below. See the Development section for instructions on installing from source.

OS-X x64:

 curl -L https://github.com/hapi-server/server-nodejs/releases/download/v0.9.5/hapi-server-v0.9.5-darwin-x64.tgz | tar zxf -
 cd hapi-server-v0.9.5
 ./hapi-server --open

Linux x64:

 curl -L https://github.com/hapi-server/server-nodejs/releases/download/v0.9.5/hapi-server-v0.9.5-linux-x64.tgz | tar zxf -
 cd hapi-server-v0.9.5
 ./hapi-server --open

Linux ARMv7l:

 curl -L https://github.com/hapi-server/server-nodejs/releases/download/v0.9.5/hapi-server-v0.9.5-linux-armv7l.tgz | tar zxf -
 cd hapi-server-v0.9.5
 ./hapi-server --open

Docker:

docker pull rweigel/hapi-server:v0.9.5
docker run -dit --name hapi-server-v0.9.5 --expose 8999 -p 8999:8999 rweigel/hapi-server:v0.9.5
docker exec -it hapi-server-v0.9.5 ./hapi-server
# Open http://localhost:8999/TestData/hapi in a web browser

3. Examples

List of Included Examples

The following examples are included in the metadata directory. The examples can be run using

./hapi-server -f metadata/FILENAME.json

where FILENAME.json is one of the file names listed below (e.g., Example0.json).

  • Example0.json - A Python program dumps a full dataset in the headerless HAPI CSV format; the server handles time and parameter subsetting and creation of HAPI Binary and JSON. See section 3.1.
  • Example1.json - Same as Example0 except the Python program handles time subsetting.
  • Example2.json - Same as Example0 except the Python program handles time and parameter subsetting and creation of HAPI CSV and Binary. See section 3.2.
  • Example3.json - Same as Example2 except HAPI info metadata for each dataset is stored in an external file.
  • Example4.json - Same as Example2 except HAPI info metadata for each dataset is generated by a command line command.
  • Example5.json - Same as Example2 except catalog metadata is stored in an external file.
  • Example6.json - Same as Example2 except catalog metadata is generated by a command line command.
  • Example7.json - Same as Example2 except that catalog metadata is returned from a URL.
  • Example8.json - A dataset in headerless HAPI CSV format is stored in a single file; the server handles parameter and time subsetting and creation of HAPI JSON and Binary.
  • Example9.json - A dataset in headerless HAPI CSV format is returned by a URL; the server handles parameter and time subsetting and creation of HAPI JSON and Binary.
  • AutoplotExample1.json - A dataset is stored in multiple files and AutoplotDataServer is used to subset in time. See section 3.6.
  • AutoplotExample2.json - A dataset is stored in a CDF file and AutoplotDataServer is used to generate HAPI CSV. See section 3.6.
  • TestData.json - A test dataset used to test HAPI clients.
  • SSCWeb.json - Data from a non-HAPI web service is made available from a HAPI server. See section 3.3.
  • INTERMAGNET.json - Data in ASCII files on an FTP site is made available from a HAPI server. See section 3.5.
  • QinDenton.json - Data in a single ASCII file is converted to headerless HAPI CSV by a Python program. See section 3.4.

3.1 Serve data from a minimal Python program

In this example, we assume that the command line program that returns a dataset has the minimal capabilities required: when executed, it generates headerless HAPI CSV with all parameters in the dataset over the full time range of available data. The server handles time and parameter subsetting and the generation of HAPI Binary and JSON.

The Python script Example.py returns HAPI-formatted CSV data (with no header) with two parameters. To serve this data, only a configuration file, Example0.json, is needed. The configuration file has the information used to call the command line program, along with HAPI metadata that describes the output of Example.py. Details about the configuration file format are given in the Metadata section.

The Python calling syntax of Example.py is

python Example.py
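
For reference, a minimal program of this kind might look like the following sketch (hypothetical code, not the actual Example.py), which writes a short headerless HAPI CSV time series to stdout:

import datetime

# Hypothetical minimal data program: write one day of 1-hour cadence data
# as headerless HAPI CSV (a time column followed by one scalar parameter).
start = datetime.datetime(1970, 1, 1)
for i in range(24):
    t = start + datetime.timedelta(hours=i)
    print("%sZ,%d" % (t.strftime("%Y-%m-%dT%H:%M:%S"), i))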

To run this example locally after installation, execute

./hapi-server --file metadata/Example0.json

and then open http://localhost:8999/Example0/hapi. You should see the same landing page as that at http://hapi-server.org/servers/Example0/hapi. Note that the --open command line switch can be used to automatically open the landing page, e.g.,

./hapi-server --file metadata/Example0.json --open

3.2 Serve data from an enhanced Python program

The Python script Example.py actually has the ability to subset parameters and time and to produce binary output. To have the server use these capabilities, we need to modify the server configuration metadata in Example0.json. The changes are replacing

"command": "python bin/Example.py"

with

"command": "python bin/Example.py --params ${parameters} --start ${start} --stop ${stop} --fmt ${format}"

and adding

"formats": ["csv","binary"]

The modified file is Example2.json. To run this example locally after installation, execute

./hapi-server --file metadata/Example2.json

and then open http://localhost:8999/Example2/hapi. The command line program now produces binary output and performs parameter subsetting as needed, and the response time for data requests should decrease.

The server responses will be identical to those in the previous example. You should see the same landing page as that at http://hapi-server.org/servers/Example2/hapi.
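
For reference, a program that supports these options might handle them as sketched below (hypothetical code; the actual Example.py may use different option handling):

import argparse

# Hypothetical sketch of reading the values the server substitutes into the
# command template: ${parameters}, ${start}, ${stop}, and ${format}.
parser = argparse.ArgumentParser()
parser.add_argument("--params", default="")  # comma-separated parameter names
parser.add_argument("--start", default="1970-01-01Z")
parser.add_argument("--stop", default="1970-01-02Z")
parser.add_argument("--fmt", default="csv", choices=["csv", "binary"])
args = parser.parse_args()

# A real program would subset its output to the requested parameters and to
# the [start, stop) interval, and write binary output when args.fmt == "binary".
print(args.params, args.start, args.stop, args.fmt)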

3.3 Serve data from a non-HAPI web service

A non-HAPI server can be quickly made HAPI-compliant by using this server as a pass-through. Data from SSCWeb, which is available from a REST API, has been made available through a HAPI API at http://hapi-server.org/servers/SSCWeb/hapi. The configuration file is SSCWeb.json and the command line program is SSCWeb.js. Note that the metadata file SSCWeb.json was created using code in metadata/SSCWeb.
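
The idea of a pass-through is sketched below with hypothetical Python code (placeholder URL and field names; the actual SSCWeb.js is a Node.js program): fetch records from the upstream service and re-emit them as headerless HAPI CSV.

import json
import sys
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

# Hypothetical pass-through: request records from an upstream REST API and
# re-emit them as headerless HAPI CSV (time, value) on stdout.
records = json.loads(urlopen("https://example.com/api/records").read().decode("utf-8"))
for record in records:
    sys.stdout.write("%s,%s\n" % (record["Time"], record["Value"]))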

To run this example locally after installation, execute

./hapi-server --file metadata/SSCWeb.json --open

You should see the same landing page as that at http://hapi-server.org/servers/SSCWeb/hapi.

3.4 Serve data stored in a single file

The Qin-Denton dataset contains multiple parameters stored in a single large file.

The command line program that produces HAPI CSV from this file is QinDenton.py and the metadata is in QinDenton.json.

To run this example, use

./hapi-server --file metadata/QinDenton.json

3.5 Serve data stored in multiple files

INTERMAGNET provides ground magnetometer data from over 150 stations at 1-minute and 1-second cadences, stored in daily files on an FTP site.

The command line program that produces HAPI CSV is INTERMAGNET.py and the metadata is in INTERMAGNET.json. The code that produces the metadata is in metadata/INTERMAGNET. To run this example, execute

./hapi-server --file metadata/INTERMAGNET.json --open

3.6 Serve data read by Autoplot

Nearly any data file that can be read by Autoplot can be served using this server.

Serving data requires at most two steps:

  1. Generating an Autoplot URI for each parameter; and (in some cases)
  2. writing (by hand) metadata for each parameter.

Example 1

The first example serves data stored in a single CDF file. The configuration file is AutoplotExample1.json.

In this example, step 2 above (writing metadata by hand) is not required because the data file has metadata in a format that Autoplot can translate to HAPI metadata.

To run this example locally, execute

./hapi-server --file metadata/AutoplotExample1.json

Example 2

The second example serves data stored in multiple ASCII files. The configuration file is AutoplotExample2.json.

To run this example locally, execute

./hapi-server --file metadata/AutoplotExample2.json

4. Usage

List command line options:

./hapi-server -h

  --help, -h    Show help 
  --file, -f    Catalog configuration file
  --port, -p    Server port [default:8999]             
  --conf, -c    Server configuration file
  --ignore, -i  Start server even if metadata errors
  --open, -o    Open web page on start
  --test, -t    Run URL tests and exit
  --verify, -v  Run verification tests and exit

Basic usage:

./hapi-server --file metadata/TestData.json

This starts a HAPI server at http://localhost:8999/TestData/hapi and serves the datasets specified in the catalog ./metadata/TestData.json.

Multiple catalogs can be served by providing multiple catalog files on the command line:

./hapi-server --file CATALOG1.json --file CATALOG2.json

For example,

./hapi-server --file metadata/TestData.json --file metadata/Example1.json

will serve the two datasets at

http://localhost:8999/TestData/hapi
http://localhost:8999/Example1/hapi

And the page at http://localhost:8999/ will point to these two URLs.

5. Server Configuration

5.1 conf/config.json

The variables HAPISERVERPATH, HAPISERVERHOME, NODEEXE, and PYTHONEXE can be set in conf/config.json or as environment variables. These variables can be used in commands, files, and URLs in the server metadata (the file passed using the command-line --file switch).

The default configuration file is conf/config.json; this location can be changed using a command line argument, e.g.,

./hapi-server -c /tmp/config.json

To set variables using environment variables, use, e.g.,

PYTHONEXE=/opt/python/bin/python ./hapi-server

Variables set as environment variables take precedence over those set in conf/config.json.
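
For example, a conf/config.json that overrides the Python executable and the home directory might look like the following (the values are illustrative):

{
  "HAPISERVERHOME": "/var/hapi",
  "PYTHONEXE": "/opt/python/bin/python"
}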

HAPISERVERPATH and HAPISERVERHOME

These two variables can be used in metadata to reference a directory. For example,

"catalog": "$HAPISERVERHOME/mymetadata/Data.json"

By default, $HAPISERVERPATH is the installation directory (the directory containing the shell launch script hapi-server) and should not be changed as it is referenced in the demonstration metadata files. Modify HAPISERVERHOME in conf/config.json to use a custom path.

All relative paths in commands in metadata files are relative to the directory where hapi-server was executed.

For example, if

/tmp/hapi-server --file metadata/TestData.json

is executed from /home/username, the file

/home/username/metadata/TestData.json

is read, and relative paths in TestData.json have /home/username/ prepended.

PYTHONEXE

This is the command used to call Python. By default, it is python. If python is not in the path, this can be set using a relative or absolute path. Python is used by several of the demonstration catalogs.

Example:

"command": "$PYTHONEXE $HAPISERVERHOME/mybin/Data.py"

NODEEXE

This is the command used to call NodeJS. By default, it is the command used to start the server. The start-up script looks for a NodeJS executable in $HAPISERVERPATH/bin and then tries node and then nodejs.

5.2 Apache

To expose a URL through Apache, (1) enable mod_proxy and mod_proxy_http, (2) add the following inside a <VirtualHost> node in an Apache virtual hosts file,

<VirtualHost *:80>
	ProxyPass /TestData http://localhost:8999/TestData retry=1
	ProxyPassReverse /TestData http://localhost:8999/TestData
</VirtualHost>

and (3) include this file in the Apache start-up configuration.

If serving multiple catalogs, use

<VirtualHost *:80>
	ProxyPass /servers http://localhost:8999/servers retry=1
	ProxyPassReverse /servers http://localhost:8999/servers
</VirtualHost>

5.3 Nginx

For Nginx, add the following to nginx.conf

location /TestData {
    proxy_pass http://localhost:8999/TestData;
}

If serving multiple catalogs, use

location /servers {
    proxy_pass http://localhost:8999/servers;
}

6. Metadata

The metadata required by this server is similar to the /catalog and /info responses of a HAPI server.

The server requires that the /catalog response is combined with the /info responses for all datasets in the catalog in a single JSON catalog configuration file. Additional information about how to generate the data must also be included in this JSON file.

The top-level structure of the configuration file is

{
	"server": { // See section 6.1
		"id": "",
		"prefix": "",
		"landing": ""
	},
	"catalog": array or string, // See section 6.2
	"data": { // See section 6.3
		"command": "Command line template",
		 or
		"file": "HAPI CSV file",
		"fileformat": "one of 'csv', 'binary', 'json'",
		 or
		"url": "URL that returns HAPI data",
		"urlformat": "one of 'csv', 'binary', 'json'",
		"contact": "Email address to contact if there is an error in the command line program",
		"testcommands": [
			{
				"command": string,
				"Nlines": integer,
				"Nbytes": integer,
				"Ncommas": integer
			},
			...
		],
		"testurls": [
			{
				"url": string,
				"Nlines": integer,
				"Nbytes": integer,
				"Ncommas": integer
			},
			...
		]
	}
}

A variety of examples are given in ./metadata and described below along with options for the catalog property.

The string command in the data node is a command that produces a headerless HAPI data response. It can have placeholders for the time range of data to return (${start} and ${stop}), a dataset id (${id}), a comma-separated list of parameters (${parameters}), and an output format (${format}). For example,

python ./bin/Example.py --dataset ${id} --parameters \
	${parameters} --start ${start} --stop ${stop} --format ${format}
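
For example, for a request for the parameter scalar of dataset1 from 2000-01-01Z to 2000-01-02Z in CSV format, the server would execute something like (parameter and dataset names are illustrative)

python ./bin/Example.py --dataset dataset1 --parameters scalar --start 2000-01-01Z --stop 2000-01-02Z --format csv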

6.1 server

The server node has the form

"server": {
	"id": "",
	"prefix": "",
	"landing": ""
}

By default, id (and prefix) is the name of the server configuration file without its extension. For example, if the server is started with

./hapi-server --file metadata/TestData.json

then id=TestData and prefix=TestData.

By default, this catalog would be served from

http://localhost:8999/TestData/hapi

TestData in the URL can be changed to TestData2 by using prefix=TestData2.

landing is the path of the page to serve at

http://localhost:8999/TestData/hapi

By default, the page served is $HAPISERVERPATH/public/default.htm.
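
For example, a server node that keeps the default id but changes the URL prefix and serves a custom landing page might be (the paths and names are illustrative):

"server": {
	"id": "TestData",
	"prefix": "TestData2",
	"landing": "$HAPISERVERHOME/mymetadata/landing.htm"
}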

6.2 catalog

The catalog node can be either a string or an array.

If it is an array, it should contain either the combined HAPI /catalog and /info response (6.2.1) or a /catalog response with references to the /info response for each dataset (6.2.2).

If it is a string (6.2.3), the string is either a file containing a catalog array or a command line template that returns a catalog array.

6.2.1 Combined HAPI /catalog and /info object

If catalog is an array, it should have the same format as a HAPI /catalog response (each object in the array has an id property and an optional title property) with the addition of an info property that is the HAPI /info response for that id, e.g., the response to /info?id=dataset1.

"catalog":
 [
	{
		"id": "dataset1",
		"title": "a dataset",
		"info": {
				"startDate": "2000-01-01Z",
				"stopDate": "2000-01-02Z",
				"parameters": [...]
		}
	},
	{
		"id": "dataset2",
		"title": "another dataset",
		"info": {
			"startDate": "2000-01-01Z",
			"stopDate": "2000-01-02Z",
			"parameters": [...]
		}
	}
 ]

In the following subsections, this type of JSON structure is referred to as a fully resolved catalog.

Examples of this type of catalog include Example0.json, Example1.json, and Example2.json.

6.2.2 /catalog response with file or command template for info object

The info value can be a path to an info JSON file, e.g.,

"catalog": 
 [
	{
		"id": "dataset1",
		"title": "a dataset",
		"info": "relativepath/to/dataset2/info_file.json"
	},
	{
		"id": "dataset2",
		"title": "another dataset",
		"info": "/absolutepath/to/dataset2/info_file.json"
	}
 ]

See also Example3.json.

Alternatively, the metadata for each dataset may be produced by executing a command line program. For example, in the following, the command for dataset1 should write the HAPI JSON corresponding to /info?id=dataset1 to stdout. Before execution, the string ${id}, if found, is replaced with the requested dataset ID. Execution of program2 should produce the HAPI JSON corresponding to the query /info?id=dataset2.

"catalog":
 [
	{
		"id": "dataset1",
		"title": "a dataset",
		"info": "bin/program --id ${id}" 
	},
	{
		"id": "dataset2",
		"title": "another dataset",
		"info": "program2"
	}
 ]

See also Example4.json.

6.2.3 References to a command line template or file

The catalog value can be a command line program that generates a fully resolved catalog, e.g.,

"catalog": "program --arg1 val1 ..."

The command should write a fully resolved catalog (see section 6.2.1) to stdout. See also Example6.json.

The path to a fully resolved catalog can also be given. See also Example5.json.

6.3 data
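
The structure of the data node is shown in the top-level template at the start of section 6. For example, a data node that uses a command line template might look like the following (the command is taken from section 3.2; the contact address is illustrative):

"data": {
	"command": "python bin/Example.py --params ${parameters} --start ${start} --stop ${stop} --fmt ${format}",
	"contact": "user@example.com"
}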

7. Development

7.1 Installation

Install nodejs (tested with v6) using either the standard installer or NVM.

NVM installation notes:
# Install Node Version Manager
curl https://raw.githubusercontent.com/creationix/nvm/v0.9.5/install.sh | bash

# Open new shell (see displayed instructions from above command)

# Install and use node.js version 6
nvm install 6
# Clone the server repository
git clone https://github.com/hapi-server/server-nodejs

# Install dependencies
cd server-nodejs; npm install

# Start server
node server.js

# Run tests; Python 2.7+ required for certain tests.
npm test

8. Contact

Bob Weigel rweigel@gmu.edu

Please submit bugs and feature requests to the issue tracker.
