Merged
File renamed without changes.
24 changes: 0 additions & 24 deletions docs/sphinx/content/users/use_field.rst

This file was deleted.

60 changes: 60 additions & 0 deletions docs/sphinx/content/users/user_adhoc.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,60 @@
.. user_adhoc - description of how to use the processor for ad hoc sequence processing
Author: seh2
Email: sam.hunt@npl.co.uk
Created: 23/3/20

.. _user_adhoc:

User Guide - Ad-hoc Sequence Processing
=======================================

This section provides a user guide for running the **hypernets_processor** module to process specified field acquisitions, or sequences, on an ad-hoc basis outside of any automated processing.

Prerequisites
-------------

**hypernets_processor** is distributed using Git, from the project's `GitHub <https://github.com/HYPERNETS/hypernets_processor>`_ repository. Git can be installed from the `Git website <https://git-scm.com>`_.

Python 3 is required to run the software; the `Anaconda <https://www.anaconda.com>`_ distribution provides a convenient way to install it.

Installation
------------

First clone the project repository from GitHub::

$ git clone https://github.com/HYPERNETS/hypernets_processor.git

Then install the module with pip::

$ pip install hypernets_processor/

This will also automatically install any dependencies.
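
To confirm the installation succeeded, you can check that the package imports cleanly (this assumes the importable package name matches the repository name, ``hypernets_processor``)::

   $ python -c "import hypernets_processor"

If this command returns without error, the module and its dependencies are available in the active environment.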

If you are installing the module to contribute to its development, it is recommended you follow the install instructions on the :ref:`developers` page.

Sequence Processing
-------------------

Once installed, the `hypernets_sequence_processor` command-line tool provides the means to process raw sequence data. It is run as follows::

$ hypernets_sequence_processor -i <input_directory> -o <output_directory> -n <network>

where:

* `input_directory` - directory of a raw sequence product, or a directory containing a number of raw sequence products, to process.
* `output_directory` - directory to write output data to.
* `network` - network name, either `land` or `water`. The default configuration for this network is applied for the processing.
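
As a concrete illustration (the paths here are placeholders), processing a directory of raw sequences with the land network defaults might look like::

   $ hypernets_sequence_processor -i ~/hypernets/raw_sequences -o ~/hypernets/products -n land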

To see more options, try::

$ hypernets_sequence_processor --help

Alternatively, the processing can be specified with a job configuration file as follows::

$ hypernets_sequence_processor -j <job_config_path>

where:

* `job_config_path` - path of a job configuration file. See :ref:`user_processor-job_setup` for information on initialising a job configuration file.

Specifying the processing with a custom job configuration file allows configuration values other than the network defaults to be set, for example, the choice of calibration function.
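
As a purely illustrative sketch (the section and option names below are placeholders, not the processor's actual schema), a job configuration file is a plain-text file, typically in the INI style read by Python's ``configparser``::

   [Job]
   network = land
   site_id = example_site
   raw_data_directory = /data/hypernets/raw
   output_directory = /data/hypernets/products

See :ref:`user_processor-job_setup` for how to initialise a real job configuration file.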
@@ -3,15 +3,22 @@
Email: sam.hunt@npl.co.uk
Created: 22/10/20

.. _user_processor:
.. _user_automated:

User Guide - Automated Processing
=================================

This section provides a user guide for running the **hypernets_processor** module as an automated processor of incoming field data. In this scenario, a set of field HYPSTAR systems regularly sync raw data to a server. Running on this server, the **hypernets_processor** processes the data and adds it to an archive that can be accessed through a user portal.

This section covers installing and setting up the processor, setting up specific jobs (e.g. for a field site), and running the automated job scheduler.

Prerequisites
-------------

**hypernets_processor** is distributed using Git, from the project's `GitHub <https://github.com/HYPERNETS/hypernets_processor>`_ repository. Git can be installed on your Linux server using your package manager of choice, following the instructions on the `Git website <https://git-scm.com/download/linux>`_.

Python 3 is required to run the software; the `Anaconda <https://www.anaconda.com>`_ distribution provides a convenient way to install it.

Server Installation
-------------------

@@ -38,6 +45,7 @@ Finally, commit any changes to the module made during set up and push::
Any future changes to the processor configuration should be committed, to ensure appropriate version control. Updates to the processor are then made by merging release branches onto the operational branch (see :ref:`user_processor-updates`).

.. _user_processor-processor_setup:

Processor Configuration
-----------------------

@@ -53,6 +61,7 @@ For further configuration one can directly edit the processor configuration file


.. _user_processor-job_setup:

Job Setup
---------

@@ -74,6 +83,7 @@ As well as defining required job configuration information, the job configuratio
For all jobs, it is important that relevant metadata be added to the metadata database, so it can be included in the data products.

.. _user_processor-scheduler:

Running Job Scheduler
---------------------

@@ -96,6 +106,7 @@ To amend the list of scheduled jobs, edit the list of job configuration files li
$ vim <installation_directory>/hypernets_processor/etc/jobs.txt
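
For illustration (the paths are placeholders, and the one-path-per-line layout is an assumption based on the description above), ``jobs.txt`` lists the configuration file of each scheduled job::

   /home/hypernets/jobs/site_a_job.config
   /home/hypernets/jobs/site_b_job.config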

.. _user_processor-updates:

Updates
-------

9 changes: 4 additions & 5 deletions docs/sphinx/content/users/users.rst
@@ -8,12 +8,11 @@
User Guide
==========

There are two main use cases for the **hypernets_processor** module. The primary function of the software is the automated preparation of data retrieved from network sites for distribution to users. Additionally, the software may also be used for ad-hoc processing of particular field acquisitions, for example for testing instrument operation in the field. For information on each of these use cases, read the following sections.


.. toctree::
:maxdepth: 2
:maxdepth: 1

use_field
user_processor
atbd
user_adhoc
user_automated
3 changes: 2 additions & 1 deletion docs/sphinx/index.rst
@@ -17,6 +17,7 @@ For Users:
:maxdepth: 2

content/users/users
content/atbd/atbd

For Developers:
~~~~~~~~~~~~~~~
@@ -30,7 +31,7 @@ API Documentation
~~~~~~~~~~~~~~~~~

.. toctree::
:maxdepth: 3
:maxdepth: 2

content/API/hypernets_processor
