19 changes: 16 additions & 3 deletions docs/sphinx/content/users/use_field.rst
@@ -5,7 +5,20 @@
.. _use_field:

Using hypernets_processor in the Field
======================================
Field Processing User Guide
===========================

TBC
Installation
------------

First clone the project repository from GitHub::

    $ git clone https://github.com/HYPERNETS/hypernets_processor.git

Then install the module with pip::

    $ pip install hypernets_processor/

This should automatically install the dependencies.

If you are installing the module to contribute to developing it is recommended you follow the install instructions on the :ref:`developers` page.
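As a quick check that the installation succeeded, you can try importing the package (this assumes the installed package is importable as `hypernets_processor`)::

    $ python -c "import hypernets_processor"

If this completes without an `ImportError`, the module and its dependencies are available.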
11 changes: 0 additions & 11 deletions docs/sphinx/content/users/use_processing.rst

This file was deleted.

102 changes: 102 additions & 0 deletions docs/sphinx/content/users/user_processor.rst
@@ -0,0 +1,102 @@
.. use_processing - description of running the processor in an automated manner
Author: seh2
Email: sam.hunt@npl.co.uk
Created: 22/10/20

.. _user_processor:

Automated Processing User Guide
===============================

This section provides a user guide for running the `hypernets_processor` module as an automated processor of incoming field data. In this scenario, a set of field HYPSTAR systems regularly sync raw data to a server. Running on this server, the `hypernets_processor` processes the data and adds it to an archive that can be accessed through a user portal.

This section covers installing and setting up the processor, setting up specific jobs (e.g. for a particular field site), and running the automated job scheduler.

Server Installation
-------------------

First clone the project repository from GitHub::

    $ git clone https://github.com/HYPERNETS/hypernets_processor.git

To facilitate proper version control of processor configuration, create a new branch for your installed code::

    $ git checkout -b <installation_name>_operational

Then install the module with setuptools, including the option to set up the processor::

    $ python setup.py develop --setup-processor

This automatically installs the processor and its dependencies, followed by running a processor setup routine (see :ref:`user_processor-processor_setup` for more details).

Finally, commit any changes to the module made during set up and push::

    $ git add -A
    $ git commit -m "initial setup on server"
    $ git push

Any future changes to the processor configuration should be committed to ensure appropriate version control. Updates to the processor are then made by merging release branches onto the operational branch (see :ref:`user_processor-updates`).

.. _user_processor-processor_setup:

Processor Configuration
-----------------------

To set the processor configuration, a setup routine is run upon installation. This can be rerun at any time as::

    $ hypernets_processor_setup

This sets up the processor configuration such that it correctly points to the appropriate log file, directories and databases, creating any as necessary. By default, any created log files or databases are added to the defined processor working directory.

For further configuration, you can directly edit the processor configuration file, e.g.::

    $ vim <installation_directory>/hypernets_processor/etc/processor.config
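The structure of this file depends on your installation. Purely as an illustration, an INI-style processor configuration could contain entries along these lines (the section and option names below are hypothetical, not the processor's actual ones)::

    [Processor]
    version = 0.1
    working_directory = /home/hypernets/processor_working

    [Log]
    log_path = /home/hypernets/processor_working/processor.log

Check the file created by the setup routine for the real sections and options.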


.. _user_processor-job_setup:

Job Setup
---------

In the context of the `hypernets_processor`, processing a particular data stream from a given field site is defined as a job.

To initialise a new job to run in the processor, run the following::

    $ hypernets_processor_job_init -n <job_name> -w <job_working_directory> -i <raw_data_directory> --add-to-scheduler

where:

* `job_name` - the name of the job within the context of the hypernets processor (could, for example, be set to the site name).
* `job_working_directory` - the working directory of the job. A job configuration file, called `<job_name>.config`, is created in this directory.
* `raw_data_directory` - the directory the field data is synced to.
* `--add-to-scheduler` - option to add the job to the list of scheduled jobs; should be set.

As well as defining required job configuration information, the job configuration file can also be used to override any processor configuration defaults (e.g. the chosen calibration function, or which file levels to write), except for the set of protected processor configuration defaults (e.g. the processor version number). To see which configuration values may be set, review the processor configuration file.
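As an illustration only, an override in the job configuration file might look as follows (the section and option names here are hypothetical; consult the processor configuration file for the real ones)::

    [Output]
    write_l1a = False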

For all jobs, it is important that the relevant metadata is added to the metadata database, so that it can be included in the data products.

.. _user_processor-scheduler:

Run Scheduler
-------------

Once set up, the automated processing scheduler can be started with::

    $ hypernets_processor_scheduler

To see options, try::

    $ hypernets_processor_scheduler --help

All jobs are run regularly, processing any new data synced to the server from the field since the last run. The run schedule is defined in the scheduler config, which may be edited as::

    $ vim <installation_directory>/hypernets_processor/etc/scheduler.config

Processed products are added to the data archive and listed in the archive database. Any anomalies are added to the anomaly database. More detailed job-related log information is added to the job log file; summary log information for all jobs is added to the processor log file.

To amend the list of scheduled jobs, edit the job configuration files listed in the processor jobs file::

    $ vim <installation_directory>/hypernets_processor/etc/jobs.txt
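Assuming the jobs file simply lists one job configuration file per line, its contents might look like the following (the paths shown are hypothetical)::

    /home/hypernets/jobs/site_a/site_a.config
    /home/hypernets/jobs/site_b/site_b.config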

.. _user_processor-updates:

Updates
-------

Updates to the processor are made by merging release branches onto the operational branch.
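Under this branching scheme, an update could be applied with standard git commands, for example (assuming the release branch is named `<release_name>` and the operational branch follows the `<installation_name>_operational` pattern from the installation step)::

    $ git fetch origin
    $ git checkout <installation_name>_operational
    $ git merge origin/<release_name>
    $ git push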
13 changes: 9 additions & 4 deletions docs/sphinx/content/users/users.rst
@@ -5,13 +5,18 @@
.. _users:

Users
=====
User Guide
==========

Usage
-----

There are two main use cases for the hypernets_processor package. The primary function of the software is the automated preparation of data retrieved from network sites for distribution to users. Additionally, the software may also be used for ad-hoc processing of particular field acquisitions, for example, for testing instrument operation in the field. For information on each of these use cases, click on one of the following links:


.. toctree::
:maxdepth: 2

users_getting_started
use_field
use_processing
user_processor
atbd
43 changes: 0 additions & 43 deletions docs/sphinx/content/users/users_getting_started.rst

This file was deleted.