Commit
update barebones metadata type requirements
Ariana Barzinpour committed Jun 13, 2023
1 parent f8f29c0 commit 1a07383
Showing 6 changed files with 19 additions and 16 deletions.
2 changes: 1 addition & 1 deletion docs/about-core-concepts/metadata-types.rst
@@ -8,4 +8,4 @@ Metadata Types

Metadata type yaml file must contain name, description and dataset keys.

- Dataset key must contain id, sources, creation_dt, label and search_fields keys.
+ Dataset key must contain id, sources, grid_spatial, measurements, creation_dt, label, format, and search_fields keys.
9 changes: 7 additions & 2 deletions docs/config_samples/metadata_types/bare_bone.yaml
@@ -2,10 +2,15 @@
name: barebone
description: A minimalist metadata type file
dataset:
- id: [id] # No longer configurable in newer ODCs.
- sources: [lineage, source_datasets] # No longer configurable in newer ODCs.
+ id: [id] # No longer configurable in newer ODCs.
+ sources: [lineage, source_datasets] # No longer configurable in newer ODCs.

+ grid_spatial: [grid_spatial, projection]
+ measurements: [measurements]
+ creation_dt: [properties, 'odc:processing_datetime']
+ label: [label]
+ format: [properties, 'odc:file_format']

search_fields:
platform:
description: Platform code
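The key requirements the commit documents can be checked mechanically once a metadata type file has been parsed into a dict. A minimal sketch (illustrative only; `missing_keys` is not part of the datacube API, and the dict below mirrors the barebone sample rather than a real parsed file):

```python
# Illustrative check of the documented key requirements for a metadata
# type document. Not part of the Open Data Cube API.
REQUIRED_TOP_KEYS = {"name", "description", "dataset"}
REQUIRED_DATASET_KEYS = {
    "id", "sources", "grid_spatial", "measurements",
    "creation_dt", "label", "format", "search_fields",
}


def missing_keys(doc):
    """Return (missing top-level keys, missing dataset keys) for a parsed doc."""
    missing_top = REQUIRED_TOP_KEYS - doc.keys()
    missing_dataset = REQUIRED_DATASET_KEYS - doc.get("dataset", {}).keys()
    return missing_top, missing_dataset


# A dict mirroring the barebone sample in this diff.
barebone = {
    "name": "barebone",
    "description": "A minimalist metadata type file",
    "dataset": {
        "id": ["id"],
        "sources": ["lineage", "source_datasets"],
        "grid_spatial": ["grid_spatial", "projection"],
        "measurements": ["measurements"],
        "creation_dt": ["properties", "odc:processing_datetime"],
        "label": ["label"],
        "format": ["properties", "odc:file_format"],
        "search_fields": {"platform": {"description": "Platform code"}},
    },
}

print(missing_keys(barebone))  # → (set(), set())
```

In practice the dict would come from parsing the YAML file (e.g. with `yaml.safe_load`); the check itself is independent of how the document was loaded.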
10 changes: 5 additions & 5 deletions docs/installation/data-preparation-scripts.rst
@@ -42,11 +42,11 @@ Download the USGS Collection 1 landsat scenes from any of the links below:

The prepare script for collection 1 - level 1 data is available in
`ls_usgs_prepare.py
- <https://github.com/opendatacube/datacube-dataset-config/blob/master/old-prep-scripts/ls_usgs_prepare.py>`_.
+ <https://github.com/opendatacube/datacube-dataset-config/blob/main/old-prep-scripts/ls_usgs_prepare.py>`_.

::

-     $ wget https://github.com/opendatacube/datacube-dataset-config/raw/master/old-prep-scripts/ls_usgs_prepare.py
+     $ wget https://github.com/opendatacube/datacube-dataset-config/raw/main/old-prep-scripts/ls_usgs_prepare.py
$ python ls_usgs_prepare.py --help
Usage: ls_usgs_prepare.py [OPTIONS] [DATASETS]...

@@ -85,14 +85,14 @@ For Landsat collection 1 level 1 product:
To prepare downloaded USGS LEDAPS Landsat scenes for use with the Data Cube, use
the script provided in
`usgs_ls_ard_prepare.py
- <https://github.com/opendatacube/datacube-dataset-config/blob/master/agdcv2-ingest/prepare_scripts/landsat_collection/usgs_ls_ard_prepare.py>`_
+ <https://github.com/opendatacube/datacube-dataset-config/blob/main/agdcv2-ingest/prepare_scripts/landsat_collection/usgs_ls_ard_prepare.py>`_

The following example generates the required Dataset Metadata files, named
`agdc-metadata.yaml` for three landsat scenes.

::

-     $ wget https://github.com/opendatacube/datacube-dataset-config/raw/master/agdcv2-ingest/prepare_scripts/landsat_collection/usgs_ls_ard_prepare.py
+     $ wget https://github.com/opendatacube/datacube-dataset-config/raw/main/agdcv2-ingest/prepare_scripts/landsat_collection/usgs_ls_ard_prepare.py
$ python USGS_precollection_oldscripts/usgslsprepare.py --help
Usage: usgslsprepare.py [OPTIONS] [DATASETS]...

@@ -134,7 +134,7 @@ Then :ref:`index the data <indexing>`.
To view an example of how to `index Sentinel-2 data from S3`_ check out the documentation
available in the datacube-dataset-config_ repository.

- .. _`index Sentinel-2 data from S3`: https://github.com/opendatacube/datacube-dataset-config/blob/master/sentinel-2-l2a-cogs.md
+ .. _`index Sentinel-2 data from S3`: https://github.com/opendatacube/datacube-dataset-config/blob/main/sentinel-2-l2a-cogs.md
.. _datacube-dataset-config: https://github.com/opendatacube/datacube-dataset-config/

Custom Prepare Scripts
7 changes: 4 additions & 3 deletions docs/installation/metadata-types.rst
@@ -5,15 +5,16 @@ A Metadata Type defines which fields should be searchable in your product or dataset

Three metadata types are added by default called ``eo``, ``telemetry`` and ``eo3``.

- You can see the default metadata types in the repository at `datacube/index/default-metadata-types.yaml <https://github.com/opendatacube/datacube-core/blob/develop/datacube/index/default-metadata-types.yaml>`_.

You would create a new metadata type if you want custom fields to be searchable for your products, or
if you want to structure your metadata documents differently.

+ You can see the default metadata type in the repository at `datacube/index/default-metadata-types.yaml <https://github.com/opendatacube/datacube-core/blob/develop/datacube/index/default-metadata-types.yaml>`_.

To add or alter metadata types, you can use commands like: ``datacube metadata add <path-to-file>``
and to update: ``datacube metadata update <path-to-file>``. Using ``--allow-unsafe`` will allow
you to update metadata types where the changes may have unexpected consequences.

+ Note that from version 1.9 onwards, only eo3-compatible metadata types will be accepted.

.. literalinclude:: ../config_samples/metadata_types/bare_bone.yaml
:language: yaml
@@ -22,4 +23,4 @@ you to update metadata types where the changes may have unexpected consequences.

Metadata type yaml file must contain name, description and dataset keys.

- Dataset key must contain id, sources, creation_dt, label and search_fields keys.
+ Dataset key must contain id, sources, grid_spatial, measurements, creation_dt, label, format, and search_fields keys.
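The add/update workflow described in this file can be exercised as below (paths are illustrative, and both commands require an installed datacube with an initialised database):

```shell
# Register a new metadata type, then push a revised version of the same file.
datacube metadata add docs/config_samples/metadata_types/bare_bone.yaml
datacube metadata update docs/config_samples/metadata_types/bare_bone.yaml
# Force through a change that ODC considers potentially unsafe:
datacube metadata update --allow-unsafe docs/config_samples/metadata_types/bare_bone.yaml
```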
5 changes: 2 additions & 3 deletions docs/installation/setup/ubuntu.rst
@@ -43,10 +43,9 @@ If createdb or psql cannot connect to server, check which postgresql installation

which psql

- If it is running the mambaforge installation, you may need to install it and ensure it is installed globally::
+ If it is running the mambaforge installation, you may need to run the global installation::

-     mamba remove postgresql
-     sudo apt install postgresql
+     /usr/bin/psql -d agdcintegration


You can now specify the database user and password for ODC integration testing. To do this::
2 changes: 0 additions & 2 deletions wordlist.txt
@@ -350,8 +350,6 @@ pq
pre
precollection
prefetch
- Preperation
- preperation
PRIMEM
prog
Proj